Diwa
Lightweight implementation of an Artificial Neural Network for resource-constrained environments
Diwa Class Reference (final)

Lightweight Feedforward Artificial Neural Network (ANN) library tailored for microcontrollers. More...

#include <diwa.h>

Public Member Functions

 Diwa ()
 Default constructor for the Diwa class.
 
 ~Diwa ()
 Destructor for the Diwa class.
 
DiwaError initialize (int inputNeurons, int hiddenLayers, int hiddenNeurons, int outputNeurons, bool randomizeWeights=true)
 Initializes the Diwa neural network with specified parameters.
 
double * inference (double *inputs)
 Perform inference on the neural network.
 
void train (double learningRate, double *inputNeurons, double *outputNeurons)
 Train the neural network using backpropagation.
 
double calculateAccuracy (double *testInput, double *testExpectedOutput, int epoch)
 Calculates the accuracy of the neural network on test data.
 
double calculateLoss (double *testInput, double *testExpectedOutput, int epoch)
 Calculates the loss of the neural network on test data.
 
void setActivationFunction (diwa_activation activation)
 Sets the activation function for the neural network.
 
diwa_activation getActivationFunction () const
 Retrieves the current activation function used by the neural network.
 
int recommendedHiddenNeuronCount ()
 Calculates the recommended number of hidden neurons based on the input and output neurons.
 
int recommendedHiddenLayerCount (int numSamples, int alpha)
 Calculates the recommended number of hidden layers based on the dataset size and complexity.
 

Detailed Description

Lightweight Feedforward Artificial Neural Network (ANN) library tailored for microcontrollers.

The Diwa library is designed to provide a simple yet effective implementation of a Feedforward Artificial Neural Network (ANN) for resource-constrained microcontroller environments such as ESP8266, ESP32, and similar development boards.

Note
This library is primarily intended for lightweight applications. For more intricate tasks, consider using advanced machine learning libraries on more powerful platforms.
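
The following minimal Arduino-style sketch illustrates the typical lifecycle using only the members documented on this page. The topology (2-4-1), learning rate, and epoch count are illustrative choices for a toy XOR task, and the DiwaError handling is left as a comment because the enumerators are not listed here.

    #include <diwa.h>

    Diwa network;

    // XOR truth table used as a toy training set
    double trainInputs[4][2]  = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
    double trainOutputs[4][1] = {{0}, {1}, {1}, {0}};

    void setup() {
        // 2 input neurons, 1 hidden layer with 4 neurons, 1 output neuron
        DiwaError err = network.initialize(2, 1, 4, 1);
        // Compare err against the library's success value (not listed on
        // this page) before using the network.

        // Train over the data set for a fixed number of passes
        for (int epoch = 0; epoch < 3000; epoch++)
            for (int i = 0; i < 4; i++)
                network.train(0.5, trainInputs[i], trainOutputs[i]);

        // Inference on one sample; expect a value close to 1.0
        double *output = network.inference(trainInputs[1]);
    }

    void loop() { }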

Constructor & Destructor Documentation

◆ Diwa()

Diwa::Diwa()

Default constructor for the Diwa class.

This constructor initializes a new instance of the Diwa class with all network parameters set to zero; call initialize() to configure the network before use.

◆ ~Diwa()

Diwa::~Diwa()

Destructor for the Diwa class.

This destructor releases resources associated with the Diwa object upon its destruction. It ensures proper cleanup to prevent memory leaks.

Member Function Documentation

◆ calculateAccuracy()

double Diwa::calculateAccuracy(double *testInput, double *testExpectedOutput, int epoch)

Calculates the accuracy of the neural network on test data.

This function calculates the accuracy of the neural network on a given set of test data. It compares the inferred output with the expected output for each test sample and calculates the percentage of correct inferences.

Parameters
    testInput           Pointer to the input values of the test data.
    testExpectedOutput  Pointer to the expected output values of the test data.
    epoch               Total number of test samples in the test data.
Returns
The accuracy of the neural network on the test data as a percentage.
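
A short continuation of the sketch from the Detailed Description. The flat row-major layout of the test arrays is an assumption; this page does not spell out the expected memory layout.

    // 4 test samples, flattened: 2 inputs per sample, then 1 expected output per sample
    double testInputs[4 * 2]  = {0, 0,  0, 1,  1, 0,  1, 1};
    double testOutputs[4 * 1] = {0, 1, 1, 0};

    double accuracy = network.calculateAccuracy(testInputs, testOutputs, 4);
    // accuracy is a percentage, e.g. 100.0 if every inference matched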

◆ calculateLoss()

double Diwa::calculateLoss(double *testInput, double *testExpectedOutput, int epoch)

Calculates the loss of the neural network on test data.

This function calculates the loss of the neural network on a given set of test data. It computes the percentage of test samples for which the inferred output does not match the expected output.

Parameters
    testInput           Pointer to the input values of the test data.
    testExpectedOutput  Pointer to the expected output values of the test data.
    epoch               Total number of test samples in the test data.
Returns
The loss of the neural network on the test data as a percentage.

◆ getActivationFunction()

diwa_activation Diwa::getActivationFunction() const

Retrieves the current activation function used by the neural network.

This method returns the activation function currently set for the neurons in the neural network. It allows the user to query the current activation function being used for inference and training purposes. The activation function determines the output of a neuron based on its input. Different activation functions can be used depending on the nature of the problem being solved and the characteristics of the dataset. Common activation functions include sigmoid, ReLU, and tanh.

Returns
The activation function currently set for the neural network.
See also
Diwa::setActivationFunction()

◆ inference()

double * Diwa::inference(double *inputs)

Perform inference on the neural network.

Given an array of input values, this method computes and returns an array of output values through the neural network.

Parameters
    inputs  Array of input values for the neural network.
Returns
Array of output values after inference.
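
For example, continuing the sketch from the Detailed Description. That the returned pointer references an internal output buffer owned by the network, and must not be freed by the caller, is an assumption.

    double inputs[2] = {1.0, 0.0};
    double *outputs = network.inference(inputs);
    // outputs[0] .. outputs[outputNeurons - 1] hold the inferred values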

◆ initialize()

DiwaError Diwa::initialize(int inputNeurons, int hiddenLayers, int hiddenNeurons, int outputNeurons, bool randomizeWeights = true)

Initializes the Diwa neural network with specified parameters.

This method initializes the Diwa neural network with the given parameters, including the number of input neurons, hidden layers, hidden neurons per layer, and output neurons. Additionally, it allows the option to randomize the weights in the network if desired.

Parameters
    inputNeurons      Number of input neurons in the neural network.
    hiddenLayers      Number of hidden layers in the neural network.
    hiddenNeurons     Number of neurons in each hidden layer.
    outputNeurons     Number of output neurons in the neural network.
    randomizeWeights  Flag indicating whether to randomize weights in the network (default is true).
Returns
DiwaError indicating the initialization status.
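
A minimal call, with illustrative topology values:

    Diwa network;

    // 3 inputs, 2 hidden layers of 6 neurons each, 1 output, randomized weights
    DiwaError err = network.initialize(3, 2, 6, 1);
    // Check err against the library's success value before using the network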

◆ recommendedHiddenLayerCount()

int Diwa::recommendedHiddenLayerCount(int numSamples, int alpha)

Calculates the recommended number of hidden layers based on the dataset size and complexity.

This function computes the recommended number of hidden layers for a neural network based on the size and complexity of the dataset. The recommendation is calculated using a heuristic formula that takes into account the number of samples, input neurons, output neurons, and a scaling factor alpha. The recommended number of hidden layers is determined as the total number of samples divided by (alpha times the sum of input and output neurons).

Parameters
    numSamples  The total number of samples in the dataset.
    alpha       A scaling factor used to adjust the recommendation based on dataset complexity.
Returns
The recommended number of hidden layers, or -1 if any of the input parameters are non-positive.
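
A worked example of the formula above, assuming the network was initialized with 2 input neurons and 1 output neuron:

    // 150 samples / (alpha 10 * (2 inputs + 1 output)) = 150 / 30 = 5
    int layers = network.recommendedHiddenLayerCount(150, 10);
    // layers == 5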

◆ recommendedHiddenNeuronCount()

int Diwa::recommendedHiddenNeuronCount()

Calculates the recommended number of hidden neurons based on the input and output neurons.

This function computes the recommended number of hidden neurons for a neural network based on the number of input and output neurons. The recommendation is calculated using a heuristic formula that aims to strike a balance between model complexity and generalization ability. The recommended number of hidden neurons is determined as the square root of the product of the input and output neurons.

Returns
The recommended number of hidden neurons, or -1 if the input or output neurons are non-positive.
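
A worked example of the formula above, assuming the network was initialized with 8 input neurons and 2 output neurons:

    // sqrt(8 inputs * 2 outputs) = sqrt(16) = 4
    int neurons = network.recommendedHiddenNeuronCount();
    // neurons == 4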

◆ setActivationFunction()

void Diwa::setActivationFunction(diwa_activation activation)

Sets the activation function for the neural network.

This method allows the user to set the activation function used by the neurons in the neural network. The activation function determines the output of a neuron based on its input. Different activation functions can be used depending on the nature of the problem being solved and the characteristics of the dataset. Common activation functions include sigmoid, ReLU, and tanh.

Parameters
    activation  The activation function to be set for the neural network.
See also
Diwa::getActivationFunction()
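
A hedged sketch: diwa_activation is assumed here to be a pointer to a function that takes and returns a double, which matches the common convention for such typedefs but should be verified against diwa.h.

    #include <math.h>

    // Hypothetical custom activation: a plain sigmoid
    double mySigmoid(double x) {
        return 1.0 / (1.0 + exp(-x));
    }

    // Inside setup(), continuing the earlier sketch:
    network.setActivationFunction(mySigmoid);
    // The same pointer can later be read back:
    diwa_activation current = network.getActivationFunction();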

◆ train()

void Diwa::train(double learningRate, double *inputNeurons, double *outputNeurons)

Train the neural network using backpropagation.

This method facilitates the training of the neural network by adjusting its weights based on the provided input and target output values.

Parameters
    learningRate   Learning rate for the training process.
    inputNeurons   Array of input values for training.
    outputNeurons  Array of target output values for training.
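
A single training step with illustrative values, continuing the earlier sketch (one sample of 2 inputs and 1 target output):

    double sampleIn[2]  = {1.0, 1.0};
    double sampleOut[1] = {0.0};

    // Smaller learning rates trade convergence speed for stability
    network.train(0.25, sampleIn, sampleOut);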
