Artificial Neural Network C++ Source Code

Posted by Gideon Pertzov

The neural-network code used in NeuroDriver consists of the following classes:
  • BPNode - Represents a single node ("neuron") in the network.
  • BPLink - Represents a link connecting two nodes.
  • BPNet - Represents a Backpropagation Artificial Neural Network. It uses the BPNode and BPLink classes to construct the network. All network operations (running, training) are performed via the interface of this class.
  • Pattern - Represents a training pattern. This is a helper class which is used to present the network with patterns during training. Each pattern contains a set of input values and a set of desired output values.

Creating the network

The easiest way to create the neural-network is by using the special BPNet constructor:

//                       Learning   Momentum   Num.     Nodes per Layer
//                       Rate                  Layers   L1  L2  L3
BPNet *bpnet = new BPNet(0.3,       0.7,       3,       2,  12, 3);

The first parameter is the network's Learning Rate (also called step-size).
The learning rate (lr) is usually in the range 0.05 < lr < 0.75.
It controls the rate at which the weights change in each step.
Large values can lead to faster convergence, but may also cause the network to
jitter around the minimum and fail to converge.
Small values give a smoother error-gradient descent but can take more time to converge.

The learning rate can also be set at any time after the network is constructed
by using the BPNet::setLearningRate() method.
This can be used to adjust the learning rate during training as a function of
time, network performance, etc. (see the sample application source code).
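
For example, a simple decay schedule might look like the sketch below (the schedule
itself is illustrative and not taken from the sample application):

/* Illustrative learning-rate decay: start with a large step size for fast
   initial progress, then shrink it as training proceeds. */
for (int epoch = 0; epoch < numEpochs; epoch++)
{
    bpnet->setLearningRate( 0.5 / (1.0 + (double)epoch / 100.0) );
    // ... present the training patterns and call run()/learn() here ...
}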

The second parameter is the network's Momentum.
Typical values for momentum (mt) are 0 <= mt < 0.9.
When using the momentum term, each weight change also includes a fraction
of the previous weight change; the fraction is determined by the momentum value.
This gives the system a certain amount of inertia, since the weight vector will tend
to continue moving in the same direction unless opposed by the gradient term.
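
The resulting update is the standard backpropagation step with momentum: the new
weight change is the gradient step plus a fraction of the previous change. BPNet's
internals are not reproduced here, but the idea can be sketched as follows (the
names are illustrative, not actual BPNet members):

/* Sketch of a single weight-change computation with momentum.
   All names here are illustrative. */
double weightDelta( double lr, double mt,
                    double errorGradient, double previousDelta )
{
    return (-lr * errorGradient) + (mt * previousDelta);
}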

The third parameter is the total number of layers (including the input and output layers).

The remaining arguments are the number of nodes in each layer.
The number of these arguments should correspond to the number of layers specified
in the third parameter.

For example, the following code creates a network with 4 layers:

  • 5 nodes in the first layer (input layer)
  • 8 nodes in the second layer (1st hidden layer)
  • 12 nodes in the third layer (2nd hidden layer)
  • 6 nodes in the fourth layer (output layer)

BPNet *bpnet = new BPNet(0.25, 0.5, 4,  5, 8, 12, 6);

An alternate method for creating the network is by using the BPNet::createNetwork() method directly (see sample application source code).


Training

To train the network we need to perform the following steps:

  1. Feed the network with a set of inputs.
  2. Propagate these inputs through the network layers (forward-pass).
  3. Set our desired outputs for these inputs.
  4. Let the network compute the error, propagate it backwards through the layers, and adjust the links' weights to minimize the error (backward-pass).

The following code illustrates the above steps on a network with 2 input nodes and 3 output nodes:

/* 1. Set input values */
bpnet->setInput( 0.4, 0 ); // set value 0.4 for first input node
bpnet->setInput( 0.6, 1 ); // set value 0.6 for second input node


/* 2. Forward pass */
bpnet->run();


/* 3. Set desired output values for error computation */
bpnet->setError( 0.1, 0 ); // set desired value 0.1 for first output node
bpnet->setError( 0.8, 1 ); // set desired value 0.8 for second output node
bpnet->setError( 0.1, 2 ); // set desired value 0.1 for third output node


/* 4. Backward pass */
bpnet->learn();

This is done in a loop until the total error of the network's output drops below a certain threshold (tolerance) which we deem accurate enough for our purposes.

Of course, usually we need to present the network with more than one training pattern (input/desired outputs set).

The Pattern class helps simplify the task of presenting training patterns to the network.
Each Pattern contains a set of inputs and a set of desired outputs.
A Pattern can be loaded from a file or created in code during run-time.
We can work with Patterns instead of working with individual inputs and outputs:

bpnet->setInput( pTrainingPattern );// sets input values for all network input nodes
bpnet->setError( pTrainingPattern );// sets the desired output for all output nodes
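
Putting it all together, a complete training loop over a set of patterns might look
like the sketch below. BPNet's error-reporting interface is not shown in this article,
so patternError() stands in as a hypothetical helper for whatever error measure you
track (e.g. the sum of squared differences between the getOutput() values and the
pattern's desired outputs):

/* Sketch of a training loop with a stopping tolerance.
   patternError() is a hypothetical helper -- substitute the error
   measure used by your application. */
const double tolerance = 0.05;
double totalError;
do
{
    totalError = 0.0;
    for (int i = 0; i < numPatterns; i++)
    {
        Pattern *pPattern = trainingPatterns[i];
        bpnet->setInput( pPattern );  // feed the pattern's inputs
        bpnet->run();                 // forward pass
        bpnet->setError( pPattern );  // set the desired outputs
        bpnet->learn();               // backward pass, adjust weights
        totalError += patternError( bpnet, pPattern ); // hypothetical
    }
} while (totalError > tolerance);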

Testing

After the network is trained (or after we load a trained network from file), we can test the network's performance.

This is done by feeding the network with inputs that were not part of the training set and seeing if the outputs are satisfactory.

(Note: a testing set can also be used during training to decide when training should stop; see the sketch below.)
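
One common way to do this is early stopping: keep training while the error on the
testing set improves, and stop when it no longer does. A rough sketch, where
trainOneEpoch() and evaluateError() are hypothetical helpers rather than part of
BPNet:

/* Sketch of early stopping with a separate testing set.
   trainOneEpoch() runs setInput()/run()/setError()/learn() over the
   training patterns; evaluateError() runs forward passes only over the
   testing patterns and accumulates the output error. Both are hypothetical. */
double bestError = 1e30;
int epochsWithoutImprovement = 0;
while (epochsWithoutImprovement < 10)   // "patience" of 10 epochs
{
    trainOneEpoch( bpnet, trainingPatterns, numTraining );
    double err = evaluateError( bpnet, testingPatterns, numTesting );
    if (err < bestError)
    {
        bestError = err;                // still improving
        epochsWithoutImprovement = 0;
    }
    else
    {
        epochsWithoutImprovement++;     // no improvement this epoch
    }
}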

To test the network we need to:

  1. Set the inputs.
  2. Propagate the inputs through the network (forward-pass).
  3. Get the outputs.

/* 1. Set input values */
bpnet->setInput( 0.3, 0 );
bpnet->setInput( 0.763, 1 );


/* 2. Forward pass */
bpnet->run();


/* 3. Get network output values*/
double value0 = bpnet->getOutput(0);// get 1st output
double value1 = bpnet->getOutput(1);// get 2nd output
double value2 = bpnet->getOutput(2);// get 3rd output

Sample Application

The sample application shows how the BPNet class can be used in a Role Playing Game (RPG).

Say we want a Non-Player Character (NPC) to interact with the player's character
according to the player's character class (Fighter, Wizard, or Thief).

We don't want the NPC to know the player's class by "cheating" (e.g. by calling player->getClass()).
The NPC needs to discern the player's class from the player's appearance
(e.g. what the player is wearing, the equipment she's carrying, etc.).

This can add a realistic touch to the game by making the NPC act in a more human-like way.
The NPC can mistake the player for someone she's not, which can lead to interesting
or amusing events instead of the same scripted response each time.
For example, the player's character can be mistaken for a thief and end up in the city jail,
providing a twist in the story that the player did not expect.

It can also be used to enhance game-play, since the player can now disguise herself as
someone else by changing her appearance, which may give her access to areas not open
to her before, or assist her in evading enemies.

For example: the player (a wizard) attacks an NPC, runs away, and changes clothes.
The NPC searches for the wizard that attacked him, but the player is now disguised as a thief,
which makes the NPC lose her in the crowd.

Since this is just a sample application, it demonstrates only the very basics of the above concept.
It can be enhanced a great deal by adding more inputs and outputs to the network and by including
more NPC abilities in the equation.

The sample application allows training a neural-network (which represents the NPC's brain) to
classify a player according to their clothing and the weapon they're carrying.

The NPC's Intelligence and Experience can be modified and affect the training process
(e.g. a dumb NPC will make more mistakes than an intelligent one, and an experienced
NPC can accurately identify more clothing/weapon combinations than a less experienced one).

After setting the NPC abilities and training the network, you can switch to the Test tab and test
the network's performance by feeding it different clothing/weapon combinations and observing
how the network classifies the inputs into the three classes (Fighter, Wizard, Thief).

(A nice enhancement to this application would be to feed the output of the network into a
fuzzy-logic system, which then decides on the NPC's response.)

More information and additional instructions are included in the Readme.txt file of the sample application.

The code is also available on GitHub.
Download BPNet source code (11 k)
Download sample application w/ source code (MSVC6) (80 k)
