
I am trying to train a simple neural network with PyBrain. After training, I want to confirm that the network is working as intended, so I activate it with the same data I used to train it. However, every activation outputs the same result. Am I misunderstanding a basic concept about neural networks, or is this by design?

I have tried altering the number of hidden nodes, the hiddenclass type, the bias, the learningrate, the number of training epochs, and the momentum, all to no avail.

This is my code...

from pybrain.tools.shortcuts import buildNetwork
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer

# 2 inputs, 3 hidden nodes, 1 output
net = buildNetwork(2, 3, 1)
net.randomize()

ds = SupervisedDataSet(2, 1)
ds.addSample([77, 78], 77)
ds.addSample([78, 76], 76)
ds.addSample([76, 76], 75)

# Train with backpropagation for up to 1000 epochs,
# stopping early once the error is small enough
trainer = BackpropTrainer(net, ds)
for epoch in range(1000):
    error = trainer.train()
    if error < 0.001:
        break

print net.activate([77, 78])
print net.activate([78, 76])
print net.activate([76, 76])

This is an example of what the results can be. As you can see, the output is the same even though the activation inputs are different.

[ 75.99893007]
[ 75.99893007]
[ 75.99893007]
Have you randomized the initial synapse strengths between the nodes? – Renan Jun 10 at 20:16
With net.randomize? I had tried that already, but I added it back in just in case, and the same issue still occurs. I've updated my code example to reflect this. – Harley Jun 10 at 20:21
For an ANN to work properly, its synapses must be randomized when it's generated. When they all have the same strength, you do get the same output for every neuron on the last layer, so I really, really thought it was that. – Renan Jun 10 at 21:27
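A quick way to check that, for what it's worth: PyBrain exposes all connection weights as a flat array via net.params, so you can inspect them right after construction (a minimal sketch; buildNetwork already initializes the weights randomly):

from pybrain.tools.shortcuts import buildNetwork

net = buildNetwork(2, 3, 1)
# A flat numpy array of every connection weight; if all entries were
# equal, the symmetry described above would make the output neurons
# indistinguishable.
print net.params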
In the end I solved this by normalizing the data between 0 and 1 and also training until the error rate hit 0.00001. It takes much longer to train, but I do get accurate results now. – Harley 2 days ago
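For the record, here is a minimal sketch of that fix applied to the code above, assuming a simple divide-by-100 scaling (the SCALE constant and the 10000-epoch cap are illustrative choices, not from the original post):

from pybrain.tools.shortcuts import buildNetwork
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer

SCALE = 100.0  # maps values around 75-78 into [0, 1]

net = buildNetwork(2, 3, 1)

ds = SupervisedDataSet(2, 1)
ds.addSample([77 / SCALE, 78 / SCALE], 77 / SCALE)
ds.addSample([78 / SCALE, 76 / SCALE], 76 / SCALE)
ds.addSample([76 / SCALE, 76 / SCALE], 75 / SCALE)

trainer = BackpropTrainer(net, ds)
for epoch in range(10000):  # the tighter error target needs more epochs
    error = trainer.train()
    if error < 0.00001:
        break

# Scale the outputs back up to the original range
print net.activate([77 / SCALE, 78 / SCALE]) * SCALE
print net.activate([78 / SCALE, 76 / SCALE]) * SCALE
print net.activate([76 / SCALE, 76 / SCALE]) * SCALE

Keeping the inputs and targets in [0, 1] matters because buildNetwork uses sigmoid hidden units by default; raw inputs around 77 push them deep into saturation, where every input produces essentially the same activation, which is why the three outputs were identical.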
