I check the calculation with a self-written module, comparing the output with PyTorch.

The first article in the series is here.

For two days now I’ve been trying to pull off one test: assemble the same network twice, once with my nn_module model (a class I wrote) and once with the PyTorch library, and see how it turns out. So far I’ve gotten them to the point where they have the same structure. Now I need to set the inputs manually on both and compare the outputs.

What for? Honestly, who knows. I just want predictability in the results and in what I’m doing. And I also want to check my neural network class, written who knows how long ago, without proper knowledge, mostly by feel.

As for the selection itself, the snag with the creatures spinning in place probably shouldn’t be solved mathematically, the way I did with sin(): this selection setup doesn’t force creatures to adapt indefinitely. It’s enough to survive at some level; once the bulk of creatures still produces offspring, selection effectively stops. That’s my hypothesis. They run around, find at least a little food, just enough to survive until reproduction.

Well, we’ll see; I need to expand the creature-observation tooling to look at the situation in detail.

Anyway, back to the test. These are the values I need to match:

('linear_relu_stack.0.weight', Parameter containing:
tensor([[-0.3258, -0.0829],
        [-0.2872,  0.4691],
        [-0.5582, -0.3260]], requires_grad=True))
('linear_relu_stack.0.bias', Parameter containing:
tensor([-0.1997, -0.4252,  0.0667], requires_grad=True))    <--- These are apparently the biases for the neurons of the layer above
('linear_relu_stack.2.weight', Parameter containing:
tensor([[-0.5702,  0.5214, -0.4904],
        [ 0.4457,  0.0961, -0.1875]], requires_grad=True))
('linear_relu_stack.2.bias', Parameter containing:
tensor([0.3568, 0.0900], requires_grad=True))        <--- These are apparently the biases for the neurons of the layer above
('linear_relu_stack.4.weight', Parameter containing:
tensor([[0.5713, 0.0773]], requires_grad=True))
('linear_relu_stack.4.bias', Parameter containing:
tensor([-0.2230], requires_grad=True))      <--- These are apparently the biases for the neurons of the layer above
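
For reference, here is a minimal sketch of how a dump like this can be obtained. The wrapper class name NeuralNetwork and its forward method are my assumptions; the linear_relu_stack attribute and the 2-3-2-1 layer sizes come from the stack shown further down.

from torch import nn

class NeuralNetwork(nn.Module):  # hypothetical wrapper name
    def __init__(self):
        super().__init__()
        # same 2-3-2-1 stack with sigmoids that is defined later in the post
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(2, 3, bias=True),
            nn.Sigmoid(),
            nn.Linear(3, 2),
            nn.Sigmoid(),
            nn.Linear(2, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.linear_relu_stack(x)

model = NeuralNetwork()
# print every (name, weight/bias tensor) pair, the kind of listing shown above
for named_param in model.named_parameters():
    print(named_param)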

The PyTorch model’s output looks like this:

tensor([[0.5306]], grad_fn=<SigmoidBackward0>)

And my module produced: 0.532279745246279

Hmm… it’s not even clear how to interpret this. The easiest thing is to change the inputs and see how the output reacts.

Silly me:

nn1._inputs = [0.01, 0.95]   # set the inputs on my nn_module instance
nn1.calc()                   # run the forward pass

It said "inputs = .." instead of "nn1._inputs = ..".

So the network had been running with zero inputs all along. Fixed it, and…

inputs          PyTorch   my nn_module
[0.1, 0.5]      0.5306    0.5305758657475955
[0.01, 0.05]    0.5285    0.5285213320457128
[0.91, 0.95]    0.5343    0.5343439532365446
[0.99, 0.01]    0.5310    0.5309853972119467
[0.01, 0.99]    0.5323    0.5323244660220846
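
For what it’s worth, a comparison loop along these lines could produce such a table. It reuses model from the sketch above; nn1._inputs and nn1.calc() are my own class’s API as used earlier, while reading the result back out through nn1._outputs is only an assumption here.

import torch

test_inputs = [[0.1, 0.5], [0.01, 0.05], [0.91, 0.95], [0.99, 0.01], [0.01, 0.99]]

for pair in test_inputs:
    with torch.no_grad():
        torch_out = model(torch.tensor([pair]))   # PyTorch side
    nn1._inputs = pair                            # my nn_module side
    nn1.calc()
    print(pair, round(float(torch_out), 4), nn1._outputs[0])  # _outputs is hypothetical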

It’s strange that the inputs have so little effect on the outputs. Too many layers? Let’s drop down to one hidden layer and a single output neuron.

inputs          PyTorch   my nn_module
[0.01, 0.99]    0.4053    0.40633567109426694
[0.1, 0.5]      0.3966    0.3975956529132423
[0.01, 0.05]    0.3897    0.3907299083682923

Hmm, that’s odd: such accuracy in the previous experiment and such discrepancies in this one. Ah, I mistyped one of the weights: 0.1045 instead of 0.1802. Let’s check again.

inputs          PyTorch   my nn_module
[0.01, 0.05]    0.3897    0.3896808115540953
[0.1, 0.5]      0.3966    0.3965604421661039
[0.01, 0.99]    0.4053    0.4053252099378104

Okay, I think this can be trusted. So what do we have? Well, hooray: my nn_module class computes a fully connected feed-forward network correctly, and it matches exactly this PyTorch stack:

self.linear_relu_stack = nn.Sequential(
    nn.Linear(2, 3, bias=True),
    nn.Sigmoid(),
    nn.Linear(3, 2),
    nn.Sigmoid(),
    nn.Linear(2, 1),
    nn.Sigmoid(),
)
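
Just to make explicit what is being compared: this forward pass, written out by hand, is nothing more than a chain of sigmoid(x·Wᵀ + b) steps. Here is a sketch of my own (forward_by_hand is a hypothetical helper), pulling the Linear layers out of the stack defined above:

import torch

def forward_by_hand(x, linears):
    # apply sigmoid(x @ W^T + b) for each linear layer in turn
    for lin in linears:
        x = torch.sigmoid(x @ lin.weight.T + lin.bias)
    return x

linears = [m for m in model.linear_relu_stack if isinstance(m, torch.nn.Linear)]
x = torch.tensor([[0.1, 0.5]])
print(forward_by_hand(x, linears))  # should match model(x) up to float precision
print(model(x))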

I’ve long wanted to run an experiment like this. Now I can swap my class out for a PyTorch construction and get something equivalent (will the creatures still behave strangely and spin in place? Or will something change?). And then I can play with different network configurations. LSTM, I’m crawling towards you.

Also, as a side note, I noticed that in three-layer networks the output correlates rather weakly with the input, at least compared to two-layer networks. Something to think about, anyway.
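
A quick, unscientific way to probe this observation (my own sketch, not part of the original experiment; output_spread is a hypothetical helper): feed random inputs into freshly initialized two-layer and three-layer nets and compare how far the output moves.

import torch
from torch import nn

def output_spread(net, n=1000):
    # how far apart the min and max outputs are over n random input pairs
    x = torch.rand(n, 2)
    with torch.no_grad():
        y = net(x)
    return float(y.max() - y.min())

two_layer = nn.Sequential(nn.Linear(2, 3), nn.Sigmoid(),
                          nn.Linear(3, 1), nn.Sigmoid())
three_layer = nn.Sequential(nn.Linear(2, 3), nn.Sigmoid(),
                            nn.Linear(3, 2), nn.Sigmoid(),
                            nn.Linear(2, 1), nn.Sigmoid())
print(output_spread(two_layer), output_spread(three_layer))

With sigmoids everywhere and small random weights, each extra squashing layer tends to compress the output range further, which would be consistent with what I saw.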
