Python Tutorial: Neural Networks With Backpropagation For XOR Using One Hidden Layer – Sem Seo 4 You

In the picture, we used the following notation: $x_i$ denotes the inputs (with $x_0 = 1$ as the bias unit), $a_i^{(j)}$ denotes the activation of unit $i$ in layer $j$, and $\Theta^{(j)}$ denotes the matrix of weights mapping layer $j$ to layer $j+1$.

Here are the computations represented by the NN picture above:

$$ a_0^{(2)} = g(\Theta_{00}^{(1)}x_0 + \Theta_{01}^{(1)}x_1 + \Theta_{02}^{(1)}x_2) = g(\Theta_0^Tx) = g(z_0^{(2)}) $$
$$ a_1^{(2)} = g(\Theta_{10}^{(1)}x_0 + \Theta_{11}^{(1)}x_1 + \Theta_{12}^{(1)}x_2) = g(\Theta_1^Tx) = g(z_1^{(2)}) $$
$$ a_2^{(2)} = g(\Theta_{20}^{(1)}x_0 + \Theta_{21}^{(1)}x_1 + \Theta_{22}^{(1)}x_2) = g(\Theta_2^Tx) = g(z_2^{(2)}) $$
$$ h_\Theta(x) = a_1^{(3)} = g(\Theta_{10}^{(2)}a_0^{(2)} + \Theta_{11}^{(2)}a_1^{(2)} + \Theta_{12}^{(2)}a_2^{(2)}) $$

In these equations, $g$ is the sigmoid function, a special case of the logistic function, defined by the formula:

$$ g(z) = \frac{1}{1+e^{-z}} $$
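As a quick sketch (not the tutorial's own code), this is what $g$ looks like in NumPy; the function name `sigmoid` and the test values below are only illustrative:

```python
import numpy as np

def sigmoid(z):
    """Logistic sigmoid g(z) = 1 / (1 + e^{-z}); works on scalars and arrays."""
    return 1.0 / (1.0 + np.exp(-z))

# Sanity check: g(0) = 0.5 and all outputs stay strictly between 0 and 1.
print(sigmoid(0.0))                          # 0.5
print(sigmoid(np.array([-2.0, 0.0, 2.0])))   # [0.1192 0.5    0.8808]
```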

Sigmoid functions

One of the reasons to use the sigmoid function (also called the logistic function) is simply that it was the first one to be used. Another is that its derivative has a very convenient property. Many weight-update algorithms need a derivative (sometimes even higher-order derivatives), and for the sigmoid these can all be expressed as products of $f$ and $1-f$. In fact, it is the only class of functions that satisfies $f'(t) = f(t)(1-f(t))$.
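To see the property concretely, here is a small numerical check (an illustrative addition, not part of the original article) comparing a finite-difference estimate of $g'(z)$ with $g(z)(1-g(z))$:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-5.0, 5.0, 11)
eps = 1e-6

# Central-difference estimate of the derivative.
numeric = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)

# Closed form predicted by f'(t) = f(t) * (1 - f(t)).
analytic = sigmoid(z) * (1 - sigmoid(z))

# The two agree up to finite-difference noise (roughly 1e-10).
print(np.max(np.abs(numeric - analytic)))
```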

However, the weights usually matter much more than the particular function chosen. The common sigmoid functions are all very similar, and the differences in their outputs are small. There is a comparison plot in the Wikipedia article on the sigmoid function; note that all functions there are normalized so that their slope at the origin is 1.

Forward Propagation

If we use matrix notation, the equations of the previous section become:

$$ x = \begin{bmatrix} x_0 \\ x_1 \\ x_2 \end{bmatrix} \qquad z^{(2)} = \begin{bmatrix} z_0^{(2)} \\ z_1^{(2)} \\ z_2^{(2)} \end{bmatrix} $$
$$ z^{(2)} = \Theta^{(1)}x = \Theta^{(1)}a^{(1)} $$
$$ a^{(2)} = g(z^{(2)}) $$
$$ a_0^{(2)} = 1.0 $$
$$ z^{(3)} = \Theta^{(2)}a^{(2)} $$
$$ h_\Theta(x) = a^{(3)} = g(z^{(3)}) $$
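A minimal vectorized sketch of this forward pass, assuming a 3x3 $\Theta^{(1)}$, a 1x3 $\Theta^{(2)}$, and randomly initialized weights (the tutorial's actual values will differ):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

Theta1 = np.random.uniform(-1.0, 1.0, size=(3, 3))   # Theta^{(1)}: input -> hidden
Theta2 = np.random.uniform(-1.0, 1.0, size=(1, 3))   # Theta^{(2)}: hidden -> output

x = np.array([1.0, 0.0, 1.0])   # a^{(1)} = x, with bias unit x_0 = 1

z2 = Theta1 @ x                 # z^{(2)} = Theta^{(1)} a^{(1)}
a2 = sigmoid(z2)                # a^{(2)} = g(z^{(2)})
a2[0] = 1.0                     # reset the bias unit: a_0^{(2)} = 1.0

z3 = Theta2 @ a2                # z^{(3)} = Theta^{(2)} a^{(2)}
h = sigmoid(z3)                 # h_Theta(x) = a^{(3)}
print(h)
```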

Back Propagation (Gradient computation)

The backpropagation learning algorithm can be divided into two phases: propagation and weight update.
– from Wikipedia, Backpropagation.
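The original article goes on to implement these two phases for XOR; as a rough, self-contained sketch (squared-error loss, plain gradient descent, and all names, seeds, and hyperparameters below chosen for illustration rather than taken from the tutorial):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR training set; the first column is the bias unit x_0 = 1.
X = np.array([[1, 0, 0],
              [1, 0, 1],
              [1, 1, 0],
              [1, 1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

rng = np.random.default_rng(0)
Theta1 = rng.uniform(-1.0, 1.0, (3, 3))   # input -> hidden
Theta2 = rng.uniform(-1.0, 1.0, (1, 3))   # hidden -> output
lr = 0.5

for epoch in range(20000):
    for x, t in zip(X, y):
        # Phase 1: propagation (forward pass, then backward pass of error terms).
        z2 = Theta1 @ x
        a2 = sigmoid(z2)
        a2[0] = 1.0                       # hidden-layer bias unit
        a3 = sigmoid(Theta2 @ a2)         # h_Theta(x)

        delta3 = (a3 - t) * a3 * (1 - a3)             # output error term
        delta2 = (Theta2.T @ delta3) * a2 * (1 - a2)  # hidden error terms

        # Phase 2: weight update (one gradient-descent step).
        Theta2 -= lr * np.outer(delta3, a2)
        Theta1 -= lr * np.outer(delta2, x)

# After training, the outputs should approximate 0, 1, 1, 0
# (convergence depends on initialization; rerun with another seed if it gets stuck).
for x, t in zip(X, y):
    a2 = sigmoid(Theta1 @ x)
    a2[0] = 1.0
    print(x[1:], t, sigmoid(Theta2 @ a2))
```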

Here is a short excerpt of the original content, shared so as to spread the thoughts of the people that “Sem Seo 4 You” considers enlightened. Content limited to 350 words.

You can read the article in its entirety on the official website: https://bogotobogo.com/python/python_Neural_Networks_Backpropagation_for_XOR_using_one_hidden_layer.php

Thanks a lot … I hope your thoughts and your genius can have as much resonance as possible.
