Let’s save time and imagine that you have heard about neural networks, know their advantages and want to learn how to use them.

At the same time, you stumbled onto educational articles. Everything was incomprehensible: from the very first pages they loaded you up with integrals and gradient descent. You decided to take a course. It’s even worse there: a course lasts 4-12 months and costs like an airplane wing, 100-200 thousand rubles. Well, a small wing. And without magic kicks from a teacher, motivation sinks below the baseboard.

All you wanted was a HelloWorld, but for neural networks: the simplest possible application that demonstrates the basic principles of how they work. Voila! You are on the right track!

**A bit of theory. A neural network is a graph.**

But unlike the Count of Monte Cristo, this graph has an input *S-layer (sensor)*. We feed it numbers in the interval [0..1]. For example, a picture of a cat: we take each pixel, convert it to a grayscale value and write it to the corresponding input node, where 0 is white, and the closer to 1, the darker the thoughts at that pixel.
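The pixel-to-input conversion can be sketched in a few lines (a minimal Python sketch; the averaging formula and the name `pixel_to_input` are my own illustration, not from the article):

```python
def pixel_to_input(r, g, b):
    """Map an RGB pixel to an S-layer input in [0..1]:
    0 is white, and the closer to 1, the darker the pixel."""
    gray = (r + g + b) / 3        # simple grayscale: average the channels
    return 1.0 - gray / 255.0     # invert: white -> 0, black -> 1

print(pixel_to_input(255, 255, 255))  # white pixel -> 0.0
print(pixel_to_input(0, 0, 0))        # black pixel -> 1.0
```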

Next come the intermediate *A-layers (associative)*. Their nodes also hold values in [0..1]. They are connected by edges to the previous and the following layers, and the edge weights, also in [0..1], are set arbitrarily at first. The value of an A-layer node is the sum of the products of the previous layer’s node values and the corresponding edge weights. This sum can turn out to be greater than 1, so it is “normalized” back into the interval [0..1]. That is what the activation function is for.
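Here is a minimal sketch of computing one A-layer node, using the sigmoid as the activation function (the sigmoid is a common choice, but the article does not commit to a specific function, so treat it as an assumption):

```python
import math

def sigmoid(x):
    """Squash any real number into the interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def node_value(inputs, weights):
    """Value of an A-layer node: weighted sum of the previous
    layer's node values, squashed by the activation function."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return sigmoid(total)

# three input nodes with arbitrary edge weights
print(node_value([0.2, 0.9, 0.5], [0.4, 0.3, 0.8]))
```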

Finally, the last one, or as superstitious sailors would say, the “final” one: the *R-layer (reactive)*. It is computed by the same rules as the A-layers.

The result is this fellow, or rather, a perceptron:

So. We feed numbers to the input of the neural network. Then, moving from input to output, we compute the node values, taking into account the signals fed to the nodes, the edge weights, and each layer’s activation function. At the output we get values. Suppose we have one output node and its value is 0.5, but we would like it to be 1, because we agreed that 1 on this node means there is a cat in the photo. How do we change the edge weights to get 1 at the output?
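The forward pass just described can be sketched like this (the layer sizes and weight values are made up for illustration, and biases are omitted to keep it tiny):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, weight_matrices):
    """Propagate values from the S-layer to the R-layer.
    weight_matrices[k][j] is the list of incoming edge weights
    for node j of layer k+1."""
    values = inputs
    for matrix in weight_matrices:
        values = [sigmoid(sum(v * w for v, w in zip(values, row)))
                  for row in matrix]
    return values

# 2 inputs -> 2 hidden (A-layer) nodes -> 1 output (R-layer) node
hidden_w = [[0.15, 0.20], [0.25, 0.30]]
output_w = [[0.40, 0.45]]
print(forward([0.05, 0.10], [hidden_w, output_w]))
```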

For this, particularly strong witchcraft is used: the backpropagation method. The idea is to adjust the edge weights from right to left (from the R-layer toward the S-layer) so that the desired values appear at the output. If you do this many times for different pictures, the neural network will gradually “learn” to recognize the images we want. For details, read the wiki and the original article “A Step by Step Backpropagation Example” by Matt Mazur. By the way, I also posted a picture with an example implementation on the wiki.
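To give a taste of the witchcraft, here is gradient descent on the tiniest possible network: one input node, one output node, one edge. This is a toy sketch, not Matt Mazur’s full example (his has a hidden layer and biases); the learning rate and step count are arbitrary:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

x, target = 1.0, 1.0   # one input value, desired output
w = 0.5                # single edge weight, chosen arbitrarily
lr = 0.5               # learning rate

for step in range(1000):
    out = sigmoid(w * x)               # forward pass
    error = 0.5 * (target - out) ** 2  # squared error
    # backward pass: dE/dw by the chain rule
    grad = (out - target) * out * (1 - out) * x
    w -= lr * grad                     # nudge the weight downhill

print(out)  # much closer to the target than the initial 0.62
```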

**Here it is, by the way:**

**Well, in principle, that’s all!** You can now consider yourself a Junior Data Scientist and refuse any salary below 300 thousand rubles! Of course, there are many more things that are useful to know:

- Do I have to write such a graph from scratch every time? Of course not! There are plenty of libraries for creating a neural network in one click: Keras, TensorFlow, PyTorch, thousands of them! Choose based on functionality, popularity and a programming language you know.
- Where do you get datasets for neural networks? Well, look them up on Kaggle, the wiki, Habr, or assemble them yourself, in the end! Hint: your smartphone collects clouds of data about you through mobile apps, the enabled microphone and the GPS sensor.
- How do you preprocess input data for a neural network? However you like! In any programming language: C#, Java, JavaScript… The particularly hardcore use Python.
- How do you choose the graph architecture: the number of layers, nodes and edges, the activation and error functions, the processing library? Argh! That is exactly what the work of a data scientist consists of!
- I’ll build an AI app, investors will hand me 100 million, whisk me off to the Valley, and Zuckerberg will call? Lol, no! Neural networks are just a tool for a certain class of tasks. Solve a real problem, not like last time. Even better if you land on a trend.
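For example, a “one-click” XOR network in Keras might look roughly like this (a sketch assuming TensorFlow is installed; the layer sizes, optimizer and epoch count are arbitrary choices, not from the article):

```python
# XOR in Keras: the Python cousin of the Keras.NET example linked below
import numpy as np
from tensorflow import keras

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype="float32")
y = np.array([[0], [1], [1], [0]], dtype="float32")

model = keras.Sequential([
    keras.layers.Dense(4, activation="sigmoid", input_shape=(2,)),  # A-layer
    keras.layers.Dense(1, activation="sigmoid"),                    # R-layer
])
model.compile(optimizer=keras.optimizers.Adam(0.1), loss="mse")
model.fit(X, y, epochs=500, verbose=0)  # usually enough to learn XOR
print(model.predict(X, verbose=0).flatten())
```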

What about the promised HelloWorld? Here you go!

- Neural network from Matt Mazur’s article, in C#
- Neural network recognizing handwritten digits from the MNIST dataset with 91% accuracy, in C#
- XOR neural network in Keras.NET, Numpy.NET and C#
- MNIST neural network in Keras.NET and C#
- MNIST recognition neural network with manual data loading, in C#

Well, that’s enough. More examples are in the book. And there are heaps more on Google!
