Greetings, reader. When I was at school, my favorite subject was mathematics, because I really love solving problems. At some point I began to compose deliberately unsolvable problems for myself and tried to solve them, straining my mind to the limit in search of an approach; sometimes it turned out that a problem only seemed unsolvable because of some unobvious detail I had overlooked. This love of problem solving has greatly influenced me: I constantly solve problems in my head, and not only mathematical ones, but problems from all sorts of areas. Over the course of my life I have accumulated many ideas (solutions), from a steel 3D printer to a method for disposing of radioactive waste from nuclear power plants. Surely many of these ideas are not actually realizable for one reason or another, and some were probably invented before me and I simply did not know about them (this has already happened).

In my last article I mentioned (I myself do not know why) that I had come up with a new kind of numbers with which you can train neural networks. I wanted to open a service for training neural networks using these numbers, but given the pandemic and my poor health, I thought: what if I really am the first to think of these numbers, and how bad it would be if I died and the knowledge of them died with me. So I decided to write this article, in which I describe these numbers in detail and explain how to use them to train neural networks. I must say right away that I have not worked out all the formulas needed for working with such numbers, since I have been busy with my own programming language; this is an idea, not a ready-made implementation.
To fully understand what is discussed in this article, you need to know how simple feedforward neural networks are structured.
Suppose you need to train a feedforward neural network on some training set that contains examples of what is fed to the network's input and what is expected at its output. For such a case you can write a function, let's call it fitness (as in a genetic algorithm): it takes a neural network and a training set and returns a number from 0 to 1 that says how well the given network has learned the set, where 0 means not trained at all and 1 means trained perfectly. With such a fitness function, the neural network itself can be viewed as a mathematical function whose arguments are the weights and whose value is the result of fitness applied to the network with those weights and the training set. So I started thinking: how do you find the maximum of such a function?

In my head I pictured a 3-dimensional graph of a function of 2 arguments and reasoned as follows. If we add the condition that each weight is limited to some finite range of possible values, the graph can be divided into two parts: in one part the first argument takes the values from one half of its range, and in the other part it takes all the remaining values. We then work out which part contains the larger maximum, take that part, and divide it the same way, but now along the other argument; the part obtained from the second division is again split in two along the first argument, and so on. This division has to be repeated until the values of the function within the remaining region no longer fluctuate too much. Any arguments from the resulting portion of the graph are suitable weights. For a better understanding, let me explain the above with an example.
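To make the setup concrete, here is a minimal sketch of such a fitness function in Python. Everything in it is my own assumption for illustration: the tiny 1-input, 2-hidden-neuron network behind `predict` and the `1 / (1 + mse)` mapping of error into (0, 1] are not part of the idea itself, just one way to get a function of the required shape.

```python
import math

def predict(weights, x):
    # A hypothetical tiny feedforward net: 1 input, 2 tanh hidden neurons,
    # 1 linear output; weights is a flat list of 7 numbers.
    h0 = math.tanh(weights[0] * x + weights[2])
    h1 = math.tanh(weights[1] * x + weights[3])
    return weights[4] * h0 + weights[5] * h1 + weights[6]

def fitness(weights, samples):
    # samples is a list of (input, expected_output) pairs.
    # Returns a number in (0, 1]: 1 = trained perfectly, near 0 = not at all.
    mse = sum((predict(weights, x) - y) ** 2 for x, y in samples) / len(samples)
    return 1.0 / (1.0 + mse)
```

Any monotone mapping from error to [0, 1] would do here; with such a function, training becomes literally the search for the maximum of fitness over the space of weights.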
Take y(x) = sin x with x ∈ [-4, 4], and let's find the maximum of this function. Divide the range into 2 parts, x ∈ [-4, 0] and x ∈ [0, 4]; the maximum is in the second part, so we keep it and divide it into [0, 2] and [2, 4]. The maximum, equal to 1, lies in [0, 2], so we keep that part and continue dividing in the same way, until we end up with an interval like [pi * 999999 / 2000000, pi / 2], any x from which can be taken as the answer.

But to perform such a division we must be able to tell which of two parts contains the larger maximum, and for that we need to know what values a function can produce when its argument runs over an entire interval. This is exactly what the new kind of numbers is for; I call them "interval numbers". An interval number is a range [a, b], and applying a function to it gives the interval of all values the function takes on it. For example: sin([-pi, pi]) = [-1, 1]. Arithmetic operations work the same way: the result is the interval of all possible results on the elements of the operands. For example: [-3, 6] - [-12, 7] = [-10, 18], since the smallest possible difference is -3 - 7 = -10 and the largest is 6 - (-12) = 18. An ordinary number is simply a degenerate interval such as [3, 3].
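Here is a minimal sketch of such interval numbers, under the assumption that only subtraction and sin are needed for the examples above; a real implementation would need every operation and activation function the network uses, and the class name is mine:

```python
from dataclasses import dataclass
import math

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # smallest result: lo minus other's hi; largest: hi minus other's lo
        return Interval(self.lo - other.hi, self.hi - other.lo)

def interval_sin(iv):
    # Exact range of sin over [lo, hi]: check both endpoints plus every
    # critical point pi/2 + k*pi that falls inside the interval.
    candidates = [iv.lo, iv.hi]
    k = math.ceil((iv.lo - math.pi / 2) / math.pi)
    while math.pi / 2 + k * math.pi <= iv.hi:
        candidates.append(math.pi / 2 + k * math.pi)
        k += 1
    values = [math.sin(c) for c in candidates]
    return Interval(min(values), max(values))

print(Interval(-3, 6) - Interval(-12, 7))        # Interval(lo=-10, hi=18)
print(interval_sin(Interval(-math.pi, math.pi))) # Interval(lo=-1.0, hi=1.0)
```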
So what does this give us? Using interval numbers, a feedforward network can be trained like this (a sketch in code follows this list):

- take the network and replace each weight with an interval number covering that weight's entire range of possible values;
- compute the fitness of this network; because the weights are intervals, the result of fitness is also an interval: the range of every value the fitness could take for any concrete weights from those ranges;
- split the interval of one of the weights into 2 parts, compute the fitness for each part, and compare the two fitness intervals; keep the half whose fitness is better and discard the other;
- move on to the next weight and repeat the splitting, cycling through the weights until the remaining region is narrow enough.
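Before moving to a whole network, here is the keep-the-better-half loop on the earlier sin example, reusing `Interval` and `interval_sin` from the sketch above. Comparing halves by the upper bound of their output interval is my own choice of rule; for this example it is safe, because the half containing the true maximum never reports a smaller upper bound than the other half.

```python
def argmax_by_bisection(f_interval, lo, hi, tol=1e-9):
    # Repeatedly split [lo, hi] in two and keep the half whose interval
    # of possible outputs has the larger upper bound.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        left = f_interval(Interval(lo, mid))
        right = f_interval(Interval(mid, hi))
        if left.hi >= right.hi:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2  # any point of the final interval will do

print(argmax_by_bisection(interval_sin, -4, 4))  # ~1.5707963 (= pi / 2)
```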
At first glance this should work, but there is a catch. When the two fitness intervals are compared, they usually overlap, and it is not clear which half is actually better: the half whose fitness interval has the higher upper bound may consist almost entirely of bad weight values, with the maximum reached only at a single point, while the other half is good nearly everywhere. A bare interval carries no information about how the values inside it are distributed, so at every division we risk throwing away the part that contains the weights we are looking for.
The reason is that an interval keeps only the boundaries of the possible values and loses everything about how those values are spread between the boundaries. Take the sum of the intervals [1, 2] and [4, 5], which is [5, 7]: the value 5 can be obtained in only one way, as 1 + 4, while 6 can be obtained in many more ways (1 + 5, 1.5 + 4.5, 2 + 4, and so on), so a result near 6 is far more likely than one near 5. The fix is to attach to each interval a distribution function. For an interval [a, b] and a point x inside it, imagine counting n, the number of ways in which x can be obtained; a function f (with values from 0 to 1) then shows the relative likelihood of each x. From f one can pass to f1, where f1(x) = f(x) * n: knowing n, f1 expresses the absolute count of ways in which each x arises. Wherever f1 equals 0, that x cannot occur at all, even though it lies between a and b, so such regions can be cut out of the interval entirely. It is convenient to define the distribution function on the normalized range [0, 1], where 0 corresponds to a and 1 to b, so that its shape does not depend on the width of the interval. The function itself does not have to be stored exactly: it can be kept in an approximate, compressed form, losing some precision the way a video on youtube loses quality when it is compressed, and that loss is acceptable.
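A quick simulation (my own illustration) makes the effect visible: draw many random pairs from [1, 2] and [4, 5] and histogram their sums.

```python
import random

# The bounds of the sum are [5, 7], but sums near 6 occur far more often
# than sums near 5 or 7: the histogram comes out triangular.
counts = [0] * 10
for _ in range(100_000):
    s = random.uniform(1, 2) + random.uniform(4, 5)
    counts[min(int((s - 5) / 2 * 10), 9)] += 1
print(counts)  # small at both ends, peaking in the middle bins
```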
With distribution functions attached, the fitness of a network whose weights are such intervals is itself an interval with a distribution function, and the comparison of two halves becomes much more meaningful. Instead of comparing bare boundaries, we can look at where the bulk of each distribution lies: which half makes high fitness values likely, not merely possible. Which half should we keep? The one whose fitness distribution is concentrated closer to 1. Taking the distribution functions into account, the training procedure looks like this:
- take the network and replace each weight with an interval covering its entire range of possible values, with the uniform distribution y(x) = 1;
- compute the fitness of this network;
- split the interval of the first weight into 2 parts, compute the fitness for each part, and compare the two fitness distributions;
- keep the half whose fitness distribution is better and discard the other;
- move on to the next weight;
- split its interval into 2 parts, compute the fitness for each part, and compare the results;
- keep the better half;
- repeat the same for every remaining weight;
- when all weights have been processed, start a new pass from the first weight, and continue until the intervals of the weights become narrow enough; any point from each final interval can be taken as that weight's value (a code sketch of this loop follows the list).
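Since the arithmetic of intervals with distribution functions is not worked out, here is a sketch of the loop that replaces the missing machinery with plain random sampling: each half's fitness distribution is estimated by evaluating `fitness` (from the first sketch, together with its 7-weight `predict`) at random points inside the current box of weight intervals. The sampling, the comparison by mean fitness, and all the constants are my stand-ins; the sketch only shows the shape of the procedure.

```python
import random

def train_by_halving(n_weights, samples, lo=-4.0, hi=4.0, passes=30, trials=200):
    # Every weight starts as the full interval [lo, hi] with the uniform
    # distribution y(x) = 1.
    box = [(lo, hi) for _ in range(n_weights)]

    def mean_fitness(candidate):
        # Stand-in for interval fitness: estimate the distribution of fitness
        # over the box by sampling random concrete weight vectors inside it.
        total = 0.0
        for _ in range(trials):
            w = [random.uniform(a, b) for a, b in candidate]
            total += fitness(w, samples)
        return total / trials

    for _ in range(passes):                  # one pass halves every weight once
        for i in range(n_weights):
            a, b = box[i]
            mid = (a + b) / 2
            left, right = box.copy(), box.copy()
            left[i], right[i] = (a, mid), (mid, b)
            # keep the half whose estimated fitness distribution is better
            box[i] = (a, mid) if mean_fitness(left) >= mean_fitness(right) else (mid, b)
    return [(a + b) / 2 for a, b in box]     # any point of each interval works

samples = [(x / 10, (x / 10) ** 2) for x in range(-10, 11)]  # learn y = x^2
weights = train_by_halving(7, samples)
print(fitness(weights, samples))  # the closer to 1, the better the box turned out
```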
A few more details about these numbers. First, a number does not have to be a single interval: the result of an operation may consist of several intervals at once (for example, a number made of 3 intervals ([0, 1], [20, 40], [100, 101])); this arises naturally, say, when dividing by an interval that contains zero, and operations and functions of 1 and 2 arguments must be able to accept and return such multi-interval numbers. Second, the distribution function travels through every operation. I will write an interval together with its distribution as [a, b, y(x) = f(x)]. For example: [1, 2, y(x) = 1] + [4, 5, y(x) = 1] = [5, 7, y(x) = 1 - |0.5 - x| * 2]. Why y(x) = 1 - |0.5 - x| * 2? Because, as shown above, values in the middle of the sum can be obtained in more ways than values at the edges: the sum of two uniform distributions is a triangle peaking in the middle. The same reasoning applies to non-uniform distributions, for example y(x) = x, only the resulting shape will be different. In general, [a1, b1, y1(x) = f1(x)] + [a2, b2, y2(x) = f2(x)] = [a3, b3, y3(x) = f3(x)], where a3 = a1 + a2, b3 = b1 + b2, and f3 is determined by a1, b1, a2, b2, f1 and f2 (mathematically it is the convolution of the two distributions). I have not derived the general formulas for f3 and for the other operations and functions; working them out is part of what remains to be done.
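The claimed f3 for the uniform case can be checked numerically: for the sum of two independent values, the density of the result is the convolution of the two densities, which is a fact of probability theory rather than an assumption of mine. A small sketch (the grid size and the normalization of the peak to 1 are my choices):

```python
# Convolve the densities of [1, 2, y(x) = 1] and [4, 5, y(x) = 1] and compare
# with the claimed result [5, 7, y(x) = 1 - |0.5 - x| * 2].
N = 1000
f1 = [1.0] * N                 # uniform density on [1, 2], sampled at N points
f2 = [1.0] * N                 # uniform density on [4, 5]
f3 = [0.0] * (2 * N - 1)
for i in range(N):             # discrete convolution
    for j in range(N):
        f3[i + j] += f1[i] * f2[j]
peak = max(f3)
for k in (0, (2 * N - 1) // 2, 2 * N - 2):     # left edge, middle, right edge
    x = k / (2 * N - 2)                        # normalized position in [5, 7]
    print(x, f3[k] / peak, 1 - abs(0.5 - x) * 2)
```

The middle point prints 1.0 against a claimed 1.0, and both edges approach 0, matching the triangular y(x).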
P. S. I repeat that this is only an idea, not a finished method: I have not derived all the formulas needed for full-fledged arithmetic on such numbers, and I do not know how computationally practical the whole scheme will turn out to be. It is quite possible that numbers like these were invented long before me and I simply do not know about it; if so, I would be glad to be pointed to existing work. And if not, then at least the idea is now written down and will not be lost with me. I will return to it once I finish the work on my programming language.