Neuron simulation method

FIELD: physics.

SUBSTANCE: the neuron simulation method is based on computing, in the weighting units, the squared Euclidean distances from the input vector to each of the 2^n vertices of the unit n-dimensional cube, multiplying the values inverse to these distances by the respective components of the target vector, and then summing in an adder and converting in the activation unit by an activation function.

EFFECT: possibility of one neuron simulating any given Boolean function from the complete set of 2^(2^n) functions of n variables.

6 dwg, 1 tbl

 

The invention relates to neurocybernetics and can be used in artificial neural networks (NNs) for solving various data processing tasks such as image processing, pattern recognition, signal prediction, etc.

As is known [1-5], the traditional technical and mathematical models of neurons used in NNs for various purposes are limited in their ability to approximate (model or realize) some Boolean functions. A function unrealizable by a neuron is called linearly inseparable, i.e. one for which linear separating hyperplanes are insufficient [1] and nonlinear ones must be used. For example, the classical neuron model [1-5] with two inputs (n=2), consisting of multipliers, an adder and an activation unit, in which the scalar product of the input vector and the vector of weight coefficients (WCs) is transformed by the activation function (AF), is able to realize (approximate quite accurately) only 14 of the full set of 2^(2^2)=16 Boolean functions [1-5]. The class of linearly inseparable functions comprises the two logical functions nonequivalence (or "addition modulo 2") and equivalence [1-5]. In the case of three inputs (n=3) the situation is worse: only 104 of the full set of 2^(2^3)=256 functions are realizable [1]. With a further increase in the number of variables n, the share of realizable functions relative to the linearly inseparable ones drops sharply [1]. The inability to realize (approximate quite accurately) linearly inseparable Boolean functions is a significant drawback of the neuron model [1-5] and is widely known as the problem of functional (approximation) incompleteness of the neuron model [1-5]. A special case of this problem, for two variables and the nonequivalence function, is the widely known "Exclusive OR" problem [1-5].
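For illustration (a minimal sketch, not part of the claimed method; Python and an arbitrary weight grid are assumed), the limitation can be checked by brute force: a single threshold neuron y = [w1·x1 + w2·x2 + b > 0] realizes exactly 14 of the 16 truth tables of two variables, and the two missing ones are precisely nonequivalence and equivalence:

```python
# Brute-force count of Boolean functions of 2 variables realizable by one
# threshold neuron; the grid values are an illustrative assumption.
from itertools import product

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
grid = [i * 0.5 for i in range(-4, 5)]      # candidate values for w1, w2, b

realizable = set()
for w1, w2, b in product(grid, repeat=3):
    truth = tuple(int(w1 * x1 + w2 * x2 + b > 0) for x1, x2 in inputs)
    realizable.add(truth)

print(len(realizable))             # -> 14 of the 16 possible truth tables
print((0, 1, 1, 0) in realizable)  # nonequivalence (XOR) -> False
print((1, 0, 0, 1) in realizable)  # equivalence          -> False
```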

It should be noted that functional completeness is discussed in terms of logical functions with Boolean (binary, 0 and 1) variables and function values, whereas the neuron model approximates these functions by continuous functions whose domain (the neuron inputs) and range (the neuron output) is a continuous interval, typically from 0 to 1 [1-5]. Thus a Boolean function is the particular (binary) case of the corresponding neural-network approximation [1-5].

In addition, another serious problem of neuron and NN models is the convergence of the supervised (gradient) learning process and the escape from local minima [1-5]. Supervised learning assumes [1-5] that for each input vector there is a target vector representing the desired output; together they are called a training pair. Typically the network is trained on a number of training pairs. An input vector is presented, the network output is computed and compared with the corresponding target vector, and the difference (error gradient) is fed back into the network, changing the WCs in the direction of the antigradient so as to minimize the error. The vectors of the training set are presented sequentially and repeatedly, the error is computed and the WCs are adjusted for each vector, until the error over the entire training set reaches an acceptably low level [1-5]. However, even if the neuron model theoretically provides realizability of the specified logical function (for certain WC values), there is no guarantee that the training procedure (the final WC values) will realize the given function in practice [1]. There is no guarantee that the error will decrease at each training step [1]. The error surface (the dependence of the error on the WC values) of a neuron model with n inputs is highly rugged and consists of hills and ravines in n-dimensional space. The neuron can fall into a local minimum (a shallow ravine) even when the surface has a much deeper minimum elsewhere. At a point of local minimum all directions lead upward, and the model is unable to escape it [1].
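A sketch of the supervised gradient loop just described (the delta rule with the sigmoid derivative f'(s) = y(1 − y); the learning rate, seed and epoch count are illustrative assumptions), trained here on the linearly separable AND function:

```python
import math, random

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
T = [0, 0, 0, 1]                 # target vector for AND (linearly separable)

random.seed(0)
w = [random.uniform(-0.1, 0.1) for _ in range(3)]   # w1, w2 and bias w3
lr = 0.5
for _ in range(5000):            # repeated presentation of training pairs
    for (x1, x2), t in zip(X, T):
        y = sigmoid(x1 * w[0] + x2 * w[1] + w[2])
        delta = (t - y) * y * (1 - y)               # error times f'(s)
        w[0] += lr * delta * x1                     # antigradient step
        w[1] += lr * delta * x2
        w[2] += lr * delta

print([round(sigmoid(x1 * w[0] + x2 * w[1] + w[2]), 2) for x1, x2 in X])
# -> approximately [0.0, 0.02, 0.02, 0.97]; with XOR targets the same loop
#    stalls near y = 0.5 at all inputs, illustrating the incompleteness above.
```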

To realize (approximate) the full set of 2^(2^n) Boolean functions of n variables x1, x2, ..., xn, a method of NN modeling is known that uses several n-input neurons in the first layer and one neuron in the second [1-5]. With appropriate settings of the WCs of the first-layer neurons and of the second-layer neuron, such an NN additionally makes it possible to approximate linearly inseparable functions [1-5]. The disadvantages of this method are the need for several neurons combined into two layers, the large number of WCs and the complexity of tuning them during gradient learning, and a learning mechanism that can be time-consuming and guarantees neither convergence nor escape from local minima (especially for n≥3).
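A minimal sketch of this two-layer construction with hand-set WCs (illustrative values, not obtained by learning): an OR neuron and an AND neuron in the first layer and one output neuron realize the linearly inseparable XOR function:

```python
def step(s):
    return 1 if s > 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)         # first-layer neuron: OR
    h2 = step(x1 + x2 - 1.5)         # first-layer neuron: AND
    return step(h1 - 2 * h2 - 0.5)   # second layer: OR but not AND

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, xor_net(*x))            # -> 0, 1, 1, 0
```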

Another known neuron modeling method [6] builds a higher-order (second-order) neuron to realize logical functions. In this case the neuron, in addition to the inputs x1, x2, ..., xn, has extra inputs fed with the pairwise products x1x2, ..., x1xn, x2x3, ..., xn-1xn [6]. The sum s at the adder output then generates the required approximations of linearly inseparable functions, a polynomial dependence of the output on the inputs of this neuron model [6]. The disadvantages of this method are the many additional inputs of the model (especially for n≥3), the fairly large number of computational operations in the training and operating modes, the duration of the gradient learning procedure, and the problems of convergence and escaping local minima.
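A sketch of the second-order idea for n = 2 (weights chosen by hand for illustration): with the extra product input x1x2, XOR becomes linearly separable in the extended input space:

```python
def second_order_neuron(x1, x2):
    # weights 1, 1 on x1, x2; weight -2 on the pairwise product x1*x2; bias -0.5
    s = x1 + x2 - 2 * (x1 * x2) - 0.5
    return 1 if s > 0 else 0

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, second_order_neuron(*x))   # -> 0, 1, 1, 0
```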

Another neuron modeling method, proposed in [7], should also be noted; it is based on introducing a two-threshold AF for the neuron. The disadvantages of this method are: 4 tunable AF parameters instead of 2 WCs for the two inputs x1 and x2, and the absence of mechanisms for tuning 2 of the 4 tunable AF parameters.

A neuron modeling method [8] is also known that ensures realization (approximation) of the linearly inseparable nonequivalence function (but not functional completeness). The disadvantages of method [8] are: the inability to approximate other, linearly separable Boolean functions (in particular the "Sheffer stroke" and "Peirce arrow" functions) and the lack of training capability (due to the presence of "steps" in the AF).

The closest to the claimed method in technical essence is the neuron modeling method [1-5] that provides realization (approximation) of a limited set of Boolean functions (especially for a large number of variables n): 14 of 16 for n=2, 104 of 256 for n=3, 1882 of 65536 for n=4, etc. [1]. The method [1-5] consists in that the input signals fed to the inputs x1, x2, ..., xn, taking continuous values from 0 to 1, are multiplied in the weighting units by the corresponding WCs w1, w2, ..., wn (which at the first training step have small random values and change during learning in accordance with a gradient learning rule) and are summed in the adder together with the WC wn+1 (also known as the bias), which likewise has a small initial random value and changes in accordance with the learning rule: s = x1w1 + x2w2 + ... + xnwn + wn+1, where s is the sum at the adder output. The WC wn+1 is needed to offset the AF argument s when training to approximate the desired logical function. The multipliers and the adder take the signs of the values into account. The sum s obtained by gradient learning is then converted in the activation unit by a smooth (usually sigmoid) AF f(s), yielding the function y(x1, ..., xn) = f(s) at the output of the neuron model. Thus the model contains n inputs x1, x2, ..., xn, which are the inputs of the corresponding weighting units (multipliers), where x1, x2, ..., xn are multiplied by the WCs w1, w2, ..., wn. The outputs of the multipliers, as well as the weight wn+1, are the inputs of the adder, whose output is connected to the input of the activation unit. The output of the activation unit is the output of the neuron model, at which the function y(x1, x2, ..., xn) is obtained. As the adder, the multipliers and the activation unit, various known technical devices can be used, for example [9], that implement the specified mathematical functions. During training (to the specified logical function), input vectors X = [x1 x2 ... xn]^T (where the superscript T here and below denotes transposition) are presented to the inputs sequentially and repeatedly; each time the value of the neuron output y(x1, ..., xn) is computed and compared with the desired, or target, output (the components of the target vector T = [t1 t2 ... tm]^T) [1-5], and the WC values are adjusted by the gradient rule [1-5]. This gradient learning rule of the prototype, containing the derivative f'(s) of the AF, is justified in detail in [1-5] and is currently widely used in training many technical NN models.

Meanwhile, the method [1-5] has two drawbacks. The first is the impossibility of realizing (approximating) the full set of 2^(2^n) Boolean functions of n variables, especially for a large number of variables n, i.e. the impossibility of providing functional completeness. The second concerns the learning mechanism, which can be time-consuming and guarantees neither convergence of the neuron model nor escape from local minima [1-5].

The proposed method aims to achieve the following technical effects:

1) enabling the neuron model to realize any given Boolean logical function from the full set of 2^(2^n) functions of n variables, i.e. providing functional (approximation) completeness of the neuron model;

2) eliminating the need for a rather long learning mechanism (with repeated use of the target vector T) that, moreover, guarantees neither convergence nor successful escape from local minima; this learning mechanism is replaced by a single-use operation with the target vector T;

3) expanding the arsenal of technical solutions [1-6] that ensure functional completeness of models of neural structures.

Consider the essence of the proposed method. The set of actions that defines the neuron simulation method is presented in Fig. 1. According to the proposed method, the input signals x1, x2, ..., xn (taking continuous values from 0 to 1), fed to the input as the vector X = [x1 x2 ... xn]^T, are converted in the weighting units (blocks R1...R2^n and W1...W2^n in Fig. 1) and summed in the adder (block S in Fig. 1). The output s of the adder is the input of the activation unit (block F in Fig. 1), where s is transformed by the AF f(s). The output of block F is the output y of the neuron model, at which the function y(x1, x2, ..., xn) is obtained. Unlike the prototype, the input signals x1, x2, ..., xn are converted in the weighting units according to the following rule: in blocks R1...R2^n (Fig. 1), the squared Euclidean distances r_j (j = 1...2^n) from the vector X to each of the n-dimensional vectors P_1 = [0 0 ... 0]^T, P_2 = [0 0 ... 1]^T, ..., P_{2^n} = [1 1 ... 1]^T, representing the coordinates of all 2^n vertices of the unit n-dimensional cube, are computed. In general:

r_j = \sum_{i=1}^{n} (x_i - p_{j,i})^2, \quad j = 1, ..., 2^n,   (1)

where p_{j,i} is the i-th coordinate of the vertex P_j.
By the unit n-dimensional cube we understand a cube with n mutually non-parallel edges (dimensions) whose length equals one [1, 4, 5]. Thus for n=2 the cube degenerates into a flat square (2 dimensions); for n=3 it is the usual geometric cube. Obviously, the number of vertices of the cube is 2^n.
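A sketch of formula (1) (Python assumed): enumerating the 2^n vertices P_j of the unit cube and computing the squared Euclidean distances r_j from an input vector X to each of them:

```python
from itertools import product

def squared_distances(X):
    n = len(X)
    vertices = list(product((0, 1), repeat=n))    # all 2**n vertices P_j
    return [sum((x - p) ** 2 for x, p in zip(X, P)) for P in vertices]

# n = 2: distances from (0, 1) to (0,0), (0,1), (1,0), (1,1)
print(squared_distances((0, 1)))                  # -> [1, 0, 2, 1]
```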

Then, in blocks W1...W2^n in Fig. 1, each of the 2^n distances r_j is summed with a coefficient a, where a = 0.05, raised to the power (-1) and multiplied respectively by t1, t2, ..., t_{2^n} (the components of the target vector T, which represents the set of required values of the neuron output at the 2^n vertices of the unit n-dimensional cube for the given realizable function): w1 = t1(r1 + a)^{-1}, w2 = t2(r2 + a)^{-1}, ..., w_{2^n} = t_{2^n}(r_{2^n} + a)^{-1}, where t_j ∈ {0, 1}. In general:

w_j = t_j (r_j + a)^{-1}, \quad j = 1, ..., 2^n.   (2)
After the set of actions (1) and (2), the resulting coefficients w_j (the products of the "proximity" of the input combination X to each vertex of the n-dimensional cube and the elements t_j of the target vector at the respective vertices) are summed in the adder S (Fig. 1):

s = \sum_{j=1}^{2^n} w_j,   (3)

and s is converted in the activation unit F (Fig. 1) by the function

y = f(s) = a \cdot s.   (4)
As the weighting blocks (R1...R2^n and W1...W2^n), the adder (S) and the activation unit (F) in Fig. 1, various known technical devices can be used, for example [9], that implement the specified mathematical functions (1), (2), (3) and (4), taking into account the sign of the signals and the continuity of x1, x2, ..., xn from 0 to 1.

A comparative analysis of the claimed method and the prototype [1-5] shows that the claimed method differs in the modified set of actions converting the input signals x1, x2, ..., xn into the output y(x1, x2, ..., xn). Before the summation and the AF transformation, the input signals are not multiplied by the WCs w1, w2, ..., wn as in the prototype; instead, blocks R1...R2^n determine the squared Euclidean distances r_j (1) from the vector X to each of the n-dimensional vectors P_1, P_2, ..., P_{2^n} representing the coordinates of all 2^n vertices of the unit n-dimensional cube, and then, in blocks W1...W2^n, each of the 2^n distances r_j is summed with the coefficient a (a = 0.05), raised to the power (-1) and multiplied respectively by t1, t2, ..., t_{2^n} according to formula (2).

Consider the operation of the proposed neuron modeling method. For simplicity, take the case when the neuron model has two inputs (the number of variables n = 2; the full set contains 2^(2^2) = 16 Boolean functions).

The required values of the output y(x1, x2) (i.e. the 16 four-dimensional target vectors T) of the neuron model for the approximation of any function from the full set of 16 Boolean logical functions, depending on the binary combinations of the inputs x1 and x2, are given in Table 1 (the two vectors corresponding to the linearly inseparable functions, nonequivalence and equivalence, are highlighted).

Table 1. The full set of 16 target vectors T of values y(x1, x2) (columns ordered by the binary code t1 t2 t3 t4)

x1 x2 | t values of the 16 target vectors T
0  0  | 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1
0  1  | 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
1  0  | 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1
1  1  | 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1

The highlighted, linearly inseparable columns are nonequivalence T = [0 1 1 0]^T and equivalence T = [1 0 0 1]^T.

To realize (approximate) any of the 16 functions shown in Table 1 (i.e. to obtain output values y(x1, x2) closest to one of the vectors T), the neuron simulation involves the following steps. The input signals x1 and x2 are converted in blocks R1, R2, R3, R4 (Fig. 1) according to formula (1): the squared Euclidean distances r_j (j = 1, 2, 3, 4) from the vector X = [x1 x2]^T to each of the vectors P_1 = [0 0]^T, P_2 = [0 1]^T, P_3 = [1 0]^T, P_4 = [1 1]^T, representing the coordinates of all 2^2 = 4 vertices of the unit two-dimensional cube (i.e. the square), are computed. Then in blocks W1, W2, W3, W4 (Fig. 1), in accordance with formula (2), each of the four distances r_j is summed with the coefficient a, where a = 0.05, raised to the power (-1) and multiplied respectively by t1, t2, t3 and t4 (the components of the target vector T, which represents the set of desired values of the neuron output at the 4 vertices of the unit two-dimensional cube (square) for the given realizable function): w1 = t1(r1 + a)^{-1}, w2 = t2(r2 + a)^{-1}, w3 = t3(r3 + a)^{-1}, w4 = t4(r4 + a)^{-1}, where t_j ∈ {0, 1}. The dependences of the coefficients w_j (j = 1, 2, 3, 4) as functions of the two variables x1 and x2 for the case of, for example, the linearly inseparable nonequivalence function T = [0 1 1 0]^T (highlighted in Table 1) have the form shown in Fig. 2-4. Figures 2-4 show that where a component of the vector T equals zero (t1 and t4), the corresponding coefficient is zero (w1 and w4 in this case, Fig. 2). Where a component of the vector T equals one (t2 and t3), there is a smooth hill at the corresponding vertex of the two-dimensional cube (square), produced by raising the squared Euclidean distances r2 (Fig. 3) and r3 (Fig. 4) to the power (-1). The term a = 0.05 is needed to avoid division by zero in formula (2).

After these steps, the coefficients w_j (j = 1, 2, 3, 4) are summed in the adder S (Fig. 1) by formula (3), and s is converted in the activation unit F (Fig. 1) by formula (4): y = f(s) = a·s. The activation function f(s) = a·s plays the role of a normalizing factor limiting the output values of the neuron model. The dependence of the output y(x1, x2) of the neuron model as a function of the two variables x1 and x2 for the case of the required nonequivalence function has the form shown in Fig. 5. At the neuron output the following values are obtained for the boundary (binary, 0 and 1) inputs: y(0,0) ≈ 0.095, y(0,1) ≈ 1.024, y(1,0) ≈ 1.024, y(1,1) ≈ 0.095. Thus the dependence of the output y(x1, x2) quite accurately approximates the desired linearly inseparable nonequivalence function.
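The whole chain (1)-(4) fits in a few lines; the sketch below (Python assumed, a = 0.05; the patent describes hardware blocks) reproduces the nonequivalence outputs quoted above:

```python
from itertools import product

A = 0.05    # the coefficient a of formulas (2) and (4)

def neuron(X, T):
    n = len(X)
    vertices = list(product((0, 1), repeat=n))
    r = [sum((x - p) ** 2 for x, p in zip(X, P)) for P in vertices]  # (1)
    w = [t / (rj + A) for t, rj in zip(T, r)]                        # (2)
    s = sum(w)                                                       # (3)
    return A * s                                                     # (4): y = a*s

T_xor = (0, 1, 1, 0)    # target vector of the nonequivalence function
for X in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(X, round(neuron(X, T_xor), 6))
# -> 0.095238, 1.02439, 1.02439, 0.095238
```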

Substituting into formula (2) any other (of the remaining 15) target vectors from Table 1, one can verify that, in accordance with this method, all of them are quite accurately approximated (i.e. realized) by the neuron model presented in Fig. 1. For instance, Fig. 6 presents the dependence of the output y(x1, x2) of the neuron model realizing the linearly inseparable equivalence function (also highlighted in Table 1), i.e. for the target vector T = [1 0 0 1]^T. In this case, at the neuron output the following values are obtained for the boundary (binary, 0 and 1) inputs: y(0,0) ≈ 1.024, y(0,1) ≈ 0.095, y(1,0) ≈ 0.095, y(1,1) ≈ 1.024.
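This substitution can be run exhaustively; a short sketch (reusing neuron() from the previous listing) verifies that for every one of the 16 target vectors the output, rounded at the four vertices of the square, reproduces T exactly:

```python
from itertools import product

corners = list(product((0, 1), repeat=2))
for T in product((0, 1), repeat=4):
    realized = tuple(round(neuron(X, T)) for X in corners)
    assert realized == T, (T, realized)
print("all 16 Boolean functions of two variables are realized")
```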

Considering the case of several variables (x1, x2, ..., xn), one can verify that the proposed neuron simulation method allows approximating any Boolean function of n variables, because the components of the target vector T are specified at each vertex of the n-dimensional cube.

Thus, the proposed neuron simulation method allows the neuron model of Fig. 1 to realize any desired logical function from the full set of 2^(2^n) functions of n variables, i.e. it provides functional completeness of a neuron with n inputs.

In addition, to realize the required logical functions by the proposed method it is not necessary to carry out a training procedure for the neuron model, which does not always guarantee convergence and successful escape from local minima. In the proposed neuron simulation method, the training operation is replaced by a single use of the vector of desired values T in blocks W1...W2^n (Fig. 1). The target vector T is used once (rather than many times, as in gradient training procedures), and the method thereby provides both functional completeness of a neuron with n inputs and a guaranteed "trained" state of the neuron model for any of the target vectors T.

Figure 1 shows the functional diagram of the device (neuron model) implementing the proposed neuron simulation method. The device (Fig. 1) includes an input that receives the n-dimensional vector X = [x1 x2 ... xn]^T. This vector is simultaneously fed to the inputs of blocks R1...R2^n (Fig. 1), which compute the squared Euclidean distances from the vector X to each vertex of the n-dimensional cube, respectively. The outputs of blocks R1...R2^n are connected to the inputs of blocks W1...W2^n, respectively (Fig. 1). Blocks W1...W2^n have second inputs, to which the components of the target vector T = [t1 t2 ... t_{2^n}]^T are fed, respectively.

The outputs of blocks W1...W2^n are the inputs of the adder S (Fig. 1), whose output is connected to the input of the activation unit F, whose output in turn is the output of the neuron model (Fig. 1), at which the function y(x1, x2, ..., xn) is obtained. As the weighting blocks (R1...R2^n and W1...W2^n), the adder (S) and the activation unit (F) in Fig. 1, various known technical devices can be used, for example [9], that implement the specified mathematical functions (1), (2), (3) and (4), taking into account the sign of the signals and the continuity of x1, x2, ..., xn from 0 to 1.

The neuron model operates by the proposed method as follows. The input vector X = [x1 x2 ... xn]^T is fed to the device input. This vector is simultaneously fed to each of the blocks R1...R2^n (Fig. 1), in which formula (1) computes the squared Euclidean distances r_j (j = 1...2^n) from the vector X to each of the n-dimensional vectors P_j representing the coordinates of all 2^n vertices of the unit n-dimensional cube. From the outputs of blocks R1...R2^n (Fig. 1), the squared distances are fed to the corresponding inputs of blocks W1...W2^n, in which each of the 2^n distances r_j is transformed in accordance with formula (2). The components of the vector of desired values T of the realized logical function are fed to the second inputs of blocks W1...W2^n, respectively. From the outputs of blocks W1...W2^n (Fig. 1), the signals are fed to the input of the adder S, where they are summed by formula (3). From the adder output, the signal is fed to the input of the activation unit F, which transforms it by the AF (4). From the output of block F, the signal is fed to the output y, at which the desired function y(x1, x2, ..., xn) is obtained. In the case of two variables (n = 2), the neuron model of the proposed method works as follows. The combination of actions (1)-(4) defines the mathematical model of the neuron in the form:

y(x_1, x_2) = a \sum_{j=1}^{4} t_j (r_j + a)^{-1},   (5)

where r_1 = x_1^2 + x_2^2, r_2 = x_1^2 + (x_2 - 1)^2, r_3 = (x_1 - 1)^2 + x_2^2, r_4 = (x_1 - 1)^2 + (x_2 - 1)^2.
In this case, for example, for the linearly inseparable nonequivalence function (T = [0 1 1 0]^T, highlighted in Table 1), the mathematical model (5) of the neuron takes the form:

y(x_1, x_2) = 0.05 \left[ \left( x_1^2 + (x_2 - 1)^2 + 0.05 \right)^{-1} + \left( (x_1 - 1)^2 + x_2^2 + 0.05 \right)^{-1} \right].   (6)
Substituting into (6) the binary (0 and 1) input combinations (two-dimensional vectors X), we obtain the following exact output values: y(0,0) = 0.095238, y(0,1) = 1.0243902, y(1,0) = 1.0243902, y(1,1) = 0.095238, i.e. the values of the output function y(x1, x2) of the neuron model shown in Fig. 5.

Substituting into the mathematical model (5) of the neuron any other (of the remaining 15) target vectors from Table 1, one can verify that, in accordance with this method, all of them are quite accurately approximated (i.e. realized) by this neuron model.

Forming the mathematical model of the neuron for the case of several variables (x1, x2, ..., xn), one can verify that the proposed neuron simulation method allows approximating any Boolean function of n variables, because the components of the target vector T are specified at each vertex of the n-dimensional cube. However, as n grows, the accuracy of the approximation will depend on the choice of the value of the coefficient a.
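A sketch of this dependence (Python assumed; the all-ones target vector is taken as the worst case, since every vertex then receives spill-over from all its neighbors): the worst-case deviation of y from t_j over the 2^n vertices falls as a decreases and grows with n:

```python
from itertools import product

def max_vertex_error(n, a):
    vertices = list(product((0, 1), repeat=n))
    T = [1] * len(vertices)                  # all-ones target vector
    worst = 0.0
    for X in vertices:
        r = [sum((x - p) ** 2 for x, p in zip(X, P)) for P in vertices]
        y = a * sum(t / (rj + a) for t, rj in zip(T, r))
        worst = max(worst, abs(y - 1.0))
    return worst

for a in (0.2, 0.05, 0.01):
    print(a, round(max_vertex_error(4, a), 4))
# n = 4: -> 0.2 1.5097, 0.05 0.4147, 0.01 0.0852
```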

Thus, the neuron model also shows that it realizes (approximates) any Boolean function of n variables for a single use of the vector of desired values T, without any learning mechanism.

Thus, the proposed method allows obtaining the following technical effects:

1) the neuron model can realize any given Boolean logical function from the full set of 2^(2^n) functions of n variables, i.e. functional (approximation) completeness of the neuron model is provided;

2) there is no need for a rather long learning mechanism (with repeated use of the target vector T) that, moreover, guarantees neither convergence nor successful escape from local minima; this learning mechanism is replaced by a single-use operation with the target vector T;

3) the arsenal of technical solutions [1-6] ensuring functional completeness of models of neural structures is expanded.

These technical effects make it possible to extend the approximation (functional) capabilities of neural elements and to replace the time-consuming, and not always successful, training procedure by a single use of the vector of desired values of the specified function.

Sources of information

1. Wasserman P. Neural Computing: Theory and Practice. Van Nostrand Reinhold, 1989. (Russian translation: Uossermen F. Neurocomputer Engineering: Theory and Practice. Moscow: Mir, 1992. 240 p.)

2. Kruglov V.V., Borisov V.V. Artificial Neural Networks: Theory and Practice. Moscow: Hot Line - Telecom, 2001. 382 p.

3. Dyakonov V., Kruglov V. Mathematical Expansion Packs for MATLAB: Special Reference. St. Petersburg: Piter, 2001. 480 p.

4. Rutkowska D., Pilinski M., Rutkowski L. Sieci neuronowe, algorytmy genetyczne i systemy rozmyte. Wydawnictwo Naukowe PWN, Warszawa-Lodz, 1999. (Russian translation: Rutkovskaya D., Pilinsky M., Rutkovsky L. Neural Networks, Genetic Algorithms and Fuzzy Systems. Moscow: Hot Line - Telecom, 2004. 452 p.)

5. Osowski S. Sieci neuronowe do przetwarzania informacji. Oficyna Wydawnicza Politechniki Warszawskiej, Warszawa, 2000. (Russian translation: Osovsky S. Neural Networks for Information Processing. Moscow: Finance and Statistics, 2004. 344 p.)

6. Terekhov S.A. Neural network information models of complex engineering systems. In: Neuroinformatics / Gorban A.N., Dunin-Barkovsky V.L. et al. Novosibirsk: Nauka, Siberian Publishing Firm RAS, 1998. P. ...-141.

7. Terechkov A.M. A homogeneous multilayer feedforward neural network with local connections and a conditioned-reflex learning mechanism based on two-threshold equilibrium of the neural elements. Izvestiya of Tomsk Polytechnic University, 2007, Vol. ..., No. 1. P. ...-211.

8. ... M.I., Nedelin E.A. A neuron model implementing the logical nonequivalence function. RF Patent No. 2269155, IPC G06G 7/06; appl. No. 2003125926/09 of 22.08.2003; publ. 27.01.2006, Bull. No. 3. 7 p.

9. Galushkin A.I. Neurocomputers. Book 3: Textbook for universities / General editor A.I. Galushkin. Moscow: IPRZhR, 2000. 528 p.

The method of modeling a neuron with inputs x1, x2, ..., xn consists in that the input signals x1, x2, ..., xn, being the components of the vector X and taking continuous values from 0 to 1, are converted in weighting units and summed in an adder, whose output s is the input of an activation unit, in which s is transformed by the activation function y = f(s), where y is the output of the neuron model; the method is characterized in that, to ensure the realization (obtaining at the output) of any given Boolean logical function from the full set of 2^(2^n) functions of n variables, and to replace the time-consuming and not always convergent learning mechanism by a single-use operation with the target vector T of the given realizable function, the input signals are converted in the weighting units according to the following rule: the squared Euclidean distances r_j from the vector X to each of the n-dimensional vectors P_j representing the coordinates of all 2^n vertices of the unit n-dimensional cube are computed:

r_j = \sum_{i=1}^{n} (x_i - p_{j,i})^2, \quad j = 1...2^n;

then each of the 2^n distances is summed with a coefficient a, where a = 0.05, raised to the power -1 and multiplied respectively by the components t_j of the target vector T, which represents the set of required values (t_j ∈ {0, 1}) at the 2^n vertices of the unit n-dimensional cube for the given realizable function:

w_j = t_j (r_j + a)^{-1};

then the coefficients w_j are summed in the adder:

s = \sum_{j=1}^{2^n} w_j,

and s is converted in the activation unit by the function y = f(s) = a·s.



 
