RussianPatents.com
Method for intelligent information processing in neural network. RU patent 2483356.
IPC classes for Russian patent RU 2483356 (Method for intelligent information processing in neural network):
FIELD: information technology.

SUBSTANCE: signals are processed in a multilayer recurrent network with feedback loops whose time delay for single images is shorter than the immunity (refractory) period of the network neurons after their excitation, with shifting of the aggregates of single images along the layers. When the aggregates of single images are transferred from layer to layer, they are partially reflected from the receiving layer back to the transmitting layer, and these partially reflected aggregates of single images partially erase the stored recognition results from the network elements.

EFFECT: increased intelligence and stability of information processing.

5 dwg
The invention relates to bionics and the modeling of functional aspects of the human brain, and can be used in computing to build intelligent machines and systems. Many methods of intelligent information processing in neural networks are known; they are used for approximation, classification, pattern recognition, speech processing, forecasting, identification, assessment of production processes, associative control and other creative tasks (Haykin S. Neural Networks: A Comprehensive Foundation, 2nd ed., translated from English. Moscow: Williams Publishing, 2006, 1104 p.; Galushkin A.I. Theory of Neural Networks, Book 1: Textbook for universities. Moscow, 2000, 416 p.). However, all of them offer only narrow capabilities for memorizing structurally complex time-varying signals, recognizing them, associating them with other signals, recovering them from the network's memory and reproducing them in their original form. These known methods do not allow a single neural network to process information for a wide range of tasks. They also have low stability. The closest analogue of the invention is a method of intelligent information processing in a multilayer recurrent neural network with feedback loops whose time delay for single images is shorter than the immunity period of the network neurons after their excitation. According to that method, a signal is fed into the network after decomposition into components in a basis matched with the input layer of the network. Each component, before being fed into the network, is converted into a sequence of single images whose repetition frequency is a preset function of the component's amplitude. The signal is represented in the network as consecutive aggregates of single images and processed in accordance with predefined recognition rules that take the fed-back recognition results into account.
When the aggregates of single images are transferred from layer to layer, they are shifted along the layers with account taken of the current state of the layers. Recognition results are memorized by the elements of the network. The processing results are the consecutive aggregates of single images in the output layer of the network after they are converted back into the corresponding initial signals (Osipov V.Y. Associative intelligent machine. Information Technologies and Computing Systems, No. 2, 2010, pp. 59-67). The disadvantage of this method is the low stability of processing a continuous flow of signals: the network may become overloaded with information, with the loss of all stored recognition results. This limits the functionality of the method for intelligent information processing. The objective of the invention is to expand the functional capabilities of intelligent information processing in neural networks. The technical result of the invention is increased intelligence and stability of information processing in neural networks.
This task is solved as follows. In the known method of intelligent information processing in a neural network, which uses a multilayer recurrent network with feedback loops whose time delay for single images is shorter than the immunity period of the network neurons after their excitation, the signal is decomposed into components in a basis matched with the input layer of the network; each component is converted into a sequence of single images whose repetition frequency is a preset function of the component's amplitude; the signal is represented as consecutive aggregates of single images and processed in accordance with predefined recognition rules that take the fed-back recognition results into account; the aggregates of single images are shifted along the layers with account taken of the current layer states; the recognition results are memorized by the elements of the network; and the processing results are the consecutive aggregates of single images in the output layer of the network after conversion back into the corresponding initial signals. According to the invention, when the aggregates of single images are transferred from layer to layer, they are partially reflected from the receiving layer back to the transmitting layer, and these partially reflected aggregates of single images partially erase the stored recognition results from the network elements. This partial reflection and partial erasure of stored recognition results is the essential distinguishing feature of the invention. The essence of the invention is illustrated by figures 1 to 4.
Figure 1 shows a block diagram of a two-layer recurrent neural network implementing the proposed method, where 1 and 5 are the first and second layers of neurons; 2 and 6 are the first and second blocks of unit delays; 3 and 4 are the first and second blocks of dynamic synapses; 7 is the synapse control unit. Figure 2 reveals the structural features of the network blocks of figure 1, where 3.1 to 3.m are the dynamic synapses of the first block 3 of dynamic synapses. Figure 3 shows the logical structure of the first layer of the network (divided into 50 logical fields, 25 per row) during information processing by the proposed method; the arrows indicate the direction in which the aggregates of single images advance along the layer. Figure 4a shows, for the case without partial erasure of memorized recognition results from the network elements, the dependence of the total synapse weight W_Σ(t) on the current time t, given that every four time steps another aggregate of single images from a defined sequence is fed into the network, where curve 1 is for t = 300 to 600 and curve 2 is for t = 0 to 300. Figure 4b shows the dependence of the total synapse weight W_Σ(t) for the same input stream, but with partial erasure of memorized recognition results from the network elements, where curve 1 is for t = 300 to 600 and curve 2 is for t = 0 to 300. The method is as follows. Consider it using the example of the neural network whose block diagram is shown in figure 1. The input signal is decomposed into components in a basis matched with the first layer 1 of the network; each component is converted into a sequence of single images whose repetition frequency is a preset function of the component's amplitude, and is fed to this layer. After such a signal is fed to the first input of the first layer of neurons, consecutive aggregates of single images appear at its output, carrying all the information about the input signal.
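The conversion of a signal component into a sequence of single images, with a repetition frequency that is a preset function of the component amplitude, can be sketched as follows. This is a minimal illustration; the linear rate function and all identifiers are assumptions, not taken from the patent:

```python
def component_to_pulses(amplitude, duration_steps, rate_fn):
    """Convert one signal component into a pulse train (sequence of single
    images) whose repetition frequency is a preset function of the
    component amplitude."""
    period = max(1, round(1.0 / rate_fn(amplitude)))  # steps between pulses
    return [1 if t % period == 0 else 0 for t in range(duration_steps)]

# Hypothetical linear rate function: larger amplitude -> more frequent pulses.
rate = lambda a: min(1.0, 0.1 + 0.2 * a)

weak = component_to_pulses(0.5, 20, rate)    # low amplitude, sparse pulses
strong = component_to_pulses(4.0, 20, rate)  # high amplitude, dense pulses
```

A higher-amplitude component thus produces a denser stream of single images over the same interval.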
After a delay in the first block of unit delays 2, a consecutive aggregate of single images arrives at the first block 3 of dynamic synapses. Each single image from the current aggregate is presented simultaneously in the first block of dynamic synapses 3 to the set of its dynamic synapses, which connect each neuron that generated the single image, in the general case, with all neurons of the second layer of neurons 5. As the single images pass through these synapses in the forward direction, they pick up information about previous influences and leave information about their own appearance by increasing the synapse weights. The outputs of the synapses feed the single images to the inputs of the neurons of the second layer. The structural features of the network blocks of figure 1 are shown in figure 2. According to the invention, the aggregates of single images arriving at the inputs of the neurons of the second layer 5 are partially reflected from them. This is due to the charging and discharging of the capacitors C5.1 to C5.n (figure 2). These partially reflected aggregates return to the synapses of the first block of dynamic synapses and, according to the invention, partially erase the recognition results memorized by them, that is, reduce the synapse weights. The non-reflected part of the aggregates of single images acts directly on the neurons of the second layer and excites them if the excitation conditions are met. Suppose that the charge Q transferred from each unit-delay element 2.1 to 2.n (figure 2) receiving single images (pulses) to the first block of dynamic synapses 3 is constant and does not depend on the changing weights (conductivities) of the synapses.
One can then operate with a certain average current amplitude Ī realizing the charge Q over a certain average time interval τ. With this in mind, the amplitude of the pulse current passing in the forward direction through each synapse of the first block of dynamic synapses 3 at time t can be defined as

I_ij(t) = Ī · δ_i(t) · w_ij(t) / Σ_{j=1..n} w_ij(t),   (1)

where w_ij(t) is the direct weight (conductivity) of the synapse connecting the i-th neuron of the first layer with the j-th neuron of the second layer; δ_i(t) is a function taking the value 1 when the i-th neuron is excited and 0 otherwise; n is the number of neurons in each layer. The input of each j-th neuron of the second layer of neurons 5 receives from the excited neurons of the first layer 1 the total discharge current

I_jΣ(t) = Σ_{i=1..n} I_ij(t).   (2)

The single images (pulses) arriving at the inputs of the j-th neurons are partially absorbed by these neurons, which have input conductivity w_in, and partially return to the transmitting layer with the reflection coefficient

K_j(t) = I_jΣ^refl(t) / I_jΣ(t) = (z_jin − z_jc(t)) / (z_jin + z_jc(t)),   (3)

where I_jΣ^refl(t) is the total current of single images reflected from the j-th neuron; z_jin is the integrated input impedance of the neuron, which in the particular case can be set equal to an active resistance R_in; z_jc(t) is the wave impedance of the circuit connecting the input impedance of the j-th neuron with the neurons of the transmitting layer. In general it equals

z_jc(t) = sqrt( (R_jΣ(t) + i·2πf·L) / (G_jΣ + i·2πf·C) ),   (4)

where R_jΣ(t) is the total resistance of the synapses in the chain connecting the input resistance R_in of the j-th neuron with the neurons of the transmitting layer, determined by the reverse weights (conductivities) of the synapses; G_jΣ is the total conductivity characteristic of this chain as a long line; L and C are the inductance and capacitance of this chain as a long line; f is the oscillation frequency. In the particular case, when G_jΣ = 0 and L = 0, z_jc(t) = sqrt( R_jΣ(t)·τ_u / (2π·C) ), where τ_u is the duration of a single image (pulse).
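The partial reflection described above can be illustrated numerically with the standard long-line impedance-mismatch formula; the component values below are assumptions for illustration, not taken from the patent:

```python
import math

def reflection_coefficient(z_in, z_c):
    """Current reflection at the junction between the synapse chain (wave
    impedance z_c) and the neuron input (impedance z_in): the standard
    long-line mismatch formula, consistent with the description above."""
    return (z_in - z_c) / (z_in + z_c)

def wave_impedance(r_total, c, f):
    """Particular case G = 0, L = 0: the chain behaves as an RC long line."""
    return math.sqrt(r_total / (2 * math.pi * f * c))

z_c = wave_impedance(r_total=100.0, c=1e-6, f=1e3)   # assumed values
k = reflection_coefficient(z_in=200.0, z_c=z_c)      # here z_in > z_c, so k > 0
i_total = 2.0                  # total discharge current at the neuron input
i_reflected = k * i_total      # erase current sent back to the synapses
i_through = i_total - i_reflected  # current that can excite the neuron
```

Because z_jc depends on the synapse resistances, which change as results are memorized and erased, the split between the through current and the erase current varies over time.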
With (4) taken into account, the reflection coefficient for the current can be written as

K_j(t) = (R_in − sqrt(R_jΣ(t)·τ_u/(2π·C))) / (R_in + sqrt(R_jΣ(t)·τ_u/(2π·C))).   (5)

From (5) it is seen that the change of K_j(t) in time depends only on the change of R_jΣ(t). Knowing K_j(t) and I_jΣ(t), one can find the total current of single images reflected from the j-th neuron,

I_jΣ^refl(t) = K_j(t) · I_jΣ(t).   (6)

Accordingly, the current flowing through the input impedance of the j-th neuron is I_j(t) = I_jΣ(t) − I_jΣ^refl(t). When current flows through the synapses in the reverse direction, the recognition results stored in them are partially erased; we call this current the erase current. The peculiarity of the dynamic synapses of the network is as follows. The weights (in electrical terms, the conductivities) w_ij(t) of the synapses are determined as the product of their weighting coefficients k_ij(t) and the attenuation function β(r_ij(t)):

w_ij(t) = k_ij(t) · β(r_ij(t)),   (7)
w̄_ij(t) = k̄_ij(t) · β(r_ij(t)),   (8)

so direct and reverse synapse weights are distinguished. Each connection (synapse) has an attenuation function β(r_ij) for single images that depends on r_ij, the distance between the neurons connected through the synapse (the distance between them on the plane of one of the layers). It is assumed that the distance between the interacting layers of the neural network tends to zero. The function β(r_ij) may be defined as

β(r_ij) = (1 + α·r_ij)^(−1/h),   (9)

where h is the degree of the root: the higher it is, the wider the associative spatial interaction in the network; α is a positive coefficient; N is the number of neurons in each layer of the network. The quantity r_ij entering (9), expressed in units of neurons and taking into account the possible spatial shifts of the aggregates of single images along the layers of the network, can be written as

r_ij = sqrt( (Δx_ij + n_ij·d)² + (Δy_ij + m_ij·q)² ),   (10)

n_ij = 0, ±1, ..., ±(L−1); m_ij = 0, ±1, ..., ±(M−1), where Δx_ij and Δy_ij are the projections, on the X and Y axes, of the connection of the j-th neuron with the i-th neuron without spatial shifts; d and q are the values of a unit shift along the X and Y coordinates, respectively; L and M are the numbers of columns and rows, respectively, into which each layer of the neural network is divided by the shifts.
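The distance r_ij with the spatial shift terms n_ij·d and m_ij·q, as described above, can be sketched as follows (illustrative values; all names are assumptions):

```python
import math

def synapse_distance(dx, dy, n_shift, m_shift, d, q):
    """Distance r_ij between connected neurons on the layer plane, including
    the spatial shift terms n_ij*d and m_ij*q of the aggregates, as
    reconstructed from the definitions in the text."""
    return math.hypot(dx + n_shift * d, dy + m_shift * q)

r_unshifted = synapse_distance(3.0, 4.0, 0, 0, d=2.0, q=1.0)   # no shift
r_shifted = synapse_distance(3.0, 4.0, -1, 0, d=2.0, q=1.0)    # shift along X
```

Choosing the shift indices changes r_ij and hence, through the attenuation function, the effective weights, which is how the control unit steers the aggregates along the layers.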
By changing Δx_ij and Δy_ij by the corresponding values n_ij·d and m_ij·q, one can change r_ij and the direction of movement of the aggregates of single images along the layers of the network, that is, perform spatial shifts of these aggregates along the layers. Such shifts of the aggregates of single images along the layers are implemented by controlling the dynamic synapses from the synapse control unit 7, taking into account the current layer states. The neurons of the layers can be in the states of waiting, excitation, or immunity after excitation. When aggregates of single images are transferred from the transmitting layer to the receiving layer, spatial shifts are performed whenever there is a conflict between neurons of the interacting layers having the same ordinal numbers. A conflict arises when single images from excited neurons arrive at the inputs of neurons that are in the state of immunity after excitation. The weighting coefficients k_ij(t) and k̄_ij(t) in (7) and (8) change depending on the action of single images on the synapses and serve as the elements of the long-term memory of the network. For the direct weight (direct conductivity) w_ij(t) of each synapse, the weighting coefficient can be defined as

k_ij(t) = γ · g_ij(t),   (11)

and for the reverse weight (reverse conductivity), for the case g_ij(t) ≥ g_0, as

k̄_ij(t) = γ · (g_ij(t) − g_0),   (12)

where γ is a constant factor and g_0 = const. In the other cases, when g_ij(t) < g_0, the reverse weight (conductivity) is zero. The value g_ij(t) in (11) and (12) can be interpreted as the number of single images memorized by the synapse by time t, and (g_ij(t) − g_0) as the number of memorized single images that can be erased from the synapse. Taking (11) and (12) into account, the direct and reverse weights (conductivities) of the synapses can be defined by the formulas

w_ij(t) = γ · g_ij(t) · β(r_ij(t)),   (13)
w̄_ij(t) = γ · (g_ij(t) − g_0) · β(r_ij(t)).   (14)

These weights are uniquely related to each other:

w̄_ij(t) = w_ij(t) − γ · g_0 · β(r_ij(t)).   (15)

From the value of w_ij(t) one can determine w̄_ij(t) and, conversely, from w̄_ij(t) one can find w_ij(t).
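The weight bookkeeping of formulas (11)-(14) can be sketched as follows; the constants and the exact form of the attenuation function are assumptions for illustration only:

```python
GAMMA, G0 = 0.01, 5.0   # assumed constants (gamma and g_0 of the text)

def beta(r, alpha=0.5, h=2):
    """Placeholder attenuation function decreasing with distance r via an
    h-th-degree root, as described in the text (exact form assumed)."""
    return 1.0 / (1.0 + alpha * r) ** (1.0 / h)

def direct_weight(g, r):
    # w_ij = gamma * g_ij * beta(r_ij): all memorized images contribute
    return GAMMA * g * beta(r)

def reverse_weight(g, r):
    # reverse weight exists only for the erasable part g_ij - g_0
    return GAMMA * max(0.0, g - G0) * beta(r)

g, r = 12.0, 3.0
w = direct_weight(g, r)       # direct conductivity of the synapse
w_rev = reverse_weight(g, r)  # reverse conductivity of the synapse
```

The two weights always differ by the non-erasable reserve γ·g_0·β(r_ij), so either can be recovered from the other.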
When current flows from excited neurons through some synapses in the forward direction, the current recognition results are memorized by them. Such memorization corresponds to an increase in the direct and reverse conductivities of these synapses. In this case the value g_ij(t) entering (13) and (14) can be calculated as

g_ij(t) = g_ij(t − Δt) + B(I_ij(t)) · Δt,   (16)

where I_ij(t) is the current flowing through the synapse in the forward direction, determined by (1). When current flows through other synapses in the reverse direction, their direct and reverse weights (reverse and direct conductivities) decrease. The value g_ij(t) entering (13) and (14), which depends on the erase current (6), can then be calculated, provided that g_ij(t − Δt) > g_0, as

g_ij(t) = g_ij(t − Δt) − D(I_ij^refl(t)) · Δt,   (17)

and otherwise g_ij(t) = g_ij(t − Δt). In the particular case, B(I_ij(t)) and D(I_ij^refl(t)) can be defined as B(I_ij(t)) = A·I_ij(t) and D(I_ij^refl(t)) = D·I_ij^refl(t), where A and D are constants, and D must be greater than A. The greater the current flowing from excited neurons through some synapses in the forward direction, the higher the level of partial erasure of previously memorized recognition results from other synapses by the partially reflected currents flowing through them in the reverse direction. The level of partial erasure of the recognition results also depends on the input conductivity w_in = 1/R_in of the excited neurons. When the current I_j(t) flowing through the input impedance of the j-th neuron of the second layer 5 exceeds the threshold I_thr, the neuron is excited: it outputs a single image (pulse) and enters the state of immunity. Each neuron stays in the state of immunity for the same amount of time, which is longer than the time delay of single images in the two-layer feedback circuits of the network whose block diagram is shown in figure 1. All these provisions also hold for the transfer of aggregates of single images from the second layer to the first. The consecutive aggregates of single images from the output of the second layer 5, after a delay in block 6, arrive at the second block of dynamic synapses 4.
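The excitation-with-immunity behaviour of a neuron described above can be sketched as a minimal discrete-time model; the threshold, the immunity duration and all names are assumptions for illustration:

```python
def run_neuron(currents, threshold, immunity_steps):
    """Threshold neuron with an immunity (refractory) state: it emits a single
    image (pulse) when the through current exceeds the threshold, then ignores
    its inputs for immunity_steps steps, as the method requires."""
    out, immune_until = [], -1
    for t, cur in enumerate(currents):
        if t <= immune_until:
            out.append(0)           # immune: arriving images cause conflicts
        elif cur > threshold:
            out.append(1)           # excited: emit a single image (pulse)
            immune_until = t + immunity_steps
        else:
            out.append(0)           # waiting
    return out

spikes = run_neuron([0.2, 1.5, 1.5, 1.5, 0.1, 1.5], threshold=1.0,
                    immunity_steps=2)
```

While immune, the neuron ignores even super-threshold currents, which is exactly the situation that triggers the spatial shifts of arriving aggregates.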
In block 4 they are processed similarly to block 3, shifted along the first layer depending on the states of the first and second layers, and arrive at the second input of the first layer of neurons 1. As with the second layer, they are partially reflected from the first layer. These partially reflected aggregates partially erase the recognition results memorized by the synapses of the second block of dynamic synapses 4. With this taken into account, the direct and fed-back aggregates of single images arriving at the first layer of neurons 1 are correctly linked in it, are recognized, and generate at the output new aggregates of single images carrying information both about the current signals and about previously memorized network signals associated with them. Appropriate shifts of the aggregates of single images along the layers prevent the fed-back recognition results from being superimposed on the direct aggregates and on the aggregates moving along the layers in the forward direction. Note that during signal processing in the network the set of excited neurons changes continuously, and memorization accordingly alternates with partial erasure of recognition results from the network synapses. Under continuous information processing in the neural network by the proposed method, starting from a certain moment the number of currently memorized single images becomes equal to the number of previously memorized single images being erased. After the current information has been memorized on some elements of the network, the recognition results memorized on other synapses are partially erased, freeing memory for storing subsequent information. This results in steady, uninterrupted functioning of the network. With the proposed partial erasure, previously memorized recognition results are not erased completely; the long-term results in particular are retained best. In addition, the proposed partial erasure of previously memorized results stimulates the search for the missing information, in order to balance the level of memorization with the level of erasure.
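The balance between memorization and partial erasure that stabilizes the network can be illustrated with a toy model; the rates are assumed, not derived from the patent:

```python
def simulate_total_memory(steps, input_per_step, erase_rate):
    """Toy balance of memorization vs partial erasure: each step memorizes
    input_per_step single images and erases a fixed fraction of what is
    already stored; the total stabilizes where the two rates are equal."""
    g_total, history = 0.0, []
    for _ in range(steps):
        g_total += input_per_step          # memorize the current images
        g_total -= erase_rate * g_total    # partial erasure by reflections
        history.append(g_total)
    return history

h = simulate_total_memory(steps=500, input_per_step=10.0, erase_rate=0.05)
# fixed point: g* = input_per_step * (1 - erase_rate) / erase_rate = 190
```

Without the erasure term the total would grow without bound, which is the overload regime of the prior-art method; with it, the stored total converges to a fixed point.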
This, along with increased stability, raises the level of intelligence of information processing in neural networks. Owing to the priority of short connections in the neural network, a one-to-one correspondence is easily established between the components of the input and output signals of the network. Using this correspondence, the ordinal numbers of the neurons generating the sequences of single images in the output layer determine the frequency and spatial characteristics of the components of the original signal. The repetition frequencies of the sequences and the relative delays of single images determine, respectively, the amplitudes and phases of the components of the original signal. The components of the initial signals are then reproduced and, by their addition, the original signals, for example speech, visual and other signals, are restored. To determine the amplitudes of the components of the original signal, the current number of single images falling within a predefined time interval is determined. To reproduce the components of the initial signals, their digital synthesis from the known parameters is applicable, for example. To demonstrate the advantages of the proposed method over the known solutions, a software model of a neural network implementing this method was developed in the MATLAB environment. Each layer of the neural network consisted of 2100 neurons. For the spatial shifts of the aggregates of single images transferred from layer to layer, each layer was divided into 50 logical fields of 42 neurons each. Consecutive aggregates of single images were fed into the network through the first logical field and taken from the last field (figure 3). The sequence of these aggregates encoded lines from Alexander Pushkin's novel "Eugene Onegin": "My uncle, a man of most honest principles, when he fell seriously ill, forced himself to be respected and could not have thought of anything better...".
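Recovering a component amplitude by counting single images within a predefined time interval can be sketched as follows; the inverse rate function is an assumption mirroring a hypothetical linear encoder, not taken from the patent:

```python
def amplitude_from_pulses(pulses, window, rate_to_amplitude):
    """Recover component amplitudes from an output pulse stream by counting
    single images in a predefined time interval, as described above."""
    counts = [sum(pulses[i:i + window]) for i in range(0, len(pulses), window)]
    return [rate_to_amplitude(c / window) for c in counts]

# Hypothetical inverse of a linear rate code: rate = 0.1 + 0.2 * amplitude.
inv = lambda rate: max(0.0, (rate - 0.1) / 0.2)

pulses = [1, 0, 0, 0, 1, 0, 0, 0, 1, 0] * 2   # three pulses per 10-step window
amps = amplitude_from_pulses(pulses, window=10, rate_to_amplitude=inv)
```

The recovered amplitudes can then drive digital synthesis of the signal components, whose sum restores the original signal.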
In total, 150 aggregates of single images were fed into the network step by step, including empty aggregates corresponding to the spaces between words. The aggregates were fed into the network with a period of four time steps. At each time step the total weight W_Σ(t) of the network synapses was estimated as

W_Σ(t) = Σ_{k=1..2} Σ_{i=1..n} Σ_{j=1..n} w_kij(t),

where w_kij(t) is the direct weight of the ij-th synapse in the k-th block of dynamic synapses of the network (figure 1); n is the number of neurons in each layer of the network. The results of the estimates are presented in figures 4a and 4b. Analysis of figure 4a shows that, in the absence of partial erasure of memorized recognition results from the synapses by the aggregates of single images partially reflected from the receiving layers, the total weight of the synapses grows continuously. It tends to the limiting total weight W_Σ^lim, at which all the products k_kij(t)·β_kij tend to 1, where β_kij is the attenuation function for the ij-th synapse in the k-th block of dynamic synapses of the network. In this limiting case information can be neither stored in nor retrieved from the network: the network is overloaded. With the proposed method, which provides partial erasure of memorized recognition results by the aggregates of single images partially reflected from the receiving layers, the total weight W_Σ(t) of the synapses stabilizes over time at a level ensuring continuous, stable intelligent information processing in the neural network. Figure 4b shows that the growth of the total weight W_Σ(t) of the synapses slows down over time, in contrast to the previous case (figure 4a). It is advisable that the weight level off at W_Σ^lim/2, which provides the greatest memory capacity of the synapses. This can be achieved, for example, by a preliminary setting of appropriate values of the input conductivities of the neurons. Analysis of the state transitions of the layers of the network implementing the proposed method also confirmed the increased stability and intelligence of information processing and the broader functionality of the proposed method in comparison with the known solutions.
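The estimate of the total synapse weight W_Σ(t) can be sketched as follows, with tiny illustrative weight blocks standing in for the two n x n blocks of the 2100-neuron layers:

```python
def total_weight(blocks):
    """Total synapse weight W_sigma(t): the sum of the direct weights w_kij
    over all dynamic-synapse blocks, as in the estimate used for figure 4."""
    return sum(w for block in blocks for row in block for w in row)

# Two tiny 2x2 weight blocks in place of the two n x n blocks of figure 1.
block1 = [[0.1, 0.2], [0.0, 0.3]]
block2 = [[0.4, 0.0], [0.2, 0.1]]
w_sigma = total_weight([block1, block2])
```

Tracking this scalar over time is enough to distinguish the overload regime (unbounded growth toward W_Σ^lim) from the stabilized regime of the proposed method.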
The method of intelligent information processing in a neural network can be implemented using a well-known element base. Monostable (waiting) multivibrators are applicable as the neurons of the layers and as the delay elements in the delay blocks; the monostable multivibrators in the unit-delay blocks must be triggered on both the leading and the trailing edge of the input pulse. The blocks of dynamic synapses can be implemented using memristors and controlled attenuators. To ensure partial reflection of the aggregates of single images from the receiving layers of the network, the chains including the delay elements, the synapses and the input circuits of the neurons must have the characteristics of long lines. The control unit can be implemented as a specialized processor on programmable integrated circuits operating in accordance with the above rules. The method can also be implemented by emulating a two-layer recurrent neural network with controlled synapses on modern computers.

A method of intelligent information processing in a neural network, which consists in the following: in a multilayer recurrent network with feedback loops whose time delay for single images is shorter than the immunity period of the network neurons after their excitation, a signal is decomposed into components in a basis matched with the input layer of the network; each component is converted into a sequence of single images with a repetition frequency that is a preset function of the component's amplitude; the signal is represented as consecutive aggregates of single images processed in accordance with predefined recognition rules that take the fed-back recognition results into account; the aggregates of single images are shifted along the layers with account taken of the current layer states; the recognition results are memorized by the elements of the network; and the processing results are the consecutive aggregates of single images in the output layer of the network after conversion back into the corresponding initial signals; characterized in that, when the aggregates of single images are transferred from layer to layer, they are partially reflected from the receiving layer back to the transmitting layer, and these partially reflected aggregates of single images partially erase the stored recognition results from the network elements.
© 2013-2014 Russian business network RussianPatents.com - a Russian commercial information project for a worldwide audience. Foreign filing in English.