RussianPatents.com

Method for intelligent information processing in neural network. RU patent 2483356.

IPC classes for Russian patent Method for intelligent information processing in neural network (RU 2483356):

G06N3/02 - using neural network models (for adaptive control G05B0013000000; for image pattern matching G06K0009000000; for image data processing G06T0001400000; for phonetic pattern matching G10L0015160000)
Other patents in the same IPC classes:
Method of predicting time of onset and level of floods / 2480825
Method is realised by a mathematical model of a neural network, which is a hybrid network with cascade connection of the Kohonen distribution layer and a predicting two-layer perceptron network; the input vector of the mathematical model of the neural network includes daily values of the gradient of local temperature field expressed by coordinates, and corresponding values of the water level over the past eight days at the prediction point; before use, the mathematical model is trained on daily 20-year data, as a result of which the Kohonen layer accumulates information on classes of the course of the values of the gradient of the local temperature field and the level of water at the prediction point by selecting clusters of values corresponding to the observed weather patterns; during prediction, an input vector is transmitted to the input of the Kohonen layer, the input vector including daily values of the gradient of the local temperature field and corresponding values of the water level over the past eight days; at the output of the Kohonen layer, a vector of values is formed, which corresponds to a specified cluster, which is then transmitted to the input of the perceptron network which, based on approximation of the complex nonlinear relationship between values of the gradient of the local temperature field at the prediction point and the level of water, calculates predicted values of the gradient of the local temperature field and the level of water.
Local computer-aided ophthalmic microsurgical network for enucleation and evisceration operations / 2460117
Local computer-aided ophthalmic microsurgical network for enucleation and evisceration operations has formatting devices in form of a radial-annular structure consisting of a single set of automated workstations (AWS), which are synchronously and asynchronously operating, processing, converting, transmitting, analysing, synthesising hierarchical structures of an artificial neural network: diagnostic AWS, ophthalmic microsurgical AWS, AWS for successive operation steps, AWS for components, surgical operation AWS with anti-parallel forward and reverse flow of information in between. All anti-parallel main forward and refining reverse flow of information form one multigraph with not less than fourteen vertices, consisting of AWS, which function in parallel and synchronously, connected by not less than ninety six directed edges.
Automated ophthalmic microsurgeon workstation for pediatric surgery / 2460116
Automated ophthalmic microsurgeon workstation for pediatric surgery has formatting devices in form of closed neural chains consisting of interconnected identification unit, interpolation unit, extrapolation unit, a unit for estimating the next values of identified parameters, a unit for analysing amblyopia and strabismus, a decision unit, wherein inside each neural chain, each unit is connected to other units of that chain in series and in parallel, and each unit of one neural chain is connected to each of the units of other neural chains via forward and reverse data flow; wherein all opposite forward and reverse data streams form a single multigraph with not less than eighteen peaks joined by not less than one hundred and fifty three directed edges.
Local computer-aided ophthalmic microsurgical network for keratoplasty operations / 2459235
Local computer-aided ophthalmic microsurgical network for keratoplasty operations has formatting devices in form of a radial-annular structure consisting of a single set of automated workstations (AWS), which are synchronously and asynchronously operating, processing, converting, transmitting, analysing, synthesising hierarchical structures of an artificial neural network: diagnostic AWS, ophthalmic microsurgical AWS, AWS for successive operation steps, AWS for components, surgical operation AWS with anti-parallel forward and reverse flow of information in between, which form one multigraph with not less than fourteen vertices which function in parallel and synchronously with possibility of magnifying the structure and functional connections, connected by not less than nine oriented edges.
Method for automated robust classification of radio signals according to structure-time parameters / 2450356
Method comprises steps where educational data are clustered based on a competition principle which enables to build a set of standard vectors of structure-time parameters of known classes of radio signals with automatic estimation of a value σ for each standard vector; using these data to build a probabilistic neural network; using the built probabilistic neural network to estimate probability density distribution of known classes of radio signals in the region of an unknown (classified) signal and the unknown signal is identified with a class having the highest probability density distribution in the region of the unknown signal, characterised by that before building the probabilistic neural network.
Method of intellectual information processing in neuron network / 2427914
Method of intellectual information processing in a neuron network, which consists in supplying a signal into a double-layer network with feedbacks, and the signal is divided into components in the basis aligned with an input layer of the network, signal representation in the form of serial combinations of single images in compliance with the preset rules of its recognition with account of reverse recognition results, shifts of single images combinations along the layers with account of their current conditions, memorising results of recognition on network elements, using serial combinations of single images as processing results on the output layer of the network after the reverse conversion into corresponding initial signals, besides, shifts of combinations of single images along the layers are carried out with varied parameters of shifts, and combinations of single images are moved on along the layers following a spiral with variable diameter.
Local computer ophthalmic microsurgical operating network / 2417441
Local computer ophthalmic microsurgical operating system has formatting devices which are in form of radial-annular structures of an artificial neural network consisting of automated workstations with direct and reverse flow of information between them, where each automated workstation is fitted with at least one identification unit, each of which is a converting and transmitting element of the neural network, and less than one decision unit, each of which is an analysis and synthesis element of the neural network.
Method and device of intellectual processing of information in neural network / 2413304
Before feeding a signal to the network, each of its constituent parts is transformed into a sequence of single images with a repetition frequency given as a preset function of the constituent part; the signal is sent to a network with feedbacks closing loops with a single-image delay time shorter than the unreceptiveness time of the network neurons after their excitation; during transmission of packages of single images from layer to layer, they are shifted along the layers taking into account the current state of the layers. The device for intellectual processing of information in a neural network is as follows: a synapse control block is additionally introduced into a double-layer recurrent network with two blocks of dynamic synapses; its first input is connected to the output of the first layer, which is the device output; its second input is connected to the output of the second layer; its first output is connected to the second input of the second block of dynamic synapses, and its second output to the second input of the first block of dynamic synapses; first and second blocks of single delays provide leeway for the synapse control block to produce timely control impacts on the blocks of dynamic synapses.
Neuron network for main division of modular numbers / 2400813
Device comprises inlet registers of dividend and divisor, unit of division with zero balance, unit for conversion of residual code into code of generalised position system, read-only memory, unit of subtractor, multiplication unit, prohibition unit, units of comparison, key, summator.
Neural network operational device / 2394274
Neural network operational device has a first ring bus for transmitting instruction packets, local data memory units, local data availability memory, local instruction memory and local logic-in memory, which form local instruction packet memory units, operational units, engaged trigger circuits, a second ring bus for transmitting data packets, data input and output units, local results stacks, parallel input/output registers with availability bits, NAND logic element and the same number of inhibition gates as operational units, where the parallel data packet registers with availability bits and parallel input/output registers with availability bits form the second ring bus.
Method for distributing keys in large system distributed over significant space / 2273877
To realise the method, biometric images of the user are delivered once to the key distribution centre and recorded; a source artificial neural network and a user key are generated; the neural network is trained so that, when the biometric image of the user is fed to its input, it produces the user key at its outputs. The trained neural network is then delivered to the user's site via open communication channels, where it is recorded and used for forming the key by the user feeding his biometric image to its input. If the key must be changed, a new key is generated in the centre, the source artificial neural network is retrained on the new key, and the transformed neural network is delivered over an open communication channel to the user's site, where it is recorded and used for obtaining the new key.
Device for correcting errors in polynomial system of residue classes with usage of pseudo-orthogonal polynomials / 2294529
Device contains memory block, adder, error syndrome computation block represented by two-layered neuron network, containing neurons forming first layer and second layer of neurons.
Device for finding and correcting errors in codes of polynomial system of residue classes based on zeroing / 2300801
In accordance with the invention, a polynomial residual class system is used in which minimal polynomials p_i(z), i=1,2,...,7, determined in the extended Galois field GF(2^5), are used as the system base, together with neuron network technologies; modified zeroing constants determined in the current polynomial residual class system are also used in parallel.
Neuron network for dividing numbers represented in remainder class system / 2305312
In accordance to the invention, neuron network contains input layer, neuron networks of end ring for determining remainder of denominator and neuron networks of end ring for determining quotient of division of two numbers.
New mutation profiles of reverse transcriptase HIV-1 that correlate with phenotypic resistance to medicinal preparations / 2318213
The present innovation refers to new mutations, mutation combinations or those of mutation profiles of the genes of reverse transcriptase HIV-1 and/or protease that correlate with phenotypic resistance to medicinal preparations against HIV. To be more concrete, the present innovation deals with applying genotypic characteristics of HIV target population and subsequent correlation of the present information at phenotypic interpretation to establish the correlation of viral mutation profiles at resistance to medicinal preparations. The innovation in question enables to apply the data obtained in screening medicinal preparations applied against HIV-infection.
Device for modulo multiplication of polynomials / 2321883
In accordance with the invention, the effect is achieved through the use of a neuron network basis and the performance of operations in a polynomial system of residue classes of the expanded Galois field GF(2^v). The device contains eight neurons in the first layer, where the first four neurons of the first layer are meant for receiving the binary code of the first operand and the second four neurons of the first layer are meant for receiving the binary code of the second operand; sixteen neurons in the second layer, which realise the AND operation; and four neurons in the third layer, which realise the modulo 2 addition operation.
Multivariable predictive control of direct reduction process / 2346327
Invention relates to control systems. To predict value of parameter of the product being manufactured by means of neuronal network, it is suggested to account for the product manufacturing history in order to determine initial value of input neuron of neuronal network.
Method of protecting personal biometrical identification and authentification data / 2346397
Invention deals with techniques employed for protection of data in the course of biometrical identification and authentification and is highly relevant for e-transaction conclusion, e-trade and Internet-banking. In case of neuron network storage of the user personal key and biometrical parameters encryption of the neuron network description tables is done with segments of the key located within the neuron network proper.
Method of stabilising communication networks in conditions of disruptive external effects / 2379753
Invention relates to inspection technology and can be used in telecommunication systems. Values of disruptive effects on a communication line are monitored, while simultaneously evaluating transmission capacity of each type of communication line. The obtained values are scaled relative maximum values for each class of parametres. The given values are used to train artificial neural networks with radial basic elements for approximating dependency of efficiency of each type of communication line on values of disruptive effects. Matrices of synaptic weights of trained neural networks are filled and further installed in accordance with a specific designed communication network for evaluating transmission capacity from forecast values of disruptive effects, obtained with time delay. Based on the forecast values of transmission capacity for each communication line, the available network resource is allocated between subscribers taking into account their priority categories.
Neural network unit for interval determination / 2382413
Invention relates to automation and computer engineering, particularly to devices for determining the interval for breaking down the argument when designing pulse-frequency functional generators, and can be used in measuring equipment as a device for selecting frequency variation range. Availability of a memory unit for weight coefficients of an artificial neural network allows for using one neural network unit to determine the interval for realising several functional relationships between the output code and intervals for breaking down input frequency.

FIELD: information technology.

SUBSTANCE: during signal processing in a multilayer recurrent network with feedbacks, which close loops with a time delay of individual images shorter than the unreceptiveness time of the network neurons after their excitation, and with shifts of aggregates of individual images along the layers, during transmission of the aggregates of individual images from layer to layer, said aggregates are partially reflected from the receiving layer back to the transmitting layer, and the stored recognition results are partially erased from the network elements by these partially reflected aggregates of individual images.

EFFECT: increased intelligence and stability of information processing.

5 dwg

 

The invention relates to bionics and the modeling of functional aspects of the human brain, and can be used in computing when building intelligent machines and systems.

There are many known methods of intelligent information processing in neural networks, used for approximation, classification, pattern recognition, speech processing, forecasting, identification, assessment of production processes, associative control and the solution of other creative tasks (Haykin S. Neural Networks: A Comprehensive Course, 2nd ed., trans. from English. Moscow: Williams Publishing House, 2006. 1104 p.; Galushkin A.I. Theory of Neural Networks. Book 1: Textbook for Universities. Moscow, 2000. 416 p.).

However, all of them are characterised by narrow capabilities for memorising structurally complex time-varying signals, recognising them, associating them with other signals, recovering them from the memory of the network and reproducing them in their original form. These known methods do not allow information processing for a wide range of tasks within the same neural network. They also have low robustness.

The closest analogue of the invention is a method of intelligent information processing in a multilayer recurrent neural network with feedbacks, which close loops with a time delay of individual images shorter than the immunity (unreceptiveness) time of the network neurons after their excitation. According to this method, the signal is fed into the network after decomposition into components in a basis agreed with the input layer of the network. Each component, before being fed into the network, is converted into a sequence of single images with a repetition frequency given as a preset function of the amplitude of the component. The signal is represented in the network in the form of consecutive aggregates of individual images in accordance with predefined rules of its recognition, taking into account reverse recognition results. When the aggregates of individual images are transferred from layer to layer, they are shifted along the layers taking into account the current states of the latter. The recognition results are memorised on the elements of the network. Consecutive aggregates of individual images in the output layer of the network, after reverse conversion into the corresponding initial signals, are used as the processing results (Osipov V.Y. Associative intelligent machine / Information Technologies and Computing Systems. No.2, 2010, pp.59-67).

The disadvantage of this method is the low robustness of processing of a continuous flow of signals and the possibility of overloading the network with information, with the loss of all stored recognition results. This limits the functional capabilities of the method for intelligent information processing.

The objective of the invention is to expand the functional capabilities of intelligent information processing in neural networks.

The technical result of the invention consists in increasing the intelligence and stability of information processing in neural networks.

The task is solved as follows. In the known method of intelligent information processing in a neural network, which consists in feeding a signal into a multilayer recurrent network with feedbacks that close loops with a time delay of individual images shorter than the immunity time of the network neurons after their excitation, the signal is decomposed into components in a basis agreed with the input layer of the network; each component is converted into a sequence of single images with a repetition frequency given as a preset function of the amplitude of the component; the signal is represented in the form of consecutive aggregates of individual images in accordance with predefined rules of its recognition, taking into account reverse recognition results; the aggregates of individual images are shifted along the layers taking into account the current layer states; the recognition results are memorised on the elements of the network; and consecutive aggregates of individual images in the output layer of the network, after reverse conversion into the corresponding initial signals, are used as the processing results. According to the invention, when the aggregates of individual images are transferred from layer to layer, they are partially reflected from the receiving layer to the transmitting layer, and the recognition results stored on the network elements are partially erased by these partially reflected aggregates of individual images.

To extend the functional capabilities of intelligent information processing in the neural network, the author of the invention proposes, when transferring the aggregates of individual images from layer to layer, to partially reflect them from the receiving layer to the transmitting layer and to partially erase, by these partially reflected aggregates of individual images, the recognition results stored on the network elements; this is the essential feature of the invention.
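The claimed reflect-and-erase step can be illustrated with a small numerical sketch. All names, the erase rule and the constants below are illustrative assumptions for clarity, not the patented circuit:

```python
import numpy as np

def transfer_with_reflection(pulses, weights, K, erase_rate):
    """pulses: 0/1 vector over transmitting neurons.
    weights: (n_transmit, n_receive) synaptic weight matrix.
    K: reflection coefficient, 0 <= K < 1.
    Returns (current absorbed by receiving neurons, updated weights)."""
    forward = pulses @ weights          # current arriving at each receiver
    absorbed = (1.0 - K) * forward      # part that can excite the receivers
    reflected = K * forward             # part sent back to the transmitter
    # The reflected current flowing back through an active synapse
    # partially erases the result stored on it (weight reduction).
    erase = erase_rate * np.outer(pulses, reflected)
    new_weights = np.clip(weights - erase, 0.0, None)
    return absorbed, new_weights

pulses = np.array([1.0, 0.0, 1.0])
w = np.ones((3, 2))
absorbed, w2 = transfer_with_reflection(pulses, w, K=0.25, erase_rate=0.1)
```

Only the synapses that carried the forward pulses lose weight; idle synapses keep their stored results intact, which is the partial character of the erasure.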

The essence of the invention is illustrated by figures 1 to 4.

Figure 1 shows a block diagram of a two-layer recurrent neural network that implements the proposed method, where: 1, 5 are the first and second layers of neurons; 2, 6 are the first and second blocks of single delays; 3, 4 are the first and second blocks of dynamic synapses; 7 is the synapse control unit.

Figure 2 reveals the structural peculiarities of the units of the neural network (figure 1), where 3.1 to 3.m are the dynamic synapses of the first block 3 of dynamic synapses.

Figure 3 shows the logical structure of the first layer of the network (broken down into 50 Boolean fields, 25 per line) during information processing by the proposed method. The arrows indicate the directions of movement of the aggregates of individual images along the layer.

Figure 4a shows, for the case without partial erasure of the memorised recognition results from the network elements, the dependencies of the total weight W_Σ(t) of the synapses on the current time t, given that every four clock cycles another aggregate of single images from a defined sequence is introduced into the network, where 1 is for t=300 to 600 and 2 is for t=0 to 300.

Figure 4b shows the dependencies of the total weight W_Σ(t) of the synapses of the network for the same input stream as in the previous case, but with partial erasure of the memorised recognition results from the network elements, where 1 is for t=300 to 600 and 2 is for t=0 to 300.

The method is as follows. Consider it using the example of a neural network whose structural diagram is shown in figure 1.

The input signal is decomposed into components in a basis agreed with the first layer 1 of the network; each component is converted into a sequence of single images with a repetition frequency given as a preset function of the amplitude of the component, and is fed to this layer.

After such a signal is fed to the first input of the first layer of neurons, serial aggregates of single images carrying all the information on the input signal appear at its output.

After a delay in the first block of single delays 2, each serial aggregate of single images goes to the first block of dynamic synapses 3.

Each single image from the current aggregate is presented simultaneously in the first block of dynamic synapses 3 on the totality of its dynamic synapses, which provide the connection of each neuron that produced the single image, in the general case, with all neurons of the second layer of neurons 5. As the individual images pass through these synapses in the forward direction, they carry with them information about the previous impacts and leave information about their own appearance through an increase in the weights of the synapses.

From the outputs of the synapses, the single images are fed to the inputs of the neurons of the second layer. The structural peculiarities of the units of the neural network (figure 1) are shown in figure 2.

According to the invention, the aggregates of individual images received at the inputs of the neurons of the second layer 5 are partially reflected from them. This is due to the charge and discharge of the capacitors C5.1 to C5.n (figure 2). These partially reflected aggregates come back to the synapses of the first block of dynamic synapses and, according to the invention, partially erase from them the memorised recognition results, that is, reduce the weights of the synapses. The non-reflected part of the aggregates of individual images directly affects the neurons of the second layer and excites them if the excitation conditions are met.

Suppose that the amount of charge Q transferred from each element of single delay 2.1 to 2.n (figure 2) that receives single images (impulses) to the first block of dynamic synapses 3 is constant and does not depend on how the weights (conductivities) of the synapses change.

Then one can operate with a certain average current amplitude i_a = Q/Δt_a, realising the charge Q over a certain average time span Δt_a.

Taking this into account, the amplitude of the pulse current passing in the forward direction through each synapse of the first block of dynamic synapses 3 at the moment of time t can be defined as

I_ij(t) = i_a · w_ij(t) · δ_i(t),    (1)

where w_ij(t) is the direct weight (conductivity) of the synapse connecting the i-th neuron of the first layer with the j-th neuron of the second layer; δ_i(t) is a function taking the value 1 when the i-th neuron is excited and 0 in other cases; n is the number of neurons in each layer.

At the input of each j-th neuron of the second layer of neurons 5, the total current from the excited neurons of the first layer 1 arrives:

I_jΣ(t) = Σ_{i=1..n} I_ij(t).    (2)
The individual images (impulses) arriving at the inputs of the j-th neurons are partially absorbed by these neurons, which have input conductivity w_in = 1/R_in, and are partially returned to the transmitting layer with a reflection coefficient

K_j(t) = (z_jc(t) − z_jn) / (z_jc(t) + z_jn),    (3)

where z_jn is the integrated input impedance of the j-th neuron (in the particular case it can be set equal to the active resistance R_in), and z_jc(t) is the wave impedance of the circuit connecting the input impedance of the j-th neuron with the neurons of the transmitting layer. In general it is equal to

z_jc(t) = sqrt((R_jΣ(t) + j·2πf·L) / (G_jΣ + j·2πf·C)),    (4)

where R_jΣ(t) is the total resistance of the synapses in the chain connecting the input resistance R_in of the j-th neuron with the neurons of the transmitting layer, determined by the inverse weights (conductivities) of the synapses; G_jΣ is the total conductivity characterising this chain as a long line; L and C are the inductance and capacitance of this chain as a long line; f is the oscillation frequency.

In the particular case when G_jΣ = 0 and L = 0,

z_jc(t) = sqrt(R_jΣ(t) / (2πf·C)),  f = 1/τ_u,

where τ_u is the duration of a single image (impulse).

Taking these into account, the reflection coefficient for the current can be written as

K_j(t) = (z_jc(t) − R_in) / (z_jc(t) + R_in).    (5)

From (5) it can be seen that the change of K_j(t) over time depends only on the changes of R_jΣ(t).

Knowing K_j(t) and I_jΣ(t), one can find the total current of reflection of individual images from the j-th neuron:

Ī_jΣ(t) = K_j(t) · I_jΣ(t).    (6)

Accordingly, the current flowing through the input impedance of the j-th neuron is equal to

I_j(t) = (1 − K_j(t)) · I_jΣ(t).
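The split of the arriving current into a reflected (erasing) part and an absorbed part can be checked with a short sketch, assuming the usual transmission-line form of the reflection coefficient, K = (z_c − R_in)/(z_c + R_in); the numeric values are arbitrary:

```python
import math

def reflection_coefficient(z_c, r_in):
    # Transmission-line mismatch between the synapse chain impedance z_c
    # and the neuron input resistance r_in.
    return (z_c - r_in) / (z_c + r_in)

def split_current(i_total, z_c, r_in):
    """Split the total arriving current into (reflected, absorbed) parts."""
    k = reflection_coefficient(z_c, r_in)
    return k * i_total, (1.0 - k) * i_total

i_total = 2.0
reflected, absorbed = split_current(i_total, z_c=3.0, r_in=1.0)
# The two parts always add back up to the arriving current.
assert math.isclose(reflected + absorbed, i_total)
```

With a perfectly matched load (z_c = R_in) the coefficient is zero and nothing is reflected, so no erasure occurs; the erasure level grows with the mismatch.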

When the currents Ī_jΣ(t) flow through the synapses in the opposite direction, the recognition results stored on them are partially erased. We shall call Ī_jΣ(t) the erase current.

The peculiarity of the dynamic synapses of the network is the following. The weights (in electrical terms, the conductivities) w_ij(t) of the synapses are determined as the product of their weighting coefficients k_ij(t) and the attenuation functions b(r_ij(t)). Thus the direct weights

w_ij(t) = k_ij(t) · b(r_ij(t))    (7)

and the inverse weights

w̄_ij(t) = k̄_ij(t) · b(r_ij(t))    (8)

of the synapses are distinguished.

Each connection (synapse) is characterised by an attenuation function b(r_ij) of individual images, which depends on r_ij, the remoteness of the neurons linked through the synapse (the distance between them in the plane of one of the layers). It is assumed that the distance between the interacting layers of the neural network tends to zero.

The function b(r_ij) may be defined as

b(r_ij) = 1 / (1 + α·r_ij)^(1/h),    (9)

where h is the degree of the root (the higher it is, the wider the associative spatial interaction in the network); α is a positive coefficient; N is the number of neurons in each layer of the network.

The quantity r_ij entering (9), expressed in units of neurons and taking into account the possible spatial shifts of the aggregates of individual images along the layers of the network, can be expressed as

r_ij = sqrt((Δx_ij + n_ij·d)² + (Δy_ij + m_ij·q)²),    (10)

n_ij = 0, ±1, ..., ±(L−1);  m_ij = 0, ±1, ..., ±(M−1),

where Δx_ij, Δy_ij are the projections of the connection of the j-th neuron with the i-th neuron on the X and Y axes, excluding spatial shifts; d, q are the sizes of the individual shifts along the X and Y coordinates respectively; L, M are the numbers of columns and rows, respectively, into which each layer of the neural network is split by the shifts.
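The effect of the shift terms on the inter-neuron distance can be sketched as follows, assuming a plain Euclidean distance in the layer plane with the shifts n·d and m·q added to the connection projections (an illustrative reading of the quantities above):

```python
import math

def r_ij(dx, dy, n=0, m=0, d=1.0, q=1.0):
    """Distance between linked neurons in the layer plane.
    dx, dy: connection projections on the X and Y axes (in neurons);
    n, m: integer shift counts; d, q: unit shift sizes along X and Y."""
    return math.hypot(dx + n * d, dy + m * q)

# Without shifts the distance is the plain Euclidean one...
base = r_ij(3.0, 4.0)
# ...and shifting the aggregate by one column changes the distance,
# and with it the attenuation, steering the flow along the layer.
shifted = r_ij(3.0, 4.0, n=1)
```

Since the attenuation falls with distance, controlling n and m effectively redirects where along the layer the aggregate of single images lands.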

By changing Δx_ij and Δy_ij by the corresponding values n_ij·d and m_ij·q, one can change r_ij and the direction of movement of the aggregates of individual images along the layers of the network, that is, perform spatial shifts of these aggregates along the layers.

Such shifts of the aggregates of individual images along the layers are implemented through the management of the dynamic synapses by the synapse control unit 7, taking into account the current layer states. The neurons of the layers can be in the states of waiting, excitation and immunity after excitation. When transferring the aggregates of individual images from the transmitting layer to the receiving layer, their spatial shifts are performed when there is a conflict between neurons of the interacting layers having the same ordinal numbers. A conflict occurs when individual images from excited neurons are fed to the inputs of neurons that are in the state of immunity after excitation.

The weighting coefficients k_ij(t) and k̄_ij(t) in (7) and (8) change depending on the impacts of individual images on the synapses and act as elements of the long-term memory of the network. For the direct weight (direct conductivity) w_ij(t) of each synapse, the weighting coefficient can be defined as

k_ij(t) = γ · g_ij(t),    (11)

and for the inverse weight (inverse conductivity) w̄_ij(t), for the case g_ij(t) ≥ g_0, in the form

k̄_ij(t) = γ · (g_ij(t) − g_0),    (12)

where γ is a constant factor and g_0 = const.

In the other cases, when g_ij(t) < g_0, the inverse weight (conductivity) k̄_ij(t) = 0.

The value g_ij(t) in (11) and (12) can be interpreted as the number of individual images memorised by the synapse at the moment of time t, and (g_ij(t) − g_0) as the number of memorised individual images that can be erased from the synapse.

Taking (11) and (12) into account, the direct and inverse weights (conductivities) of the synapses can be defined by the formulas

w_ij(t) = γ · g_ij(t) · b(r_ij(t)),    (13)

w̄_ij(t) = γ · (g_ij(t) − g_0) · b(r_ij(t)).    (14)

These weights are uniquely related to each other:

w̄_ij(t) = w_ij(t) − γ·g_0·b(r_ij(t)).

From the value of w_ij(t) one can determine w̄_ij(t) and, conversely, from w̄_ij(t) one can find w_ij(t).
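The fixed relation between the direct and inverse weights can be verified numerically. The sketch below follows the definitions w = γ·g·b and w̄ = γ·(g − g_0)·b for g ≥ g_0 (zero otherwise); the constants are arbitrary illustrative values:

```python
def direct_weight(g, gamma, b):
    # Direct weight: proportional to the number of memorised images g.
    return gamma * g * b

def inverse_weight(g, gamma, g0, b):
    # Inverse weight: only the erasable part (g - g0) contributes.
    return gamma * (g - g0) * b if g >= g0 else 0.0

gamma, g0, b = 0.5, 2.0, 0.8
w = direct_weight(5.0, gamma, b)
w_bar = inverse_weight(5.0, gamma, g0, b)
# For g >= g0 the two weights differ by the fixed amount gamma*g0*b,
# so either one determines the other.
assert abs((w - w_bar) - gamma * g0 * b) < 1e-12
```

The floor g_0 guarantees that a synapse never loses more than its erasable surplus: once g drops below g_0 the inverse (erasing) conductivity vanishes.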

When current flows from excited neurons through some synapses in the forward direction, the current recognition results are memorised on them. Such memorisation corresponds to an increase in the direct and inverse conductivities of these synapses. In this case the value g_ij(t) entering (13), (14) can be calculated as

g_ij(t) = g_ij(t − Δt) + B(I_ij(t)),

where I_ij(t) is the current flowing through the synapse in the forward direction, determined by (1).

When current flows in the reverse direction through other synapses, their direct and reverse weights (reverse and direct conductivities, respectively) decrease. The value of g_ij(t) entering (13), (14), depending on the erase current

determined by (6), provided that

can be calculated as

otherwise g_ij(t) = g_ij(t − Δt).
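Since the exact formulas (11)-(14) are not reproduced in this text, the memorize/erase dynamics of g_ij(t) described above can only be illustrated with an assumed model. The sketch below uses linear dependence on the currents (consistent with the particular case B(I_ij(t)) = A·I_ij(t) given further on) and enforces the floor g_0, below which nothing can be erased; the constants and time step are arbitrary.

```python
# Illustrative model of the synapse memory variable g_ij(t) described above.
# The patent's exact formulas (11)-(14) are not available here, so the update
# rules (linear in the currents, with floor G0) are assumptions.

G0 = 1.0          # baseline count g_0: memorized images that cannot be erased
A, D = 0.1, 0.2   # constants from the text, with D > A as required

def update_g(g, i_forward, i_reverse, dt=1.0):
    """Forward current memorizes (g grows); reverse current partially
    erases, but only the excess (g - G0) can be removed."""
    g = g + A * i_forward * dt                 # memorization, B(I) = A*I
    erase = min(D * i_reverse * dt, g - G0)    # only (g - G0) is erasable
    return g - max(erase, 0.0)

g = G0
g = update_g(g, i_forward=5.0, i_reverse=0.0)  # memorize: g = 1.5
g = update_g(g, i_forward=0.0, i_reverse=1.0)  # erase 0.2: g = 1.3
g = update_g(g, i_forward=0.0, i_reverse=9.0)  # erase clipped at G0
print(g)                                       # 1.0
```

The last step shows the key property claimed in the text: reverse current can erase only the (g_ij(t) − g_0) excess, never the baseline.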

In a particular case, B(I_ij(t)) and

can be defined as B(I_ij(t)) = A·I_ij(t) and

where A and D are constants, and D must be greater than A.

The greater the current of the excited neurons through some synapses in the forward direction, the higher the level of partial erasure of the previously memorized recognition results from other synapses by the partially reflected currents flowing through them in the reverse direction. The level of partial erasure of the recognition results also depends on the input conductivity w_I = 1/R_I of the excited neurons.

When the current I_j(t) flowing through the input impedance of the j-th neuron of the second layer 5 exceeds the threshold I_th, the neuron is excited: it outputs its single image (pulse) and enters the state of immunity. Each neuron stays in the state of immunity for the same amount of time, which is longer than the delay time of the individual images in the two-layer circuits of the network whose block diagram is shown in figure 1.
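The threshold-and-immunity behaviour just described maps onto a simple state machine. The sketch below is an assumed minimal model: the class name, threshold value, and immunity duration are illustrative, not taken from the patent.

```python
# Minimal sketch of the neuron behaviour described above: when the input
# current exceeds the threshold, the neuron emits a single image (pulse)
# and enters an immunity (refractory) state for a fixed number of steps.

class Neuron:
    def __init__(self, threshold=1.0, immunity_steps=3):
        self.threshold = threshold
        self.immunity_steps = immunity_steps
        self.immune_for = 0          # remaining immunity time

    def step(self, input_current):
        """Advance one time step; return True if a pulse is emitted."""
        if self.immune_for > 0:      # immune neurons ignore their inputs
            self.immune_for -= 1
            return False
        if input_current > self.threshold:
            self.immune_for = self.immunity_steps
            return True
        return False

n = Neuron()
outputs = [n.step(i) for i in [2.0, 2.0, 2.0, 2.0, 2.0, 0.5]]
print(outputs)   # [True, False, False, False, True, False]
```

A constant supra-threshold input thus produces pulses separated by the immunity time, which is what limits the repetition frequency of the single images.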

All these provisions also hold for the transfer of aggregates of individual images from the second layer to the first. Aggregates of single images from the output of the second layer 5, after a delay in block 6, arrive at the second block of dynamic synapses 4. In block 4 they are processed in the same way as in block 3 and, shifted along the first layer depending on the states of the first and second layers, proceed to the second inputs of the neurons of the first layer 1. As in the case of the second layer, they are partially reflected from the first layer. These partially reflected aggregates partially erase the recognition results memorized on the synapses of the second block of dynamic synapses 4.

With this in mind, the direct and reverse aggregates of single images arriving at the first layer of neurons 1 are correctly linked in it, are recognized, and generate at the output new aggregates of individual images that carry information both about the current signals and about the previously memorized network signals connected with them. Owing to the corresponding shifts of the aggregates of individual images along the layers, superposition of the reverse recognition results onto the direct aggregates, and onto the aggregates moving along the layers in the forward direction, is excluded.

Note that during signal processing in the network the set of excited neurons changes continuously, and accordingly the memorization of recognition results alternates with their partial erasure from the network synapses.

Under continuous information processing in the neural network by the proposed method, starting from a certain moment the number of currently memorized individual images becomes equal to the number of previously memorized individual images being erased. After the current information is saved on some elements of the network, the recognition results memorized on other synapses are partially erased, freeing memory for storing subsequent information. This results in steady, uninterrupted functioning of the network. With the proposed partial erasure, previously memorized recognition results are not erased completely; the best-consolidated long-term results are retained. In addition, the proposed partial erasure of previously memorized results stimulates the search for the missing information in order to balance the level of memorization with the level of erasure. This, along with increased stability, raises the level of intelligence of information processing in the neural network.

Owing to the priority of short links in the neural network between the network input and output, a one-to-one correspondence is easily established between the components of the input and output signals.

With regard to this correspondence, the numbers of the neurons generating the sequences of single images at the output device determine the frequency and spatial characteristics of the components of the original signal. The repetition frequencies of the sequences and the relative delays of the individual images determine, respectively, the amplitudes and phases of the components of the original signal. The components of the initial signals are then reproduced and, by their addition, the original signals (for example, speech, visual and other signals) are restored. To determine the amplitudes of the components of the original signal, the current number of individual images falling within a predefined time interval is counted. To reproduce the components of the initial signals, their digital synthesis from the known parameters is applicable, for example.
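The amplitude/frequency correspondence described above can be sketched as a pair of encode/decode functions. The linear amplitude-to-frequency mapping, the window length, and the scale factor are assumptions made for the example; the patent only states that the frequency is a preset function of the amplitude and that amplitudes are recovered by counting pulses in a predefined interval.

```python
# Sketch of the amplitude/frequency correspondence described above: a
# component's amplitude sets the repetition frequency of its single images,
# and the amplitude is recovered by counting pulses in a time window.

def encode(amplitude, window=1.0, pulses_per_unit=10):
    """Return pulse times: repetition frequency proportional to amplitude
    (a linear preset function, assumed for illustration)."""
    freq = amplitude * pulses_per_unit
    n = int(freq * window)
    return [k / freq for k in range(n)]

def decode(pulse_times, window=1.0, pulses_per_unit=10):
    """Recover the amplitude from the number of pulses in the window."""
    count = sum(1 for t in pulse_times if t < window)
    return count / (pulses_per_unit * window)

pulses = encode(0.8)
print(len(pulses), decode(pulses))   # 8 0.8
```

Relative pulse delays (carrying phase information, per the text) are not modelled here; only the amplitude-to-count round trip is shown.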

To demonstrate the advantages of the proposed method in comparison with the known solutions, a software model of a neural network implementing this method was developed in the MatLab environment. Each layer of the neural network consisted of 2100 neurons. To implement the spatial shifts of the transmitted aggregates of individual images from layer to layer, each layer was divided into 50 logical fields of 42 neurons each. Aggregates of single images were fed into the network through the first logical field and taken off the last field (figure 3). The sequence of these aggregates encoded the opening lines of Alexander Pushkin's novel «Eugene Onegin»: «My uncle, a man of the most honest principles, when he fell seriously ill, forced himself to be respected and could not have thought of anything better...». In total, 150 aggregates of individual images were fed into the network step by step, including empty aggregates corresponding to the spaces between words. The aggregates were fed into the network with a period of four time steps. At each time step, the total weight W_Σ(t) of the network synapses was estimated as

where w_kij(t) is the direct weight of the ij-th synapse in the k-th block of dynamic synapses of the network (fig.1), and n is the number of neurons in each layer of the network.
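The estimate W_Σ(t) is a plain double sum of the direct weights over both blocks of dynamic synapses. A sketch with random weights (the real values come from the network dynamics) and a small n, so it runs quickly; in the model above n = 2100:

```python
# W_Σ(t) = sum over blocks k and index pairs (i, j) of w_kij(t).
# Random weights stand in for the values produced by the network dynamics.

import random

n = 4                                   # neurons per layer (2100 in the model)
# w[k][i][j]: direct weight of the ij-th synapse in block k (k = 0, 1)
w = [[[random.random() for _ in range(n)] for _ in range(n)]
     for _ in range(2)]

def total_weight(w):
    """Total synaptic weight: double sum over both blocks."""
    return sum(w_kij for block in w for row in block for w_kij in row)

print(total_weight(w))
```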

The results of the estimates are presented in fig.4, b. Analysis of fig.4 shows that, in the absence of partial erasure from the synapses of the memorized recognition results by the aggregates of individual images partially reflected from the receiving layers, the total weight of the synapses grows continuously. It tends to the limiting total weight W_Σlim,

when all the weighting coefficients k_kij(t) → 1, where β_kij is the attenuation function for the ij-th synapse in the k-th block of dynamic synapses of the network. In this limiting case, information can be neither stored in the network nor retrieved from it; the network is overloaded.

According to the proposed method, which provides for the partial erasure of the memorized recognition results by the aggregates of individual images partially reflected from the receiving layers, the total weight W_Σ(t) of the synapses stabilizes over time at a level that ensures continuous stable intelligent information processing in the neural network. Fig.4 shows that the growth of the total weight W_Σ(t) of the synapses slows down over time, in contrast to the previous case (fig.4). It is advisable that this weight level off at W_Σlim/2, which provides the greatest memory capacity on the synapses. This can be achieved, for example, by preliminarily setting the appropriate values of the input conductivities of the neurons.
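The stabilisation argument above can be illustrated with a toy balance model: if each step memorizes a fixed amount of weight while erasure removes a fraction of the weight already stored, the total settles where the two balance. The rates and the limiting weight below are invented for the illustration; only the qualitative behaviour (growth slowing and levelling off) mirrors the text.

```python
# Toy model of the stabilisation of W_Σ(t): constant memorization per step
# versus erasure proportional to the stored weight. All rates are assumed.

W_LIM = 100.0          # assumed limiting total weight W_Σlim
memorize_rate = 5.0    # weight added per step by forward currents
erase_frac = 0.1       # fraction of stored weight erased per step

W = 0.0
for _ in range(200):
    W = W + memorize_rate - erase_frac * W
print(round(W, 2))     # 50.0, i.e. the fixed point memorize_rate/erase_frac
```

With these assumed rates the fixed point lands at W_LIM/2, the level the text recommends for greatest memory capacity; tuning the erasure strength (e.g. via the neurons' input conductivities) moves this equilibrium.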

Analysis of the state transitions of the network layers implementing the proposed method also confirmed the increased stability and intelligence of information processing and the broader functionality of the proposed method in comparison with the known solutions.

The method of intelligent information processing in a neural network can be implemented using a well-known element base. Monostable (one-shot) multivibrators are applicable as the neurons of the layers and as the delay elements in the delay blocks. The one-shot delay elements in the delay blocks must be triggered on both the leading and the trailing edge of the input pulse. The blocks of dynamic synapses can be implemented using memristors and controlled attenuators. To ensure partial reflection of the aggregates of individual images from the receiving layers of the network, the chains comprising the delay elements, the synapses and the input circuits of the neurons must have the characteristics of long (transmission) lines. The control unit can be implemented as a specialized processor or programmable integrated circuits operating in accordance with the rules described above.

The method can also be implemented by emulating a two-layer recurrent neural network with controlled synapses on modern computers.

The method of intelligent information processing in a neural network, which consists in using a multilayer recurrent network with feedbacks whose delay paths give the individual images a time delay less than the immunity time of the network neurons after their excitation; decomposing the signal into components in a basis agreed with the input layer of the network; converting each component into a sequence of single images with a repetition frequency that is a preset function of the component amplitude; representing the signal in the form of successive aggregates of individual images in accordance with predefined rules of its recognition with regard to the reverse recognition results; shifting the aggregates of single images along the layers with account of the current layer states; memorizing the recognition results on the network elements; and using as the processing results the successive aggregates of individual images in the output layer of the network after converting them back into the corresponding initial signals; characterized in that, when transferring the aggregates of individual images from layer to layer, they are partially reflected from the receiving layer towards the transmitting layer, and the recognition results saved on the network elements are partially erased by these partially reflected aggregates of individual images.
