Method for detecting emotions, method and system for generating sensitivity, and machine-readable medium for implementing them

FIELD: emotion detection.

SUBSTANCE: on the basis of an input voice signal, the intensity, the tempo and the intonation are detected for each word of the voice; values of their changes are obtained for the detected contents, respectively; and signals expressing the states of the emotions of at least anger, sadness and pleasure are generated on the basis of the values of the changes. The partner's emotion and situation information are input, and instinctive motivation information is thereby generated. In addition, emotional information is generated that includes the basic emotional parameters of pleasure, anger and sadness, and is controlled on the basis of individuality information.

EFFECT: accurate detection of human emotions and the ability to generate a sensitivity close to that of a human.

21 cl, 11 dwg

 

Technical field to which the invention relates

This invention relates to a method for detecting emotions, to a method and system for generating sensitivity, and to machine-readable media for their implementation. The method for detecting emotions according to this invention can be used for emotion detection in the field of medicine and in various systems as part of artificial intelligence and artificial sensibility. Furthermore, the method of generating sensitivity according to this invention can be used in various systems that control, in a variety of ways, the sensitivity of virtual humans and robots.

Prior art

Prior art related to the method for detecting emotions according to this invention is disclosed, for example, in unexamined Japanese patent applications No. Hei5-12203, Hei9-22296 and Hei11-119791.

Unexamined Japanese patent application No. Hei5-12203 discloses that the duration of the voice, the formant frequency of the voice and the intensity of the voice at each frequency are detected, respectively, as characteristic values of the voice. In addition, this publication discloses that the differences between a reference signal and the corresponding characteristic values are detected, and that emotions are detected by fuzzy inference on the basis of the detected difference values.

Unexamined Japanese patent application No. Hei9-22296 discloses that the voice generation rate (the number of pauses per unit time), the fundamental frequency of the voice, the volume and the spectrum of the voice are determined as detected characteristic values of the voice. In addition, this publication discloses that emotions are detected on the basis of the detected characteristic values and of results obtained by statistical processing using an HMM (hidden Markov model).

Unexamined Japanese patent application No. Hei11-119791 discloses that emotions are detected on the basis of the transition probabilities of the phoneme spectrum using hidden Markov models.

On the other hand, prior art related to the method for generating sensitivity according to this invention is disclosed, for example, in unexamined Japanese patent application No. Hei11-265239, entitled "Emotion generating system and emotion generating method".

Emotions, which express the inner state of a person and the like, change depending on the situation at each moment. Unexamined Japanese patent application No. Hei11-265239 discloses a technology for generating emotions in unpredictable situations.

In particular, the situation is evaluated from the point of view of predictable situations, and the system generates its own emotion. Additionally, the emotions generated in the past and the situations at those times are analyzed, and unpredictable combined conditions, peculiar to the respective situations and the associated emotions, are learned. When a situation satisfying the combined conditions is input again, the emotions corresponding to those combined conditions are output.

The states of the emotions generated by such a system are reflected in the voices and images that it outputs.

Summary of the invention

However, the conventional method for detecting emotions has low detection accuracy and does not provide accurate detection of actual human emotions, even though it detects emotions for specific limited vocabularies. Accordingly, the practical use of the method for detecting emotions is limited, for example, to relatively simple game machines.

The objective of the invention is to provide a method for detecting emotions that accurately detects the emotions of a human subject.

In addition, the conventional method of generating emotions merely generates emotions directly on the basis of information related to situations. In an actual human, various parameters, including instinct, reason and individuality, affect one another in complex ways, leading to various actions, speech, facial expressions, etc. The conventional method of generating emotions cannot, as a result, accurately reflect instinct, reason, individuality and the like.

Instinct and emotion can both be regarded as aspects of emotionality. In addition, instinct becomes the basis of biological excitability and the motivation for generating emotions. It should also be taken into account that people do not output emotions directly, but rather output a sensitivity controlled by reason and individuality.

Another objective of this invention is to provide a method for generating sensitivity that outputs a sensitivity closer to that of a human.

According to the first aspect of the invention, a method for detecting emotions, for detecting the emotions of a subject, contains the following steps: inputting a voice signal; detecting, on the basis of the input voice signal, the intensity of the voice, a tempo expressing the rate at which the voice is produced, and an intonation expressing the pattern of intensity change within each word of the voice; obtaining values of the changes of the detected voice intensity, voice tempo and intonation, respectively; and generating signals expressing emotional states of at least anger, sadness and pleasure, respectively, on the basis of the obtained values of the changes.

According to the first aspect of the invention, emotions are detected by comparing the respective values of the changes of intensity, tempo and intonation, obtained from the subject's input, with patterns corresponding to the emotional states of anger, sadness and pleasure, respectively. With this method, emotions can be detected more accurately than in the prior art.
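As a rough, purely illustrative sketch of this first aspect, the fragment below derives change values from two successive sets of characteristic values and maps them to anger/sadness/pleasure scores. The decision rules, weights and names are assumptions, since the claim fixes only which quantities are used, not the mapping itself.

```python
from dataclasses import dataclass

@dataclass
class VoiceFeatures:
    intensity: float   # averaged amplitude of the voice signal
    tempo: float       # phonemes per second
    intonation: float  # averaged interval between intensity zones, seconds

def detect_emotion(prev: VoiceFeatures, curr: VoiceFeatures) -> dict:
    """Derive change values and score anger/sadness/pleasure from them."""
    d_intensity = curr.intensity - prev.intensity
    d_tempo = curr.tempo - prev.tempo
    d_intonation = curr.intonation - prev.intonation
    # Hypothetical rules: rising intensity and tempo suggest anger,
    # falling intensity and tempo suggest sadness, and so on.
    return {
        "anger":    max(0.0,  d_intensity) + max(0.0,  d_tempo),
        "sadness":  max(0.0, -d_intensity) + max(0.0, -d_tempo),
        "pleasure": max(0.0,  d_tempo) + max(0.0, d_intonation),
    }
```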

According to the second aspect of the invention, an emotion detection system for detecting the emotions of a subject contains: a voice input unit for inputting a voice signal; an intensity detection unit for detecting the intensity of the voice on the basis of the voice signal input by the voice input unit; a tempo detection unit for detecting, as a tempo, the rate at which the voice is produced, on the basis of the voice signal input by the voice input unit; an intonation detection unit for detecting an intonation expressing the pattern of intensity change within a word of the voice, on the basis of the voice signal input by the voice input unit; a change-value detection unit for obtaining values of the changes of the voice intensity detected by the intensity detection unit, of the voice tempo detected by the tempo detection unit, and of the intonation detected by the intonation detection unit, respectively; and an emotion detection unit for outputting signals expressing emotional states of at least anger, sadness and pleasure, respectively, on the basis of the values of the changes detected by the change-value detection unit.

In the emotion detection system according to the second aspect of the invention, the voice input unit, the intensity detection unit, the tempo detection unit, the intonation detection unit, the change-value detection unit and the emotion detection unit are provided, by means of which the above method for detecting emotions can be implemented.

According to the third aspect of the invention, in the emotion detection system according to the second aspect of the invention, the intonation detection unit contains: a bandpass filter unit for extracting specific frequency components from the voice signal, which is input separately for each word; a zone separation unit for separating the power spectrum of the signal extracted by the bandpass filter unit into several zones on the basis of the intensity of the power spectrum; and an intonation calculation unit for calculating a value of the intonation on the basis of the time intervals between the respective centers of the zones separated by the zone separation unit.

The bandpass filter unit extracts the specific frequency components from the voice signal input into it separately for each word. The zone separation unit separates the detected power spectrum into several zones on the basis of its intensity. The intonation calculation unit calculates the value of the intonation on the basis of the time intervals between the respective centers of the zones separated by the zone separation unit.

According to the third aspect of the invention, the energy distribution pattern within a word, pertaining to the specific frequency components of the voice, is detected as lengths of time expressing the intervals between the zones, and these lengths of time are used as the intonation.

According to the fourth aspect of the invention, the emotion detection system according to the second aspect of the invention further comprises: an image input unit for receiving an image of at least the face of the subject; an image recognition unit for detecting positional information related to each part of the face, on the basis of the image information received by the image input unit; an image reference information holding unit for holding reference information related to the characteristic values of each part of the face; and an image characteristic value detection unit for detecting characteristic values of the image on the basis of the positional information detected by the image recognition unit and of the reference information held by the image reference information holding unit. The emotion detection unit estimates the emotional state in accordance with the changes of the image characteristic values detected by the image characteristic value detection unit.

According to the fourth aspect of the invention, the emotional state is estimated not only from the voice but also from the subject's facial expression. Since the emotional states of people usually appear in their facial expressions, the emotional state can be grasped by detecting the facial expression. Accordingly, in the fourth aspect of the invention, the emotion detection unit estimates the emotional state on the basis of the changes of the image characteristic values detected by the image characteristic value detection unit.

According to the fifth aspect of the invention, the emotion detection system according to the second aspect of the invention further comprises: an emotion information storage unit for sequentially receiving portions of information related to the emotional states detected by the emotion detection unit and for storing these portions of information; and an erasing unit for erasing information that has been stored for a predetermined period of time since it was first stored, among the portions of information related to past emotional states stored in the emotion information storage unit, while excluding from the information to be erased at least information showing a change of emotion larger than a predetermined value and information matching a predetermined change pattern.

According to the fifth aspect of the invention, information related to past detected emotional states can be stored in the emotion information storage unit. In addition, since old information that has been stored for a long period of time after its detection is automatically erased from the emotion information storage unit, the storage capacity required for the emotion information storage unit can be reduced.

However, characteristic information, such as information showing a change of emotion larger than the predetermined value and information matching the predetermined change pattern, is automatically excluded from the information to be erased. Therefore, the characteristic information is retained as it is in the emotion information storage unit, even when it becomes old. Accordingly, similarly to human memory, characteristic information that may be useful in the future can be read out from the emotion information storage unit and reproduced, even when it is old.

According to the sixth aspect of the invention, the emotion detection system according to the fifth aspect of the invention further comprises: a sentence recognition unit for performing grammatical analysis by processing information related to the voice uttered by the subject or to the characters entered by the subject, and for generating speech information expressing the meaning of the sentence; and a storage control unit for storing the speech information generated by the sentence recognition unit in the emotion information storage unit synchronously with the information related to the emotional states.

The sentence recognition unit processes information related to the voice uttered by the subject, or to the characters entered by the subject using a keyboard or the like, and performs grammatical analysis to generate speech information expressing the meaning of the sentence.

Grammatical analysis makes it possible to obtain speech information expressing, for example, "5W3H", that is, "who", "what", "when", "where", "why", "how", "how long (how far, how large, etc.)" and "how much".
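A minimal sketch of how such "5W3H" speech information might be represented is shown below; the class and field names are illustrative assumptions, since the text specifies only the eight categories themselves.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpeechInfo5W3H:
    who: Optional[str] = None
    what: Optional[str] = None
    when: Optional[str] = None
    where: Optional[str] = None
    why: Optional[str] = None
    how: Optional[str] = None
    how_long: Optional[str] = None   # also covers how far / how large
    how_much: Optional[str] = None

# Example record produced by a (hypothetical) sentence recognition unit.
info = SpeechInfo5W3H(who="partner", what="greeting", when="morning")
```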

The storage control unit stores the speech information generated by the sentence recognition unit in the emotion information storage unit in such a state that the speech information is synchronized with the information related to the emotional states.

According to the sixth aspect of the invention, by accessing the emotion information storage unit it is possible to retrieve not only information related to the emotions at any moment in the past, but also speech information expressing the situation at that moment.

The information held in the emotion information storage unit can be used for various purposes. For example, when the emotion evaluation function of the emotion detection unit is inaccurate, the database used for emotion evaluation can be corrected on the basis of past detection results held in the emotion information storage unit.

According to the seventh aspect of the invention, the emotion detection system according to the second aspect of the invention further comprises: a silence time determination unit for determining a reference silence time on the basis of the detected emotional state; and a phrase segmentation detection unit for detecting the segmentation of the phrases of the voice using the reference silence time determined by the silence time determination unit.

When performing voice recognition and emotion detection, it is necessary to detect the segmentation of each phrase, and each phrase must be extracted. In general, since there is a silent part at the segmentation between phrases, several phrases can be separated at the moments when silent parts appear.

However, the length of the silent parts is not constant. In particular, the length of the silent part changes in accordance with the emotional state of the speaker. Therefore, when a fixed threshold value is set for determining the silent part, the probability of errors in detecting the phrase segmentation becomes high.

According to the seventh aspect of the invention, the reference silence time is determined, for example, on the basis of the emotional state detected just before the determination, and the segmentation of the phrases of the speech is detected in accordance with the reference silence time. Accordingly, the phrase segmentation can be detected correctly even when the emotional state of the speaker changes.
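A sketch of such emotion-dependent phrase segmentation is given below, assuming a simple amplitude-based silence mask. The per-emotion reference silence times are hypothetical values within the 0.1-0.2 s range mentioned later in the description; the patent states only that the threshold should track the detected emotional state.

```python
import numpy as np

# Hypothetical reference silence times per emotional state (seconds).
REFERENCE_SILENCE = {"neutral": 0.15, "anger": 0.10, "sadness": 0.20}

def split_phrases(samples: np.ndarray, rate: int, emotion: str) -> list:
    """Split a voice signal into (start, end) sample spans of phrases,
    using an emotion-dependent minimum silence duration."""
    threshold = REFERENCE_SILENCE.get(emotion, 0.15)
    min_gap = int(threshold * rate)
    # Crude silence mask: samples below 2% of the peak amplitude.
    silent = np.abs(samples) < 0.02 * np.max(np.abs(samples))
    phrases, start, gap = [], None, 0
    for i, s in enumerate(silent):
        if not s:
            if start is None:
                start = i       # phrase begins at first voiced sample
            gap = 0
        elif start is not None:
            gap += 1
            if gap >= min_gap:  # silence long enough: close the phrase
                phrases.append((start, i - gap))
                start, gap = None, 0
    if start is not None:
        phrases.append((start, len(samples) - 1))
    return phrases
```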

According to the eighth aspect of the invention, a computer-readable medium containing a program for detecting emotions allows a computer to perform a method for detecting the emotions of a subject, including: a step of inputting a voice signal into the emotion detection program; a step of detecting, on the basis of the input voice signal, the intensity of the voice, a tempo expressing the rate at which the voice is produced, and an intonation expressing the pattern of intensity change within each word of the voice; a step of obtaining values of the changes of the detected voice intensity, voice tempo and intonation, respectively; and a step of generating signals expressing emotional states of at least anger, sadness and pleasure, respectively, on the basis of the obtained values of the changes.

According to the ninth aspect of the invention, a method of generating sensitivity contains the steps of: holding in advance portions of individuality information defining at least the reason and will of the subject that generates the sensitivity; generating instinctive motivation information, including at least a first instinct parameter expressing a degree of pleasure, a second instinct parameter expressing a degree of danger, and a third instinct parameter expressing a degree of achievement and change, on the basis of input situation information indicating the state of the partner's emotion or the environment in which the partner is located; generating emotional information, including basic emotional parameters of at least pleasure, anger and sadness, on the basis of the generated instinctive motivation information; and controlling the generated emotional information on the basis of the individuality information.

According to the ninth aspect of the invention, the instinctive motivation information, which motivates the generation of emotion, is generated on the basis of the input situation information (the emotion, will and environment of the partner). Namely, the instinctive motivation information is generated from the situation information, and the emotional information is generated on the basis of the instinctive motivation information. In addition, the emotional information to be generated is controlled on the basis of the individuality information. Thus, emotion controlled by the reason and will of the individual, that is, sensitivity information, can be output.
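The two-stage pipeline described here (situation -> instinctive motivation -> emotion, then control by individuality) can be sketched as follows. All coefficients and the attenuation rule are assumptions; the patent fixes only the parameters involved and the order of the stages.

```python
from dataclasses import dataclass

@dataclass
class InstinctParams:          # instinctive motivation information
    pleasure: float            # first parameter: degree of pleasure
    danger: float              # second parameter: degree of danger
    achievement: float         # third parameter: degree of achievement/change

@dataclass
class EmotionParams:           # basic emotional parameters
    pleasure: float
    anger: float
    sadness: float

def instinct_to_emotion(inst: InstinctParams) -> EmotionParams:
    """Hypothetical mapping from instinct parameters to basic emotions."""
    return EmotionParams(
        pleasure=inst.pleasure,
        anger=0.8 * inst.danger,
        sadness=max(0.0, -inst.achievement),  # failure/negative change
    )

def apply_individuality(emo: EmotionParams, restraint: float) -> EmotionParams:
    """Attenuate raw emotion by an individuality parameter standing in
    for reason and will (0 = unrestrained, 1 = fully suppressed)."""
    k = 1.0 - restraint
    return EmotionParams(emo.pleasure * k, emo.anger * k, emo.sadness * k)
```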

Additionally, because the emotional information is generated via the instinctive motivation information, the emotion to be generated can be controlled more precisely and simply.

For example, the emotion generated when a person encounters a dangerous situation while already recognizing that it is dangerous differs from the emotion generated when the person suddenly encounters the dangerous situation without recognizing the danger. This difference in emotions can be reproduced.

Preferably, the instinct parameters further comprise a degree of attention (degree of refusal), a degree of certainty (degree of confusion), a degree of follow-up (degree of persistence), etc., in addition to the above elements. In addition, it is preferable that the emotional parameters making up the emotional information further include surprise, fear, suffering, disgust, contempt, approach, escape, jealousy, envy, dependence, irritation, anxiety, etc., in addition to the specified elements.

According to the tenth aspect of the invention, a sensitivity generator contains: an instinct determination unit for inputting episode situation information indicating the emotional state of the partner, the environment in which the partner is located, and the will of the partner, and for generating instinctive motivation information, including at least a first instinct parameter expressing a degree of pleasure, a second instinct parameter expressing a degree of danger, and a third instinct parameter expressing a degree of achievement or change, on the basis of the episode situation information; an emotion generation unit for generating emotional information, including basic emotional parameters of at least pleasure, anger and sadness, on the basis of the instinctive motivation information output from the instinct determination unit; an individuality information providing unit for providing individuality information defining at least the reason and will of the subject that generates the sensitivity; and an emotion control unit for controlling the emotional information generated by the emotion generation unit, on the basis of the individuality information supplied from the individuality information providing unit.

According to the tenth aspect of the invention, the method of generating sensitivity according to claim 9 can be performed by providing the instinct determination unit, the emotion generation unit, the individuality information providing unit and the emotion control unit.

Accordingly, emotion controlled by the reason and will of the individual, that is, sensitivity information, can be output. In addition, because the emotional information is generated via the instinctive motivation information, the emotion to be generated can be controlled more precisely and simply.

According to the eleventh aspect of the invention, in the sensitivity generator according to the tenth aspect of the invention, the emotion generation unit includes: a life rhythm generation unit for generating information expressing a periodically changing environment or the life rhythm of a living body; and a voluntary emotion control unit for controlling voluntary emotion in the emotion generation unit in accordance with the life rhythm information generated by the life rhythm generation unit. For example, actual humans have individual rhythms of the body, of emotion, of intelligence, etc. It is considered that such periodically changing rhythms have various effects on actual human emotions.

According to the eleventh aspect of the invention, the voluntary emotion control unit controls the voluntary emotion in the emotion generation unit in accordance with the information generated by the life rhythm generation unit. Accordingly, the emotion to be output can be changed in accordance with the environment or the life rhythm of the living body.

According to the twelfth aspect of the invention, in the sensitivity generator according to the tenth aspect of the invention, the emotion generation unit contains: an instinct-emotion information holding unit for holding template information that makes the basic emotional parameters and the instinctive motivation information correspond to each other; and a matching probability learning unit for outputting information expressing the probability of matching/mismatching between the instinctive motivation information output from the instinct determination unit and the template information of the instinct-emotion information holding unit.

According to the twelfth aspect of the invention, the probability of matching of the instinctive motivation information with the template information can be obtained from the matching probability learning unit and used as a determining factor of the emotion.

For example, when a person's mental state changes from a first state to a second state, the mental state passes through a third state on the path from the first to the second state. Accordingly, there is a possibility that the mental state is temporarily matched with specific template information in the third state. However, the template information matched with the mental state in the third state does not have great utility. By using the matching probability obtained by the matching probability learning unit, the generation of emotions from template information with a low matching probability can be suppressed.

According to the thirteenth aspect of the invention, in the sensitivity generator according to the tenth aspect of the invention, the emotion generation unit contains an emotion feedback control unit for inputting into the emotion generation unit at least its own emotional information generated last, and for reflecting it in its own emotional information to be generated next.

It is believed that the input of various motivations leads to chain changes in human emotions. For example, the degree of anger generated when a motivation provoking anger is given to a person in a normal state, and the degree of anger generated when the same motivation is given to a person who is already angry, so that he becomes still angrier, are very different from each other.

According to the thirteenth aspect of the invention, the presence of the emotion feedback control unit provides for feeding the state of the just-generated emotion back to the input and reflecting this state in the next emotion to be generated. Accordingly, it is possible to generate emotion that is closer to human emotion.

According to the fourteenth aspect of the invention, the sensitivity generator according to the tenth aspect of the invention has the feature that the emotion control unit reflects the life rhythm information, which constitutes the individuality of the subject generating the sensitivity, in the emotional information to be input.

According to the fourteenth aspect of the invention, the life rhythm information can be reflected in the sensitivity. For example, differences arise in the results of determinations performed by reason, etc., depending on a person's willingness to do something.

According to the fifteenth aspect of the invention, the sensitivity generator according to the tenth aspect of the invention further comprises: a knowledge database for storing situation information showing past situations and past episodes together with their results; a knowledge collation unit for searching for and retrieving, from the knowledge database, past situation information similar to newly input situation information, and for supplying the past situation information to the emotion control unit; and a data update control unit for updating the contents of the knowledge database on the basis of situation information showing newly input situations and their results, and for automatically erasing from the knowledge database, over time, situation information of low priority in accordance with the weight of the contents.

According to the fifteenth aspect of the invention, situation information showing past situations and their results is stored in the knowledge database. For example, information indicating the situation of a particular episode and its final result is stored. Therefore, past situation information similar to the current situation information can be retrieved from the knowledge database and used to control the emotion. Over time, newly generated information has to be added successively to the knowledge database. However, the storage capacity of the system forming the knowledge database is limited. In addition, as the amount of stored information grows, the processing speed decreases.

However, according to the fifteenth aspect of the invention, situation information of low priority is automatically erased over time from the knowledge database under the control of the data update control unit. Thus a result similar to human forgetting can be implemented, and both shortage of storage capacity and reduction of processing speed can be prevented.

According to the sixteenth aspect of the invention, the sensitivity generator according to the tenth aspect of the invention further comprises: a voice input unit for inputting a voice signal; an intensity detection unit for detecting the intensity of the voice on the basis of the voice signal input by the voice input unit; a tempo detection unit for detecting, as a tempo, the rate at which the voice is produced, on the basis of the voice signal input by the voice input unit; an intonation detection unit for detecting an intonation expressing the pattern of intensity change within a word of the voice, on the basis of the voice signal input by the voice input unit; a change-value detection unit for detecting values of the changes of the voice intensity detected by the intensity detection unit, of the voice tempo detected by the tempo detection unit, and of the intonation detected by the intonation detection unit, respectively; and an emotion detection unit for outputting signals expressing emotional states of at least anger, sadness and pleasure, respectively, on the basis of the values of the changes detected by the change-value detection unit.

According to the sixteenth aspect of the invention, the emotional state of the partner can be detected on the basis of the characteristic values extracted from the voice. Accordingly, it is possible to generate one's own emotion in accordance with the emotion of the partner.

According to the seventeenth aspect of the invention, the sensitivity generator according to the sixteenth aspect of the invention further comprises: a voice recognition unit for recognizing the voice input by the voice input unit and for outputting character information; and a natural language processing unit for performing natural language processing on the character information recognized by the voice recognition unit and for generating semantic information expressing the meaning of the input voice.

According to the seventeenth aspect of the invention, semantic information related to the words spoken by the partner is obtained, and thus the result of understanding the semantic information can be reflected in one's own sensitivity.

According to the eighteenth aspect of the invention, a computer-readable medium contains a program and data executed by a computer to implement a method of controlling the generation of sensitivity, including: a step of generating instinctive motivation information, including at least a first instinct parameter expressing a degree of pleasure, a second instinct parameter expressing a degree of danger, and a third instinct parameter expressing a degree of achievement or change, on the basis of input situation information indicating the emotional state of the partner or the environment in which the partner is located; a step of generating emotional information, including basic emotional parameters of at least pleasure, anger and sadness, on the basis of the generated instinctive motivation information; a step of providing individuality information defining at least the reason and will of the subject that generates the sensitivity; and a step of controlling the generated emotional information on the basis of the individuality information.

The machine-readable medium according to the eighteenth aspect of the invention is loaded into a given computer to execute the programs recorded on it, thereby implementing the method of generating sensitivity according to the ninth aspect of the invention.

Brief description of drawings

The essence, principles and usefulness of this invention will become clearer from the following detailed description with reference to the accompanying drawings, in which like parts are denoted by the same reference numerals and in which:

figure 1 - block diagram of an emotion detection system according to an embodiment;

figure 2 - block diagram of the intonation detection unit;

figure 3 - graph of the relation between changes of emotional state and the intensity, tempo and intonation of the voice;

figure 4 - time charts of voice signal processing in the intonation detection unit;

figure 5 - flowchart of the operation of the forgetting processing unit;

figure 6 - example of the structure of information stored in the emotion and sensitivity memory database;

figure 7 - block diagram of an example system configuration using the sensitivity generator;

figure 8 - block diagram of the instinct information generation unit;

figure 9 - block diagram of the emotional information generation unit;

figure 10 - example of a reaction template model in the emotional reaction pattern database; and

figure 11 - block diagram of the sensitivity and thought recognition unit.

Description of the preferred embodiments

First embodiment

One embodiment of the method for detecting emotions according to this invention is described below with reference to figures 1-6.

Figure 1 shows a block diagram of the emotion detection system according to this embodiment. Figure 2 shows a block diagram of the intonation detection unit. Figure 3 shows a graph of the relation between changes of emotional state and the intensity, tempo and intonation of the voice. Figure 4 shows time charts of voice signal processing in the intonation detection unit. Figure 5 shows a flowchart of the operation of the forgetting processing unit. Figure 6 schematically shows an example of the information stored in the emotion and sensitivity memory database.

As shown in figure 1, the emotion detection system contains: a microphone 11; an analog-to-digital converter 12; a signal processing unit 13; a voice recognition unit 20; an intensity detection unit 17; a tempo detection unit 18; an intonation detection unit 19; a temporary data storage unit 21; an emotion change detection unit 22; a voice emotion detection unit 23; an emotion pattern database 24; a keyboard 25; a phrase recognition unit 26; a television camera 31; an image recognition unit 32; a face template database 33; a face emotion detection unit 34; a character recognition unit 39; an emotion and sensitivity memory database 41; a forgetting processing unit 42; a synchronous processing unit 43; a human nature information database 44; an individuality information database 45; a specialty information database 46; and an emotion recognition unit 60.

In addition, the voice recognition unit 20 is provided with the signal processing unit 13, a phoneme detection unit 14, a word detection unit 15 and a phrase detection unit 16. The voice recognition unit 20 has the functions of a separately available voice recognition (natural language processing) device.

As shown in figure 1, the voice recognition unit 20, the intensity detection unit 17, the tempo detection unit 18, the intonation detection unit 19, the temporary data storage unit 21, the emotion change detection unit 22 and the voice emotion detection unit 23 form the circuits for detecting emotion from the voice.

The emotion detection system contains the microphone 11, the keyboard 25 and the television camera 31 as input units for reading information from the person who is the partner and whose emotions are to be detected. In particular, the emotions of the person who is the partner are detected using the voice input through the microphone 11, the character information entered through the keyboard 25, and the information, including the facial expression, etc., input by the television camera 31.

It should be noted that emotions can be detected on the basis of only the voice input from the microphone 11, only the character information entered from the keyboard 25, or only the facial expression input from the television camera 31. However, from the point of view of improving the accuracy of emotion detection, it is more effective to comprehensively evaluate the information obtained from several information sources.

First, the processing related to the voice is described. The voice signal input from the microphone 11 is sampled by the analog-to-digital converter 12 and converted into a digital signal. The digital voice signal obtained at the output of the analog-to-digital converter 12 is fed to the voice recognition unit 20.

The signal processing unit 13 extracts the frequency components necessary for detecting the intensity of the voice. The intensity detection unit 17 detects the intensity of the signal extracted by the signal processing unit 13. For example, the result obtained by averaging the amplitude of the voice signal can be used as the intensity.

The averaging cycle for detecting the intensity of the voice is set, for example, to 10 seconds. However, when the segmentations of the respective phrases are detected earlier, averaging is performed over the period from the beginning of the phrase to the detection of its segmentation, even though this period is shorter than 10 seconds. In other words, the intensity of the voice is detected for each phrase.
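As a rough illustration of this intensity measure, the following sketch averages the absolute amplitude over at most a 10-second window. The window length and the assumption that `samples` holds exactly one phrase follow the description above; the function name and normalization are hypothetical.

```python
import numpy as np

def phrase_intensity(samples: np.ndarray, rate: int,
                     max_cycle_s: float = 10.0) -> float:
    """Average the absolute amplitude of one phrase over at most
    max_cycle_s seconds (shorter if the phrase ends earlier)."""
    window = samples[: int(max_cycle_s * rate)]
    return float(np.mean(np.abs(window)))
```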

The phoneme detection unit 14, provided in the voice recognition unit 20, detects the segmentation of each phoneme of the voice input into it. For example, when a phrase expressed as "kyou wa ii tenki desune" (in Japanese) is input as voice, segmentation is detected for each phoneme: "kyo/u/wa/i/i/te/n/ki/de/su/ne" (in Japanese).

The word detection unit 15, provided in the voice recognition unit 20, detects the segmentation of each word of the voice input into it. For example, when a phrase expressed as "kyou wa ii tenki desune" (in Japanese) is input as voice, segmentation is detected for each word: "kyou/wa/ii/tenki/desune" (in Japanese).

The phrase detection unit 16, provided in the voice recognition unit 20, detects the segmentation of each phrase of the voice input into it. When a silent state of a certain duration or more is detected, it is considered that a segmentation between phrases is present. A value of about 0.1 to 0.2 s is adopted as the threshold duration of the silent state. In addition, this threshold is not constant and is changed automatically so that it reflects the emotional state detected just before.

The tempo detection unit 18 receives the segmentation signal of each phoneme output by the phoneme detection unit 14 and detects the number of phonemes appearing per unit time. The tempo detection cycle is set to a time equal to, for example, 10 s. However, if a phrase segmentation is detected, the counting of phonemes is stopped at the moment of detection of the phrase segmentation, even when this occurs within the 10 seconds, and the tempo value is calculated. In other words, the tempo is determined for each phrase.
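A minimal sketch of this per-phrase tempo calculation follows, assuming the phoneme detection unit supplies the timestamps (in seconds from the start of the phrase) of the detected phoneme segmentations; the interface and names are illustrative only.

```python
def phrase_tempo(phoneme_times: list, phrase_end_s: float,
                 cycle_s: float = 10.0) -> float:
    """Count phonemes from the start of the phrase up to its segmentation,
    or up to the 10 s cycle, whichever comes first; return phonemes/second."""
    horizon = min(phrase_end_s, cycle_s)
    count = sum(1 for t in phoneme_times if t <= horizon)
    return count / horizon if horizon > 0 else 0.0
```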

The digital signal from the analog-to-digital converter 12 is divided into words using the segmentation detected by the word detection unit 15, and the digital signal is input to the intonation detection unit 19. From the voice input into it, the intonation detection unit 19 detects the intonation, expressing the pattern of intensity change within the word, within the segmentation of each phrase detected by the phrase detection unit 16. Thus, the intonation detection unit 19 detects the characteristic intensity pattern within the segmentation.

As shown in figure 2, the intonation detection unit 19 contains a bandpass filter 51, an absolute value conversion unit 52, a comparison unit 53, a zone center detection unit 54 and a zone interval detection unit 55. Examples of the waveforms of the signals SG1, SG2, SG3 and SG4 at the corresponding input and output terminals of the intonation detection unit 19 are shown in figure 4. It should be noted that the ordinate for each signal in figure 4 shows amplitude or intensity. In addition, in the examples shown in figure 4, the length of one word extracted from the voice is about 1.2 s.

The bandpass filter 51 extracts from the signal SG1 input into it only the frequency components required for detecting the intonation. In this embodiment, only frequency components within the range from 800 to 1200 Hz appear at the output of the bandpass filter 51 as the signal SG2. As shown in figure 4, the pattern of intensity changes caused by the intonation of the word is found in the signal SG2.

To simplify the signal computation process, the absolute value conversion unit 52 is provided in the intonation detection unit 19. The absolute value conversion unit 52 converts the amplitude of the input signal into its absolute value. Accordingly, the signal SG3 shown in figure 4 appears at the output of the absolute value conversion unit 52.

The comparison unit 53 compares the value of the signal SG3 with a threshold value and outputs the components exceeding the threshold value as the signal SG4. In other words, the comparison unit 53 outputs only the components with large values in the power spectrum of the signal SG3. The threshold value used in the comparison unit 53 is determined appropriately using a technique called decision analysis.

As shown in figure 4, two zones A1 and A2, corresponding to the intonation pattern in the spoken voice, appear in the signal SG4. The zone center detection unit 54 detects the points in time t1 and t2 at which the centers of the two zones A1 and A2 appear.

The zone interval detection unit 55 detects, as the zone interval, the time difference between the two times t1 and t2 detected by the zone center detection unit 54. The magnitude of this zone interval corresponds to the intonation pattern in the spoken voice. The result obtained by averaging the values of the zone intervals is used as the intonation value.

Three or more zones may appear in the signal SG4 within one word. When there are three or more zones, the zone intervals are calculated for each pair of adjacent zones, and the result obtained by averaging the several zone intervals is used as the intonation value.
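Putting the stages above together, the following sketch mirrors the SG1 -> SG2 -> SG3 -> SG4 chain for a single word. The 800-1200 Hz band comes from the description; the filter order, the simple 50% threshold (the patent determines its threshold by decision analysis) and all names are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def intonation_value(word: np.ndarray, rate: int) -> float:
    """Return the averaged interval (seconds) between the centers of the
    zones where the 800-1200 Hz power exceeds a threshold."""
    b, a = butter(4, [800, 1200], btype="band", fs=rate)   # SG1 -> SG2
    sg2 = filtfilt(b, a, word)
    sg3 = np.abs(sg2)                                      # SG2 -> SG3
    threshold = 0.5 * sg3.max()                            # hypothetical
    mask = sg3 > threshold                                 # SG3 -> SG4
    # Collect the centers of contiguous above-threshold zones (A1, A2, ...).
    centers, start = [], None
    for i, m in enumerate(mask):
        if m and start is None:
            start = i
        elif not m and start is not None:
            centers.append((start + i - 1) / 2.0)
            start = None
    if start is not None:
        centers.append((start + len(mask) - 1) / 2.0)
    if len(centers) < 2:
        return 0.0
    intervals = np.diff(centers) / rate   # time between adjacent centers
    return float(np.mean(intervals))
```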

The emotional state of a person changes, for example, as shown in figure 3. In addition, to correctly grasp emotions, including anger, sadness, pleasure and the like, it is necessary to detect the changes of the characteristic values, such as intensity, tempo and intonation.

In the emotion detection system shown in figure 1, to make it possible to refer to past characteristic values, the intensity value output by the intensity detection unit 17, the tempo value output by the tempo detection unit 18 and the intonation value output by the intonation detection unit 19 are temporarily stored in the temporary data storage unit 21.

In addition, the emotion change detection unit 22 receives the current intensity value output by the intensity detection unit 17, the current tempo value output by the tempo detection unit 18 and the current intonation value output by the intonation detection unit 19. The emotion change detection unit 22 also receives the previous (slightly earlier) values of the intensity, tempo and intonation stored in the temporary data storage unit 21. Thus, the emotion change detection unit 22 detects the changes of the intensity, tempo and intonation, respectively.

The voice emotion detection unit 23 receives the changes of intensity, tempo and intonation output by the emotion change detection unit 22 and estimates the current emotional state. In this embodiment, the voice emotion detection unit 23 estimates three states, namely anger, sadness and pleasure, as the emotional state.

The emotion pattern database 24 stores in advance information associating the state of anger with patterns of change of intensity, tempo and intonation; information associating the state of sadness with patterns of change of intensity, tempo and intonation; and information associating the state of pleasure with patterns of change of intensity, tempo and intonation.

The voice emotion detection unit 23 estimates the current emotional state on the basis of the patterns of intensity change, tempo change and intonation change output by the emotion change detection unit 22, using the information held in the emotion pattern database 24 as the evaluation criterion.

The information expressing the three types of states, including anger, sadness and pleasure, estimated by the voice emotion detection unit 23, is input to the emotion recognition unit 60 and to the emotion and sensitivity memory database 41. The emotion and sensitivity memory database 41 sequentially receives and stores the current emotional states input from the voice emotion detection unit 23.

Accordingly, past emotional states can be reproduced by reading out the information stored in the emotion and sensitivity memory database 41.

At the same time, the content of the phrases input from the microphone 11 as voice (the content of the partner's speech) is recognized by the phrase recognition unit 26. The character information corresponding to the respective phonemes recognized by the voice recognition unit 20, and the information expressing the word segmentation and the phrase segmentation, are input to the phrase recognition unit 26. In addition, character information entered through the keyboard 25 is also input to the phrase recognition unit 26.

The phrase recognition unit 26 recognizes the input character string word by word and analyzes the syntax to grasp the content of the phrase as natural speech. In fact, the phrase recognition unit 26 recognizes speech information expressing, for example, "5W3H", that is, "who", "what", "when", "where", "why", "how", "how long (how far, how large, etc.)" and "how much". The speech information recognized by the phrase recognition unit 26 is input to the emotion recognition unit 60.

Next, the process of detecting emotion on the basis of the partner's facial expression is described. The television camera 31 photographs at least the face of the person who is the subject of the emotion detection system shown in figure 1. The image captured by the television camera 31, that is, the image including the facial expression of the person, is input to the image recognition unit 32.

It should be noted that the image captured by the television camera 31 is also input to the character recognition unit 39. In particular, the character recognition unit 39 recognizes the characters of a sentence from the captured image when an image of a sentence is photographed by the television camera 31. The character information detected by the character recognition unit 39 is input to the phrase recognition unit 26.

The image recognition unit 32 recognizes the characteristic elements of the input image. Namely, the image recognition unit 32 recognizes the parts corresponding to the eyes, mouth, eyebrows and cheekbones in the subject's face, and detects the respective relative positions of the eyes, mouth, eyebrows and cheekbones in the face. In addition, the image recognition unit 32 continuously tracks the respective positions of the eyes, mouth, eyebrows and cheekbones in order to detect the changes of their positions as the facial expression changes, and to detect movements such as nodding of the head.

Information about the reference positions of the eyes, mouth, eyebrows and cheekbones in the face (information equivalent to the subject's facial expression in his normal state) is stored in advance in the face template database 33. It should be noted that the contents of the face template database 33 can also be changed arbitrarily. In addition, rule information expressing the correspondence between changes of facial expression and six types of emotions (pleasure, anger, sadness, fear, joy and surprise) is stored in advance in the face template database 33.

The face emotion detection unit 34 detects the characteristic values of the facial expression, that is, the differences from the normal state, on the basis of the information about the positions of the eyes, mouth, eyebrows and cheekbones recognized by the image recognition unit 32 and the reference positions stored in the face template database 33.

In addition, the face emotion detection unit 34 estimates the states of the six types of emotions (pleasure, anger, sadness, fear, joy and surprise) on the basis of the magnitudes and rates of change of the detected characteristic values and the criterion information held in the face template database 33. The information expressing the estimated states of the six types of emotions is output by the face emotion detection unit 34 and input to the emotion recognition unit 60 and to the emotion and sensitivity memory database 41.

The emotion recognition unit 60 comprehensively evaluates the information expressing the emotional state (anger, sadness or pleasure) input from the voice emotion detection unit 23, the speech information input from the phrase recognition unit 26, and the information expressing the emotional state (pleasure, anger, sadness, fear, joy or surprise) input from the face emotion detection unit 34. The emotion recognition unit 60 then estimates the final emotional state. As for the speech information, the emotional state (pleasure, anger, sadness, fear, joy or surprise) contained in it can be estimated by determining the content (5W3H) of the phrases of the speech in accordance with predetermined rules.

The information expressing the emotional state estimated from the voice by the voice emotion detection unit 23, the information related to the content of the speech recognized by the phrase recognition unit 26 from the voice or from characters entered through the keyboard 25, and the information expressing the emotional state estimated from the facial expression by the face emotion detection unit 34 are each input to the emotion and sensitivity memory database 41 and sequentially stored in it. The time and date of storage of the respective portions of information in the emotion and sensitivity memory database 41 are determined and added to the information.

Among the information entered into the emotion and sensitivity memory database 41, the information related to emotions input from the voice emotion detection unit 23, the information related to the speech content input from the phrase recognition unit 26, and the information related to emotions input from the face emotion detection unit 34 must be considered in association with one another.

Accordingly, the synchronous processing unit 43 associates the several types of data stored in the emotion and sensitivity memory database 41 with one another in accordance with the time (input time) and date of detection of these portions of information. For example, the information expressing the emotional states, including anger, sadness and pleasure, estimated by the voice emotion detection unit 23, and the information related to the speech content (5W3H) are linked to each other according to their times, as shown in figure 6.
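One simple way to realize such time-based association is sketched below; the record layout (timestamped tuples) and the matching tolerance are assumptions, since figure 6 shows only that emotion records and speech (5W3H) records are linked by their times.

```python
from bisect import bisect_right

def attach_speech(emotions, speech, window_s=5.0):
    """Link each emotion record (t, state) with the speech record
    (t, content_5w3h) nearest in time, if within window_s seconds.
    Both lists are assumed sorted by timestamp."""
    times = [t for t, _ in speech]
    out = []
    for t, state in emotions:
        i = bisect_right(times, t)
        best = None
        for j in (i - 1, i):                 # neighbors around insertion point
            if 0 <= j < len(times):
                if best is None or abs(times[j] - t) < abs(times[best] - t):
                    best = j
        ok = best is not None and abs(times[best] - t) <= window_s
        out.append((t, state, speech[best][1] if ok else None))
    return out
```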

For this purpose, the emotion and sensitivity memory database 41 has a storage capacity sufficient to hold a relatively large amount of information. However, since the storage capacity is limited, the amount of information to be stored must be restricted so that the system can be used continuously for a long time.

For this purpose, the forgetting processing unit 42 is provided. The forgetting processing unit 42 automatically erases old information from the emotion and sensitivity memory database 41. However, information corresponding to special conditions is not erased and is retained even when it becomes old.

The operation of the forgetting processing unit 42 is described below with reference to figure 5.

At step S11 shown in figure 5, for each of the numerous data items stored in the emotion and sensitivity memory database 41, the information about the time and date when the data was stored (or detected) is referenced.

At step S12, it is determined, on the basis of the current time and the time referenced at step S11, whether or not a predetermined period of time has passed since the corresponding data was stored. In the case of old data that has been stored for the predetermined period of time since the moment of its storage, the process proceeds to step S13 and the subsequent steps. Relatively new data that has not yet been stored for the predetermined period of time continues to be stored as it is.

At step S13, when the data is information expressing an emotional state, the magnitude of the change of the information (the difference of the emotion before and after the event) is examined. The process goes from step S13 to step S17 when the change of emotion exceeds a predetermined threshold value; the data is then stored as it is, even though it is old. When the magnitude of the emotion change is equal to or less than the threshold value, the process proceeds from step S13 to step S14.

At step S14, the emotion pattern related to the data is examined, and it is determined whether or not this pattern matches a predetermined specific pattern. In particular, it is examined whether or not one of several combinations of emotional states and speech contents matches specific patterns representing strongly impressive states. The process goes from step S14 to step S17 when the detected pattern matches the specific pattern; the data is then stored as it is, even though it is old. When the patterns do not match, the process proceeds from step S14 to step S15.

At step S15, when the data is speech content, it is determined whether or not the content matches predetermined speech content (strongly impressive speech). Even when the two contents do not match completely, they can be regarded as matching when the similarity between them is high. The process proceeds from step S15 to step S17 when the speech content of the data matches the predetermined speech content; the data is then stored as it is, even though it is old.

When the two contents do not match at step S15, the corresponding data is erased at step S16.

The above process is performed for all the data in the emotion and sensitivity memory database 41. In addition, the forgetting process shown in figure 5 is performed again periodically. The execution cycle of the forgetting process can be changed arbitrarily depending on the individuality of the individual. It should be noted that the processing at steps S14 and S15 is performed using a pattern database prepared in advance (not illustrated). The contents of this pattern database are updated automatically by inputting learning information into it.

Figure 5 shows a simplified process. In fact, the change of emotion, the emotion pattern and the speech content are all determined comprehensively. In particular, when there is information in which the change of emotion is large, information in which the emotion pattern matches the specific pattern, and information whose speech content is the same as or similar to the predetermined speech content, their priorities are determined comprehensively. Namely, information whose speech content is the same as or similar to the predetermined speech content receives the highest priority; information in which the emotion pattern matches the specific pattern receives the second highest priority; and information in which the change of emotion is large receives the lowest priority. Accordingly, the probability that information whose speech content is the same as or similar to the predetermined speech content is erased by the forgetting process is zero, and it remains in memory even when it becomes old.
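The retention logic of figure 5 can be summarized in the following sketch. The concrete retention period, the emotion change threshold and the exact-match tests are placeholders, since the description fixes only the order of the checks, not their parameters.

```python
import time

# Hypothetical thresholds; figure 5 fixes only the order of the checks.
RETENTION_S = 30 * 24 * 3600          # data younger than this is always kept
EMOTION_DELTA_THRESHOLD = 0.5

def forget(records, strong_patterns, strong_contents, now=None):
    """One pass of the forgetting process (steps S11-S16): old records are
    erased unless the emotion change is large (S13), the emotion pattern is
    strongly impressive (S14), or the speech content is (S15)."""
    now = now or time.time()
    kept = []
    for rec in records:                                   # S11
        if now - rec["saved_at"] < RETENTION_S:           # S12: still new
            kept.append(rec)
        elif rec.get("emotion_delta", 0.0) > EMOTION_DELTA_THRESHOLD:  # S13
            kept.append(rec)                              # -> S17: keep
        elif rec.get("pattern") in strong_patterns:       # S14
            kept.append(rec)
        elif rec.get("speech") in strong_contents:        # S15 (exact match
            kept.append(rec)                              # for simplicity)
        # else: S16, the record is erased (not appended)
    return kept
```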

As for the old data in the database 41 data memory, emotions and sensitivity, then only the data in which the change of emotions is large, data, ilusionismo, which is regarded as highly expressive data entered again several times, and the data in which the content of speech is highly expressive, added with priorities in accordance with their strength and the content are saved as they are by the processing unit 42 of the process of forgetting. Therefore, the old data in the database 41 data memory, emotions and sensitivity become incomplete data, with only the remaining part. Such data have content similar to the last vague memory in the memory of man.

Past emotional state and past the contents of the speech that are stored in the database 41 data memory, emotions and sensitivity, read to perform their analysis, which provides, for example, determine whether the detection of emotions is normal, and to update the databases of the respective blocks used to assess emotions, with the aim of improving their content.

The data stored in the emotion and sensitivity memory database 41 are further sorted according to their contents and stored in the humanity information database 44, the individuality information database 45 or the speciality information database 46.

The humanity information database 44 stores information that determines the character of the subject, such as sex, age, aggressiveness, cooperativeness and current emotion, together with information related to decisions to perform actions. The individuality information database 45 stores information such as the address of the current situation, the current environment and the speech content (5W3H). The speciality information database 46 stores information such as occupation, career, occupational aptitude and occupational decision-making criteria.

What is output from the humanity information database 44, the individuality information database 45 and the speciality information database 46 is the moral-model information of the individual. The partner's sensitivity can be perceived on the basis of this moral-model information and the partner's past emotions.

When the functions of the emotion detection system shown in figure 1 are realized with computer software, it is sufficient to record, in advance, a program to be executed by the computer and the necessary data on a recording medium such as a CD-ROM.

It should be noted that the microphone 11 shown in figure 1 may be replaced by a receiver or a telephone, and that a mouse may be provided as a unit for inputting information such as characters.

In addition, the television camera 31 shown in figure 1 may be replaced by any other image-capturing unit, such as an optical camera, a digital camera or a CCD camera.

Human emotions can be detected more accurately than before by using the emotion detection method described above.

Second embodiment

Below is a description of the method of generating sensitivity according to this invention, with reference to Figs.7-11.

Fig.7 shows a block diagram of a system that uses the sensitivity generator. Fig.8 shows a block diagram of the instinctive information generating unit. Fig.9 shows a block diagram of the emotional information generating unit. Fig.10 schematically shows an example of a reaction pattern model in the emotional reaction pattern database. Fig.11 shows a block diagram of the sensitivity and thought recognition unit.

The system shown in Fig.7 is constructed on the assumption that a natural and sensible dialogue is realized between an arbitrary human and a computer (virtual human). In this example, an emotion detection system 200 is provided to detect the emotions of the human who is the computer's partner, and a sensitivity generator 100 is provided to reflect the individuality and sensitivity of the computer itself in the dialogue.

In addition, an environmental information input device 300 is provided to input various kinds of environmental information to the sensitivity generator 100. The environmental information input device 300 supplies information relating to, for example, date, time, weather, location and image.

The sensitivity generator 100 can also be used in systems that operate in a standalone manner. For example, when information related to a prepared scenario is input to the sensitivity generator 100, a reaction in accordance with that scenario can be obtained at the output of the sensitivity generator 100. In that case the emotion detection system 200 is not needed.

Although the devices required for realizing the dialogue are connected to the output of the sensitivity generator 100 in the example shown in Fig.7, the sensitivity data generated by the sensitivity generator 100 can be used for various purposes.

For example, when the sensitivity generator 100 is used in data communication, there is no need to output voice, since it may be sufficient to output character information. In addition, the sensitivity data generated by the sensitivity generator 100 can also be reflected in images, music, information retrieval and machine control.

The sensitivity generator 100 is described below. Since this embodiment assumes that the emotion detection system 200 is the emotion detection system shown in figure 1 and described above, its description is not repeated.

In practice, the system shown in Fig.7 may consist of a computer system and software executed on it, or may be implemented as dedicated hardware. The software and data to be used may be stored in advance on an arbitrary recording medium and read into the computer from the recording medium for execution. It should be noted that in the following description the system shown in Fig.7 is simply called a computer.

Roughly divided, two types of data, i.e. data D1 and data D2, are supplied to the input of the sensitivity generator 100. Data D1 is information expressing the partner's emotion. Data D2 is semantic information that has undergone natural language processing and includes information related to the partner's will, situation and environment. Through natural language processing, data D2 is input as information expressing "5W3H", that is, "who", "what", "when", "where", "why", "how", "how long/how far/how large, etc." and "how much".

In practice, the following various kinds of information can be used as input to the sensitivity generator 100.

(A) Patterns of vocal change related to temporal properties, which include stress, rhythm, tempo, pause, musicality, musical interval, melody, harmony, frequency, and so on; and the degrees of the basic emotions (anger, pleasure, sadness, disgust, surprise, fear, and so on).

(B) Information relating to the tonal properties of speech, including accent, depth, intensity, brightness, roughness, tone color (JIS-Z8109), formants, intonation, prominence emphasizing a particular part of speech for clarification, etc.

(C) Distribution of stresses within word and segment content, accent within a phrase, suprasegmental characteristic information, and characteristic information generated by artificial intelligence, all of which belong to the properties of stress.

(D) Text information subjected to discourse analysis, episode information (including semantic information and information recognized by artificial intelligence), etc.

Among these pieces of information, information (A) and information (B) are affected by the speaker's intention and emotion. This emotion can be detected by the emotion detection system 200.
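The two input streams can be represented as plain records. The sketch below is illustrative only: the field names are hypothetical, the emotion fields mirror the basic emotions listed under (A), and the 5W3H slots follow the enumeration given above.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class EmotionData:              # data D1: the partner's emotion state
        anger: float = 0.0
        pleasure: float = 0.0
        sadness: float = 0.0
        disgust: float = 0.0
        surprise: float = 0.0
        fear: float = 0.0

    @dataclass
    class SemanticData:             # data D2: 5W3H semantic information
        who: Optional[str] = None
        what: Optional[str] = None
        when: Optional[str] = None
        where: Optional[str] = None
        why: Optional[str] = None
        how: Optional[str] = None
        how_much: Optional[str] = None
        extent: Optional[str] = None  # how long / how far / how large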

As shown in Fig.7, the sensitivity generator 100 contains an instinctive information generating unit 110, a metrical pattern database 121, an instinct language defining dictionary 122, an emotional information generating unit 130, an emotional reaction pattern database 141, a temporary storage database 142, a sensitivity and thought recognition unit 150, a knowledge database 161, a sensitivity database 162, an individuality database 163 and a moral hazard database 164.

The functions of the sensitivity generator 100 can be broadly divided into three functional elements: the instinctive information generating unit 110, the emotional information generating unit 130 and the sensitivity and thought recognition unit 150. The instinctive information generating unit 110 is described first.

As shown in Fig.8, the instinctive information generating unit 110 contains a metrical pattern matching recognition unit 111, an instinct parameter generating unit 112 and a dictionary search unit 113.

A dictionary of metrical patterns for the computer (virtual human) is stored in advance in the metrical pattern database 121, which serves the metrical pattern matching recognition unit 111. Meter, the rhythmic element of speech, comprises phonetic and phonological characteristics that appear in syllables, words, phrases, sentences and whole utterances (continuous voice longer than a word). In particular, pattern information of the computer itself, equivalent to the input information (A) and (B), is stored as individuality information in the metrical pattern database 121.

The metrical pattern matching recognition unit 111 compares the partner's emotion analysis data D1, input from the emotion detection system 200, with the metrical patterns stored in the metrical pattern database 121, and detects the degree of synchronization and matching between the two. Information expressing the presence of strong tones and emotional changes appears at the output of the metrical pattern matching recognition unit 111.

Meanwhile, information related to the stimulation of instinct is registered in advance in the instinct language defining dictionary 122. Namely, various pieces of information expressing stress placement patterns and suprasegmental characteristics in words or phrases, which belong to the properties of stress, are stored as a dictionary in association with instinct stimulation.

The dictionary search unit 113 compares the data D2, input in the form of character information (the partner's will and situation), with the contents of the instinct language defining dictionary 122, and generates instinctive reaction information from the content of the conversation.

The instinct parameter generating unit 112 generates instinctive motivation information D4 on the basis of the information input from the metrical pattern matching recognition unit 111, the information input from the dictionary search unit 113, and data D3. Data D3 is information fed back from the output of the sensitivity generator 100 and carries episode patterns and the desired reaction proposed by the computer itself.

In this example, the instinctive motivation information D4 includes six instinct parameters: the degree of certainty (or degree of confusion); the degree of pleasure (or degree of displeasure); the degree of danger (or degree of safety); the degree of attention (or degree of rejection); the degree of achievement (or degree of change); and the degree of persistence (or degree of self-confidence). The instinct parameter generating unit 112 determines the values of these instinct parameters as follows.

Degree of pleasure (degree of displeasure): when the input comes close to the content proposed by the computer or to the episode situation it desires, the degree of pleasure increases; otherwise it decreases. In addition, when the input comes close to a meter defined in advance as pleasant, the degree of pleasure increases; otherwise it decreases.

Degree of danger (degree of safety): when the input comes close to content regarded in advance as dangerous or to an episode situation assumed to be dangerous, the degree of danger increases; otherwise it decreases. In addition, when the input comes close to a meter defined in advance as dangerous, the degree of danger increases; otherwise it decreases.

Degree of achievement (degree of change): when the input comes close to content defined in advance as success/achievement or to an episode situation assumed in advance to be successful/achieved, the degree of achievement increases; otherwise it decreases. In addition, when the input comes close to a meter regarded as radically modulated, the degree of change increases; otherwise it decreases.

Degree of attention (degree of rejection): when the input comes close to content regarded in advance as rejected/denied or to an episode situation previously rejected/denied, the degree of rejection increases; otherwise it decreases. In addition, when a strong or repeated assertion is detected, or when the input approaches an extreme meter, the degree of attention increases. When the input comes close to a meter defined as unpleasant, the degree of rejection increases.

Degree of persistence (degree of self-confidence): when the input comes close to content predetermined as self-denial/self-negation or to an episode situation assumed to involve self-denial/self-negation, the degree of persistence increases (the degree of self-confidence decreases). When content defined in advance as good appears, the degree of self-confidence increases (the degree of persistence decreases). In addition, when a meter defined in advance as uncertain is encountered, the degree of self-confidence increases. It should be noted that when the input approaches an extreme meter, the degree of disgust or the degree of self-denial may sometimes increase.

Degree of certainty (degree of confusion): when the input is ambiguous content or an episode situation that cannot be anticipated, and the degree of recognition of the various instinct-related stimuli (inputs) is low (for example, 70% or less), the degree of confusion manifests itself in inverse proportion to the degree of recognition. The degree of recognition is determined from the tone of voice and the content of the conversation.
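The update rules above all move an instinct parameter toward or away from references that are predefined as part of the computer's individuality. A minimal sketch follows; the closeness scores are a hypothetical helper standing in for the patent's unspecified matching, the step size is arbitrary, and only the 70% recognition threshold is taken from the text.

    def step(value, closer, rate=0.1):
        # Move the parameter up when the input is close to the reference,
        # down otherwise; clamp to [0, 1].
        value += rate if closer else -rate
        return min(1.0, max(0.0, value))

    def update_instincts(p, closeness, recognition):
        # p: the six instinct parameters (dict of floats in [0, 1]).
        # closeness: hypothetical scores in [0, 1] measuring how close
        # the input is to the computer's own reference contents/meters.
        # recognition: overall degree of recognition of the stimuli.
        p["pleasure"]    = step(p["pleasure"],    closeness["desired"] > 0.5)
        p["danger"]      = step(p["danger"],      closeness["dangerous"] > 0.5)
        p["achievement"] = step(p["achievement"], closeness["achieved"] > 0.5)
        p["attention"]   = step(p["attention"],   closeness["rejected"] > 0.5)
        p["persistence"] = step(p["persistence"], closeness["self_denial"] > 0.5)
        # Confusion appears in inverse proportion to recognition when the
        # degree of recognition is low (e.g. 70% or less).
        p["confusion"] = (1.0 - recognition) if recognition <= 0.7 else 0.0
        return p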

To realize the above control, the content desired by the computer, the meters and the episode situations are determined in advance as part of its individuality. As described above, the information about the partner's emotions stimulates the computer's individual instincts, thereby changing the values of the corresponding instinct parameters.

The instinctive motivation information D4 output from the instinctive information generating unit 110 is supplied to the emotional information generating unit 130. The emotional information generating unit 130 is described below.

As shown in Fig.9, the emotional information generating unit 130 contains a reaction pattern search unit 134, a learning processing unit 135, a multivariate analysis unit 136, a voluntary emotion control unit 137 and a basic emotion parameter generating unit 133.

The reaction pattern search unit 134, the learning processing unit 135 and the emotional reaction pattern database 141 constitute a response system 131. The multivariate analysis unit 136 and the voluntary emotion control unit 137 constitute an operant system 132.

The response system 131 is provided for generating emotions induced by stimuli. The operant system 132 is provided for generating voluntary emotions (libido).

Information on reaction pattern models representing the relationship between the instinctive motivation information D4 and the basic emotion parameters is stored in advance in the emotional reaction pattern database 141 for use by the response system 131. A reaction pattern model may be depicted, for example, as shown in figure 10.

When the individualities of many people are to be selectively reproduced on the same computer, reaction pattern models corresponding to each of those people, or to each type of individuality, are registered in advance in the emotional reaction pattern database 141, and a reaction pattern model can then be selected in accordance with the individuality of the person chosen.

This example assumes that the instinctive motivation information D4 input to the unit includes the six instinct parameters described above: the degree of certainty (or degree of confusion); the degree of pleasure (or degree of displeasure); the degree of danger (or degree of safety); the degree of attention (or degree of rejection); the degree of achievement (or degree of change); and the degree of persistence (or degree of self-confidence).

The basic emotion parameters output from the emotional information generating unit 130 are the following fifteen types. The terms in parentheses indicate the instinct parameters that give rise to each basic emotion parameter.

1. Anger (displeasure)

2. Joy/cheerfulness (pleasure)

3. Sadness (lack of achievement/stagnation/displeasure)

4. Surprise (achievement/impact)

5. Fear (danger/tension)

6. Suffering (danger/tension/pain)

7. Disgust (rejection/denial/displeasure)

8. Contempt (rejection/powerlessness)

9. Approach (pleasure/safety)

10. Escape/avoidance (danger/tension/pain)

11. Jealousy (displeasure/anger/envy/attention)

12. Positiveness (safety/pleasure/certainty)

13. Dependence (achievement/following)

14. Irritation/conflict (rejection/stagnation/pain/danger)

15. Anxiety (danger/tension/surprise/confusion)

Reaction patterns representing the relationship with one or more instinct parameters are stored for each of the fifteen types of basic emotion parameters in the emotional reaction pattern database 141.

The reaction pattern search unit 134 searches the emotional reaction pattern database 141 for the reaction patterns of the basic emotion parameters, examines their agreement/disagreement with the input instinctive motivation information D4, and outputs the matching basic emotion parameters as data D6.
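This search can be pictured as scoring each stored reaction pattern against the incoming instinct vector. The sketch below is an assumption-laden illustration: the example profiles, the cosine-style agreement score and the activation threshold are not from the patent.

    import math

    # Hypothetical reaction patterns: an expected instinct profile per
    # basic emotion (one entry for each of the fifteen types).
    REACTION_PATTERNS = {
        "anger": {"pleasure": 0.1, "danger": 0.6, "attention": 0.8},
        "joy":   {"pleasure": 0.9, "danger": 0.1, "attention": 0.3},
        "fear":  {"pleasure": 0.1, "danger": 0.9, "attention": 0.5},
        # ... remaining basic emotions
    }

    def agreement(pattern, instincts):
        # Cosine-style agreement between a stored profile and D4.
        num = sum(v * instincts.get(k, 0.0) for k, v in pattern.items())
        den = (math.sqrt(sum(v * v for v in pattern.values()))
               * math.sqrt(sum(v * v for v in instincts.values())) or 1.0)
        return num / den

    def search_reaction_patterns(instincts, threshold=0.6):
        # Return the matching basic emotion parameters (data D6).
        scores = {e: agreement(p, instincts)
                  for e, p in REACTION_PATTERNS.items()}
        return {e: s for e, s in scores.items() if s >= threshold}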

The learning processing unit 135 learns the probability of pattern matching on the basis of the data D3 output by the sensitivity and thought recognition unit 150 and the subsequent reactive emotions output by the reaction pattern search unit 134, and modifies the contents of the emotional reaction pattern database 141 in accordance with the learning result.

Meanwhile, environmental information (D2), including, for example, weather information, season information, time information and the like, is input to the operant system 132. The multivariate analysis unit 136 performs multivariate analysis of the various pieces of input environmental information (D2) and outputs life rhythm information accordingly.

The life rhythm information comprises regular (sinusoidal) rhythms with fixed cycles, such as a short-period rhythm (for example, a 1-hour cycle), a daily rhythm (for example, a 24-hour cycle), a long-period emotional rhythm (for example, a 28-day cycle), a long-period physical rhythm (for example, a 23-day cycle) and an intellectual rhythm (for example, a 33-day cycle), as well as irregular rhythms such as those of temperature, humidity and weather.

The voluntary emotion control unit 137 outputs voluntary emotions (libido) in accordance with the life rhythm information output by the multivariate analysis unit 136, with a probability within a predetermined range.
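The sinusoidal rhythms and the probabilistic output lend themselves to a direct sketch. The cycle lengths below are those cited in the text; the equal weighting, the combination rule and the probability bound are assumptions.

    import math
    import random

    # Cycle lengths in hours, as cited in the text.
    CYCLES = {
        "short": 1,                # short-period rhythm
        "daily": 24,               # daily rhythm
        "emotional": 28 * 24,      # long-period emotional rhythm
        "physical": 23 * 24,       # long-period physical rhythm
        "intellectual": 33 * 24,   # intellectual rhythm
    }

    def life_rhythm(t_hours):
        # Combined sinusoidal life rhythm in [-1, 1]; equal weighting
        # of the cycles is an assumption.
        return sum(math.sin(2 * math.pi * t_hours / c)
                   for c in CYCLES.values()) / len(CYCLES)

    def voluntary_emotion(t_hours, max_prob=0.3):
        # Emit a voluntary emotion (libido) with a probability that
        # follows the rhythm, bounded by max_prob (assumed figure).
        p = max_prob * (life_rhythm(t_hours) + 1.0) / 2.0
        return "libido" if random.random() < p else None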

The basic emotion parameter generating unit 133 outputs, as self-emotion information D5, the result of a comprehensive evaluation of the information on the basic emotion parameters and the degree of matching output by the response system 131 and the voluntary emotions output by the operant system 132. In this case, the result is information composed of the fifteen types of basic emotion parameters.

In addition, the output self-emotion information D5 is temporarily stored in the temporary storage database 142 and fed back to the input of the basic emotion parameter generating unit 133. The basic emotion parameter generating unit 133 receives the feedback information from the temporary storage database 142 as its own immediately preceding emotion and reflects it in the next determination of emotion.

When the basic emotion parameter generating unit 133 performs the comprehensive determination, it decides the priorities and degrees of influence of the respective units in accordance with the individuality defined by the individuality information 143.

For example, when the emotions of an impulsive type are reproduced, the degree of influence of the response system 131 is increased (80% or more), and the influence of the immediately preceding self-emotion is also increased. When the emotions of a deliberate type are reproduced, the degree of influence of the response system 131 is reduced (30% or less), and the influence of the immediately preceding self-emotion is also reduced, provided that the output of the operant system 132 is stable.
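The comprehensive evaluation can be viewed as a weighted blend of the three sources. In the following sketch the emotions are dictionaries over the fifteen basic parameters; the 80% and 30% figures come from the text, while the remaining weights and the blending rule itself are illustrative.

    def blend_emotions(reactive, voluntary, previous, personality="impulsive"):
        # reactive: basic emotion parameters from the response system 131
        # voluntary: voluntary emotions from the operant system 132
        # previous: the immediately preceding self-emotion (feedback)
        if personality == "impulsive":
            w_react, w_prev = 0.8, 0.15   # "80% or more" from the text
        else:                             # deliberate type
            w_react, w_prev = 0.3, 0.05   # "30% or less" from the text
        w_vol = 1.0 - w_react - w_prev
        keys = set(reactive) | set(voluntary) | set(previous)
        return {k: w_react * reactive.get(k, 0.0)
                  + w_vol * voluntary.get(k, 0.0)
                  + w_prev * previous.get(k, 0.0)
                for k in keys}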

The self-emotion information D5 output from the emotional information generating unit 130 is input to the sensitivity and thought recognition unit 150. As shown in figure 11, the sensitivity and thought recognition unit 150 includes a weighting processing unit 151, a collation processing unit 152, a multivariate analysis unit 153, a comprehensive intuitive decision unit 154 and an updating processing unit 156.

The weighting processing unit 151 assigns a weighting coefficient to the input self-emotion information D5 in accordance with the individuality information 155. The weighted self-emotion information is output from the weighting processing unit 151.

Meanwhile, character information (5W3H), including the episode representing the environment and situation in which the partner is located, the partner's will and the result, is input as data D2 to the collation processing unit 152.

Past episodes with their results, and semantic information expressing their meanings, are stored as knowledge in the form of character information (5W3H) in the knowledge database 161, which serves the collation processing unit 152. In addition, the pieces of knowledge in the knowledge database 161 include information about the time when the relevant data were obtained, and they are arranged in time sequence.

In this example, the pieces of knowledge in the knowledge database 161 can be classified into long-term memory, declarative memory and procedural memory. Declarative memory is memory stored in words; it holds episode information as events in a specific temporal/spatial context, and semantic information as general knowledge. Procedural memory is memory related to methods and techniques.

Episode information includes time, place, contents, will (agreement, disagreement, preference and the like), person, quantity, weight, situation, state, the partner's personal information, emotion, intention (object), relationship, personal relations and the like. Semantic information is equivalent to a language dictionary and a sensitivity dictionary. Personal information covers temperament, character, emotionality, social adaptability (sociability), desires, conflicts, attitudes, superiority complexes, interests, propriety, morality, way of thinking, emotional characteristics, persistence (and its degree), taboo words, tastes, criteria of good/evil, etc.

In this example, knowledge information is stored in the knowledge database 161 in accordance with grammatical rules, as described below. However, the content making up the database varies according to the object.

Story = scene + theme + plot + resolution

Scene = characters + place + time

Theme = (event) + goal

Plot = episodes

Episode = subgoal + attempt + result

Attempt = event + episode

Result = event + state

Resolution = event + state

Subgoal, goal = desired state

Characters, place, time = state
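These productions map naturally onto record types. The sketch below simply mirrors the grammar; the choice of optional fields and lists is an assumed representation, not part of the patent.

    from dataclasses import dataclass
    from typing import List, Optional, Union

    @dataclass
    class Event:
        description: str

    @dataclass
    class State:
        description: str

    @dataclass
    class Scene:                   # Scene = characters + place + time
        characters: List[str]
        place: str
        time: str

    @dataclass
    class Episode:                 # Episode = subgoal + attempt + result
        subgoal: State             # subgoal, goal = desired state
        attempt: Union[Event, "Episode"]
        result: Union[Event, State]

    @dataclass
    class Story:                   # Story = scene + theme + plot + resolution
        scene: Scene
        theme: Optional[Event]     # Theme = (event) + goal
        goal: State
        plot: List[Episode]        # Plot = episodes
        resolution: Union[Event, State]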

In addition, new information is continuously added to the knowledge database 161 by the updating processing unit 156, and information that is no longer needed is automatically deleted from the knowledge by repeatedly executing a forgetting process. Namely, data are sequentially erased as they age, except for data with high priority. For example, priority is given to knowledge that is reused and to data recognized as strongly expressive, and such data are not erased even when they become old. The degree of forgetting and the priorities of the relevant data can be changed in accordance with the individuality.

The collation processing unit 152 searches the knowledge database 161 on the basis of the input data D2 and retrieves a past episode and its result that are close to the input data D2. The collation processing unit 152 then collates the input data with the extracted knowledge.

A learning processing system 157 generates information related to the concept of value for an episode on the basis of learning. Namely, the learning processing system 157 assigns degrees of satisfaction, pleasure and displeasure to the input episode.

The multivariate analysis unit 153 performs multivariate analysis of: the weighted emotion information input from the weighting processing unit 151; the episode information and result information input from the collation processing unit 152; the value-concept information input from the learning processing system 157; and the information on the computer's own will and instinct input from the individuality database 163. The multivariate analysis unit 153 then supplies the analysis result to the comprehensive intuitive decision unit 154.

The comprehensive intuitive decision unit 154 uses the contents of the individuality database 163 and the moral hazard database 164 as determination dictionaries, comprehensively evaluates the information input from the multivariate analysis unit 153, and outputs what should be done voluntarily, together with the result, as data D3.

The individuality database 163 stores the various kinds of information described below.

1. Individuality information

(a) Determination criteria according to the degree of each type of individuality: the types include the stereotype, the other-directed type, the inner-directed type, the tradition-directed type, the attack-directed type, the cooperation-directed type, the stress-absorbing type, the stress-releasing type, etc. As criteria, the degree of motivation toward goal achievement and the degree of opposition can be used.

(b) Determination criteria for cognitive styles: cognitive styles are defined as criteria by distinguishing between the "reflective type" and the "impulsive type", and between the "field-dependent type" and the "field-independent type".

(c) Determination criteria for character: for Japanese subjects, the following characteristics, classified by personality testing methods such as the TPI (Todai Personality Inventory), are used as determination criteria. The classified characteristics include: temperament, character, emotionality, social adaptability (sociability), desires, conflicts, attitudes, superiority complexes, interests, propriety, morality, way of thinking, emotional characteristics, persistence (and its degree), taboo words, tastes, criteria of good/evil, etc.

(d) Determination criteria for negation/prejudice: prejudice is attached to negative information so that the negative information can be handled as a whole, and it is then used in the formation of character.

(e) Determination criteria for adherence time/persistence: these determine the degree of persistence with respect to the partner's recognized information, episode information and emotion information, and the time of the corresponding reaction.

2. Auxiliary individuality information/unconscious reactions:

(a) A word dictionary and a phrase dictionary, each containing content that stimulates the instincts.

(b) Correspondence between the various instinctive reaction times and the degree of persistence, degree of fidelity and degree of integrity for each individuality.

(c) Patterns of the computer's own instinct corresponding to the partner's emotions, recognized as individuality.

3. Reference information for homeostasis (restraint): criteria for attempting to keep all the instinct outputs in stable harmony.

4. Reference information for self-conscious reactions: criterion information representing the will of the computer's own individuality.

In addition, the determination dictionaries include: information used for recognition determination and identification determination, such as true/false, correct/incorrect and adequate/inadequate; information used for instinctive determination, such as pleasure/displeasure; information used to determine individual cognizability for the subject, such as complexity, weight and the like; information used to determine relative cognizability among subjects, such as equality, size, difference and similarity; information used for metamemory determination, such as the degree of certainty of memory and the accuracy of knowledge; information used for abstract determination, such as loyalty, virtue, love, etc.; information used for inductive determination; and so on.

Dictionary information related to occupational morality, individual morality, general morality, etc. is stored in the moral hazard database 164.

For example, registered as occupational morality are statements such as "As an architect, I insist on accurate cost estimates", "I give my work the highest priority" and "I am proud to be a professional". Registered as individual morality are statements such as "I respect women (I do not push people around)", "I am proud of my hometown" and "I am proud to be Japanese". Registered as general morality are statements such as "Killing people is bad", "I take care of my parents" and "I am a man (woman)".

The comprehensive intuitive decision unit 154 analyzes the self-emotion information produced by the weighting processing unit 151, the collation processing unit 152 and the multivariate analysis unit 153. It then restrains the analyzed self-emotion information on the basis of the determination dictionaries in the individuality database 163, which represents the character and will of this computer, and of the determination dictionaries in the moral hazard database 164. The comprehensive intuitive decision unit 154 then sequentially decides when, what kind of, and how its own emotional reactions (sensitivity) should be output. The environment and situation in which the partner is located, and the partner's will at that time, are reflected in this decision.

The functions of the sensitivity and thought recognition unit 150 are described below.

1. When strong expressions or vocabulary, or radical changes in emotion, are detected, the determination cycle is changed in accordance with the individuality. For example, when strong content is suddenly uttered in a loud voice, the determination cycle is shortened.

2. In response to the computer's own biorhythm, and depending on the individuality, sensitivity determination is performed differently according to whether or not there is a desire to do something.

3. Sensitivity determination is performed differently according to the computer's own pleasure/displeasure and the values of its emotions.

4. For information expressing the current situation, a rational value is estimated in accordance with the knowledge in the knowledge database 161, the influence of the emotion evaluation is reflected, and a final decision about the will is thus made.

5. In value evaluation, the evaluation is performed from the respective viewpoints of social value, occupational value, everyday-life value, individual value, etc. In addition, social value, occupational value, everyday-life value and individual value are each broken down in more detail and evaluated. For example, as regards social value, values are computed from the respective viewpoints of religion, aesthetics, society, politics, economics and ethics.

6. Value evaluation is performed for the relevant factors, such as satisfaction/dissatisfaction, gain and loss of interest, safety/danger, etc., as material for a volitional decision. When, for example, the evaluation concerns safety, it is performed as follows (a code sketch of these cases appears after this list).

(a) When a third party is about to inflict "displeasure" on the computer itself, values related to hostile emotion and defense are generated.

(b) When the computer itself is about to inflict displeasure on a third party, values related to hostile emotion and offense are generated.

(c) When the computer feels the need to take the side of a third party while someone else is about to inflict displeasure on that third party, values related to sympathetic emotion and joint offense are generated.

7. Information about the generated values is stored in the sensitivity database 162 and thereafter used as material for evaluation.
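The three safety cases in item 6 read as a small rule table. The sketch below names the emotions and responses of cases (a)-(c) directly; the actor/target labels are hypothetical.

    def safety_values(actor, target, taking_targets_side=False):
        # Map who is about to inflict displeasure on whom to the
        # generated values of cases (a)-(c); "self" is the computer.
        if actor == "third_party" and target == "self":
            return {"emotion": "hostility", "response": "defense"}       # (a)
        if actor == "self" and target == "third_party":
            return {"emotion": "hostility", "response": "offense"}       # (b)
        if actor == "other" and target == "third_party" and taking_targets_side:
            return {"emotion": "sympathy", "response": "joint_offense"}  # (c)
        return {}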

It should be noted that, since the sensitivity and thought recognition unit 150 includes various learning functions similar to those of a human, the contents of the individuality database 163 and the sensitivity database 162 are sequentially updated as experience is accumulated.

Since the sensitivity and thought recognition unit 150 outputs results after a comprehensive determination based on many variables of different kinds, it does not perform logical inference or determination in the way artificial intelligence does. In other words, the data D3 output from the sensitivity and thought recognition unit 150 is sensitivity information resulting from an intuitive determination by the computer itself.

As described above, in the sensitivity generating method according to this invention, instinctive motivation information, which serves as the motivation for generating emotion, is generated on the basis of the input situation information (the partner's emotion, the surrounding situation and so on), and emotional information is generated on the basis of that instinctive motivation information. In addition, the generated emotional information is controlled in accordance with the individuality information.

Therefore, it is possible to generate emotion controlled by the reason and will of the individuality, that is, sensitivity information. In addition, because the emotional information is generated by way of the instinctive motivation information, the generated emotion can be controlled more precisely and more simply.

The emotion detection method according to this invention can be used for detecting emotions in the medical field and in various systems as part of artificial intelligence or artificial sensibility. In addition, the sensitivity generating method according to this invention can be used in various systems for various purposes to control the sensitivity of virtual humans and robots. Furthermore, by combining the emotion detection method and the sensitivity generating method according to this invention, various systems can be realized, each including a dialogue function between a computer and a human.

The invention is not limited to the embodiments described above, and various modifications are possible without departing from the spirit and scope of the invention. Some or all of the components may be improved.

1. An emotion detection method for detecting the emotion of a subject, comprising the steps of:

inputting a voice signal; detecting, on the basis of the input voice signal, an intensity of the voice and a tempo determined by the speed at which the voice appears, respectively, and detecting, as a time value, an intonation expressing the pattern of intensity change within each word of the voice; obtaining a first change value indicating a change of the detected voice intensity along the time axis, a second change value indicating a change of the voice tempo along the time axis, and a third change value indicating a change of the intonation along the time axis, respectively; and generating signals expressing the emotional states of at least anger, sadness and pleasure, respectively, on the basis of the first, second and third change values.

2. An emotion detection system for detecting the emotion of a subject, comprising:

a voice input unit for inputting a voice signal; an intensity detection unit for detecting the intensity of the voice on the basis of the voice signal input by the voice input unit; a tempo detection unit for detecting, as a tempo, the speed at which the voice appears, on the basis of the voice signal input by the voice input unit; an intonation detection unit for detecting, as a time value, an intonation expressing the pattern of intensity change within a word of the voice, on the basis of the voice signal input by the voice input unit; a change value detection unit for obtaining a first change value indicating a change, along the time axis, of the voice intensity detected by said intensity detection unit, a second change value indicating a change, along the time axis, of the voice tempo detected by said tempo detection unit, and a third change value indicating a change, along the time axis, of the intonation detected by said intonation detection unit, respectively; and an emotion detection unit for outputting signals expressing the emotional states of at least anger, sadness and pleasure, respectively, on the basis of the first, second and third change values detected by said change value detection unit.

3. The emotion detection system according to claim 2, wherein said intonation detection unit comprises: a bandpass filter unit for extracting specific frequency components from the input voice signal separately for each word; a zone separation unit for separating the power spectrum of the signal extracted by said bandpass filter unit into zones on the basis of the intensity of the power spectrum; and an intonation computation unit for computing the intonation on the basis of the time intervals between the respective centers of the zones separated by said zone separation unit.

4. The emotion detection system according to claim 2, further comprising: an image input unit for receiving an image of at least the face of the subject; an image recognition unit for detecting positional information of each part of the face from the image information received by said image input unit; an image reference information holding unit for holding reference information on a characteristic value of each part of the face; and an image characteristic value detection unit for detecting an image characteristic value in accordance with the change of the positional information, detected by said image recognition unit, relative to the reference information held by said image reference information holding unit, wherein said emotion detection unit estimates the emotional state also in accordance with the image characteristic value detected by said image characteristic value detection unit.

5. The emotion detection system according to claim 2, further comprising: an emotion information storage unit for sequentially receiving pieces of information on the emotional states detected by said emotion detection unit and storing those pieces of information; and a forgetting processing unit for erasing information that has been stored for a predetermined period since it was first stored, among the pieces of information on past emotional states stored in said emotion information storage unit, while excluding from erasure at least information showing a change in emotion larger than a predetermined amount and information matching a predetermined change pattern.

6. The emotion detection system according to claim 5, further comprising: a phrase recognition unit for performing grammatical analysis by processing information on the voice uttered by the subject or on characters input by the subject, and for generating speech information expressing the meaning of a phrase; and a storage control unit for storing the speech information generated by said phrase recognition unit in said emotion information storage unit in synchronization with the information on the emotional states.

7. The emotion detection system according to claim 2, further comprising: a silent time determination unit for determining a reference silent time on the basis of the detected emotional state; and a phrase segmentation detection unit for detecting the segmentation of the phrases of the voice using the reference silent time determined by said silent time determination unit.

8. A machine-readable medium carrying a program for causing a computer to execute an emotion detection method for detecting the emotion of a subject, the method comprising: a step of inputting a voice signal; a step of detecting, on the basis of the input voice signal, an intensity of the voice and a tempo determined by the speed at which the voice appears, respectively, and of detecting, as a time value, an intonation expressing the pattern of intensity change within each word of the voice; a step of obtaining a first change value indicating a change of the detected voice intensity along the time axis, a second change value indicating a change of the voice tempo along the time axis, and a third change value indicating a change of the intonation along the time axis, respectively; and a step of generating signals expressing the emotional states of at least anger, sadness and pleasure, respectively, on the basis of the obtained first, second and third change values.

9. A sensitivity generating method, comprising: storing in advance pieces of individuality information determining at least the reason, predetermined characteristics and the will of the subject that generates sensitivity; generating instinctive motivation information including at least a first instinct parameter expressing a degree of pleasure, a second instinct parameter expressing a degree of danger, and a third instinct parameter expressing a degree of achievement and change, on the basis of input situation information indicating the emotional state of a partner and the environment in which the partner is located; generating emotional information including basic emotion parameters of at least pleasure, anger and sadness, on the basis of the generated instinctive motivation information; and controlling the generated emotional information on the basis of said individuality information.

10. A sensitivity generator, comprising: an instinct determination unit for inputting episode situation information including the emotional state of a partner, the environment in which the partner is located and the partner's will, and for generating instinctive motivation information including at least a first instinct parameter expressing a degree of pleasure, a second instinct parameter expressing a degree of danger, and a third instinct parameter expressing a degree of achievement and a degree of change, on the basis of the episode situation information; an emotion generating unit for generating emotional information including basic emotion parameters of at least pleasure, anger and sadness, on the basis of the instinctive motivation information output from said instinct determination unit; an individuality information providing unit for providing individuality information determining at least the reason and will of the subject that generates sensitivity; and an emotion control unit for controlling the emotional information output from said emotion generating unit on the basis of the individuality information supplied from said individuality information providing unit.

11. The sensitivity generator according to claim 10, wherein said emotion generating unit comprises: a life rhythm generating unit for generating information expressing a periodically changing environment or the life rhythm of a living body; and a voluntary emotion control unit for controlling voluntary emotion in said emotion generating unit in accordance with the life rhythm information output by said life rhythm generating unit.

12. The sensitivity generator according to claim 10, wherein said emotion generating unit comprises: an instinct-emotion information holding unit for holding pattern information that associates said basic emotion parameters with said instinctive motivation information; and a matching information generating unit for outputting information expressing the agreement/disagreement of the instinctive motivation information with the pattern information of said instinct-emotion information holding unit, when the instinctive motivation information is output from said instinct determination unit.

13. The sensitivity generator according to claim 10, wherein said emotion generating unit comprises an emotion feedback control unit for inputting to said emotion generating unit at least its own emotional information generated immediately before, and for reflecting that information in its own emotion to be generated next.

14. The sensitivity generator according to claim 10, wherein said emotion control unit reflects the life rhythm information, which forms part of the individuality of the subject that generates sensitivity, in the emotional information to be input.

15. The sensitivity generator according to claim 10, further comprising: a knowledge database for storing situation information showing past situations and past episodes together with their results; a knowledge collation unit for searching said knowledge database for past situation information similar to newly input situation information, retrieving it, and supplying the past situation information to said emotion control unit; and a data update control unit for updating the contents of said knowledge database on the basis of situation information showing newly input situations and new situations, and for automatically erasing, with time, situation information of low priority from said knowledge database in accordance with a weighting of the contents.

16. The sensitivity generator according to claim 10, further comprising: a voice input unit for inputting a voice signal; an intensity detection unit for detecting the intensity of the voice on the basis of the voice signal input by the voice input unit; a tempo detection unit for detecting, as a tempo, the speed at which the voice appears, on the basis of the voice signal input by the voice input unit; an intonation detection unit for detecting an intonation expressing the pattern of intensity change within a word of the voice, on the basis of the voice signal input by the voice input unit; a change value detection unit for detecting change values of the voice intensity detected by said intensity detection unit, of the voice tempo detected by said tempo detection unit, and of the intonation detected by said intonation detection unit, respectively; and an emotion detection unit for outputting signals expressing the emotional states of at least anger, sadness and pleasure, respectively, on the basis of the change values detected by said change value detection unit.

17. The sensitivity generator according to claim 16, further comprising: a voice recognition unit for recognizing the voice input by the voice input unit and outputting character information; and a natural language processing unit for performing natural language processing on the character information recognized by said voice recognition unit and for generating semantic information expressing the meaning of the input voice.

18. A machine-readable medium carrying a program and data for causing a computer to execute a sensitivity generation control method, the method comprising: a step of generating instinctive motivation information including at least a first instinct parameter expressing a degree of pleasure, a second instinct parameter expressing a degree of danger, and a third instinct parameter expressing a degree of achievement and change, on the basis of input situation information indicating the emotional state of a partner and the environment in which the partner is located; a step of generating emotional information including basic emotion parameters of at least pleasure, anger and sadness, on the basis of the generated instinctive motivation information; a step of providing individuality information determining at least the reason and will of the subject that generates sensitivity; and a step of controlling the generated emotional information on the basis of said individuality information.

19. A method of detecting the intonation of a voice signal used for detecting the emotion of a subject, comprising the steps of: inputting a voice signal; detecting sections having the same frequency components on the basis of analysis of the pattern of intensity change within a word expressed by the input voice signal; and detecting the time intervals at which each of the sections having the same frequency components appears, and outputting these time intervals as a characteristic value of intonation.

20. A system for detecting the intonation of a voice signal used for detecting the emotion of a subject, comprising: a voice input unit for inputting a voice signal; a section detection unit for detecting sections having the same frequency components on the basis of analysis of the pattern of intensity change within a word expressed by the input voice signal; and an intonation detection unit for detecting the time intervals at which each of the sections having the same frequency components appears, and for outputting these time intervals as a characteristic value of intonation.

21. A machine-readable medium carrying a program and data for causing a computer to execute detection of the intonation of a voice signal used for detecting the emotion of a subject, comprising: a step of inputting a voice signal; a step of detecting sections having the same frequency components on the basis of analysis of the pattern of intensity change within a word expressed by the input voice signal; and a step of detecting the time intervals at which each of the sections having the same frequency components appears, and of outputting these time intervals as a characteristic value of intonation.

Priority items:

13.09.2000 - claims 1-8, 19-21;

16.01.2001 - claims 9-18.



 

Same patents:

FIELD: computation engineering.

SUBSTANCE: device has object-oriented measurement tract correction mode. The device has test signal generator as sequence of N frequencies distributed over N strip means to which the hearing spectrum is divided with pauses between the frequencies, unit for transmitting test signals, microphone, N-bandwidth signal/noise ratio measurement unit, and computation unit for treating intelligibility. Manual generation frequency switch and pause switch mode is introduced. Level measurement unit is mounted in front of the transmitter. Controllable frequency characteristic adjustment unit is introduced in front of the signal/noise ratio measurement unit.

EFFECT: high accuracy and reliability of measurements.

2 dwg

FIELD: computation engineering.

SUBSTANCE: method involves carrying out object-oriented measurement tract correction. To carry out it, some test signal frequencies are transmitted of given level in turn. Real level of the acoustic signals is measured at each frequency at their transmission place. The acoustic signal levels are compared to given levels at the same frequencies, measured acoustic signals levels deviations from the given ones are measured and frequency characteristic of the measurement tract is adjusted to compensate the difference between the measured and given acoustic signal levels at their transmission place.

EFFECT: high accuracy and reliability of measurements.

2 dwg

FIELD: technology for measuring maximally possible legibility of speech in extremely unfavorable conditions, in noise, with low signal/noise ratios and is meant, primarily, for determining protection of objects in case of speech information loss through several leaking channels simultaneously.

SUBSTANCE: device contains generator of acoustic testing signal in form of series of N frequencies with pauses between frequencies, emitter, acoustic signal receiver, N-band device for measuring signal/noise rations and device for computing legibility. In addition, in parallel to receiver of acoustic signal K receivers of other signal types, formed by emission of acoustic testing signal, are mounted, for example, for detecting magnetic, electronic, vibro-acoustic types, while clock generator has additional outputs for organization of K+1 cycles of generator and synchronous control over switch of receivers and selection device.

EFFECT: increased trustworthiness of speech legibility measurements.

2 dwg

FIELD: technology for measuring maximally possible legibility of speech under extremely unfavorable conditions, in the noise, for low signal/noise ratios, and is meant mainly for determining protection of objects during loss of speech information through several leaking channels simultaneously.

SUBSTANCE: in accordance to method, in receipt point K receivers of other signal types, formed by acoustic testing signals, for example, electric, magnetic, vibro-acoustic signals, are positioned, acoustic and aforementioned K signal types formed in receipt position at all N frequencies of testing signal and in pauses between same, are received and measured separately, on basis of results of measurements signal/noise ratios are determined at each frequency of testing signal by each one of acoustic and K other types of received signals, at each frequency of testing signal greatest one of signal/noise relations are selected, measured by acoustic and K other types of received signals, then - speech legibility grade is calculated on basis of greatest signal/noise ratios.

EFFECT: increased trustworthiness of speech legibility measurements.

2 dwg

FIELD: technology for recognition of speech, in particular, method and device for dynamic adjustment of beam in Viterbi search.

SUBSTANCE: method includes selecting beginning width of beam, finding out whether value of probability for frame changes, dynamic adjustment of width of beam, decoding of input speech signal, while width of beam is dynamically adjusted. Also proposed is device including processor, component for speech recognition, connected to processor, memory, connected to processor. Component for speech recognition automatically adjusts width of beam to decode input speech signal.

EFFECT: increased speech recognition reliability.

6 cl, 6 dwg, 4 tbl

FIELD: technology for analyzing speech under unfavorable environmental conditions.

SUBSTANCE: during transformation of spoken command first circular buffer is continuously filled with digitized signal, comb of recursive filters is applied to multiply loosened signal and spectral components are utilized to fill second circular buffer, limits of speech fragment are determined within it on basis of adaptive estimate of noise environment, spectral components of speech fragment are transferred to linear analysis buffer, shortened sign space is received from aforementioned buffer and produced spectral components are compared to standard vectors of database commands.

EFFECT: utilization of device under conditions of, for example, moving vehicle or mechanical industry with high noise pollution level provides for stable recognition of commands independently on particularities of narrators pronunciation, decreased memory volume.

7 cl, 2 dwg

FIELD: engineering of informative system with speech interaction system.

SUBSTANCE: interaction system is connected to informative system and to recognition system. Interaction of user with informative system is performed by using user phrases, processed by recognition system. To achieve the result, system contains application sphere module, meant for setting phrase settings in it, which are constructed on basis of classes of objects, classes of attributes and classes of actions, which are common for aforementioned systems of subject application area and which should be matched by certain types of objects, types of actions, types of attributes and their instances, generated by informative system, for construction of grammar of phrases inputted by user.

EFFECT: possible interaction of user with informative system.

8 cl, 1 dwg

FIELD: analysis and recognition of speech signals, can be used for recognition of speech patterns.

SUBSTANCE: device for realization of aforementioned speech phoneme recognition method has: computing system, including clock generator, controller, random-access memory device, central microprocessor unit, meant for forming bispectral signs and recognizing them on basis of speech phonemes, digital-analog converter, long-term memorizing device, video-controller and analog-digital converter, and also keyboard, display, headphones and a microphone.

EFFECT: increased precision of speech patterns recognition due to forming of phoneme signs for speech phonemes recognition based on application of bispectral analysis, based on transformation of digital code series, appropriate for speech signals, to bispectral zone, characterizing interaction between values of Fourier components at different frequencies within speech spectrum, and thus, to provide selection of an additional, significantly new information from speech signals, to increase precision of phoneme recognition.

2 cl, 5 dwg

FIELD: radio engineering.

SUBSTANCE: device has block for determining beginning and end of command, first memory block, block for syllable segmentation, block for time normalization of command, standard commands block, commands likeness calculator, while output of block for determining beginning and end of command is connected to first inputs of first memory block and syllable segmentation block, output of first memory block is connected to first output of command time normalization block, second output of which is connected to output of syllable segmentation block. Device additionally has supporting noise input, second memory block, block for time normalization of noise, first and second blocks for level normalization, signals mixer, while input of speech command is connected to output of block for determining beginning and end of command and to second inputs of syllable segmentation block and first memory block, bearing noise input is connected to first input of second memory block, to second input of which output of block for determining beginning and end of command is connected, output of second memory block is connected to first input of block for time normalization of noise, output of syllable segmentation block is connected to second inputs of block for time normalization of noise, of first and second level normalization block, standard commands block, and to third inputs of first and second memory block, output of block for time normalization of noise is connected to first input of signals mixer, to second input of which output of standard commands block is connected, first input of which is connected to first output of commands likeness calculator, output of signals mixer is connected to first input of second level normalization block, output of which is connected to second input of commands likeness calculator, to first input of which output of first level normalization block is connected, first input of which is connected to output of block for time normalization of command.

EFFECT: higher probability of correct command recognition in the presence of noise.

6 dwg
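
A rough sketch of the noise-adaptation idea (mixing the reference noise into the standard commands before comparison); all signals are assumed already time-normalized to equal length, and the dot-product similarity is an invented stand-in for the command likeness calculator.

    import numpy as np

    def level_normalize(x):
        """Scale a signal to unit RMS level."""
        rms = np.sqrt(np.mean(x ** 2))
        return x / rms if rms > 0 else x

    def recognize(command, standards, noise):
        """Pick the standard most similar to the input, after mixing the
        reference noise into each standard (noise-adapted templates)."""
        cmd = level_normalize(command)
        scores = {name: float(np.dot(cmd, level_normalize(std + noise)))
                  for name, std in standards.items()}
        return max(scores, key=scores.get)

    t = np.linspace(0.0, 1.0, 200)
    noise = 0.2 * np.random.randn(200)
    standards = {"start": np.sin(2 * np.pi * 5 * t),
                 "stop": np.sin(2 * np.pi * 9 * t)}
    print(recognize(standards["start"] + noise, standards, noise))  # likely "start"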

FIELD: automatic voice recognition technologies.

SUBSTANCE: an acoustic signal observed at a point on the person's body unknown to unauthorized personnel is input into a computing device; the values of the parameters of the acoustic signal are determined; estimates of the statistical characteristics of these parameters are computed and standards are formed from them; the grade of difference between the acoustic signal and the standards is determined; and on the basis of this grade a decision is made as to whether the acoustic signal belongs to the person whose statistical characteristics were used in forming the standards.

EFFECT: higher noise immunity, higher efficiency, higher reliability.

6 dwg
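
One way to read the enrollment/verification logic is sketched below; the three parameters (mean, spread, mean absolute difference) and the decision threshold are invented stand-ins for whatever statistical characteristics the patent actually uses.

    import numpy as np

    def make_standard(signals):
        """Estimate per-parameter mean and spread from enrollment recordings."""
        params = np.array([[s.mean(), s.std(), np.abs(np.diff(s)).mean()]
                           for s in signals])
        return params.mean(axis=0), params.std(axis=0) + 1e-9

    def belongs_to_person(signal, standard, threshold=6.0):
        """Accept if the normalized grade of difference is small enough."""
        mean, spread = standard
        p = np.array([signal.mean(), signal.std(), np.abs(np.diff(signal)).mean()])
        difference = np.abs(p - mean) / spread
        return bool(np.all(difference < threshold))

    rng = np.random.default_rng(0)
    standard = make_standard([rng.normal(0, 1, 1000) for _ in range(5)])
    print(belongs_to_person(rng.normal(0, 1, 1000), standard))  # genuine attempt
    print(belongs_to_person(rng.normal(3, 5, 1000), standard))  # imposter attempt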

FIELD: computer-aided design; can be used for visual monitoring of the development process of large-scale systems.

SUBSTANCE: the method is based on using arrays of data on the technical and economic characteristics of the military equipment objects under development, with this information displayed and combined in windows on a display screen.

EFFECT: provides a method of computer modeling of warfare that simplifies the modeling of the warfare process.

10 dwg, 7 tbl

FIELD: means of protecting and verifying information stored on a material medium.

SUBSTANCE: control information is generated in digital form using all or part of the information stored on the material medium. The control information is converted into an image by sound-recording hardware and software: a voice message corresponding to the control information is converted into the image. Verification is performed by converting the image back into a synthesized voice message and comparing the content of the visually received information with the audibly interpreted information contained in the synthesized voice message.

EFFECT: increased reliability, simplified marking of the material information-carrying medium and simplified verification.

7 cl, 2 dwg
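
One possible reading of the scheme, with a hash standing in for the control information and spelled-out words standing in for the voice message; the actual invention records and images real speech, which is not reproduced here.

    import hashlib

    def control_info(data: bytes) -> str:
        """Control information derived from (part of) the stored data."""
        return hashlib.sha256(data).hexdigest()[:8]

    WORDS = "zero one two three four five six seven eight nine a b c d e f".split()

    def as_voice_message(info: str) -> str:
        """Text of a voice message spelling out the control information."""
        return " ".join(WORDS["0123456789abcdef".index(ch)] for ch in info)

    data = b"contents of the material medium"
    spoken = as_voice_message(control_info(data))  # recorded and stored as an image
    # verification: synthesize speech back from the image and compare what is
    # heard with the control information recomputed from the medium
    print(spoken == as_voice_message(control_info(data)))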

FIELD: devices for capturing, processing and transferring text and mixed information containing symbols and images.

SUBSTANCE: the image is first analyzed for its text content, after which the original image is segmented into a text block and an image block; the text is interpreted by optical character recognition methods, transformed and compressed into a code. Graphic information allowing a synthesized image of the text block to be created is added to the code.

EFFECT: enables capture, recognition and storage of text and other information.

2 cl
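
A skeleton of the segment / recognize / compress pipeline; ocr() is a stub that simply returns its input, and zlib stands in for whatever coding the patent intends.

    import zlib

    def ocr(block):
        """Stand-in for an optical character recognition engine."""
        return block   # for the demo the 'text block' is already a string

    def process_page(blocks):
        """Split pre-segmented page content into compressed text code and
        graphic data for the later synthesized image of the text block."""
        code, images = bytearray(), []
        for kind, content in blocks:
            if kind == "text":
                code += zlib.compress(ocr(content).encode())
            else:
                images.append(content)
        return bytes(code), images

    code, images = process_page([("text", "Invoice No. 7"), ("image", b"...")])
    print(len(code), len(images))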

FIELD: biometrics.

SUBSTANCE: the method includes reading information about a biometric characteristic of the user, comparing the received information with a sample, and protecting against a faked static biometric characteristic. The technical result is achieved by measuring the variation of the static biometric characteristic in time: the parameters of this variation are determined, and the input static biometric characteristic is rejected as fake if these parameters deviate from a set norm. A fingerprint is used as the static biometric characteristic, and its variation in time during reading is determined from the temporal dependence of the area of the fingerprint's contact surface.

EFFECT: increased reliability of biometric identification when used to protect against faked biometric parameters in access-control and passport systems.

4 cl, 6 dwg
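
The liveness test reduces to checking that the contact area really varies during the reading; a crude sketch follows, with the 5% spread norm invented for illustration.

    def is_live_finger(areas, min_spread=0.05):
        """Reject a static (faked) print: a live finger's contact area
        grows and fluctuates while it is pressed onto the sensor."""
        lo, hi = min(areas), max(areas)
        spread = (hi - lo) / hi if hi else 0.0
        return spread >= min_spread

    # contact-surface area (fraction of the sensor) sampled during one reading
    print(is_live_finger([0.10, 0.18, 0.26, 0.31, 0.30]))  # varies: accepted
    print(is_live_finger([0.30, 0.30, 0.30, 0.30, 0.30]))  # static: rejected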

FIELD: positioning of video cameras and correction of the parameters of video cameras operating as part of a binocular technical vision system.

SUBSTANCE: the known method includes determining the calibration object by constructing membership functions for all possible calibration objects in the image and then selecting the object with the greatest membership-function value; determining the positions of the focal distances of the video cameras and adjusting them to zero; and determining the deviation angles of the axes of the local Cartesian coordinate systems of the video cameras relative to the coordinate system of the binocular vision system and adjusting them to zero. According to the invention, the known method is supplemented by determining the differences in the mounting depth and height of the video cameras and adjusting these values to zero, and by determining the base between the video cameras and adjusting it to a given value.

EFFECT: increased precision of the binocular technical vision system by means of its external calibration, which provides for determining and correcting all necessary parameters.

14 dwg
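
The calibration procedure can be caricatured as driving each deviation to its target value; measure() and adjust() below are stand-ins for real measurement on the calibration object and real parameter correction.

    params = {"focus_shift": 0.4, "axis_angles": 1.2,
              "depth_diff": 0.1, "height_diff": -0.2, "base": 0.55}

    def measure(name):           # stand-in for measurement on the calibration object
        return params[name]

    def adjust(name, delta):     # stand-in for a mechanical or parametric correction
        params[name] += delta

    def external_calibration(base_target=0.60, tol=1e-3):
        """Zero every deviation and set the camera base to the given value."""
        targets = {"focus_shift": 0.0, "axis_angles": 0.0,
                   "depth_diff": 0.0, "height_diff": 0.0, "base": base_target}
        for name, target in targets.items():
            while abs(measure(name) - target) > tol:
                adjust(name, target - measure(name))

    external_calibration()
    print(params)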

FIELD: technologies for protecting goods from counterfeiting; can be used by manufacturers to protect the goods they produce and to protect consumers from buying products made by illegal manufacturers.

SUBSTANCE: the authenticity of a product is determined by comparing the digitized image of an identifier provided on the product, made in the form of a volumetric physical body containing randomly distributed optically visible elements, with the digitized image of the identifier stored in an electronic database.

EFFECT: increased trustworthiness of the comparison results; fewer, simpler and more convenient procedures for the manufacturer; and reduced time needed to determine product authenticity owing to automation of the comparison procedure.

21 cl, 2 dwg
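
The comparison of randomly distributed elements can be reduced, for illustration, to the overlap of bright-spot coordinate sets; the brightness and overlap thresholds are invented.

    import numpy as np

    def element_positions(image, threshold=0.8):
        """Coordinates of optically visible elements (bright spots)."""
        ys, xs = np.nonzero(image > threshold)
        return set(zip(ys.tolist(), xs.tolist()))

    def is_authentic(product_img, db_img, min_overlap=0.9):
        """Compare the identifier on the product with the database copy."""
        a, b = element_positions(product_img), element_positions(db_img)
        return len(a & b) >= min_overlap * max(len(a | b), 1)

    rng = np.random.default_rng(1)
    identifier = (rng.random((32, 32)) > 0.97).astype(float)  # random elements
    print(is_authentic(identifier, identifier.copy()))        # same body: True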

FIELD: automation and digital computer engineering.

SUBSTANCE: the device contains an image generation block, a comparison block, a multiplier, an accumulating adder, a decision-making block, a standards memory block, a weight coefficients memory block and a control block. The device additionally includes an image normalization block, consisting of a serially connected addition-and-control block, standard selection block and image modification block, and a weight coefficient generation block, consisting of a serially connected block for forming semantic significance fields, a block for ordering semantic significance fields, a block for calculating semantic significance coefficients and a block for generating residual fields of semantic significance.

EFFECT: increased reliability of symbol recognition.

6 dwg
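
The core of the device (the multiplier and accumulating adder applied to weighted standards) amounts to a weighted template comparison; in the sketch the semantic-significance weights are simply given, not derived as in the patent.

    import numpy as np

    def recognize_symbol(image, standards, weights):
        """Weighted comparison of a normalized image against stored standards."""
        best, best_score = None, -np.inf
        for symbol, standard in standards.items():
            score = float(np.sum(weights[symbol] * image * standard))
            if score > best_score:
                best, best_score = symbol, score
        return best

    A = np.array([[1, 0], [1, 1]], dtype=float)
    standards = {"A": A, "B": 1 - A}
    weights = {s: np.ones_like(v) for s, v in standards.items()}
    print(recognize_symbol(A, standards, weights))   # "A"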

FIELD: computer engineering.

SUBSTANCE: the system contains a processor with memory storing software, executed on the processor, for comparing the terms of a text with the terms of both a common and a specialized dictionary; matches between the compared terms are shown on a display, and the translation of a matching term can be inserted into the text being translated. The processor serves to assign priorities to several dictionaries through a graphical user interface, so that terms are offered on the display according to the assigned priorities, and to show on the display the longest translations of the terms that match between the text and the common or specialized dictionary.

EFFECT: increased speed and precision of translation, simplification of complicated translations.

2 cl, 10 dwg
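
The priority and longest-translation rules might be sketched as below, with two invented toy dictionaries; lower numbers mean higher priority.

    def suggest(term, dictionaries):
        """Offer translations in dictionary-priority order, preferring the
        longest available translation of the matching term."""
        ordered = sorted(dictionaries, key=lambda pair: pair[0])
        return [max(entries[term], key=len)
                for _, entries in ordered if term in entries]

    specialized = {"driver": ["device driver", "driver routine"]}
    common = {"driver": ["chauffeur"]}
    print(suggest("driver", [(1, specialized), (2, common)]))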

Monitoring method // 2282895

FIELD: methods for monitoring marks made on printed documents.

SUBSTANCE: the method includes making a digital image of the front side of the document and determining the content of the one-color and multi-color components of the image pixels. The content of each color component of each pixel is compared with the range between the upper and lower thresholds for the corresponding pixel of an acceptable unmarked document, and an abnormal pixel is generated if the pixel value falls outside this interval; the presence of a mark, as distinct from dirt, is detected if the resulting abnormal pixels satisfy previously set conditions.

EFFECT: makes it possible to detect various forms of document distortion.

3 cl, 6 dwg
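
A minimal sketch of the per-pixel threshold test; min_cluster stands in for the "previously set conditions" that distinguish a mark from dirt.

    import numpy as np

    def detect_mark(image, lower, upper, min_cluster=20):
        """Flag pixels whose color component falls outside the per-pixel
        [lower, upper] range of an acceptable unmarked document; report a
        mark (rather than dirt) only if enough pixels are abnormal."""
        abnormal = (image < lower) | (image > upper)
        return bool(abnormal.sum() >= min_cluster)

    lower, upper = np.full((8, 8), 0.4), np.full((8, 8), 0.6)
    page = np.full((8, 8), 0.5)
    page[2:7, 2:7] = 0.9                       # a dark stamp on the document
    print(detect_mark(page, lower, upper))     # True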

FIELD: equipment limiting access to a protected system; can be used to prevent unauthorized access to the system by outside individuals.

SUBSTANCE: the method includes determining the coordinates of characteristic features of the user's papillary pattern; on the basis of the difference between the coordinates in the captured fingerprint image and those stored in a database, a positive or negative decision is made on granting the user access to the system.

EFFECT: increased level of system protection.

2 dwg
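
A toy version of the coordinate-difference decision; the tolerance and the required number of matches are invented parameters.

    def grant_access(probe, template, tol=4.0, min_matches=3):
        """Match minutiae by coordinate difference and decide on access."""
        matches = sum(
            1 for (x, y) in probe
            if any((x - u) ** 2 + (y - v) ** 2 <= tol ** 2 for (u, v) in template))
        return matches >= min_matches

    template = [(10, 12), (40, 7), (25, 30)]   # stored papillary features
    probe = [(11, 12), (41, 8), (25, 29)]      # features from the new image
    print(grant_access(probe, template))       # True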
