Method for synthesising dynamic virtual images

FIELD: physics; image processing.

SUBSTANCE: the invention relates to image processing technologies and, in particular, to a method of synthesising dynamic virtual images. The method comprises the following actions: a) the synthesising server side receives a user request for synthesising a virtual image sent by the user and, in accordance with the request information, obtains the image files of all components for synthesising the virtual image; b) the corresponding component image files are read in turn according to the layer number of each component, and the received component image files are transformed into a preset format; c) the component formatted at step b) is synthesised with a previously read template file to form an intermediate image; d) it is determined whether all components have been synthesised; if all have been synthesised, the method proceeds to step e); otherwise it proceeds to step f); e) the synthesised virtual image, based on all synthesised components, is recorded in the virtual image file and the procedure is completed; f) the corresponding image files of the other components are read one after another according to the layer number of each component, and the read component image files are transformed into the preset format; g) the component formatted at step f) is synthesised with the previously synthesised intermediate image, and the method returns to step d).

EFFECT: improved service provided to the user.

14 cl, 2 dwg, 1 tbl

 

Field of the invention

The present invention relates to image processing technologies, in particular to a method of synthesizing dynamic virtual images.

Background of invention

Virtual image technology has developed on the Internet in recent years, and virtual web images have gradually become fashionable among regular Internet users, because they can embody the user's personality and express his or her originality; virtual images have become very popular in this environment. Nowadays most virtual images are stored in Graphics Interchange Format (.GIF) files, and each virtual image consists of several components. Here a component means each of the partial images included in the virtual image and saved in GIF format, where each component is a single-frame GIF image.

GIF is a basic format for storing image files; each GIF file can contain multiple color images, each of which is called a "frame"; in other words, many frames can be saved in one GIF file. GIF images can be divided into single-frame and multi-frame images: generally speaking, a single-frame image is a static image, while a multi-frame image is displayed frame by frame to produce a dynamic or animated effect, as in a slide show. However, a frame differs from a standalone picture in that a single frame cannot necessarily form a complete image on its own, so a frame in a GIF image may need to be combined with the previous frame in order to be presented as an image. A GIF file stores image information in units of blocks: each GIF file contains control blocks and graphic-rendering data blocks, which together are called the GIF data stream. All control blocks and data blocks in the data stream must be located between the header and the trailer. GIF uses the Lempel-Ziv-Welch (LZW) compression algorithm to store image data, and users are allowed to specify transparency against the background image.
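For illustration only (and not as part of the claimed method), the block layout described above can be sketched with a short parser for the GIF signature and logical screen descriptor; the byte layout follows the publicly documented GIF89a format, while the function and field names here are our own.

```python
import struct

def parse_gif_header(data: bytes):
    """Parse the GIF signature and logical screen descriptor.

    Layout (per the public GIF89a format): 6-byte signature, then a
    7-byte logical screen descriptor: width (u16 LE), height (u16 LE),
    a packed flags byte, background color index, pixel aspect ratio.
    """
    signature = data[:6].decode("ascii")
    if signature not in ("GIF87a", "GIF89a"):
        raise ValueError("not a GIF file")
    width, height, packed, bg_index, aspect = struct.unpack("<HHBBB", data[6:13])
    return {
        "version": signature,
        "width": width,
        "height": height,
        # Bit 7 of the packed byte: a global color table follows.
        "has_global_color_table": bool(packed & 0x80),
        # Bits 0-2 encode the table size as 2 ** (n + 1) entries.
        "global_color_table_size": 2 ** ((packed & 0x07) + 1),
        "background_color_index": bg_index,
    }

# A hand-built 13-byte header: GIF89a, 4x3 logical screen, global
# color table present with 4 entries (packed byte = 0x81).
header = b"GIF89a" + struct.pack("<HHBBB", 4, 3, 0x81, 0, 0)
info = parse_gif_header(header)
```

The packed flags byte is the reason the global color table can simply be absent: a decoder checks bit 7 before reading any table bytes.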

The typical structure of a GIF file is presented in the table.

Table
Label | Block name                  | Description
------|-----------------------------|--------------------------------------------
1     | Header                      | GIF header
2     | Logical screen descriptor   | Description section of the logical screen
3     | Global color table          | Global color table
...   | Extension blocks (optional) | ...
4     | Image descriptor            | Image description block (repeated N times)
5     | Local color table           | Local color table (may be repeated N times)
6     | Table-based image data      | Compressed image data
7     | Graphic control extension   | Graphic control extension block
8     | Plain text extension        | Plain text extension block
9     | Comment extension           | Comment extension block
10    | Application extension       | Application extension block
...   | Extension blocks (optional) | ...
11    | GIF trailer                 | End part of the GIF file

A GIF file contains three types of blocks: control blocks, graphic-rendering blocks and special-purpose blocks. Among them, a control block contains information used to control the data stream or hardware parameters; the control blocks include: the GIF header, the logical screen descriptor, the graphic control extension and the GIF trailer. A graphic-rendering block contains information and data for rendering graphics on the display; the graphic-rendering blocks include the image descriptor and the plain text extension. A special-purpose block contains information independent of image processing; the special-purpose blocks include the comment extension and the application extension. Here the scope of the logical screen descriptor and the global color table among the control blocks covers the whole data stream, while every other control block affects only the graphic-rendering block that follows it. In other words, in the table the logical screen descriptor and the global color table apply to the entire file, while the application extension, the plain text extension, the comment extension and the graphic control extension govern only the graphic-rendering blocks that follow them.

The logical screen descriptor in the table contains parameters that define the display area of the picture, including information such as the size of the logical screen, the background color, the presence of a global color table, etc.

Because a single GIF file can contain multiple images, and each color image may have a color table matching the characteristics of that image, one GIF file can have multiple color tables. In general, however, there are only two types of color table: the global color table and the local color table. The global color table applies to all images without their own color table and to the plain text extensions, while a local color table applies only to the image that follows it. The global color table may be absent if every image frame has a local color table.

The application extension contains information about the application program that generated the image file, for example: whether the animation is played in a loop, how many times it must be repeated, etc. This block may be omitted if the image file is static.

The graphic control extension contains parameters for processing the block being rendered, including: a transparency flag, which indicates the presence of transparency, a transparent color index, a disposal method, a delay time, etc. Here the disposal method specifies how to proceed after the graphic has been displayed, for example: a) no disposal specified; b) no disposal, the graphic remains in place; c) the area occupied by the graphic must be restored to the background color; d) the graphic that was displayed before is restored. The delay time, measured in hundredths of a second, specifies the interval to wait between displaying the graphic and the subsequent processing.
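As an illustrative sketch (not part of the claims), the graphic control extension described above can be decoded as follows; the 8-byte layout (introducer 0x21, label 0xF9, block size 0x04, packed flags, two-byte delay in hundredths of a second, transparent color index, terminator) follows the public GIF89a format.

```python
import struct

def parse_graphic_control_extension(block: bytes):
    """Decode a GIF graphic control extension (8 bytes)."""
    if block[0] != 0x21 or block[1] != 0xF9 or block[2] != 0x04:
        raise ValueError("not a graphic control extension")
    packed, delay, transparent_index = struct.unpack("<BHB", block[3:7])
    return {
        # Bits 2-4: disposal method (0 = unspecified, 1 = leave in
        # place, 2 = restore to background, 3 = restore to previous).
        "disposal_method": (packed >> 2) & 0x07,
        "has_transparency": bool(packed & 0x01),
        "delay_hundredths": delay,
        "transparent_index": transparent_index,
    }

# Example: disposal "restore to background" (2), transparency on,
# delay of 10/100 s, transparent color index 5.
gce = (bytes([0x21, 0xF9, 0x04, (2 << 2) | 1])
       + struct.pack("<H", 10) + bytes([5, 0x00]))
info = parse_graphic_control_extension(gce)
```

The four disposal method values in the comment correspond to the four cases a) through d) listed above.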

The image descriptor is used to record the size of each frame image; a file can contain any number of images without a rigid storage sequence, and a one-byte image separator is used to identify an image descriptor. Each image contains an image descriptor, an optional local color table and image data. Each image must lie within the logical screen defined by the logical screen descriptor.

The image data contain a sequence of data; they record each pixel as an index value into the corresponding color table and are compressed using the LZW algorithm.

In the prior art, the process of synthesizing virtual images is typically carried out with a synthesizing software package that is generally available via the Internet, for example the Gif Draw (GD) library. Because the GD library cannot handle multi-frame GIF files and can only handle single-frame GIF files, the existing virtual image synthesizing technology can synthesize only static virtual images, which is generally referred to as static virtual image synthesizing. The procedure for synthesizing static virtual images in the prior art includes the following steps: define the contextual relationships between the components, and express these contextual relations with layer numbers; perform the first synthesis by combining the first layer and the background image according to the sequence numbers of the layers; then synthesize the second layer with the previously synthesized image, and continue similarly until all layers have been synthesized. For example, suppose there are three components besides the background, namely pants, a jacket and a face, producing the following virtual picture: the face is in front of the jacket and part of the collar is covered by the face, and the jacket is in front of the pants and covers part of the pants; then the predefined layer numbers are: pants 1, jacket 2 and face 3. During the synthesis, the pants are inserted into the background first, then the jacket is inserted, then the face is inserted; by the insertion process here is understood synthesizing images by using existing synthesizing functions.
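The prior-art layered procedure described above can be sketched in miniature as follows; images are modeled here as 2D lists with None standing in for GIF transparency, a deliberate simplification of the single-frame compositing performed by libraries such as GD.

```python
def composite(background, layers):
    """Prior-art style static compositing: paste each single-frame
    component onto the background in ascending layer-number order.

    `layers` is a list of (layer_number, image) pairs; images are 2D
    lists of pixel values, with None standing in for transparency.
    """
    result = [row[:] for row in background]
    # Sort by layer number so higher layers overlap lower ones.
    for _, image in sorted(layers, key=lambda item: item[0]):
        for y, row in enumerate(image):
            for x, pixel in enumerate(row):
                if pixel is not None:
                    result[y][x] = pixel
    return result

background = [["bg", "bg"], ["bg", "bg"]]
layers = [
    (2, [[None, "jacket"], [None, None]]),    # layer 2: jacket
    (1, [["pants", "pants"], [None, None]]),  # layer 1: pants
]
image = composite(background, layers)
```

Because every input and every intermediate result is a single frame, the output is necessarily a single static frame, which is exactly the limitation the invention addresses.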

Although the above synthesis method is very simple and easy to implement, each component is a single frame, each temporarily synthesized image is a single frame, and thus the final synthesized image is also a single static frame; in other words, this method can produce only a static virtual image. Even if a component for synthesis is a multi-frame dynamic GIF file, only its first frame can be read for synthesizing; the synthesized virtual image is therefore plain and fixed, and it cannot meet the requirements of the user.

The invention

A method of synthesizing dynamic virtual images is provided with the aim of realizing the synthesis of dynamic virtual images and, thus, offering the user an improved service.

To achieve the above objectives, the following technical scheme of the present invention is implemented.

A method of synthesizing dynamic virtual images, comprising the following steps:

a) the synthesizing server side receives a user request for synthesizing a virtual image sent by the user and, in accordance with the information in the user request, obtains the image files of all components for synthesizing the virtual image;

b) the corresponding component image files are read sequentially in accordance with the layer number of each component, and the received component image files are transformed into a specified format;

c) the component formatted in step b) is synthesized with a previously read template file to form an intermediate image;

d) it is determined whether all components have been synthesized; if all components have been synthesized, go to step e); otherwise, go to step f);

e) the synthesized virtual image, based on all the synthesized components, is recorded in the virtual image file and the procedure ends;

f) the corresponding image files of another component are read one after another according to the layer number of each component, and the component image files are converted into the specified format;

g) the component formatted in step f) is synthesized with the previously synthesized intermediate image, and the method returns to step d).

Step g) includes the following steps:

determine the number of frames of the intermediate image and the number of frames of the formatted component, determine the display duration of each frame and the corresponding synthesizing relation between the frames of the component and the frames of the intermediate image; on the basis of the determined synthesizing relation and in accordance with the display duration of each frame, synthesize the frames of this component and the corresponding frames of the intermediate image.

Before recording, frame by frame, the synthesized virtual image based on all the synthesized components into the virtual image file in step e), the method further comprises the following steps:

compress, frame by frame, the image file corresponding to the synthesized virtual image from the last frame to the second frame; and the step of recording the synthesized virtual image based on all the synthesized components into the virtual image file is the following step:

write each frame corresponding to the compressed, synthesized virtual image into the virtual image file.

The compression process, in particular, contains the following steps:

compare, point by point, the pixel value of each frame with the pixel value of the previous frame; if the two pixel values are the same, change the color of the dot to transparent; if the two pixel values are different, no action is performed.
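This comparison-based compression can be sketched as follows; frames are modeled as 2D lists of pixel values, with None standing in for the transparent color. Each frame must be compared against the original, uncompressed previous frame, which is why the pass runs from the last frame down to the second.

```python
TRANSPARENT = None  # stand-in for GIF's transparent color index

def compress_frames(frames):
    """Inter-frame compression as described: from the last frame down
    to the second, replace every pixel that equals the same pixel in
    the previous (original) frame with the transparent color."""
    out = [[row[:] for row in frame] for frame in frames]
    for i in range(len(out) - 1, 0, -1):
        prev = frames[i - 1]  # compare against the uncompressed frame
        for y, row in enumerate(out[i]):
            for x, pixel in enumerate(row):
                if pixel == prev[y][x]:
                    row[x] = TRANSPARENT
    return out

frames = [
    [[1, 1], [2, 2]],
    [[1, 3], [2, 2]],  # only the pixel at (0, 1) changed
]
compressed = compress_frames(frames)
```

Unchanged pixels become transparent in every frame after the first, which is what shrinks the LZW-compressed file: runs of the transparent index compress very well.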

The method further comprises the following step:

pre-receive information about all components of the virtual image that is to be synthesized; step a) includes the following steps:

the user sends to the synthesizing server side a user request carrying the identification information of the user and information about the components for synthesizing the virtual image; by analyzing the user request, the synthesizing server side obtains the labels of the components and the corresponding layer numbers of all components, and also obtains the corresponding component image files according to the received component labels and the corresponding layer numbers.

The information on the components contains at least:

a component label that uniquely identifies the component, and the corresponding layer number of the component.

Transforming the component image files into the specified format, in particular, comprises the following steps:

b11) add the colors from the local color table of the component image file into the global color table, and estimate whether the global color table exceeds the maximum number of colors; if the global color table would exceed the maximum number of colors, then compute the nearest existing color and use this computed nearest color in place of the new color; otherwise, directly add the colors from the local color table into the global color table;

b12) estimate whether each pixel point of the frame is transparent; if a pixel point of the frame is transparent, take the color of that pixel point from the previous frame associated with the frame; otherwise, take the color of the pixel point of the given frame;

b13) set the disposal method following the graphic control extension of the component image file in a uniform way, namely: the area in which the graphic is displayed must be restored to the background color.
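Step b11), insofar as it concerns the color tables, can be sketched as follows; the representation of colors as RGB triples and the 256-entry limit are assumptions for illustration, and "nearest color" uses the squared-distance definition given later in the description.

```python
def color_distance(c1, c2):
    """Squared distance over the red, green and blue channels."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2))

def merge_color_table(global_table, local_table, max_size=256):
    """Merge a component's local color table into the global table.

    New colors are appended while room remains; once the global table
    is full, each remaining local color is mapped to its nearest
    existing color. Returns the grown table and a mapping from each
    local color to the color actually used for it.
    """
    table = list(global_table)
    mapping = {}
    for color in local_table:
        if color in table:
            mapping[color] = color
        elif len(table) < max_size:
            table.append(color)
            mapping[color] = color
        else:
            mapping[color] = min(table, key=lambda c: color_distance(c, color))
    return table, mapping

# A tiny global table that is already full (max_size=2 for the demo).
table, mapping = merge_color_table(
    [(0, 0, 0), (255, 255, 255)],
    [(250, 250, 250), (10, 0, 0)],
    max_size=2,
)
```

With the table full, near-white maps to white and near-black maps to black, which is the degradation the nearest-color rule trades for a single shared table.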

The claimed method further comprises the following steps:

obtain the display duration of each frame from the graphic control extension in the component image file or in the intermediate image file.

Synthesizing the frames of the component and the corresponding frames of the intermediate image in accordance with the display duration of each frame in step g) includes the following steps:

g11) calculate the total display duration of all frames in the intermediate image and, at the same time, calculate the total display duration of all frames in the component, and take the least common multiple of the two calculated total display durations as the total display duration of all frames in the synthesized image;

g12) determine the frame insertion points in accordance with the number of frames of the given component and the display duration of each of its frames, with the frames of the intermediate image and the display duration of each frame in it, and with the total display duration of all frames obtained in step g11);

g13) synthesize the frame of the component and the frame of the intermediate image at each insertion point.
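Steps g11) and g12) can be sketched as follows; delays are in arbitrary time units, and the insertion points are taken as the instants at which either the component or the intermediate image switches frames within the least-common-multiple period.

```python
from math import gcd

def total_duration_and_insertion_points(component_delays, intermediate_delays):
    """Steps g11)-g12): the synthesized animation lasts for the least
    common multiple of the two total display durations, and a new
    frame must be inserted at every instant where either source
    animation switches frames."""
    def frame_starts(delays, total):
        starts, t = [], 0
        while t < total:
            for d in delays:
                if t >= total:
                    break
                starts.append(t)
                t += d
        return starts

    total_a = sum(component_delays)
    total_b = sum(intermediate_delays)
    total = total_a * total_b // gcd(total_a, total_b)  # least common multiple
    points = sorted(set(frame_starts(component_delays, total)) |
                    set(frame_starts(intermediate_delays, total)))
    return total, points

# Component: 2 frames of 30 units each; intermediate: 3 frames of 20.
total, points = total_duration_and_insertion_points([30, 30], [20, 20, 20])
```

Here both totals are 60, so the LCM is 60, and frames switch at instants 0, 20, 30 and 40; each of these instants yields one frame of the synthesized image per step g13).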

Synthesizing the corresponding frames of the given component and the corresponding frames of the intermediate image in accordance with the display duration of each frame in step g) further comprises the following steps:

g21) calculate the total display duration of all frames in the intermediate image and, at the same time, calculate the total display duration of all frames in this component, and take the larger of the total display duration of this component and the total display duration of the intermediate image;

g22) determine the frame insertion points in accordance with the display duration of each frame in this component, with the display duration of each frame in the intermediate image, and with the larger value of the total display duration of the component and the total display duration of the intermediate image;

g23) synthesize the frame of the component and the frame of the intermediate image at each insertion point.

If the display duration of each frame in the intermediate image and the display duration of each frame in this component are the same, then synthesizing the corresponding frames of the given component and the frames of the intermediate image in accordance with the display duration of each frame in step g) further comprises the following steps:

g31) calculate the least common multiple of the number of frames of this component and the number of frames of the intermediate image;

g32) determine the frame insertion points in accordance with the number of frames of this component, the number of frames of the intermediate image, and the least common multiple of the number of frames of the given component and the number of frames of the intermediate image;

g33) synthesize the frame of the component and the frame of the intermediate image at each insertion point.

If the display duration of each frame of the intermediate image and of each frame of this component are the same, and both the number of frames of this component and the number of frames of the intermediate image are powers of 2, then synthesizing the corresponding frames of the given component and the frames of the intermediate image according to the display duration of each frame in step g) further includes the following steps:

g41) take the larger of the number of frames of the component and the number of frames of the intermediate image;

g42) determine the frame insertion points in accordance with the number of frames of the component, the number of frames of the intermediate image, and the maximum of the number of frames of the component and the number of frames of the intermediate image;

g43) synthesize the frame of the component and the frame of the intermediate image at each insertion point.

The synthesis in step g) further comprises the following steps:

g1) analyze each pixel point of the frame of this component, assessing whether the color of the given pixel point is transparent; if the color of the given pixel point is transparent, which means that the color of the corresponding pixel point of the synthesized frame is the same as the color of the corresponding pixel point of the associated frame of the intermediate image, then return to step g1) and continue processing the next pixel point; otherwise, perform step g2);

g2) assess whether the global color table of the intermediate image contains a color equivalent to the color of the given pixel point; if the global color table of the intermediate image has a color equivalent to the color of the given pixel point, then mark the corresponding pixel point of the synthesized frame with the equivalent color, return to step g1) and continue processing the next pixel point; otherwise, perform step g3);

g3) estimate whether the global color table of the intermediate image is completely filled; if the global color table of the intermediate image is not full, then add the color of the corresponding pixel point of the frame of this component to the global color table of the intermediate image and mark the corresponding pixel point of the synthesized frame with that color; if the global color table is full, then look for the nearest color in the global color table of the intermediate image and mark the corresponding pixel point of the synthesized frame with that color.
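Steps g1) through g3) operate per pixel and can be sketched as follows; colors are modeled as RGB triples, None stands for a transparent pixel, and the 256-entry table limit is an assumption for illustration.

```python
def color_distance(c1, c2):
    """Squared distance over the red, green and blue channels."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2))

def synthesize_pixel(component_pixel, intermediate_pixel,
                     global_table, max_size=256):
    """Steps g1)-g3) for a single pixel point.

    `component_pixel` is an RGB triple or None for transparent.
    Returns the pixel for the synthesized frame; `global_table` may
    grow as a side effect (step g3).
    """
    # g1) transparent component pixel: keep the intermediate pixel.
    if component_pixel is None:
        return intermediate_pixel
    # g2) equivalent color already in the global table: use it.
    if component_pixel in global_table:
        return component_pixel
    # g3) otherwise add it if there is room, else use the nearest color.
    if len(global_table) < max_size:
        global_table.append(component_pixel)
        return component_pixel
    return min(global_table, key=lambda c: color_distance(c, component_pixel))

table = [(0, 0, 0)]
p1 = synthesize_pixel(None, (0, 0, 0), table)           # g1: transparent
p2 = synthesize_pixel((9, 9, 9), (0, 0, 0), table, 2)   # g3: added to table
p3 = synthesize_pixel((8, 8, 8), (0, 0, 0), table, 2)   # g3: table full, nearest
```

A full frame synthesis simply applies this function to every coordinate of the component frame and its associated intermediate frame.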

The image files are image files in the Graphics Interchange Format (GIF).

In the method of synthesizing dynamic virtual images provided by the claimed invention, before synthesizing the virtual image the formats of all components are unified into a standard format, and during the synthesizing process the frame display duration insertion approach is used to determine the number of frames of the synthesized virtual image, the display duration of each frame, and the frame in the intermediate image that must be synthesized with a specific frame of the given component; and, at the same time, after the synthesizing process a comparative frame compression algorithm is applied, starting with the last frame.

Thus, synthesizing of dynamic virtual images is carried out, allowing a better, more satisfying service to be provided to the user. In addition, the technology of synthesizing dynamic virtual images can bring great revenue to service providers, because, as a study has shown, 10% of regular Internet users wish, and more than 85% of regular Internet users strongly wish, to synthesize dynamic virtual images.

Brief description of drawings

Figure 1 shows a flowchart representing the implementation of the method in accordance with an embodiment of the present invention.

Figure 2 is a schematic diagram representing the process of determining the frame insertion points during synthesis in an embodiment of the present invention.

Detailed disclosure of the invention

To further clarify the purpose, technical scheme and advantages of the present invention, it is disclosed in detail below with reference to several embodiments.

The key idea of the present invention is the following: after each component is read, it is formatted in a standard way using a single global color table and a single frame disposal method; then, in accordance with the display duration of each frame of this component and of each frame of the intermediate image, the frame insertion points for synthesizing are determined, and the frame of this component and the frame of the intermediate image are synthesized at each insertion point. Additionally, to reduce the size of the synthesized dynamic virtual image, frame-by-frame comparative compression, starting with the last frame, may be performed after synthesizing.

Since each component of the virtual image is a GIF image file, before beginning the synthesizing process all the components of the virtual image to be synthesized should be labeled, so that each component is uniquely identified by its label. At the same time, it is necessary to determine the layer number of each component in accordance with the order in which the components are arranged one behind the other in the virtual image; although the layer number and the label of each component are independent of each other, during synthesis only one component may be defined by a given layer number. In general, layer numbers increase from the background to the foreground; in other words, the layer numbers of the foreground are greater than the layer numbers of the background, so that components with higher layer numbers can overlap components with lower layer numbers, for example: a portrait may overlap the background, a hand may cover the body, etc.

After the label and the layer number of each component are defined, all the component files are stored according to the component label and the layer number in an organized directory structure, for example: the label of the vehicle component is 4 and its layer number is 20; the label of the flower component is 8 and its layer number is 12; and so on. This makes it convenient to read the components at the time of synthesizing virtual images.

In general, synthesizing of virtual images is started and becomes possible in accordance with a user request; before sending the request, the user already knows which components the virtual image to be synthesized contains, as well as the corresponding labels and layer numbers of these components; for example, the user obtains the necessary components in advance and selects a specific matching background and certain clothes. Thus, when a user sends a request, the request will contain information on all the components; in other words, the user sends his identification information together with the information about the components for synthesizing the virtual image to a network server, for example as follows: user ID, label of the 1st component and the corresponding layer number, label of the 2nd component and the corresponding layer number, ..., label of the n-th component and the corresponding layer number. Here, the server that implements the image synthesizing is called the synthesizing server; the user ID can be the login or registration label of the corresponding user.

As shown in figure 1, the process of synthesizing dynamic virtual images in this embodiment of the present invention contains the following steps:

Step 101: when a user requests synthesizing of a virtual image, he sends a user request containing the identification information of the user and the information about the components to the synthesizing server side; the synthesizing server analyzes the user request and obtains from it the number of components for synthesizing the virtual image and the corresponding label and layer number of each component; then the corresponding component image file can be obtained from the directory structure of the components in accordance with the component label and the corresponding layer number.

Step 102: after defining all the components required for synthesizing the virtual image at the current moment, the synthesizing server must first read the virtual image template, and then, in accordance with the various layer numbers, synthesize each component into this template one by one.

Because each user can choose different numbers of components with different layer numbers, the template should serve as the basic pattern of the virtual image, where the template file is a GIF file generated in accordance with the specified format and containing a white or transparent image; the template can also be considered a pre-synthesized intermediate virtual image. In general, the template can be added automatically by the synthesizing server.

Step 103: after reading the virtual image template, in accordance with the component label and the layer number, the synthesizing server reads the corresponding component image files, starting from the component with the smallest layer number.

The specific process is as follows: in accordance with the GIF file format shown in the table, the synthesizing server reads the GIF file header, the logical screen descriptor, the global color table, the application extension, the graphic control extension, the image descriptor, the image data and so on; if there is a local color table, the synthesizing server needs to read the local color table and store it in the corresponding memory structure. If the component is a dynamic GIF image, the synthesizing server needs to read many graphic control extensions, image descriptors and image data blocks to generate a multi-frame image. If a frame has its own local color table, the color of the image in that frame will be determined by the local color table instead of the global color table. Whether the component is a dynamic GIF image can be seen by checking whether the GIF file contains multiple frames; if at least one of the synthesized components is multi-frame, the synthesized virtual image is multi-frame and dynamic.

Since the comment extension and the plain text extension in a GIF file are useless for synthesizing virtual images, the comment extension and the plain text extension can be omitted while reading the GIF images in order to reduce the size of the synthesized file.

Step 104: the read component image files are brought to the specified format; in other words, the format is initialized.

From the description above it is obvious that synthesizing a dynamic GIF image, from the standpoint of the graphic control extensions of the GIF file format, is very difficult because of three problems: 1) in a GIF file the decoder can apply four types of disposal to a displayed frame, which means there are four kinds of dynamic GIFs, whereas a static GIF need not distinguish between these four types, since a static GIF contains only one frame; 2) since each frame in a multi-frame GIF can have its own local color table or can use the colors in the global color table instead of a local color table, this situation can lead to variation among multi-frame GIF files; 3) since the number of frames in the GIF file of one component differs from the number of frames of the other components, and the display duration of each frame also differs from the others, this situation may also lead to variation among multi-frame GIF files.

With respect to these issues, in the embodiments of the present invention the first two problems are solved by formatting the GIF component, while the third problem is solved during the synthesis. A specific way of formatting, in accordance with the embodiments of the present invention, is as follows:

a1) unify the color tables, namely: if, when the colors of the local color table are added into the global color table, the number of colors exceeds the maximum number of colors in the global color table, then the newly added color is replaced by the nearest of the existing colors.

In this case, the distance between two colors is defined as the sum of the squared differences of their red, green and blue values, respectively, and the two colors that have the minimum sum of squares are called the nearest colors.

b1) each frame is made transformable into an image independently; if a frame must be combined with the previous frame in order to be transformed into an image independently, this frame is combined with the previous frame to form an independently renderable image in the unified format, and the disposal method following the graphic control extension is unified as follows: the area in which the graphic is displayed must be restored to the background color.

In this case, the process of combining the current frame with the associated previous frame is as follows: analyze each pixel point in the current frame and paint each pixel point again; if it is transparent, take the color of the corresponding pixel point of the associated previous frame; otherwise, use the color of the pixel point of the current frame.
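This per-pixel combination of a frame with its associated previous frame can be sketched as follows, with frames again modeled as 2D lists and None standing in for the transparent color:

```python
def make_independent(frame, previous_frame):
    """Make a frame independently renderable (formatting step b1):
    every transparent pixel (modeled as None) takes the color of the
    same pixel in the associated previous frame, so the frame no
    longer depends on earlier frames to be displayed."""
    return [
        [prev if pixel is None else pixel
         for pixel, prev in zip(row, prev_row)]
        for row, prev_row in zip(frame, previous_frame)
    ]

previous = [[1, 2], [3, 4]]
frame = [[None, 9], [None, None]]
resolved = make_independent(frame, previous)
```

This is the inverse of the comparison-based compression described earlier: compression introduces transparent pixels where nothing changed, while formatting resolves them back to concrete colors.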

The colors of the local color tables are gradually added into the global color table during the formatting process, while each frame is made independently transformable into an image; each formatted component thus has only a global color table, and every frame in it can be transformed into an image with a single disposal method.

Step 105: synthesize the formatted component and the pre-synthesized intermediate image. The synthesis in this step can be further divided into two steps:

a2) in accordance with the number of frames and the display duration of each frame of the component, and with the display duration of each frame of the intermediate image, determine which particular frame of this component is synthesized with which frame of the intermediate image, namely the corresponding synthesis relation between the frames of this component and the frames of the intermediate image. In this case, the display duration of each frame can be obtained from the graphic control extension of the component image file or of the intermediate image file.

Because the component can be dynamic, i.e. a multi-frame image, because the intermediate image may also be multi-frame, and because the display durations of the frames may differ, in order to synthesize the current component and the intermediate image it is first necessary to determine the number of frames of the synthesized image and to determine which frame of the intermediate image must be synthesized with which frame of the current component, namely: first determine the insertion points.

b2) at each insertion point, synthesize the current frame of the component and the current frame of the intermediate image, namely: insert the current frame of the component into the current frame of the intermediate image.

In the process of synthesizing, steps a2) and b2) mainly consist in determining the insertion points in accordance with the relevant information about the frames of the component and the frames of the intermediate image; the relevant information here comprises: the number of frames of the component, the display duration of each frame of the component, the number of frames of the intermediate image and the display duration of each frame of the intermediate image. A common way of determining the frame insertion points is as follows.

First, calculate the total display duration of all frames of the intermediate image and, at the same time, calculate the total display duration of all frames of the component, and then take the least common multiple of the two calculated total display durations as the total display duration of all frames of the synthesized image.

Then determine the number of insertion points of the frames of the component and the number of insertion points of the frames of the intermediate image in accordance with the display duration of each frame of the component, the display duration of each frame of the intermediate image and the total display duration of all frames of the synthesized image, respectively; and further determine the actual insertion points according to the display duration of each frame. At each insertion point one frame of the synthesized image is created, and each synthesized frame is generated by inserting a particular frame of the component into the associated frame of the intermediate image, where the associated frame of the intermediate image denotes the nearest frame of the intermediate image among the frames starting at or before the current frame of the component.
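Counting the insertion points under the least-common-multiple rule can be sketched as follows (an illustrative sketch; durations are in the same arbitrary time units as in the patent, and the function names are not from the patent):

```python
from math import gcd

def frame_starts(durations, total):
    # Start times of the periodically repeated frames within [0, total).
    t, i, starts = 0, 0, []
    while t < total:
        starts.append(t)
        t += durations[i % len(durations)]
        i += 1
    return starts

def insertion_points(component, intermediate):
    comp_total, inter_total = sum(component), sum(intermediate)
    # Least common multiple of the two total display durations.
    total = comp_total * inter_total // gcd(comp_total, inter_total)
    points = sorted(set(frame_starts(component, total)) |
                    set(frame_starts(intermediate, total)))
    return total, points
```

For the figure 2 data (component durations 30, 30, 40; intermediate image durations 40, 50, 60) this yields a total duration of 300 and 14 insertion points, in agreement with the 14 synthesized frames of the figure 2 example.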

To synthesize a frame of the component and the associated frame of the intermediate image means to insert, at each insertion point, the current frame of the component into the current frame of the intermediate image. During the synthesis of these two frames, the colors used by the current frame of the component are added into the global color table of the intermediate image; if the table is full, these colors are replaced by the nearest colors.

With reference to figure 2, assume that the intermediate image has three frames, M1, M2 and M3, whose display durations are 40, 50 and 60, respectively, so that the total is 150; the component has three frames, G1, G2 and G3, whose display durations are 30, 30 and 40, respectively, so that the total is 100. Thus, the least common multiple of these two totals is 300; in other words, the total display duration of the frames of the synthesized virtual image is 300. Accordingly, each frame of the intermediate image is periodically inserted twice, so there are 6 insertion points, i.e. the intermediate image is replaced 6 times; each frame of the component is periodically inserted three times, so there are 9 insertion points, i.e. the component image is replaced 9 times. Since the first insertion point of the intermediate image and the first insertion point of the component are the same, i.e. their first frames coincide, while the remaining insertion points do not coincide, there are 14 insertion points in total, and thus 14 synthesized frames are formed; the corresponding actual positions of insertion points 1 to 14 are shown in figure 2.

After determining the number of synthesized frames and their display durations, synthesize the frames, where each synthesized frame is obtained by synthesizing a frame of the component and the associated frame of the intermediate image; the associated frames are determined by the corresponding insertion times: each synthesized frame is obtained by synthesizing the frame of the component whose start is nearest to, or coincides with, the insertion time of the synthesized frame and the frame of the intermediate image whose start is nearest to, or coincides with, that insertion time. In this method of synthesis it is still required to insert the frame of the component into the corresponding frame of the intermediate image; if two frames coincide, these two coinciding frames are synthesized into a single frame. As shown in figure 2, the newly synthesized first frame is obtained by inserting frame G1 of the component into frame M1 of the intermediate image, the newly synthesized second frame by inserting frame G2 into frame M1, the newly synthesized third frame by inserting frame G2 into frame M2, the newly synthesized fourth frame by inserting frame G3 into frame M2, the newly synthesized fifth frame by inserting frame G3 into frame M3, the newly synthesized sixth frame by inserting frame G1 into frame M3, the newly synthesized seventh frame by inserting frame G2 into frame M3, and the other frames are obtained in the same way. Here the newly synthesized first frame is synthesized frame 1 in figure 2, and the remaining frames can be obtained similarly.
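The pairing of component frames with their associated intermediate-image frames at each insertion point can be sketched as follows (an illustrative sketch; the frame labels follow the figure 2 example, and the function names are not from the patent):

```python
from math import gcd
import bisect

def frame_starts(durations, total):
    # Start times of the periodically repeated frames within [0, total).
    t, i, starts = 0, 0, []
    while t < total:
        starts.append(t)
        t += durations[i % len(durations)]
        i += 1
    return starts

def synthesized_pairs(comp, inter):
    # comp/inter: lists of (label, duration) pairs. For each insertion
    # point, pair the component frame with the associated intermediate
    # frame, i.e. the frame whose start is nearest at or before that time.
    c_lab, c_dur = zip(*comp)
    i_lab, i_dur = zip(*inter)
    total = sum(c_dur) * sum(i_dur) // gcd(sum(c_dur), sum(i_dur))
    c_starts = frame_starts(list(c_dur), total)
    i_starts = frame_starts(list(i_dur), total)
    points = sorted(set(c_starts) | set(i_starts))
    def at(starts, labels, t):
        k = bisect.bisect_right(starts, t) - 1
        return labels[k % len(labels)]
    return [(at(c_starts, c_lab, t), at(i_starts, i_lab, t)) for t in points]
```
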

The insertion process is performed according to the following method: during synthesis, analyze each pixel point of the given frame of the component. If the pixel is transparent, the color of the corresponding pixel point of the synthesized frame is identical to the color of the corresponding pixel of the associated frame of the intermediate image. If the pixel is opaque, there are three ways of processing: a) look for an equivalent color in the global color table of the intermediate image and mark the corresponding pixel point of the synthesized frame with this equivalent color; b) if there is no equivalent color, check whether the global color table of the intermediate image is full; if it is not full, add the color of the corresponding pixel point of the given frame of the component into the global color table of the intermediate image and mark the corresponding pixel point of the synthesized frame with that color; c) if the global color table of the intermediate image is full, search the global color table of the intermediate image for the nearest color and mark the corresponding pixel point of the synthesized frame with that color.
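Resolving an opaque pixel against the global color table can be sketched as follows (a minimal sketch; the table is assumed to be a mutable list of RGB tuples with a 256-entry limit, as in GIF, and the function name is not from the patent):

```python
def resolve_color(table, color, max_colors=256):
    # a) an equivalent color already exists in the global color table.
    if color in table:
        return table.index(color)
    # b) the table is not yet full: add the component pixel's color.
    if len(table) < max_colors:
        table.append(color)
        return len(table) - 1
    # c) the table is full: mark the pixel with the nearest color.
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(c, color))
    return min(range(len(table)), key=lambda i: dist(table[i]))
```
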

To reduce the size of the synthesized file, the larger of the total display duration of the component and the total display duration of the intermediate image can be used as the total display duration of the synthesized frames, instead of the least common multiple mentioned above. Referring again to figure 2, the number of frames of the intermediate image is 3, their durations are 40, 50 and 60, respectively, and the total duration is 150; the number of frames of the component is 3, their durations are 30, 30 and 40, respectively, and the total duration is 100; the larger of the total duration of the component and the total duration of the intermediate image is 150. Figure 2 shows that if the periodic cycle is set to 150, the display of the intermediate image is completed at the end of M3; as for the component, the display is completed between the second G2 and G3; hence it is necessary to synthesize only 7 frames, namely frames 1 to 7 in figure 2. Of them, synthesized frame 1 is obtained by inserting frame G1 of the component into frame M1 of the intermediate image, synthesized frame 2 by inserting frame G2 into frame M1, synthesized frame 3 by inserting frame G2 into frame M2, etc. This process reduces the file size of the synthesized image, whereas its effect on the user's perception is small.
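The same insertion-point computation with the larger total duration in place of the least common multiple can be sketched as follows (an illustrative sketch reusing the figure 2 data; the function names are not from the patent):

```python
def frame_starts(durations, total):
    # Start times of the periodically repeated frames within [0, total).
    t, i, starts = 0, 0, []
    while t < total:
        starts.append(t)
        t += durations[i % len(durations)]
        i += 1
    return starts

def insertion_points_truncated(component, intermediate):
    # The cycle length is the larger of the two total display durations,
    # not their least common multiple, so fewer frames are synthesized.
    total = max(sum(component), sum(intermediate))
    return sorted(set(frame_starts(component, total)) |
                  set(frame_starts(intermediate, total)))
```

For the component durations (30, 30, 40) and intermediate-image durations (40, 50, 60) this gives 7 insertion points instead of 14.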

If the display durations of the frames of the intermediate image and of the frames of the component are uniform, an alternative to the duration-based insertion can be the least-common-multiple-of-frame-counts method, namely: take the least common multiple of the number of frames of the component and the number of frames of the intermediate image as the number of frames of the synthesized image. For example, the number of frames of the component is 3 and the display duration of each of them is 50; the number of frames of the intermediate image is 4 and the display duration of each of them is 50; then the least common multiple of the number of frames of the component and the number of frames of the intermediate image is 12; in other words, it is determined that the frames of the component are periodically inserted four times, while the frames of the intermediate image are periodically inserted three times.

If the display durations of the frames of the intermediate image and of the frames of the component are uniform, and both the number of frames of the component and the number of frames of the intermediate image are powers of 2, the insertion algorithm can be further simplified by taking the larger of the number of frames of the component and the number of frames of the intermediate image. For example, the number of frames of the component is 2 and the display duration of each of them is 50; the number of frames of the intermediate image is 4 and the display duration of each of them is 50; it is then determined that the frames of the component are periodically inserted twice, while the frames of the intermediate image are inserted only once.
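Both uniform-duration shortcuts can be sketched together as follows (an illustrative sketch; the function name is not from the patent):

```python
from math import gcd

def repeat_counts(n_component, n_intermediate, powers_of_two=False):
    # Number of frames of the synthesized image, plus how many times each
    # frame sequence is repeated: the least common multiple of the frame
    # counts, or simply the larger count when both are powers of 2.
    if powers_of_two:
        total = max(n_component, n_intermediate)
    else:
        total = n_component * n_intermediate // gcd(n_component, n_intermediate)
    return total, total // n_component, total // n_intermediate
```
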

Step 106: take the newly synthesized virtual image as the intermediate image, and then determine whether all the requested components have been synthesized; if all components requested by the user have been synthesized, perform step 107; otherwise, return to step 103, where the image file of the component of the next layer is read, according to its layer number, for processing and synthesizing. This process is repeated until all of the components requested by the user have been synthesized, yielding the dynamic virtual image required by the user.

Step 107: since each frame of the synthesized dynamic GIF file can form an image independently, the file is relatively large, so the synthesized virtual image requires compression to reduce its size.

According to the description of step 104, the synthesized virtual image has only a global color table, and colors are regarded as the same if the corresponding pixel values of the pixels in the frames are the same. In this case, in particular, the following compression algorithm can be used: an unused color is selected as the transparent color, and processing starts from the last frame; the value of each pixel point of the current frame is compared with the value of the corresponding pixel point of the previous frame; if the two values are equal, the color of this point is changed to transparent, and so on, until the second frame has been compressed by this comparison; the disposal method after display of all frames is unified as "no disposal", so the graphic remains in place. After this compression algorithm is performed, each frame retains only those pixels that differ from the pixels of the previous frame. The file size of the virtual image compressed with this algorithm becomes much smaller, for example, from a few hundred K to less than a hundred K.
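This back-to-front delta compression can be sketched as follows (an illustrative sketch; frames are flat lists of palette indices, and `TRANSPARENT` is assumed to be the index of the unused color chosen as transparent):

```python
TRANSPARENT = -1  # assumed index of an unused color chosen as transparent

def compress_frames(frames):
    # Work from the last frame back to the second frame: any pixel equal
    # to the corresponding pixel of the previous frame becomes transparent,
    # so each frame keeps only the pixels that differ from its predecessor.
    out = [list(f) for f in frames]
    for k in range(len(out) - 1, 0, -1):
        for p in range(len(out[k])):
            if frames[k][p] == frames[k - 1][p]:
                out[k][p] = TRANSPARENT
    return out
```
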

Step 108: the GIF file header, the logical screen descriptor, the global color table and the application extension of the memory structure used for synthesizing the virtual image are written into the GIF file of the virtual image, in the GIF file format; then the graphic control extension, the image descriptor and the image data of each frame are written into the GIF file of the virtual image in turn, and the GIF trailer is added at the end. Of these, the image data are written into the GIF file of the virtual image using the LZW compression method.

Whether the synthesized virtual image should be compressed or not is determined according to specific requirements; if there is no compression requirement, the synthesized virtual image can be written directly, frame by frame, into the image file of the virtual image; if compression is required, the process can be performed as in steps 107 and 108.

After performing the above steps, the resulting synthesized dynamic virtual image is multi-frame; each frame, together with its associated previous frames, forms an image, and the dynamic effect is achieved by the periodic display of these images.

The above description presents only preferred embodiments of the present invention and is not intended to limit the scope of the present invention.

1. A method of synthesizing dynamic virtual images, comprising the following steps:

a) a synthesizing server side receives a user request for synthesizing a virtual image sent by a user and, in accordance with the information of the user request, retrieves the image files of all components for synthesizing the virtual image;

b) sequentially read the corresponding image files of the component in accordance with the layer number of each component and transform the received image files of the components into a specified format;

c) synthesize the component formatted in step b) and a pre-read template file to form an intermediate image;

d) determine whether all components have been synthesized; if all components have been synthesized, go to step e), otherwise go to step f);

e) record the synthesized virtual image, based on all the synthesized components, into an image file of the virtual image and end the procedure;

f) read one after another the corresponding image files of the next component according to the layer number of each component and transform the image files of the component into the specified format;

g) synthesize the component formatted in step f) and the pre-synthesized intermediate image, and return to step d).

2. The method according to claim 1, in which step g) comprises the following steps: determining the number of frames of the intermediate image and the number of frames of the formatted component, determining the display duration of each frame and the corresponding synthesis relation between the frames of the component and the frames of the intermediate image, and, based on the determined corresponding synthesis relation and in accordance with the display duration of each frame, synthesizing the frames of the component and the corresponding frames of the intermediate image.

3. The method according to claim 1, which, before recording the synthesized virtual image, based on all the synthesized components, into the image file of the virtual image in step e), further comprises the following step:

compressing, frame by frame, the image file corresponding to the synthesized virtual image, from the last frame to the second frame; and the step of recording the synthesized virtual image, based on all the synthesized components, into the image file of the virtual image comprises the following:

recording each correspondingly compressed frame of the synthesized virtual image into the image file of the virtual image.

4. The method according to claim 3, in which the compression process, in particular, comprises the following steps:

sequentially comparing, point by point, the pixel values of a frame with the pixel values of the previous frame; if the two pixel values are the same, changing the color of the point to transparent; if the two pixel values are different, performing no action.

5. The method according to any one of claims 1 to 3, which further comprises the following step:

pre-receiving information about all the components of the virtual image to be synthesized, wherein step a) comprises the following steps:

the user sends to the synthesizing server side a user request carrying the identification information of the user and the information about the components for synthesizing the virtual image; by analyzing the user request, the synthesizing server side obtains the labels of the components and the corresponding layer numbers of all the components, and also retrieves the corresponding image files of the components according to the obtained component labels and the corresponding layer numbers.

6. The method according to claim 5, in which the information about the components comprises, at a minimum,

a component label that uniquely identifies the component, and the corresponding layer number of the component.

7. The method according to any one of claims 1 to 3, in which transforming the image files of the component into the specified format, in particular, comprises the following steps:

b11) adding the colors from the local color table of the image file of the component into the global color table, and estimating whether the global color table exceeds the maximum number of colors or not; if the global color table would exceed the maximum number of colors, calculating the nearest color and using this calculated nearest color in place of the new color; otherwise, directly adding the colors from the local color table into the global color table;

b12) estimating whether each pixel point of a frame is transparent; if a pixel point of the frame is transparent, taking the color of the corresponding pixel point of the associated previous frame; otherwise, taking the color of the pixel point of the given frame;

b13) unifying the disposal method, applied after display, of the graphic control extension of the image file of the component, namely: the color of the area in which the graphic is displayed must be restored to the background color.

8. The method according to claim 2, which further comprises the following step:

obtaining the display duration of each frame from the graphic control extension in the image file of the component or in the image file of the intermediate image.

9. The method according to claim 2, in which synthesizing the frames of the component and the corresponding frames of the intermediate image in accordance with the display duration of each frame in step g) comprises the following steps:

g11) calculating the total display duration of all frames of the intermediate image and, at the same time, calculating the total display duration of all frames of the component, and taking the least common multiple of the two calculated total display durations as the total display duration of all frames of the synthesized image;

g12) determining the frame insertion points in accordance with the number of frames of the given component and the display duration of each of its frames, with the number of frames of the intermediate image and the display duration of each of its frames, and with the total display duration of all frames obtained in step g11);

g13) synthesizing the frame of the component and the frame of the intermediate image at each insertion point.

10. The method according to claim 2, in which synthesizing the corresponding frames of the component and the frames of the intermediate image in accordance with the display duration of each frame in step g) comprises the following steps:

g21) calculating the total display duration of all frames of the intermediate image and, at the same time, calculating the total display duration of all frames of the given component, and taking the larger of the total display duration of the component and the total display duration of the intermediate image;

g22) determining the frame insertion points in accordance with the display duration of each frame of the given component, with the display duration of each frame of the intermediate image, and with the larger of the total display duration of the component and the total display duration of the intermediate image;

g23) synthesizing the frame of the component and the frame of the intermediate image at each insertion point.

11. The method according to claim 2, in which, if the display duration of each frame of the intermediate image and the display duration of each frame of the component are the same, synthesizing the corresponding frames of the component and the frames of the intermediate image in accordance with the display duration of each frame in step g) comprises the following steps:

g31) calculating the least common multiple of the number of frames of the given component and the number of frames of the intermediate image;

g32) determining the frame insertion points in accordance with the number of frames of the given component, the number of frames of the intermediate image, and the least common multiple of the number of frames of the given component and the number of frames of the intermediate image;

g33) synthesizing the frame of the component and the frame of the intermediate image at each insertion point.

12. The method according to claim 2, in which, if the display durations of the frames of the intermediate image and of the frames of the component are the same, and both the number of frames of the given component and the number of frames of the intermediate image are powers of 2, synthesizing the corresponding frames of the component and the frames of the intermediate image in accordance with the display duration of each frame in step g) comprises the following steps:

g41) taking the larger of the number of frames of the component and the number of frames of the intermediate image;

g42) determining the frame insertion points in accordance with the number of frames of the component, the number of frames of the intermediate image, and the larger of the number of frames of the component and the number of frames of the intermediate image;

g43) synthesizing the frame of the component and the frame of the intermediate image at each insertion point.

13. The method according to any one of claims 2 and 9 to 12, in which the synthesis in step g) further comprises the following steps:

g1) analyzing each pixel point of a frame of the given component, assessing whether the color of the given pixel point is transparent; if the color of the given pixel point is transparent, the color of the corresponding pixel point of the synthesized frame is the same as the color of the corresponding pixel point of the associated frame of the intermediate image; then returning to step g1) and continuing to process the next pixel point; otherwise, performing step g2);

g2) assessing whether the global color table of the intermediate image has a color equivalent to the color of the given pixel point; if the global color table of the intermediate image has a color equivalent to the color of the given pixel point, marking the corresponding pixel point of the synthesized frame with the equivalent color, returning to step g1) and continuing to process the next pixel point; otherwise, performing step g3);

g3) estimating whether the global color table of the intermediate image is completely filled; if the global color table of the intermediate image is not full, filling the global color table of the intermediate image with the color of the corresponding pixel point of the frame of the given component and marking the corresponding pixel point of the synthesized frame with that color; if the global color table is completely filled, searching for the nearest color in the global color table of the intermediate image and marking the corresponding pixel point of the synthesized frame with that color.

14. The method according to any one of claims 1 to 3, in which the image files are image files in the Graphics Interchange Format (GIF).



 

Same patents:

FIELD: television engineering, in particular, method for selection of objects on complex underlying background, possible use for systems for automatic detection of coordinates of objects in television automatics.

SUBSTANCE: in accordance to the method, histogram processing threshold is optimized with the target of reaching minimal radius of compactness of binary image. For that purpose, a set of weighted radiuses of binary elements is computed and resulting compactness radius is compared to standard, which represents a radius of ideally compact figure, such as a circle with area equal to area of resulting binary image. Then, range of one of histograms is reduced or increased and compactness radius calculations are repeated until its value maximally approaches the standard value. Object coordinates shifting value is determined by scanning the area of object/background binary image with sliding strobe of object and by detection of position with the least compactness radius.

EFFECT: high selectivity of object image.

1 cl, 5 dwg

FIELD: digital image processing technology, in particular, processing of signals for selecting moving objects in a series of television images.

SUBSTANCE: in accordance to the invention, image turning angle of previous frame is determined relatively to standard image, increase of precision of calculation of shift parameters up to shares of pixel, change of standard image depending on computed values of shift and turn, shift of background image for integer number of pixels, turn of image around current frame around image center and following shift of turned image for fractional number of pixels, computation of value of threshold value with consideration of turbulence of atmosphere, vibration of image sensor and error when determining parameters of shift and turn, inter-frame filtration of threshold processing results.

EFFECT: increased precision of object selection due to resistance to spatial distortions.

4 cl

FIELD: analysis of television images, possible use in video surveillance systems.

SUBSTANCE: in accordance to the invention, a set of digital video data from first source, representing first image, is identified as standard video data of first source. Then a second set of video data is read, which represents current image. Difference ratio is computed using standard digital video data and current set of digital video data. If difference ratio exceeds a threshold, a query is shown to system user on the display to receive a response about whether current digital video data belong to identified source, or originate from new source. If response points at new source, then current set of digital video data is dispatched for storage into second memory cell, connected to second source. Then the current set of digital video data is identified as standard digital video data for second source.

EFFECT: division of digital video data outputted by several sources.

5 cl, 9 dwg

FIELD: television equipment, allowing selective image scaling, in particular, industrial television equipment used for technological monitoring.

SUBSTANCE: in accordance to invention, components of combined image: with increased scale and normal (without scaling), - are formed in charge form on targets of first and second television sensors at varying times of exposition, optimal for each one of scene fragments being transmitted. Due to that, in output image of television system bright and light sections (hot rolled metal, measuring ruler) are transferred same as in prototype, without limitations of white, while dark and/or low light sections are transferred with high signal/noise ratio, and dynamic range of brightness levels is expanded.

EFFECT: expanded dynamic range of brightness levels for controlled objects.

2 cl, 7 dwg

FIELD: technical systems for provision of safety and automated monitoring, in particular, systems for automated control of situation in auditoriums.

SUBSTANCE: in accordance to invention at least one video camera is mounted in auditorium for producing image of auditorium, and at least one computer with memory, interconnected via a local area network, while in memory of computer a database is generated, storing data, reflecting filling of auditorium, in accordance to sold tickets, and computer can process video signal received from video camera for producing data about filling of auditorium with possible storage of these data in memory for following analysis, and also with possible comparison of produced data to data stored in database and with possible generation of signal of disruption of set mode in case if mismatch of data being compared exceeds the predetermined threshold value.

EFFECT: increased efficiency of control and statistical counting of access of viewers to auditorium.

2 cl, 2 dwg

FIELD: railway transport.

SUBSTANCE: invention relates to safety devices to be installed on dangerous section of railways. Observation and warning equipment with video processor subsystem for processing and recording images, alarm signaling unit and transmitting part of wireless communication subsystem is installed in area of potential dangerous section of track, for instance, crossing area. Train is furnished with train movement inter locking equipment, sound annunciator, video monitor and receiving part of wireless communication subsystem. Said video processor subsystem includes two video cameras covering potentially dangerous section of track. It includes also video image processor, image recorder, timer, video camera control unit, illumination pickup, lighting unit and resolver. Transmitting and receiving parts of wireless communication subsystem are made in form of video image transmitter and receiver with voice accompaniment. Display of video monitor is in field of vision of driver. Train movement inter locking equipment is operated by driver. System provides driver with on-line information on situation at nearest dangerous zone of crossing. Driver takes decision on emergency braking of train or continuation of movement basing of information available.

EFFECT: prevention of wrong decision.

3 cl, 4 dwg

FIELD: video surveillance technologies.

SUBSTANCE: method includes surveillance of state of object, by surveillance blocks, each of which includes camera and microphone, low-frequency signals received from each block are converted to high-frequency television modulated signals, which are inputted into unified cable main, formed by coaxial television cable, along which received independent signals are sent to input of control panel, in form of television receiver or computer, provided with extension board, allowing to receive and display images of several surveillance objects concurrently, while power is fed along coaxial chamber of television cable main.

EFFECT: higher efficiency.

2 cl, 3 dwg

FIELD: video surveillance.

SUBSTANCE: method includes video surveillance of controlled object state, while into television cable main of object high-frequency television modulated signal is sent, while to receive signal concerning state of S objects, each of which includes group of N video surveillance blocks, including camera and microphone, video-audio signals from each group of N video surveillance blocks are combined along low frequency, received complex video signal is converted from each group of N video surveillance blocks into high-frequency television modulated signal and it is synchronized with unified cable main - coaxial television cable, in arbitrary groups combination, via which received independent S signals are sent to input of visualization and/or recording systems.

EFFECT: higher efficiency.

3 cl, 2 dwg

FIELD: television systems.

SUBSTANCE: method comprises subtracting reference and current images, breaking the image series to be processed into fragments, and converting the characteristic features of the images into signals. The signals from one of the images are recorded as reference ones and are compared, e.g., by subtracting, with corresponding current signals, and, after the threshold processing, the difference signals obtained are converted into the binary signals for control of spatial filtration . As a result, the fragments of the current image, for which the control signals exceed the threshold, are transmitted, whereas the fragments, for which the signals are equal or less than the threshold value, are suppressed.

EFFECT: enhanced quality of the object image.

11 cl, 14 dwg

The invention relates to the field of optoelectronic devices mounted on a movable base that convert electromagnetic radiation into an electrical signal carrying image information, and to a video monitoring device for observing a process.


FIELD: information technologies.

SUBSTANCE: the invention refers to a device and method for receiving data in a wireless terminal, and in particular to a device and method for communicating and processing data received from a set of devices. The device contains: a first data device and a second data device generating first and second data according to a first and a second mode-select input, respectively; a multi-source data processor activating the data device chosen between the first and the second in response to the mode-select input; a data interface connected to the first and second data devices, which buffers the data generated by the activated data device in a specified volume so that it can be processed by the multi-source data processor, and coordinates the buffered data; a display presenting image data output by the multi-source data processor; and an audio processor reproducing audio data output by the multi-source data processor.

EFFECT: effective support of multi-source data-processing devices in a wireless terminal.

17 cl, 19 dwg, 1 tbl

FIELD: information technology.

SUBSTANCE: a) the three-dimensional videogame system is capable of displaying a left-right sequence through various independent channels (VGA or a video channel), with the display device sharing memory in an immersion mode; b) the system contains a videogame cursor that controls and verifies the validity of the image viewing angles and assigns the textures, illumination, positions, movement and other aspects connected with each object participating in the game; it creates left and right background buffers, creates images and displays the information in working buffers; c) the system can process data connected with the xyz coordinates of the object image in real time; the random-access memory (RAM) for the left-right buffer is increased, making it possible to recognise and select the corresponding background buffer, whose information is transferred to the working buffer or to the additional independent display device sharing memory in an immersion mode.

EFFECT: solves the problem of technology incompatibility in the display of three-dimensional images.

11 cl, 13 dwg

FIELD: electric engineering.

SUBSTANCE: a programmable touch screen is provided which can be programmed to display a ten-button destination call panel, "up" and "down" call buttons or an N-button destination panel, as well as buttons reflecting the functions of the building's main floors, including such functions as the cafeteria, the top floor, the parking level, the exit to public transport and to the lobby, and the floors occupied by major tenants. The controller programs the touch screen depending on the load, the time of day and the building floor where the touch screen is located, or on the appearance of a special passenger near the touch screen.
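
A sketch of how such a controller might choose a panel layout; the load threshold, the lunchtime window and the particular set of function buttons are illustrative assumptions, not taken from the patent:

```python
def panel_layout(hour, load, floors=10):
    """Choose which panel the programmable touch screen presents.

    `hour` is the time of day (0-23), `load` the current lift load
    as a fraction of capacity, `floors` the number of destinations."""
    if load > 0.8:
        # heavy traffic: fall back to the simple 'up'/'down' call buttons
        return ["up", "down"]
    panel = [str(f) for f in range(1, floors + 1)]  # N-button destination panel
    if 11 <= hour < 14:
        panel.append("cafeteria")                   # lunchtime function button
    panel += ["parking", "lobby"]                   # always-offered function buttons
    return panel
```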

EFFECT: better control of lift load, better use of the lift by passengers, and lift call devices that adapt to different times of day, different load modes and the servicing of different passengers.

6 cl, 10 dwg

FIELD: electric engineering.

SUBSTANCE: the method provides for determining the area related to a red eye and its orientation, and detecting whether a second eye is present on the face of the photographed subject. If the second eye has not been detected, the colour of the first eye's pupil is corrected and the method is completed. Otherwise the colour of the detected second eye's pupil is determined: if it is not red, the colour of the first eye's pupil is corrected by replacing the colour of the red points with the colour of the second eye's pupil; if it is red, the colours of both pupils are corrected by replacing the colour of the red points with the same dark colour. The area related to the red eye is detected automatically: a colour mark is recorded into a mark array, every image point is filtered with four directed edge-detection filters, connected domains of points are determined, and fixed criteria are calculated by which the connected domains are classified into red-eye areas and false areas.
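
The branching logic of the correction step can be sketched as follows; the red-detection ratio, the replacement dark colour and the representation of pupils as pixel lists are illustrative assumptions, not taken from the patent:

```python
DARK = (30, 30, 30)  # assumed replacement dark colour

def is_red(px, ratio=1.5):
    """Assumed red test: red channel dominates green and blue."""
    r, g, b = px
    return r > ratio * max(g, b, 1)

def avg(region):
    """Mean colour of a pupil region."""
    n = len(region)
    return tuple(sum(p[i] for p in region) // n for i in range(3))

def correct_red_eye(eye1, eye2=None):
    """eye1/eye2: lists of (r, g, b) pupil pixels; eye2 is None when no
    second eye was detected. Returns corrected copies of both regions."""
    if eye2 is None:
        # no second eye: just darken the red points of the first pupil
        return [DARK if is_red(p) else p for p in eye1], None
    if any(is_red(p) for p in eye2):
        # both pupils red: replace red points in both with the same dark colour
        return ([DARK if is_red(p) else p for p in eye1],
                [DARK if is_red(p) else p for p in eye2])
    # second pupil healthy: recolour red points of the first with its colour
    target = avg(eye2)
    return [target if is_red(p) else p for p in eye1], eye2
```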

EFFECT: a semi-automatic or fully automatic method of red-eye elimination that accounts for differences in eye hue during correction and requires no operator experience.

5 cl, 5 dwg

FIELD: physics, measurement.

SUBSTANCE: the invention concerns methods of processing electromagnetic signals for a tool for modelling and visualising the stratified underground formations surrounding the tool. Electromagnetic signals corresponding to the current position of the tool's measurement point are obtained during measurement-while-drilling, and a multilayer model is generated from them. A histogram describing the uncertainty of the multilayer model is used to generate multiple colour-tone (hue) values, representing formation-property forecasts for depth levels above and below the tool, together with corresponding multiple saturation values. A screen diagram is generated and displayed; it uses colours to visualise the formation-property forecasts for depth levels above and below the tool for further positions of the measurement point. A new column of the screen diagram is generated for the current measurement point; its colours are based on the multiple hue and saturation values obtained from the histogram, with the saturation values representing the uncertainties of the respective forecasts.
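
The hue/saturation encoding of one new screen-diagram column might look like this, using Python's standard `colorsys` module; the hue range (blue for low property values, red for high) and the linear mapping of uncertainty to saturation are illustrative assumptions, not taken from the patent:

```python
import colorsys

def column_colours(forecasts, uncertainties, prop_min, prop_max):
    """Build one screen-diagram column: each depth level's property
    forecast sets the hue, its uncertainty (0 = certain, 1 = unknown)
    sets the saturation. Returns a list of (r, g, b) triples in 0..1."""
    col = []
    for f, u in zip(forecasts, uncertainties):
        t = (f - prop_min) / (prop_max - prop_min)  # position in property range
        hue = (1.0 - t) * 2.0 / 3.0                 # blue (low) .. red (high)
        sat = max(0.0, 1.0 - u)                     # uncertain values wash out
        col.append(colorsys.hsv_to_rgb(hue, sat, 1.0))
    return col
```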

EFFECT: modelling and visualisation of underground formation properties during well drilling.

25 cl, 10 dwg

FIELD: physics, computation equipment.

SUBSTANCE: the group of inventions concerns data-processing means in 3D graphics visualisation and display that require computation using intermediate memory addressing. Various embodiments involve intermediate memory buffers in video memory that allow running graphics-interface programs to support algorithms exceeding the shading-procedure limits of single programs. The intermediate buffers enable shared use of processing data within the computer system. The buffer size, i.e. the volume of data stored in the intermediate buffers, can vary with the resolution of the graphics data.

EFFECT: increased or preserved data-processing speed for computer graphics in a computer system that uses intermediate addressing for repeated use of data at other processing stages.

37 cl, 10 dwg

FIELD: physics, processing of images.

SUBSTANCE: the invention relates to methods of television image processing, namely to methods of detecting and smoothing stepped edges in an image. The method consists in the following: the pixel intensity values (PIV) of the image are recorded in memory; for every line, the PIV of the current line are extracted; the PIV of the line following the current line are extracted; the pixel intensity difference modulus dependence (PIDMD) is calculated for the mentioned lines at each column; the PIDMD is processed with a threshold function to suppress noise; "hill" areas are determined in the PIDMD; single steps are identified from the "hill" areas; the PIV of the line next nearest to the current line are extracted, and the "hill"-area operations are repeated for the current line and that line; for every part of an image line identified as a single step, the presence of a stepped area in the line above is checked, and if one is present, the two stepped areas are defined as a double stepped area (DSA); the parts of the DSA lines are shifted with respect to each other, and the DSA is divided into two single steps; the PIV are extracted for the line two lines away from the current line, and the "hill"-area operations are repeated; single steps are smoothed by averaging pixel intensity values.
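
The per-line "hill" detection step can be sketched as follows; representing hills as (start, end) column runs of the thresholded difference modulus is an illustrative choice, not taken from the patent:

```python
def hill_areas(line_a, line_b, threshold):
    """Compute the modulus of the pixel-intensity difference between two
    image lines, zero out sub-threshold values to suppress noise, and
    return the (start, end) column runs of the remaining 'hill' areas."""
    diff = [abs(a - b) if abs(a - b) > threshold else 0
            for a, b in zip(line_a, line_b)]
    runs, start = [], None
    for x, d in enumerate(diff):
        if d and start is None:
            start = x                     # a hill begins
        elif not d and start is not None:
            runs.append((start, x - 1))   # a hill ends
            start = None
    if start is not None:
        runs.append((start, len(diff) - 1))
    return runs
```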

EFFECT: improved quality of correction of stepped edges in an image.

2 dwg

FIELD: physics.

SUBSTANCE: the device senses light in the visible and infrared wavelength ranges and separates the light of these ranges, so that a living-body image based on the visible light is separated from a blood-vessel image based on the infrared light. A degree of positional mismatch is then detected on the basis of the living-body image. This enables simultaneous coverage of the survey object without optical-system control processing, reducing the computing load during image capture.

EFFECT: higher reliability of identification.

8 cl

FIELD: information technologies.

SUBSTANCE: the invention relates to devices and methods for embedding and detecting watermarks in information signals. In the proposed method and device for embedding a watermark in an information signal (MPin), the embedding process is controlled by at least one embedding parameter. The value of the embedding parameter depends on the bit transfer rate of the information signal (MPin) and determines at least the robustness of the watermark signal and its perceptibility.
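
A sketch of a bitrate-dependent embedding parameter applied in a simple additive spread-spectrum scheme; the rate bounds, the strength range and the key-seeded ±1 sequence are illustrative assumptions, not taken from the patent:

```python
import random

def embedding_strength(bitrate, lo=64_000, hi=1_000_000,
                       a_min=0.5, a_max=4.0):
    """Derive the embedding parameter from the signal's bit rate:
    low-rate signals get a stronger, more robust watermark; high-rate
    signals a weaker, less perceptible one. Bounds are assumed."""
    t = min(max((bitrate - lo) / (hi - lo), 0.0), 1.0)
    return a_max - t * (a_max - a_min)

def embed(samples, key, bitrate):
    """Additive spread-spectrum sketch: add a key-seeded +/-1 sequence
    scaled by the bitrate-dependent embedding strength."""
    alpha = embedding_strength(bitrate)
    rng = random.Random(key)
    return [s + alpha * rng.choice((-1, 1)) for s in samples]
```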

EFFECT: creation of a watermark-embedding scheme that adapts the embedding to different information signals, which can be broadcast at different bit transfer rates.

9 cl, 9 dwg, 1 tbl

FIELD: computer network communication means.

SUBSTANCE: the method includes converting speech into an electric digital signal, transferring the signal to a sound-reproducing device, converting the person's face into an electric digital signal, recognising the face, its characteristic areas and their movement parameters, transferring the information along communication channels to a graphic output device, and controlling the shape changes and spatial orientation of an artificial three-dimensional object and its characteristic areas. The method additionally includes detecting errors in face recognition and the accompanying parameters by detecting mismatches between the configurations of the face areas and their movement characteristics for the speaking person in the electric digital signals, and correcting the errors before visualising the artificial three-dimensional object by forming control commands on the basis of previously recorded shape and orientation features of the three-dimensional object and its characteristic areas for the speech characteristics.

EFFECT: higher reliability and precision.

3 cl, 1 dwg
