Selecting viewpoints for generating additional views in 3D video

FIELD: physics, video.

SUBSTANCE: invention relates to video encoding techniques. Disclosed is a method of encoding a 3D video signal. The method includes a step of providing at least a first image of a scene as seen from a first viewpoint. The method also includes providing rendering information for enabling a decoder to generate at least one rendered image of the scene as seen from a rendering viewpoint, different from the first viewpoint. The method also includes providing a preferred direction indicator defining a preferred orientation of the rendering viewpoint relative to the first viewpoint.

EFFECT: improved quality of images generated from different viewpoints, achieved by providing a preferred direction indicator.

13 cl, 4 dwg

 

TECHNICAL FIELD TO WHICH THE INVENTION RELATES

The present invention relates to a method of encoding video data, the method comprising the steps of: providing at least a first image of a scene as seen from a first viewpoint; providing rendering information for enabling the generation of at least one rendered image of the scene as seen from a rendering viewpoint; and generating a video data signal comprising encoded data representing the first image and the rendering information.

The present invention further relates to a method of decoding video data, to an encoder, to a decoder, to computer program products for encoding and decoding a video data signal, and to a digital data carrier.

BACKGROUND ART

In the emerging field of three-dimensional (3D) video, there are various ways to encode the third dimension in the video signal. Typically, this is done by presenting different views of the scene to each of the viewer's eyes. A popular approach to representing 3D video is to use one or more two-dimensional (2D) images plus a depth map that conveys the information of the third dimension. This approach also makes it possible to generate 2D images from viewpoints and viewing angles other than those of the 2D images included in the 3D video signal. Such an approach provides several advantages, including the generation of additional views with relative ease and an efficient data representation, thereby reducing, for example, the storage and communication resource requirements for the 3D video signal. Preferably, the video data is supplemented with data that is not visible from the available viewpoints but becomes visible from a slightly different viewpoint. This data is referred to as occlusion data or background data. In practice, the occlusion data is generated from multi-view data obtained by capturing the scene with multiple cameras from different viewpoints.

A problem of the above-described approach is that the availability of data for reconstructing de-occluded objects in newly generated views may differ from frame to frame and even within a frame. As a result, the quality of images generated for different viewpoints may vary.

DISCLOSURE OF THE INVENTION

It is an object of the invention to provide a method of encoding video data as described in the opening paragraph which enables images to be generated for different viewpoints with higher quality.

In accordance with a first aspect of the invention, this object is achieved by providing a method of encoding video data, the method comprising the steps of: providing at least a first image of a scene as seen from a first viewpoint; providing rendering information for enabling the generation of at least one rendered image of the scene as seen from a rendering viewpoint; providing a preferred direction indicator defining a preferred orientation of the rendering viewpoint relative to the first viewpoint; and generating a video data signal comprising encoded data representing the first image, the rendering information and the preferred direction indicator.

As explained above, the quality of images generated for different viewpoints depends on the availability of data needed to reconstruct de-occluded objects. Consider a situation in which data is available for shifting the viewpoint to the left, but not for shifting the viewpoint to the right. A viewpoint shift to the left may then produce images of a quality different from that obtained when shifting the viewpoint to the right.

A similar difference in quality may occur when insufficient occlusion data, or no occlusion data at all, is available for filling the de-occluded areas. In such a situation, the de-occluded areas can be filled using so-called hole-filling algorithms. Typically, such algorithms interpolate the information in the vicinity of the de-occluded area. Again, a viewpoint shift to the left may then produce images of a quality different from that obtained when shifting the viewpoint to the right.

Such differences in quality are caused not only by the availability of the required data, but also by the size and nature of the areas that become de-occluded when the viewpoint is shifted. As a result, the quality of the 3D video or images may vary depending on the newly selected view. It may depend, for example, on whether the new view lies to the left or to the right of the viewpoint of the already available first image.

In accordance with the invention, the solution is to generate a preferred direction indicator and to include it in the video signal. The preferred direction indicator defines a preferred orientation of the rendering viewpoint for additional views relative to the original viewpoint of the image already included in the video signal. When the video signal is decoded, the preferred direction indicator can be used for selecting a rendering viewpoint and generating a rendered image of the scene from the selected viewpoint.

The video signal may contain a preferred direction indicator for each frame, for each group of frames, for each scene, or even for an entire sequence. Encoding this information frame by frame, unlike coarser partitioning, allows random access, for example to enable trick-play. When the preferred direction indicator remains unchanged over many frames and the size of the encoded video signal matters, the volume of duplicated indicators can be reduced by encoding the information per group of frames rather than per frame. Even more efficient is encoding the preferred direction per scene, mainly when the preferred rendering direction remains the same throughout the scene, thereby also ensuring consistency within the scene.
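As a rough illustration of this trade-off, a run of identical per-frame indicators can be collapsed into a single scene-level value (a hypothetical sketch; the function and dictionary keys below are assumptions, not the actual bitstream syntax):

```python
def compress_direction_indicators(per_frame):
    """Collapse a run of per-frame preferred-direction indicators
    ('L' or 'R') into a single scene-level value when they are all
    equal; otherwise keep the frame-by-frame encoding."""
    if len(set(per_frame)) == 1:
        return {"level": "scene", "direction": per_frame[0]}
    return {"level": "frame", "directions": list(per_frame)}

# 100 frames that all prefer 'left' collapse to one scene-level indicator
scene_coded = compress_direction_indicators(["L"] * 100)
```

Frame-level coding preserves random access; the scene-level form trades that for fewer duplicated indicators, mirroring the trade-off described above.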

Optionally, when the size of the encoded signal is less critical, the information can equally well be encoded at the frame level, the group-of-frames level and the scene level alike, provided that all indicators are set consistently with one another.

Because changes in the choice of rendering viewpoint can affect the perceived continuity of the content on rendering, the preferred direction indicator preferably remains constant over several frames. It can, for example, be kept constant for a group of frames or, alternatively, for a scene. It is noted that even when the preferred direction indicator is kept constant over several frames, it may still be preferable to encode the constant preferred direction indicator at a finer granularity than strictly necessary, in order to enable random access.

The preferred orientation can be left, right, up, down, or any combination of these directions. In addition to the orientation, a preferred distance or a preferred maximum distance may be provided together with the preferred direction indicator. If the first image comes with sufficient information about occluded objects or about the depth values of the objects, it may be possible to generate many additional high-quality views from viewpoints quite far removed from the original viewpoint.

The rendering information can, for example, comprise occlusion data representing background objects occluded by foreground objects in the first image, a depth map providing depth values for objects in the first image, or transparency data for these objects.
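Purely as an illustration of how these kinds of rendering information might sit alongside a base image (all field names are assumptions; the text above only requires occlusion data, depth values and/or transparency data):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class RenderingInfo:
    """Illustrative container for the rendering information that
    accompanies one base image (field names are hypothetical)."""
    depth_map: List[List[int]]                        # depth value per pixel
    occlusion_data: Optional[List[List[int]]] = None  # background hidden by foreground
    transparency: Optional[List[List[float]]] = None  # alpha value per pixel

# Rendering information for a tiny 2x2 base image
info = RenderingInfo(depth_map=[[10, 12], [11, 13]],
                     transparency=[[1.0, 0.5], [1.0, 1.0]])
```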

Relative to the possible rendering viewpoints, the preferred direction indicator thus indicates the direction for which the best rendering information is available. For more information on rendering images with layered depth, see, e.g., International Application WO 2007/063477, which is incorporated herein by reference. For further information on hole-filling algorithms used for filling de-occluded areas when rendering layered-depth images, see, e.g., WO 2007/099465, which is incorporated herein by reference.

In accordance with a further aspect of the invention, a method of decoding video data is provided, wherein the video signal comprises encoded data representing a first image of a scene as seen from a first viewpoint, rendering information for enabling the generation of at least one rendered image of the scene as seen from a rendering viewpoint, and a preferred direction indicator defining a preferred orientation of the rendering viewpoint relative to the first viewpoint. The decoding method comprises the steps of: receiving the video data signal; selecting a rendering viewpoint in dependence on the preferred direction indicator; and generating a rendered image of the scene as seen from the selected rendering viewpoint.

These and other aspects of the invention are apparent from and will be elucidated with reference to the embodiments described hereinafter.

BRIEF DESCRIPTION OF THE DRAWINGS

In the drawings:

Fig.1 shows a block diagram of a system for encoding video data in accordance with the invention,

Fig.2 shows a flowchart of an encoding method in accordance with the invention,

Fig.3 shows a block diagram of a system for decoding video data in accordance with the invention, and

Fig.4 shows a flowchart of a decoding method in accordance with the invention.

IMPLEMENTATION OF THE INVENTION

Fig.1 shows a block diagram of a system for encoding video data in accordance with the invention. The system comprises two digital video cameras 11, 12 and an encoder 10. The first camera 11 and the second camera 12 both record the same scene 100, but from slightly different positions and thus also at slightly different angles. The recorded digital video signals from both cameras 11, 12 are sent to the encoder 10. The encoder may, for example, be part of a dedicated encoding unit, a video card of a computer, or a software function executed by a general-purpose microprocessor. Alternatively, the video cameras 11, 12 may be analog video cameras whose analog video signals are converted into digital video signals before being fed to the input of the encoder 10. If the cameras are coupled to the encoder 10, the encoding may take place during the recording of the scene 100. It is also possible to record the scene 100 first and to provide the recorded video data to the encoder 10 later.

The encoder 10 receives the digital video data from the video cameras 11, 12, either directly or indirectly, and combines the two digital video signals into one 3D video signal 15. It should be noted that both video cameras 11, 12 may be combined into a single 3D camera. It is also possible to use more than two cameras to capture the scene 100 from more than two viewpoints.

Fig.2 shows a flowchart of an encoding method in accordance with the invention. This encoding method can be performed by the encoder 10 of the system of Fig.1. The encoding method uses the recorded digital video data from the cameras 11, 12 and provides a video data signal 15 in accordance with the invention. In base image providing step 21, at least a first image of the scene is provided for inclusion in the video data signal 15. This base image may be standard 2D video data coming from one of the two cameras 11, 12. The encoder 10 may also use two base images; one from the first camera 11 and one from the second camera 12. From the base images, the color values of all pixels in each frame of the recorded video can be derived. The base image represents the scene at a particular moment in time as seen from a particular viewpoint. In the following, this particular viewpoint will be referred to as the base viewpoint.

In enabling step 22, the incoming 3D video data from the video cameras 11, 12 is used to add information to the base image. This added information should enable a decoder to generate a rendered image of the same scene from a different viewpoint. In the following, this added information is called rendering information. The rendering information may, for example, comprise depth information or transparency values of objects in the base image. The rendering information may also describe objects whose view from the base viewpoint is blocked by objects that are visible in the base image. The encoder uses known, preferably standardized, methods for deriving this rendering information from the recorded regular video data.

In direction indicating step 23, the encoder 10 additionally adds a preferred direction indicator to the rendering information. The preferred direction indicator defines a preferred orientation of the rendering viewpoint for an additional view relative to the base viewpoint. Later, when the video signal is decoded, the preferred direction indicator can be used for selecting a rendering viewpoint and generating a rendered image of the scene 100 from the selected viewpoint. As described above, the quality of the 3D video or images may vary depending on the newly selected view. It may matter, for example, whether the new view lies to the left or to the right of the viewpoint of the already available first image. The preferred direction indicator added to the rendering information may, for example, be a single bit indicating a direction left or right. A more advanced preferred direction indicator may also indicate directions up or down and/or a preferred or maximum distance of the rendering viewpoint from the base viewpoint.
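A minimal sketch of such an indicator (the bit assignment and field names are assumptions; the text only requires that a single bit can distinguish left from right):

```python
LEFT, RIGHT = 0, 1  # hypothetical values of the single direction bit

def build_indicator(direction_bit, max_distance=None):
    """Build a preferred direction indicator from the mandatory
    direction bit, optionally extended with a preferred maximum
    distance of the rendering viewpoint from the base viewpoint."""
    indicator = {"direction": "left" if direction_bit == LEFT else "right"}
    if max_distance is not None:
        indicator["max_distance"] = max_distance
    return indicator
```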

Alternatively, the preferred direction indicator may provide preferred orientations and/or distances of multiple rendering viewpoints relative to the first viewpoint. For example, it may be preferable to generate two rendering viewpoints on one and the same side of the first viewpoint, or one on each side. The preferred position(s) of the rendering viewpoint(s) relative to the first viewpoint may depend on the distance between these two viewpoints. For example, it may be more advantageous to render an image from a viewpoint to the left of the first viewpoint when the two viewpoints are close to each other, whereas at a greater distance between the two viewpoints a rendering viewpoint to the right of the first viewpoint may be more acceptable.

The decision as to which direction is preferable may be taken automatically. For example, when encoding a stereo pair as a layered-depth image, comprising image, depth and occlusion representations, either the left or the right image can be used as the layered-depth image, with the other image being reconstructed from it. A difference metric can then be evaluated for both options, and the preferred encoding, and with it the preferred direction, can be determined.

Preferably, the differences are weighted according to a model of visual perception. Alternatively, in particular in professional settings, the preferred direction can be selected on the basis of user interaction.
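The automatic choice described above can be sketched as follows; a plain sum-of-absolute-differences metric stands in for the (preferably perceptually weighted) difference metric, and images are simplified to flat lists of pixel values:

```python
def sad(a, b):
    """Sum of absolute differences between two equally sized images."""
    return sum(abs(x - y) for x, y in zip(a, b))

def choose_preferred_direction(left_img, right_img,
                               reconstruct_right, reconstruct_left):
    """Try both encodings of a stereo pair as a layered-depth image:
    with the left image as base, the right view must be reconstructed,
    and vice versa. The base whose reconstruction has the smaller
    error determines the preferred rendering direction."""
    err_left_base = sad(reconstruct_right(), right_img)
    err_right_base = sad(reconstruct_left(), left_img)
    return "right" if err_left_base <= err_right_base else "left"
```

The two `reconstruct_*` callbacks are stand-ins for the actual view-synthesis step; any reconstruction routine with the same interface could be plugged in.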

In signal generating step 24, the information provided in the previous steps 21, 22, 23 is used to generate the video data signal 15 in accordance with the invention. The video data signal 15 represents at least the first image, the rendering information and the preferred direction indicator. A preferred direction indicator may be provided for each frame, for a group of frames, for a whole scene, or even for the entire video. Changing the position of the rendering viewpoint within a scene can adversely affect the perceived quality of the 3D images, but on the other hand it may be necessary if the availability of rendering information varies considerably between different areas.

Fig.3 shows a block diagram of a system for decoding video data in accordance with the invention. The system comprises a decoder 30 for receiving the video data signal 15 and converting it into a display signal suitable for presentation on a display 31. The video data signal 15 may reach the decoder 30 as part of a broadcast signal, for example via satellite or cable transmission. The video data signal 15 may also be provided on request, for example via the Internet or via a video-on-demand service. Alternatively, the video data signal 15 is provided on a digital data carrier, such as a DVD or a Blu-ray disc.

The display 31 is capable of providing a 3D presentation of the scene 100 that was captured and encoded by the encoder 10 of the system of Fig.1. The display 31 may comprise the decoder 30, or may be coupled to the decoder 30. For example, the decoder 30 may be part of a 3D video receiver connected to one or more conventional television or computer displays. Preferably, the display is a dedicated 3D display 31 capable of presenting different views to different eyes of the viewer.

Fig.4 shows a flowchart of a decoding method as it can be performed by the decoder 30 of Fig.3. In video data receiving step 41, the video data signal 15 encoded by the encoder 10 is received at the input of the decoder 30. The received video data signal 15 comprises encoded data representing at least a first image of the scene 100, the rendering information and the preferred direction indicator.

In additional viewpoint selecting step 42, the preferred direction indicator is used for selecting at least one additional viewpoint for a corresponding additional view. In additional view rendering step 43, one or more additional views are generated from the selected additional viewpoint(s). Then, in display step 44, two or more views from different viewpoints may be provided to the display 31 for presenting the scene 100 in 3D.
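The decoding steps 42 and 43 can be sketched as follows; the `render_view` callback is a stand-in for a real view-synthesis routine using the depth and occlusion data, and the numeric viewpoint axis is an assumption:

```python
def decode_and_render(signal, render_view, offset=1.0):
    """Place the additional viewpoint on the side of the base viewpoint
    indicated by the preferred direction indicator (step 42) and render
    the additional view from it (step 43)."""
    sign = -1.0 if signal["preferred_direction"] == "left" else 1.0
    viewpoint = signal["base_viewpoint"] + sign * offset        # step 42
    view = render_view(signal["first_image"],
                       signal["rendering_info"], viewpoint)     # step 43
    return viewpoint, view

sig = {"first_image": "IMG", "rendering_info": {},
       "base_viewpoint": 0.0, "preferred_direction": "left"}
vp, view = decode_and_render(sig, lambda img, info, v: (img, v))
```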

It should be noted that the invention also extends to computer programs, in particular computer programs on or in a carrier, adapted to put the invention into practice. The program may be in the form of source code, object code, a code intermediate between source and object code such as in partially compiled form, or in any other form suitable for use in the implementation of the method according to the invention. It should also be appreciated that such a program may have many different architectural designs. For example, program code implementing the functionality of the method or system according to the invention may be subdivided into one or more subroutines. Many different ways of distributing the functionality among these subroutines will be apparent to the skilled person. The subroutines may be stored together in one executable file to form a self-contained program. Such an executable file may comprise computer-executable instructions, for example processor instructions and/or interpretable instructions (e.g. Java interpretable instructions). Alternatively, one or more or all of the subroutines may be stored in at least one external library file and linked with a main program, either statically or dynamically, e.g. at run-time. The main program contains at least one call to at least one of the subroutines. The subroutines may also comprise function calls to each other. An embodiment relating to a computer program product comprises computer-executable instructions corresponding to each of the processing steps of at least one of the methods set forth. These instructions may be subdivided into subroutines and/or stored in one or more files that may be linked statically or dynamically. Another embodiment relating to a computer program product comprises computer-executable instructions corresponding to each of the means of at least one of the systems and/or products set forth. These instructions may be subdivided into subroutines and/or stored in one or more files that may be linked statically or dynamically.

The carrier of a computer program may be any entity or device capable of carrying the program. For example, the carrier may include a storage medium, such as a ROM, for example a CD-ROM or a semiconductor ROM, or a magnetic recording medium, for example a floppy disk or a hard disk. Further, the carrier may be a transmissible carrier such as an electrical or optical signal, which may be conveyed via electrical or optical cable or by radio or other means. When the program is embodied in such a signal, the carrier may be constituted by such a cable or other device or means. Alternatively, the carrier may be an integrated circuit in which the program is embedded, the integrated circuit being adapted to perform, or to be used in the performance of, the relevant method.

It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. Use of the verb "comprise" and its conjugations does not exclude the presence of elements or steps other than those stated in a claim. The article "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

1. A method of encoding a 3D video data signal (15), the method comprising the steps of:
providing (21) at least a first image of a scene (100) as seen from a first viewpoint;
providing (22) rendering information for enabling a decoder to generate at least one rendered image of the scene (100) as seen from a rendering viewpoint different from the first viewpoint;
providing (23) a preferred direction indicator defining a preferred orientation of the rendering viewpoint relative to the first viewpoint; and
generating (24) the 3D video data signal (15) comprising encoded data representing the first image, the rendering information and the preferred direction indicator.

2. A method of encoding a 3D video data signal (15) as claimed in claim 1, wherein the preferred direction indicator comprises a single bit for defining whether the preferred orientation of the rendering viewpoint relative to the first viewpoint is to the left or to the right of the first viewpoint.

3. A method of encoding a 3D video data signal (15) as claimed in claim 1, wherein the preferred direction indicator is encoded in the 3D video data signal (15) using at least one of the following options:
a preferred direction indicator is encoded for each frame;
a preferred direction indicator is encoded for each group of frames; and
a preferred direction indicator is encoded for each scene.

4. A method of encoding a 3D video data signal (15) as claimed in claim 3, wherein the value of the preferred direction indicator is constant for one of:
a group of frames and a scene.

5. A method of encoding a 3D video data signal (15) as claimed in any of claims 1, 2, 3 or 4, wherein the preferred orientation depends on the distance between the first viewpoint and the rendering viewpoint.

6. A method of encoding a 3D video data signal (15) as claimed in any of claims 1, 2, 3 or 4, wherein the rendering information comprises values specifying the depth of pixels in the first image.

7. A method of encoding a 3D video data signal (15) as claimed in any of claims 1, 2, 3 or 4, wherein the rendering information comprises alpha values for pixels in the first image, an alpha value indicating the transparency of the corresponding pixel.

8. A method of encoding a 3D video data signal (15) as claimed in any of claims 1, 2, 3 or 4, wherein the rendering information comprises occlusion data representing data that is occluded from the first viewpoint.

9. A method of decoding a 3D video data signal (15), the 3D video data signal (15) comprising encoded data representing a first image of a scene (100) as seen from a first viewpoint, rendering information for enabling the generation of at least one rendered image of the scene (100) as seen from a rendering viewpoint different from the first viewpoint, and a preferred direction indicator defining a preferred orientation of the rendering viewpoint relative to the first viewpoint, the method comprising the steps of:
receiving (41) the video data signal;
selecting (42) a rendering viewpoint in dependence on the preferred direction indicator; and
generating (43) a rendered image of the scene (100) as seen from the selected rendering viewpoint.

10. An encoder (10) for encoding a 3D video data signal (15), the encoder comprising:
means for providing at least a first image of a scene (100) as seen from a first viewpoint, rendering information for enabling a decoder to generate a rendered image of the scene (100) as seen from a rendering viewpoint different from the first viewpoint, and a preferred direction indicator defining a preferred orientation of the rendering viewpoint relative to the first viewpoint;
means for generating the 3D video data signal (15) comprising encoded data representing the first image, the rendering information and the preferred direction indicator; and
output means for providing the 3D video data signal (15).

11. A decoder (30) for decoding a 3D video data signal (15), the decoder (30) comprising:
input means for receiving the 3D video data signal (15), the 3D video data signal (15) comprising encoded data representing a first image of a scene (100) as seen from a first viewpoint, rendering information for enabling the generation of a rendered image of the scene (100) as seen from a rendering viewpoint different from the first viewpoint, and a preferred direction indicator defining a preferred orientation of the rendering viewpoint relative to the first viewpoint;
means for selecting a rendering viewpoint in dependence on the preferred direction indicator;
means for generating a rendered image of the scene (100) as seen from the selected rendering viewpoint; and
output means for providing the rendered image.

12. A machine-readable medium storing a computer program which, when executed by a processor, causes the processor to perform the method as claimed in claim 1.

13. A machine-readable medium storing a computer program which, when executed by a processor, causes the processor to perform the method as claimed in claim 9.



 

Same patents:

Brightness meter // 2549605

FIELD: instrumentation.

SUBSTANCE: brightness meter contains an opaque light filter attached to a piezoelectric element which is connected to a frequency divider output, a lens, a pyramidal mirror octahedron with four external smooth surfaces and four disk photodetectors, each with two photoreception sectors. Photoreception sectors are fitted with colour light filters. The output of each photoreception sector is connected to the input of an analogue-digital converter. Each analogue-digital converter comprises the pulse amplifier to the output of which pulse light-emitting diodes are connected. Radiation from each light-emitting diode enters the group of eight identical photodetectors, each of which has on the reception side a neutral light filter with a ratio respectively of the register digit weight to which the output of each photodetector is connected.

EFFECT: possibility of synchronous receiving of brightness codes of eight colour components of the spectrum.

2 dwg, 1 tbl

FIELD: physics, computer engineering.

SUBSTANCE: invention relates to image compression systems and methods. The method of compressing a digital image in a computing device comprises steps of dividing an image into a plurality of image subregions; selecting from a catalogue which includes a plurality of predetermined template forms, wherein each template form comprises a plurality of elements, properties and image variables, such as colour, colour gradient, gradient direction or reference pixel, and wherein each said form is identified by a code, the template form of each subregion best corresponding to one or more image elements of said subregion; and generating a compressed data set for the image, wherein each subregion is represented by a code which identifies the template form selected therefor.

EFFECT: improved compression of image data, thereby reducing the amount of data used to display an image.

22 cl, 4 dwg

FIELD: physics.

SUBSTANCE: this method comprises memorising the input raster video image as a flow of frames in the line input buffer. Said frames are splitted to micro blocs. The latter are compressed and stored in external memory. For processing, said micro blocs are retrieved from external memory, unclasped and written to internal memory. Raster macro blocs are formed and processed by appropriate processors.

EFFECT: efficient use of internal memory irrespective of processing algorithm type.

22 cl, 2 dwg

FIELD: physics, photography.

SUBSTANCE: invention relates to an image processing device and method, which can improve encoding efficiency, thereby preventing increase in load. The technical result is achieved due to that a selection scheme 71 from a prediction scheme 64 by filtering selects a motion compensation image for generating a prediction image at a high-resolution extension level from key frames at a low-resolution base level. The filter scheme 72 of the prediction scheme 64 by filtering performs filtration, which includes high-frequency conversion and which uses analysis in the time direction of a plurality of motion compensation images at the base level, selected by the selection scheme 71, in order to generate a prediction image at the extension level.

EFFECT: reducing load in terms of the amount of processing owing to spatial increase in sampling frequency at the base level for encoding the current frame.

19 cl, 26 dwg

FIELD: information technology.

SUBSTANCE: method of compression of graphic file by fractal method using ring classification of segments, in which the graphic file is split into rank regions and domains, and for each rank region the domain and the corresponding affine transformation is found, that best approximates it to the appropriate rank region, and using the obtained values of the domain parameters, comprising their coordinates, the coefficients of the affine transformations, the values of brightness and contrast, the archive is formed, and classification of domains and rank regions are introduced, based on the allocation in them of the "rings" and the calculation of the mathematical expectation of pixel intensity of these "rings", which enables to reduce the complexity of the phase of correlation of the segments and to accelerate compression.

EFFECT: reduced time of compression of the graphic file by fractal method.

3 dwg
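The "ring" classification can be sketched as follows. The patent only specifies using the mathematical expectation (i.e. the mean) of the ring pixel intensities; the quantisation of those means into a class label is our assumption:

```python
import numpy as np

def ring_means(block):
    """Mean pixel intensity of each concentric square 'ring' of a block.
    The outermost ring is index 0."""
    b = np.asarray(block, dtype=float)
    n = b.shape[0]
    means = []
    for r in range(n // 2):
        ring = np.concatenate([
            b[r, r:n - r],              # top edge
            b[n - 1 - r, r:n - r],      # bottom edge
            b[r + 1:n - 1 - r, r],      # left edge (without corners)
            b[r + 1:n - 1 - r, n - 1 - r],  # right edge (without corners)
        ])
        means.append(ring.mean())
    return means

def ring_class(block, levels=4):
    """Coarse class label: quantised ring means. Rank regions are then
    matched only against domains of the same class, cutting the search."""
    m = ring_means(block)
    lo, hi = min(m), max(m)
    if hi == lo:
        return tuple(0 for _ in m)
    return tuple(int((v - lo) / (hi - lo) * (levels - 1)) for v in m)
```

Because the ring means are invariant to the isometries typically used in fractal coding (rotations and flips of a square block), a domain and a rank region can only match if their labels agree.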

FIELD: physics.

SUBSTANCE: the method comprises making each array element in an image sensor from a single "R, G, B radiation colour brightness to code" converter, which performs parallel synchronous conversion of the analogue video signals of the three colours R, G, B into three codes. The frame image digitisation apparatus includes an objective lens, an image sensor comprising an array of elements, three switch units, three register units and a control signal generator, each switch unit including the same number of encoders as there are converters.

EFFECT: reduced cross dimensions of the array elements in an image sensor, which makes it possible to reduce the frame format size or increase the resolution of the image sensor.

6 dwg, 1 tbl

FIELD: physics.

SUBSTANCE: disclosed is a frame image digitisation apparatus. The apparatus comprises a lens, in the focal plane of which there is an image sensor having an array of elements, a control signal generator and three register units whose outputs are the outputs of the digitisation apparatus. Each array element consists of a converter for converting radiation of the colours R, G, B into three codes. Images are input into the sensor through analogue-to-digital converters (ADC), whose number corresponds to the number of array elements and the number of colours R, G, B.

EFFECT: high image frame resolution owing to conversion of three colours R, G, B into codes using one converter.

4 dwg, 2 tbl

FIELD: physics.

SUBSTANCE: the apparatus comprises a lens and an image detector having, in the focal plane of the lens, an array whose elements are radiation-to-code converters, their number corresponding to the frame resolution (10⁶). Each converter has an opaque housing, in the front part of which, in a partition wall, there is a microlens; on its optical axis, at an angle of 45° thereto, semitransparent micromirrors are rigidly mounted in series, their number corresponding to the number of bits per code, each preceding micromirror transmitting to the next one a radiation flux of half the intensity.

EFFECT: high speed of frame digitisation.

1 tbl, 4 dwg
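A toy numeric model of the flux-halving chain. The threshold comparison is one plausible reading of how each mirror's tapped flux could yield a code bit; the abstract above only fixes the halving itself:

```python
def mirror_fluxes(input_flux, n_bits):
    """Flux reaching each of n_bits semitransparent micromirrors in series,
    each passing half of what it receives on to the next one."""
    fluxes = []
    f = float(input_flux)
    for _ in range(n_bits):
        fluxes.append(f)
        f /= 2.0
    return fluxes

def threshold_code(input_flux, n_bits, threshold):
    """Assumed detector model: a comparator at each mirror outputs 1 when
    the tapped flux exceeds a fixed threshold, giving a parallel per-bit
    code for the element."""
    return [1 if f > threshold else 0
            for f in mirror_fluxes(input_flux, n_bits)]
```

Since all mirrors are read simultaneously, the code for an element is produced in a single optical pass, which is what gives the claimed digitisation speed.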

FIELD: physics.

SUBSTANCE: the apparatus comprises a lens, an image detector which includes an array of elements numbering 10⁶ (the frame resolution), situated in the focal plane of the lens and having three groups of outputs of colour codes R, G, B, three register units and a control signal generator, which outputs from its first output pulses at the frame frequency (25 Hz), connected to the control inputs of the array elements, and from its second output pulses at the code sampling frequency, connected in parallel to the second control inputs of the first through third register units.

EFFECT: high frame resolution owing to making the array elements as converters of the brightness of radiation of the colours R, G, B into three codes, which output the codes of the three colours R, G, B synchronously.

5 dwg, 1 tbl

FIELD: physics.

SUBSTANCE: disclosed is a method of obtaining a structural image of a biological object in optical coherence tomography. The method includes breaking down a source colour video frame into non-overlapping spatial blocks consisting of more than one pixel. A structural image is obtained via small-angle raster scanning in the sample arm of an optical coherence tomograph. The obtained image, of size Piskh bytes, is broken down into non-overlapping spatial blocks by columns only; adjacent column blocks are averaged pixel by pixel to form a new image of size Pstl bytes; the new image is broken down into non-overlapping spatial blocks by rows only; adjacent row blocks are averaged pixel by pixel to form a resultant image of size Pres bytes. The averaging process is controlled on the basis of the exponential dependence of Pstl on the number of averaged column blocks Ustl and of Pres on the number of averaged row blocks Ustr.

EFFECT: high quality of the structural image of a biological object in optical coherence tomography.

7 dwg
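The column-then-row averaging can be sketched with NumPy. Here each "block" is taken to be a single column or row and u adjacent blocks are averaged together; the exponential control of Pstl and Pres is omitted:

```python
import numpy as np

def average_column_blocks(img, u):
    """Average each group of u adjacent column blocks pixel by pixel,
    shrinking the image width (and its size in bytes) by a factor of u."""
    h, w = img.shape
    w -= w % u                       # drop a ragged tail, if any
    return img[:, :w].reshape(h, w // u, u).mean(axis=2)

def average_row_blocks(img, u):
    """Same reduction applied to groups of u adjacent row blocks."""
    h, w = img.shape
    h -= h % u
    return img[:h].reshape(h // u, u, w).mean(axis=1)

def reduce_oct_image(img, u_cols, u_rows):
    """Columns first, then rows, as in the method above."""
    return average_row_blocks(average_column_blocks(img, u_cols), u_rows)
```

The pixel-by-pixel averaging suppresses speckle noise in the tomogram at the cost of spatial resolution, which is why the method controls how many blocks are averaged in each direction.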

FIELD: physics, video.

SUBSTANCE: invention relates to three-dimensional video image processing means. The video device contains an output interface unit (55) for outputting, via a high-speed digital interface, to a three-dimensional display device (60) with a three-dimensional display an output signal formatted according to the HDMI standard, containing: in three-dimensional display mode, a three-dimensional display signal in a three-dimensional signal format; in two-dimensional display mode, a two-dimensional display signal in a two-dimensional signal format; and in pseudo-two-dimensional display mode, a pseudo-two-dimensional display signal including two-dimensional video image data in a three-dimensional signal format.

EFFECT: faster switching between the three-dimensional and two-dimensional display modes.

13 cl, 10 dwg

FIELD: physics, optics.

SUBSTANCE: invention relates to autostereoscopic display devices. The device comprises a display panel (3) having an array of display pixel elements (5) and an imaging arrangement (9) which directs the output of different pixel elements to different spatial positions, having first and second polarisation-sensitive lenticular arrays (50) and (52), wherein light incident on the imaging arrangement is controlled to have one of two possible polarisations.

EFFECT: high effective resolution while maintaining the required switching rate.

14 cl, 10 dwg

FIELD: physics, video.

SUBSTANCE: invention relates to means of producing three-dimensional (3D) motion picture subtitles. The method comprises receiving a 3D image sequence; receiving a subtitle file for the 3D image sequence, the subtitle file comprising a subtitle element and timing information; associating the subtitle element with a segment of image frames based on the timing information; generating an abstract image for the right and left eyes from the segments; computing, by a computing device, an abstract depth map from said abstract images, the computing device having a processor; computing a proxy depth based on the abstract depth map for the subtitle element; using the proxy depth to determine a rendering attribute for the subtitle element; and outputting the rendering attribute.

EFFECT: optimising production of subtitles on a displayed 3D image with high parallax.

34 cl, 21 dwg
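The abstract does not define how the proxy depth is derived from the abstract depth map. One common choice, sketched here purely as an assumption, is a high percentile of the depth map over the subtitle's placement region, so that the subtitle renders in front of nearly all underlying content:

```python
import numpy as np

def proxy_depth(depth_map, box, percentile=90):
    """A possible proxy depth for a subtitle element: a high percentile of
    the abstract depth map inside the region (box = y0, y1, x0, x1) where
    the subtitle will be placed. Using a percentile rather than the
    maximum makes the estimate robust to isolated depth-map outliers."""
    y0, y1, x0, x1 = box
    region = depth_map[y0:y1, x0:x1]
    return float(np.percentile(region, percentile))
```

The resulting depth would then feed the rendering attribute for the subtitle element, e.g. its parallax offset between the left- and right-eye images.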

FIELD: physics, computer engineering.

SUBSTANCE: invention relates to information and network technologies, and particularly to an electronic information system which enables generation and visual display, on the screen of a terminal device, of a personalised graphic model of an individual based on input anthropometric, diagnostic, biochemical and other factors. The system is an extensible and modifiable modular interactive tool for rendering parameters of the functional state of an individual and reporting on the current state and existing functional problems. Operation of the system is based on using the parameters of the functional state of an individual, analytical and expert processing of all input parameters, creating an individual parametric model and forming a personalised graphic model for displaying the current state and existing functional problems. Using the system, an individual can monitor their own functional state, including health, and perform timely prevention of chronic diseases and other functional problems.

EFFECT: enabling an individual to self-monitor their health status and timely signalling of health disorders.

24 cl, 7 dwg

FIELD: physics, computer engineering.

SUBSTANCE: invention relates to means of processing digital images. The method includes obtaining a series of images of a three-dimensional object with a given scene depth pitch and converting said images into a spatial spectrum using two-dimensional Fourier transform, processing the obtained spatial spectra of images in the series by spatial-frequency filtering, performing coordination of scales of the images in the series, summing the filtered and scaled spatial spectra of the images, performing reconstruction of a sharp image of the object using inverse two-dimensional Fourier transform of the summed spatial spectrum of the image.

EFFECT: obtaining a sharp image of a three-dimensional object with an unlimited depth of field.

4 cl, 4 dwg
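A minimal sketch of the spectral fusion step described above. The specific high-pass filter and its cutoff are illustrative assumptions, and the coordination of image scales across the series is omitted:

```python
import numpy as np

def fuse_focal_stack(images, cutoff=0.25):
    """Transform each image of the series to its spatial spectrum, keep its
    high-frequency content (where in-focus detail lives), sum the filtered
    spectra and invert the transform to reconstruct one sharp image."""
    h, w = images[0].shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    highpass = (np.hypot(fy, fx) > cutoff).astype(float)
    highpass[0, 0] = 1.0 / len(images)   # keep the mean once, not per image
    total = np.zeros((h, w), dtype=complex)
    for img in images:
        total += np.fft.fft2(img) * highpass
    return np.fft.ifft2(total).real
```

Because each image of the series contributes only the spectral band where it carries sharp detail, the summed spectrum approximates a single image focused over the whole depth range.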

FIELD: physics.

SUBSTANCE: method comprises stereo capturing with a symmetrically centred multiview stereo system with synchronised video cameras, recording and comparing video signals of adjacent lines, recognising therein view signals adjacent to the central signal, measuring time parallaxes thereof in a single time frame, synchronising the parallax signals with the video signal of the central video camera, transmitting to a receiving side and recording a stream of signals, restoring video signals of view stereo frames by shifting elements of the signals of the central camera by the adjacent time parallaxes and reproducing the image.

EFFECT: high accuracy of controlling transmission of a stereoscopic video image through automatic measurement of the object capturing space in real time.

1 dwg

FIELD: physics, computer engineering.

SUBSTANCE: invention relates to means of displaying a three-dimensional (3D) image. The system comprises a display (207), a rendering unit (203), means (209, 211) of estimating user presence in response to an attempt to detect a user in the view area of the display system, means (213) of changing differentiated forms of adjacent images for 3D effect adaptation in response to the user presence estimation. In the system, the changing means (213) are adapted to control the scene depth range provided by the differentiated forms in response to the user presence estimation.

EFFECT: providing automated adaptation of 3D display based on determining presence of a user.

13 cl, 4 dwg
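The abstract claims control of the scene depth range in response to the presence estimate but gives no mapping. A linear interpolation towards a shallow floor is one minimal, assumed sketch:

```python
def adapted_depth_range(max_range, presence_confidence, floor=0.2):
    """Scale the 3D scene depth range with the confidence that a user is
    actually in the viewing area: full 3D effect for a confidently
    detected viewer, falling back towards a shallow (near-2D) range when
    nobody appears to be watching. floor is the assumed minimum fraction
    of the range kept when presence confidence is zero."""
    p = min(max(presence_confidence, 0.0), 1.0)   # clamp to [0, 1]
    return max_range * (floor + (1.0 - floor) * p)
```

Reducing the depth range when no viewer is detected limits the visible crosstalk and discomfort of a strong 3D effect seen from outside the intended viewing zone.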

FIELD: physics, computer engineering.

SUBSTANCE: invention relates to means of generating and displaying a stereoscopic image. Disclosed is a method of generating visual data for playback, which evokes a three-dimensional visual impression in an observer. The method includes steps of receiving and/or inputting preferably two-dimensional colour visual data of an original view, deriving a plurality of other partial views which represent the information shown in the original view from different viewing angles. The method also includes, for each point of the image of the two-dimensional visual data of the original view, determining colour hue of at least two primary colours of the colour system. Each point of the image is automatically assigned a control value depending on at least two defined colour hues. Each control value represents incompatibility information.

EFFECT: improved realism of reproducing a three-dimensional image through automated generation, from a two-dimensional view of a scene, of other views of the image at different viewing angles.

41 cl, 7 dwg

FIELD: physics, computer engineering.

SUBSTANCE: invention relates to methods of presenting animated objects. Disclosed is a method of presenting an animated object in the form of an animation sequence by creating a sequence of vector-based separate objects for a defined moment of an animation sequence of an object and connecting the vector-based separate objects for the defined moment to form an animation sequence. The method includes calculating surface changes of the object as textural animation, where the textural animation of the object is created using a graphics program and is merged with the object using a program with a vector-based page description language. The method also includes projecting the textural animation of the object in the animation sequence using a program with a vector-based page description language to form an animation sequence with textural animation of the object.

EFFECT: faster operation and resource saving when presenting an animated object whose presentation the user can change interactively.

15 cl, 5 dwg

FIELD: physics, computer engineering.

SUBSTANCE: invention relates to means of reproducing a stereoscopic image from a recording medium. The device includes a unit for reading streaming data from a recording medium; a unit for decoding streaming data; a TS priority filter unit; a metadata processor and a plane integration unit. In the device, a main-view video stream, a sub-view video stream and a graphic stream are recorded on the recording medium; the main-view video stream includes a main-view image; the sub-view video stream includes a sub-view image and metadata; the graphic stream includes monoscopic image graphic data.

EFFECT: retrieving offset information from video stream information.

9 cl, 123 dwg

FIELD: technology for processing images of moving objects, usable in particular in theatrical art and show business where registration/recording or repeated reproduction of a scenic performance is necessary.

SUBSTANCE: the method includes introducing a numbering system for the objects and projecting the numbered objects onto a plane, the projection being displayed in the form of a graph with the trajectories of movement of the numbered objects in each staging.

EFFECT: spatial-temporal serial graphic display of scenic action for its further identification and repeated reproduction.

2 dwg
