Extraction of dominant colour with application of perceptual rules for creation of ambient lighting derived from video content

FIELD: information technologies.

SUBSTANCE: a method is proposed for extraction and processing of video content, comprising the following stages: quantisation of the video colour space; extraction of a dominant colour using a mode, median, average or weighted average of pixel chromaticities; application of perceptual rules for further derivation of the dominant chromaticity by means of the following steps: chromaticity transformation; weighted averaging using a pixel weighting function influenced by scene content; and extended dominant colour extraction, in which pixel weighting is reduced for majority pixels; and transformation of the extracted dominant colour into the colour space of the ambient light using tristimulus matrices. A colour of interest may additionally be analysed to produce a true dominant colour, and previous video frames may guide the selection of dominant colours in future frames.

EFFECT: creation of a method providing emulation of ambient lighting by means of dominant colour extraction from selected video regions, using an economical data stream that encodes averaged or characteristic colour values.

20 cl, 43 dwg


Technical field to which the invention relates

The present invention relates to the creation and setting of ambient lighting effects with multiple light sources, usually based on or associated with video content, for example from a video display. More particularly, the invention relates to a method for extracting dominant color information, in combination with perceptual rules, from sampled or sub-sampled video content in real time, and for performing color space transformations from the color space of the video content into a space that best allows driving a plurality of ambient light sources.

Prior art

Engineers have long sought to enrich the perceptual experience of consuming video content, for example by enlarging projection screens and projection areas, modulating sound for realistic three-dimensional effects, and improving video quality, including wider color gamuts, higher resolution and better image aspect ratios, as with digital television and high-definition (HD) video. In addition, film, television and video producers also try to influence the viewer's perceptual experience using audiovisual means, for example by skillful use of color, rapid scene changes, camera angles, peripheral scenery and computer-generated graphics. This also includes stage lighting. Lighting effects, for example, are usually scripted (synchronized with the video or scenes) and reproduced by some mechanism or computer programmed with appropriate scene scripts encoded in the required scheme.

In the current state of the art it is not easy to provide harmonious, automatic adaptation of lighting to rapid scene changes, including unplanned or unscripted scenes, largely because of the overhead associated with the large, high-throughput bitstreams that existing systems require.

Philips (Netherlands) and other companies have proposed means of changing ambient or peripheral lighting to enhance video content for typical home or business applications, using separate light sources remote from the video display and, for many applications, advance scripting or encoding of the desired lighting effects. Ambient lighting added to a video display or television has been shown to reduce viewer fatigue and improve realism and depth of perception.

The perceptual experience depends, of course, on aspects of human vision, which relies on an extremely complex sensory and neural apparatus to create sensations of color and light. Humans can discern perhaps 10 million distinct colors. For color or photopic vision, the human eye has three sets of roughly 2 million sensory bodies called cones, whose absorption distributions peak at wavelengths of 445, 535 and 565 nm, with considerable overlap. These three cone types form what is called, for historical reasons, the tristimulus system of B (blue), G (green) and R (red); their peaks do not necessarily correspond to any of the primaries used in a display, such as the commonly used RGB phosphors. There is also interaction with bodies called rods, which provide scotopic or so-called night vision. The human eye typically has 120 million rods, which shape the perception of video images, especially in low-light conditions such as a home theater.

Color video is founded on the principles of human vision, and the well-known tristimulus and opponent-channel theories of human vision have become integral to our understanding of how to present the eye with desired colors and effects with high fidelity to the original or intended image. In most color models and spaces, three dimensions or coordinates are used to describe human visual experience.

Color video rests entirely on metamerism, which allows color perception to be created with a small number of reference stimuli rather than with actual light of the desired color and character. Thus a full gamut of colors is reproduced in the human mind using a limited number of reference stimuli, such as the well-known RGB tristimulus system (red, green, blue) used worldwide for video reproduction. It is well known, for example, that nearly all video displays show yellow scene light by producing approximately equal proportions of red and green light at each pixel or picture element. The pixels are small relative to the solid angle they subtend, and the eye is deceived into perceiving yellow; it does not perceive the green or red that is actually emitted.

Many color models and methods of specifying color exist, including the widely known CIE (Commission Internationale de l'Eclairage) color coordinate system used to describe and specify color for video reproduction. Any number of color models can be applied using the present invention, including application to unrendered opponent color spaces such as CIE L*U*V* (CIELUV) or CIE L*a*b* (CIELAB). In 1931 the CIE established a foundation for the control and reproduction of all colors, resulting in a chromaticity diagram using three coordinates x, y and z. A plot of this three-dimensional system at maximum luminosity is universally used to describe color in terms of the coordinates x and y, and this plot, called the 1931 x, y chromaticity diagram, is believed able to describe all color perceived by humans. This contrasts with color reproduction, in which the eye and brain are deceived using metamerism. Many color models or spaces based on three primaries or phosphors are in use today for color reproduction, among them Adobe RGB, NTSC RGB, etc.
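By way of illustration (not part of the claimed method), the x, y coordinates of the 1931 chromaticity diagram are a simple projection of CIE XYZ tristimulus values; a minimal sketch:

```python
def xyz_to_xy(X, Y, Z):
    """Project a CIE XYZ tristimulus value onto the 1931 (x, y) chromaticity diagram."""
    total = X + Y + Z
    if total == 0.0:
        return (0.0, 0.0)  # black: chromaticity is undefined; return the origin by convention
    return (X / total, Y / total)
```

For example, the XYZ values of CIE standard daylight D65 project to approximately (0.3127, 0.3290).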

However, it is important to note that the range of all possible colors producible by video using these tristimulus systems is limited. The RGB system of the National Television Standards Committee (NTSC) has a relatively wide gamut of available colors, yet it can reproduce only half of all colors perceivable by humans. Many blues and violets, blue-greens and orange/reds are not adequately rendered using the capabilities of conventional video systems.

Furthermore, the human visual system has properties of compensation and discernment, an understanding of which is necessary for designing any video system. Color can appear to a human in several modes of appearance, including object mode and illuminant mode.

In object mode, a light stimulus is perceived as light reflected from an object illuminated by a light source. In illuminant mode, a light stimulus is seen as being itself a light source. Illuminant mode includes stimuli in a complex field that are much brighter than the other stimuli. It does not include stimuli known to be light sources, such as video displays, whose brightness or luminance is at or below the overall brightness of the scene or field of view, so that those stimuli appear to be in object mode.

Notably, many colors appear only in object mode, among them brown, olive, maroon, gray and beige flesh tones. There is no such thing as a source of brown light.

For this reason, ambient lighting installations that attempt to add object-mode colors to video cannot do so using direct bright light sources. No combination of bright, narrow-band red and green sources can reproduce brown or maroon, which greatly restricts the possible choices. Only the spectral colors of the rainbow, at varying intensity and saturation, can be reproduced by direct observation of bright sources. This underscores the need for fine control of ambient lighting systems, providing low luminance output from the light sources with particular attention to hue control. At present such fine control, enabling fast-changing and subtle ambient lighting, receives no attention in existing data architectures.

Video reproduction can take many forms. Spectral color reproduction faithfully reproduces the spectral power distributions of the original stimuli, but it cannot be realized in any video reproduction that uses three primaries. Exact color reproduction can duplicate the tristimulus values of the human visual system, creating a metameric match to the original, but the overall viewing conditions for the image and the original scene must be similar to obtain a similar appearance. The overall conditions for the image and the original scene include the subtended angle of the image, the luminance and chromaticity of the surround, and glare. One reason exact color reproduction often cannot be achieved is the limit on the maximum luminance producible on a color monitor.

Colorimetric color reproduction provides a useful alternative, in which tristimulus values are proportional to those of the original scene. Chromaticity coordinates are reproduced exactly, but at proportionally reduced luminance. Colorimetric reproduction is a good reference standard for video systems, assuming the original and reproduced reference whites have the same chromaticity, the viewing conditions are the same, and the system has an overall gamma of unity. Equivalent color reproduction, in which the chromaticity and luminance of colors match the original scene, cannot be achieved because of the limited luminance produced by video displays.

In practice, most video reproduction aims at corresponding color reproduction, in which reproduced colors look the same as the original colors would if the latter were illuminated to produce the same average luminance level and the same reference white chromaticity as in the reproduction. Many argue, however, that the ultimate aim for practical display systems is preferred color reproduction, in which fidelity depends on viewer preferences. For example, tanned skin color is preferred to average real skin color, and sky is preferably rendered more blue, and foliage more green, than they appear in reality. Even where corresponding color reproduction is accepted as the design standard, some colors are more important than others; flesh tones, for instance, receive special treatment in many reproduction systems, such as the NTSC video standard.

In reproducing scene light, chromatic adaptation to achieve white balance is important. With properly adjusted cameras and displays, whites and neutral grays are typically reproduced at the chromaticity of CIE standard daylight D65. By always reproducing white surfaces with the same chromaticity, the system mimics the human visual system, which inherently adapts perception so that white surfaces always look the same whatever the chromaticity of the illuminant; a white sheet of paper appears white whether lying on a beach on a sunny day or in an indoor scene lit by an incandescent lamp. In color reproduction, white balance adjustment is performed via gain controls on the R, G and B channels.
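The channel-gain adjustment described above can be sketched as follows (the helper names and the green-channel normalization are our illustrative choices, not taken from this document):

```python
def white_balance_gains(r, g, b):
    """Per-channel gains that map a known-neutral (white or gray) patch to
    equal R = G = B, normalized to the green channel (a common convention)."""
    return (g / r, 1.0, g / b)

def apply_gains(pixel, gains):
    """Apply R, G, B channel gains to one pixel."""
    return tuple(v * k for v, k in zip(pixel, gains))
```

After applying the gains derived from a reference white patch, that patch (and ideally all neutral surfaces) renders as a neutral gray.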

The light output of a typical color receiver is not linear but follows a power-law relationship to the applied signal voltage. The light output is proportional to the video driving voltage raised to the power gamma, where gamma is typically 2.5 for a color cathode-ray tube (CRT) and 1.8 for other types of light source. Compensation for this factor is provided via three primary gamma correctors in the camera's video processing amplifiers, so that the primary video signals that are encoded, transmitted and decoded are in fact not R, G and B but R^(1/γ), G^(1/γ) and B^(1/γ). Colorimetric color reproduction requires that the overall gamma of video reproduction, including camera, display, and any gamma-adjusting electronics, be unity, but when corresponding color reproduction is attempted, the luminance of the surround takes precedence: a dim surround requires a gamma of about 1.2, and a dark surround a gamma of about 1.5, for optimum color reproduction. Gamma is an important implementation issue for RGB color spaces.
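The power-law relationship can be sketched as below; this is a simplified model (a pure power function, ignoring the piecewise-linear segments real encoding standards add):

```python
GAMMA_CRT = 2.5  # typical gamma for a color CRT, per the text above

def gamma_encode(linear, gamma=GAMMA_CRT):
    """Camera-side correction: encode linear light as V = L**(1/gamma)."""
    return linear ** (1.0 / gamma)

def gamma_decode(signal, gamma=GAMMA_CRT):
    """Display-side response: light output L = V**gamma."""
    return signal ** gamma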

Most color encoding uses standard RGB color spaces such as sRGB, ROMM RGB, Adobe RGB 98, Apple RGB and video RGB spaces such as that used in the NTSC standard. Typically an image is captured in a sensor or source device space that is specific to the device and image. It may be converted into an unrendered image space, a standard color space describing the colorimetry of the original (see Definitions).

However, video images are almost always converted directly from the source device space into a rendered image space (see Definitions) describing the color space of some real or virtual output device, such as a video display. Most existing standard RGB color spaces are rendered image spaces. For example, the source and output spaces created by cameras and scanners are not CIE-based color spaces but spectral spaces, defined by the spectral sensitivities and other characteristics of the camera or scanner.

Rendered image spaces are device-specific color spaces based on the colorimetric characteristics of a real or virtual device. Images can be converted into a rendered space from either rendered or unrendered image spaces. The complexity of such transformations varies, and they can include complicated image-dependent algorithms. The transformations can be irreversible, with some of the information encoded from the original scene discarded or compressed to fit the dynamic range and gamut of the particular device.

At present only one unrendered RGB color space is in the process of standardization: ISO RGB, defined in ISO 17321, most often used to characterize the color of digital still cameras. In most applications today, images are converted into a rendered color space for archiving and data transfer, including video. Converting from one rendered image space or color space to another can cause serious image artifacts; the greater the mismatch of gamuts and white points between the two devices, the stronger the negative effects.

One problem with existing ambient light display systems is that extracting from video content colors representative enough for broadcast as ambient light can be problematic. For example, averaging pixel chromaticities often yields grays, browns or other colors that do not convey the correct color impression of a video scene or image. Colors derived from simple chromaticity averaging often look muddy and wrongly chosen, particularly when they contrast with scene features such as a brightly colored fish, or with a dominant background such as blue sky.

Another problem with existing ambient light display systems is that no particular method is specified for real-time conversion of rendered tristimulus video values into the corresponding settings of ambient light sources so as to provide correct colorimetry and appearance. For example, the output of LED ambient light sources is often garish, with poor or distorted color rendering, and the resulting hues are generally difficult to assess and reproduce. For example, U.S. Patent No. 6,611,297 to Akashi et al. addresses realism in ambient lighting, but it offers no specific way of ensuring correct and pleasing chromaticities, and its teaching does not allow analyzing the video signal in real time, requiring a script or its equivalent.

In addition, driving ambient light sources using the gamma-corrected color space of the video content often produces garish, bright colors. A further serious problem of the prior art is the large volume of information transfer required to drive ambient light sources as a function of video content in real time, and to adapt to a fast-changing ambient light environment where intelligent dominant color selection is desired.

In particular, an average or other color extracted for use in ambient lighting effects is often unreproducible (e.g., brown) or unsuitable on perceptual grounds. For example, if the dominant color is determined to be, say, brown, an ambient lighting system acting on that determination may by default produce a different color, such as the closest color in its light space that it is capable of producing (e.g., pink). But the color the system thus chooses to produce may be inappropriate, being wrongly perceived or unpleasant.

Switching on ambient light during dark scenes is often blinding, too bright, and fails to convey chromaticity corresponding to the color content of the scene. Switching it on during bright scenes can produce ambient color that looks weak and insufficiently saturated.

Moreover, in selecting the dominant color it may be desirable to use certain scene aspects, such as a blue sky, to inform the ambient lighting system, while other features, such as cloud cover, may be less preferred. Known systems have no mechanism for continually examining scene elements while diverting attention away from the majority or a large number of pixels whose chromaticity is not preferred on perceptual grounds. Another shortcoming of the prior art is that newly introduced features of a video scene are often unrepresented, or under-represented, when extracting and choosing a dominant color. To date there is no way of applying perceptual rules to resolve these problems.

It would therefore be advantageous to expand the possible gamut of colors produced as ambient light in conjunction with a typical tristimulus video display system, using characteristics of the human eye, such as the change in relative visual luminosity of different colors with illumination level, by modulating or changing the color and light delivered to the viewer; to use an ambient lighting system that beneficially exploits compensation, sensitivity and other peculiarities of human vision; and to provide ambient lighting that is not only correctly extracted from the video content, but also makes skillful use of the variety of potential dominant colors present in a scene.

It would also be beneficial to create a quality ambient atmosphere free of distortions due to the gamma exponent. It is further desirable to provide a method of producing emulated ambient light through dominant color extraction from selected video regions, using an economical data stream that encodes average or characteristic color values. It is more desirable still to reduce the required data stream volume, to be able to apply perceptual rules to improve visibility and fidelity, and to be able to exercise perceptual prerogatives in selecting the chromaticities and luminances chosen for broadcast as ambient light.

Information on video and television engineering, compression technology, data transport and encoding, human vision, color science and perception, color spaces, colorimetry and image rendering, including video, can be found in the following works, the contents of which are incorporated herein in their entirety by reference: [1] Color Perception, Alan R. Robertson, Physics Today, December 1992, Vol. 45, No. 12, pp. 24-29; [2] The Physics and Chemistry of Color, 2nd ed., Kurt Nassau, John Wiley & Sons, Inc., New York © 2001; [3] Principles of Color Technology, 3rd ed., Roy S. Berns, John Wiley & Sons, Inc., New York © 2000; [4] Standard Handbook of Video and Television Engineering, 4th ed., Jerry Whitaker and K. Blair Benson, McGraw-Hill, New York © 2003.

The invention

Methods given for various embodiments of the invention include using pixel-level statistics, or their functional equivalents, to identify or extract one or more dominant colors in a manner that imposes the smallest possible computational load while still yielding, in accordance with perceptual rules, an eye-pleasing and appropriate choice of dominant color.

The invention relates to a method for extracting a dominant color from video content encoded in a rendered color space, to derive, using perceptual rules, a dominant color for emulation by an ambient light source. Possible steps of the method include: [1] performing dominant color extraction from the pixel chromaticities of the video content in the rendered color space to produce a dominant color, by extracting any of: [a] a mode of the pixel chromaticities; [b] a median of the pixel chromaticities; [c] a weighted average by chromaticity of the pixel chromaticities; [d] a weighted average of the pixel chromaticities using a pixel weighting function that is a function of any of pixel location, chromaticity and luminance; [2] deriving the chromaticity of the dominant color in accordance with a perceptual rule, the perceptual rule being chosen from any of: [a] a simple chromaticity transform; [b] a weighted average using a pixel weighting function further formulated to express the influence of scene content, obtained by assessing chromaticity or luminance for a set of pixels in the video content; [c] extended dominant color extraction using a weighted average in which the pixel weighting function is formulated as a function of scene content obtained by assessing chromaticity or luminance for a set of pixels in the video content, the pixel weighting function additionally formulated so that the weighting is at least reduced for majority pixels; and [3] transforming the dominant color from the rendered color space into a second rendered color space formed so as to allow driving the ambient light source.
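The mode/median/weighted-average alternatives of step [1] can be sketched as follows; this is our illustrative reading, not the patent's implementation, and the function and parameter names are invented:

```python
from collections import Counter
from statistics import median

def dominant_color(pixels, method="mode", weight=None):
    """Pick a single dominant color from (r, g, b) pixel tuples.

    pixels : iterable of (r, g, b) tuples
    method : "mode" | "median" | "average"  (the alternatives of step [1])
    weight : optional pixel weighting function, pixel -> float
    """
    pixels = list(pixels)
    if method == "mode":
        # most frequent exact chromaticity
        return Counter(pixels).most_common(1)[0][0]
    if method == "median":
        # per-channel median
        return tuple(median(p[c] for p in pixels) for c in range(3))
    # weighted (or plain) average
    w = [weight(p) if weight else 1.0 for p in pixels]
    total = sum(w) or 1.0
    return tuple(sum(wi * p[c] for wi, p in zip(w, pixels)) / total
                 for c in range(3))
```

Passing a `weight` function corresponds to variant [d], in which weighting can depend on pixel location, chromaticity or luminance.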

If desired, quantization of the pixel chromaticities (or of the rendered color space) can be performed, and this can be done in several ways (see Definitions), the aim being to ease the computational load by reducing the number of possible color states: for example, by assigning many chromaticities (e.g., pixel chromaticities) to a smaller number of assigned chromaticities or colors; or by reducing the number of pixels through a selection process that picks out selected pixels; or by binning to establish representative pixels or superpixels.
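The first option, assigning many chromaticities to fewer assigned colors, can be pictured as uniform binning of 8-bit channels (a toy sketch; the bin count and the choice of bin centers as assigned colors are ours):

```python
def quantize(pixels, levels=4):
    """Assign each 8-bit (r, g, b) pixel to one of levels**3 assigned colors,
    by uniform binning of each channel and taking the bin center."""
    step = 256 // levels

    def assign(v):
        bin_index = min(v // step, levels - 1)
        return bin_index * step + step // 2  # bin center = assigned value

    return [tuple(assign(c) for c in p) for p in pixels]
```

With `levels=4` the full 24-bit space collapses to 64 assigned colors, greatly reducing the states a subsequent mode or histogram computation must track.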

If such quantization of the rendered color space is performed in part by binning pixel chromaticities into at least one superpixel, the superpixel so created can have a size, orientation, shape or location formed in accordance with an image feature. The assigned colors used in the quantization process can be chosen as regional color vectors, which need not lie in the rendered color space and can lie, for example, in the second rendered color space.

Other embodiments of the method include a variant in which the simple chromaticity transform selects a chromaticity found in the second rendered color space used to produce the ambient light.

The pixel weighting function can also be formulated to provide darkness support by [4] assessing the video content to establish that scene brightness in the scene content is low; and then [5] performing either of: [a] using a pixel weighting function further formulated to reduce the weighting given to bright pixels; and [b] broadcasting a dominant color produced using reduced luminance relative to what would otherwise be produced.
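Steps [4]-[5][a] might be rendered as a weighting function like the following; the luma formula, threshold and weight values are invented for illustration:

```python
def luminance(p):
    """Cheap brightness proxy: Rec. 601 luma of an (r, g, b) pixel."""
    r, g, b = p
    return 0.299 * r + 0.587 * g + 0.114 * b

def dark_scene_weight(p, scene_is_dark, bright_cutoff=180.0):
    """Darkness-support pixel weighting: in a dark scene, sharply reduce the
    weight of bright pixels so the derived ambient light is not blinding."""
    if scene_is_dark and luminance(p) > bright_cutoff:
        return 0.1
    return 1.0
```

Such a function would be passed as the `weight` argument of a weighted-average extraction, so that a few bright pixels in an otherwise dark scene barely shift the broadcast color.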

In an alternative embodiment, the pixel weighting function can likewise be formulated to provide color support by: [6] assessing the video content to establish that scene brightness in the scene content is high; and then [7] performing either of: [a] using a pixel weighting function further formulated to reduce the weighting given to bright pixels; and [b] performing step [2][c].

Extended dominant color extraction can be repeated separately for different scene features in the video content to produce a set of dominant colors, and step [1] can then be repeated with each of the set of dominant colors designated as a pixel chromaticity. Then, if desired, the above step [1] (dominant color extraction) can be repeated separately for the pixel chromaticities of newly appearing scene features.

Quantization of at least some of the pixel chromaticities of the video content in the rendered color space can be undertaken to form a distribution of assigned colors, and in step [1] the distribution of assigned colors can be obtained from at least some of the pixel chromaticities. Alternatively, the quantization can include binning pixel chromaticities into at least one superpixel.

Where a distribution of assigned colors is used, at least one of the assigned colors can be a regional color vector, which need not lie in the rendered color space; it can be, for example, a regional color vector lying in the second rendered color space used to drive the ambient light source.

The method can further comprise establishing at least one color of interest in the distribution of assigned colors, followed by extracting the pixel chromaticities assigned to it to derive a true dominant color, ultimately designated as the dominant color.

The dominant color can in fact comprise a palette of dominant colors, each obtained by applying this method.

The method can also be performed after quantization of the rendered color space, namely by quantizing at least some of the pixel chromaticities of the video content in the rendered color space to form a distribution of assigned colors, so that dominant color extraction in step [1] draws on the distribution of assigned colors (e.g., [a] a mode of the distribution of assigned colors, etc.). The pixel weighting function can then similarly be formulated to provide darkness support by: [4] assessing the video content to establish that scene brightness in the scene content is low; and [5] performing either of: [a] using a pixel weighting function further formulated to reduce the weighting of assigned colors attributed to bright pixels; and [b] broadcasting a dominant color produced using reduced luminance relative to what would otherwise be produced. Similarly, for color support the pixel weighting function can be formulated by: [6] assessing the video content to establish that scene brightness in the scene content is high; and [7] performing either of: [a] using a pixel weighting function further formulated to reduce the weighting of assigned colors attributed to bright pixels; and [b] performing step [2][c]. Other steps can likewise be recast in terms of the assigned colors.

The method can also, but need not, comprise: [0] decoding the video signal in the rendered color space into a set of frames, and quantizing at least some of the pixel chromaticities of the video content in the rendered color space to form a distribution of assigned colors. In addition, the method can, but need not, comprise: [3a] transforming the dominant color from the rendered color space into an unrendered color space; and then [3b] transforming the dominant color from the unrendered color space into the second rendered color space. This can be aided by [3c] matrix transformation of the primaries of the rendered color space and the second rendered color space into the unrendered color space, using first and second tristimulus primary matrices, and deriving the transformation of color information into the second rendered color space by matrix multiplication of the primaries of the rendered color space, the first tristimulus matrix, and the inverse of the second tristimulus matrix.
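Steps [3a]-[3c] amount to a single 3x3 matrix chain, R'G'B' = M2^-1 * M1 * RGB. A self-contained sketch with a hand-rolled inverse follows; the primary matrices here are placeholders, not real device characterizations:

```python
def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    """3x3 matrix times 3-vector."""
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def inverse3(M):
    """Inverse of a 3x3 matrix via the adjugate."""
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    return [[(e * i - f * h) / det, (c * h - b * i) / det, (b * f - c * e) / det],
            [(f * g - d * i) / det, (a * i - c * g) / det, (c * d - a * f) / det],
            [(d * h - e * g) / det, (b * g - a * h) / det, (a * e - b * d) / det]]

def convert_dominant_color(rgb, M1, M2):
    """Map a dominant color from the video's rendered space (primary matrix M1)
    through unrendered XYZ into the ambient light's rendered space (M2):
    R'G'B' = M2^-1 * M1 * RGB."""
    return matvec(matmul(inverse3(M2), M1), rgb)
```

With identical primary matrices the chain is the identity; with differing matrices it re-expresses the same unrendered tristimulus value in the ambient light source's own primaries.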

Once a dominant color has been selected from the distribution of assigned colors, it is possible, so to speak, to go back to the actual pixel chromaticities to refine the dominant color. For example, as noted above, at least one color of interest can be established in the distribution of assigned colors, and the pixel chromaticities assigned to it extracted to obtain a true dominant color, which is then designated the dominant color. Thus, although an assigned color may give only a rough approximation of the video, the true dominant color provides correct chromaticity for the ambient light broadcast while reducing the computation that would otherwise be required.

The pixel chromaticities in step [1] can be taken from an extraction region of any shape, size or location, and the dominant color can be broadcast as ambient light from an ambient light source adjacent to that extraction region.

These steps can be combined in many ways to express different simultaneously applicable perceptual rules, for example by establishing a set of criteria that must coexist and compete for priority in the extraction and selection of the dominant color. The unrendered color space used for conversion into the second rendered (ambient) color space can be any of: CIE XYZ; ISO RGB, defined in ISO Standard 17321; Photo YCC; CIE LAB; or any other unrendered space. The steps of performing dominant color extraction and applying perceptual rules can in practice be carried out substantially synchronously with the video signal, with ambient light broadcast from or around the video display using the color information in the second rendered color space.

List of figures

Figure 1 - a basic front view of a video display, showing color information extraction and the corresponding ambient light broadcast from ambient light sources according to the invention;

Figure 2 - a top view (partly schematic and partly in cross-section) of a room in which ambient light from multiple ambient light sources is produced using the invention;

Figure 3 - a system according to the invention for extracting color information and performing color space transformation to allow driving of ambient light sources;

Figure 4 - an equation for computing average color information over a video extraction region;

Figure 5 - a known matrix equation for converting rendered RGB tristimulus values into the unrendered XYZ color space;

Figures 6 and 7 - matrix equations for mapping the rendered color spaces of the video and of the ambient light, respectively, into the unrendered color space;

Fig.8 - a solution using the known transformation matrices to obtain the ambient light tristimulus values R'G'B' from the unrendered XYZ color space;

Fig.9-11 - a known method for obtaining the tristimulus primary matrix M using the white point method;

Fig.12 - a system similar to that shown in figure 3, further comprising a gamma correction step for the broadcast ambient light;

Fig.13 - a diagram of the overall transformation process used in the invention;

Fig.14 - process steps for obtaining the transformation matrix coefficients for the ambient light source used in the invention;

Fig.15 - process steps for estimated video extraction and ambient light reproduction using the invention;

Fig.16 - a video frame extraction scheme according to the invention;

Fig.17 - process steps for abridged chromaticity evaluation according to the invention;

Fig.18 - the extraction steps shown in figures 3 and 12, using a frame decoder, setting the frame extraction rate and performing the output calculations for driving the ambient light source;

Fig.19 and 20 - process steps for the extraction and processing of color information according to the invention;

Fig.21 - a diagram of the overall process according to the invention, including dominant color extraction and transformation into the ambient light color space;

Fig.22 - a schematic representation of one possible method of quantizing the pixel chromaticities of video content by assigning pixel chromaticities to assigned colors;

Fig.23 - a schematic representation of one example of quantization by binning pixel chromaticities into superpixels;

Fig.24 - a schematic representation of a binning process similar to Fig.23, but in which the size, orientation, shape or location of a superpixel can be formed in accordance with an image feature;

Fig.25 - regional color vectors and their color or chromaticity coordinates on a standard CIE color map with rectangular coordinates, where one color vector lies outside the gamut of colors obtainable under the PAL/SECAM, NTSC and Adobe RGB color production standards;

Fig.26 - a close-up of a portion of the map of Fig.25, additionally showing pixel chromaticities and their assignment to regional color vectors;

Fig.27 - a histogram showing the mode of the distribution of assigned colors according to one possible method of the invention;

Fig.28 - a schematic representation of the median of the distribution of assigned colors according to one possible method of the invention;

Fig.29 - a mathematical summation for the weighted average of the chromaticities of the assigned colors according to one possible method of the invention;

Fig.30 - a mathematical summation for the weighted average of the chromaticities of the assigned colors using a pixel weighting function according to one possible method of the invention;

Fig.31 - a schematic representation of the process of establishing a color of interest in the distribution of assigned colors, followed by extraction of the assigned pixel chromaticities to obtain a corrected dominant color, referred to as the true dominant color;

Fig.32 - a schematic view showing that dominant color extraction according to the invention can be executed multiple times, in parallel or separately, to provide a palette of dominant colors;

Fig.33 - an elementary view of the front surface of the video display shown in figure 1, showing an example of unequal weighting of a preferred spatial region for the methods shown in Fig.29 and 30;

Fig.34 - an elementary view of the front surface of the video display shown in Fig.33, schematically showing an image feature selected for the purpose of identifying a dominant color according to the invention;

Fig.35 - a schematic representation of another embodiment of the invention, in which video content decoded into a frame set allows a dominant color to be obtained at least partially on the basis of the dominant color of a previous frame;

Fig.36 - process steps for an abridged dominant color selection procedure according to the invention;

Fig.37 - an elementary view of the front surface of the video display representing scene content with a newly introduced feature, to illustrate dominant color selection with darkness support;

Fig.38 - an elementary view of the front surface of the video display representing scene content, to illustrate dominant color selection with color support;

Fig.39 - a schematic representation of three illustrative categories by which the laws of perception can be classified according to the invention;

Fig.40 - a schematic representation of a simple chromaticity transform in the form of a functional operator;

Fig.41 - a schematic representation of a sequential series of steps for selecting a dominant color with an average calculated using a pixel weighting function according to the invention, performing two possible illustrative laws of perception;

Fig.42 - a schematic representation of a sequential series of steps for selecting a dominant color with an average calculated using a pixel weighting function for extended dominant color extraction according to the invention, performing two possible illustrative laws of perception;

Fig.43 - possible functional forms for the pixel weighting function used according to the invention.

Detailed description of the invention

Further, throughout the description, the following definitions:

Ambient light source - in the claims, includes any light-production circuitry or drivers needed to produce light.

Ambient space - denotes any material bodies, air or space external to the video display unit.

Distribution of assigned colors - denotes the set of colors chosen to represent (for example, for computational purposes) the full range of pixel chromaticities appearing in the video content or in a video frame.

Bright - in reference to pixel brightness, indicates [1] a relative characteristic, i.e. brighter than other pixels, or [2] an absolute characteristic, such as a high brightness level, or both. It may thus refer to a bright red in a dark scene, or to chromaticities that are bright by nature, such as white and gray.

Chromaticity transform - refers to replacing one color with another through the application of a law of perception, as described here.

Chromaticity - in the context of driving an ambient light source, denotes a mechanical, numerical or physical way of specifying the color character of light, such as a color coordinate, and does not imply any specific methodology, such as that used in television (NTSC or PAL).

Colorful - in reference to chromatic pixel data, indicates: [1] a relative characteristic, i.e. exhibiting higher color saturation than other pixels, or [2] an absolute characteristic, such as a given level of color saturation, or both.

Color information - includes either or both of chromaticity and brightness data, or functionally equivalent quantities.

Computer - includes not only all processors, such as CPUs (central processing units) of known architecture, but also any intelligent device capable of encoding, decoding, reading, processing, and executing or changing configuration codes, such as digital optical circuits, or analog electrical circuits performing similar functions.

Dark - in reference to pixel brightness, means: [1] a relative characteristic, i.e. darker than other pixels, or [2] an absolute characteristic, such as a low brightness level, or both.

Dominant color - refers to any color chosen to represent the video content for the purpose of broadcasting ambient light, including any color selected using the illustrative methods disclosed here.

Extended (dominant color) extraction - refers to any dominant color extraction process executed after a previous process has eliminated or reduced the influence of majority pixels or other pixels in a video scene, frame or video content, for example when the colors of interest are themselves used for further dominant color extraction.

Extraction region - includes any subset of an entire video image or frame, or more generally any video region or frame, that is sampled in order to identify a dominant color.

Frame - includes a time-sequential presentation of image information in video content, consistent with the use of the term "frame" in the art, but also includes any partial (for example, sub-frame) data or image data used to convey video content at any time or at regular intervals.

Goniochromatic - refers to the property of presenting different color or chromaticity as a function of viewing angle, for example as produced by iridescence.

Goniophotometric - refers to the property of presenting different light intensity, transmission and/or color as a function of viewing angle, for example as occurs with pearlescent, sparkling or retroreflective phenomena.

Interpolation - includes linear or mathematical interpolation between two sets of values, as well as functional prescriptions for setting values between two known sets of values.

Light character - in a broad sense, means any specification of the nature of light, such as that produced by an ambient light source, including all descriptors other than brightness and chromaticity, such as the degree of light transmission or reflection; any specification of goniophotometric properties, including color, sparkle or other known phenomena that depend on viewing angle when observing the ambient light source; the direction of light output, including the orientation given by the Poynting vector or other propagation vectors; or the specification of the angular distribution of light, such as solid angles or solid-angle distribution functions. It can also include a coordinate or coordinates specifying locations on the ambient light source, such as the locations of elementary pixels or lamps.

Luminance - refers to any parameter or measure of brightness, intensity or an equivalent measure; the term does not imply a particular method of light generation or measurement, nor a particular psycho-biological interpretation.

Majority pixels - refers to pixels conveying similar color information, such as saturation, brightness or chromaticity, in a video scene or frame. Examples include pixels set to portray darkness (a dark scene) alongside a smaller or different number of brightly lit pixels; pixels predominantly set to portray white or gray (for example, a cloud region in a scene); and pixels sharing the same color, such as the green leaves of a forest scene in which a red fox is separately depicted. The criterion used to establish what qualifies can vary, and a numerical majority, though often used, is not mandatory.

Pixel - refers to actual or virtual video picture elements, or equivalent information allowing pixels to be derived.

Pixel chromaticity - includes actual pixel chromaticity values, as well as any other color values established as a result of a quantization or binning process, for example when color space quantization has been applied. In the appended claims, pixel color is therefore considered to include values from a distribution of assigned colors.

Quantization of the color space - in the description and the claims, refers to a reduction in the number of possible color states, for example by assigning many chromaticities (for example, pixel chromaticities) to a smaller number of assigned chromaticities or colors; by reducing the number of pixels through an extraction process that selects particular pixels; or by binning to establish representative pixels or superpixels.

Rendered color space - denotes an image or color space captured by a sensor, or specific to a source device or display device, which is device- and image-specific. Most RGB color spaces are rendered image spaces, including the video spaces used to drive the video display D. In the appended claims, both the color space tied to the video display and the color space tied to the ambient light source 88 are rendered color spaces.

Scene brightness - refers to any measure of brightness of the scene content according to any desired criterion.

Scene content - refers to characteristics of the video data that form a visible image and that can be used to influence the desired selection of a dominant color. Examples include white clouds or darkness over most of the video image, which can cause some of the pixels forming that image to be treated as majority pixels, or can lead to anisotropic processing of pixels by the pixel weighting function W (see Fig.30); or can cause an image feature (e.g., J8 in Fig.34) to be detected and subjected to special or extended dominant color extraction.

Simple chromaticity transform - refers to changing or obtaining a dominant color or chromaticity according to a law of perception, where the change is not selected or derived as a function of scene content, and where the change produces a chromaticity different from the one that would otherwise be selected. Example: transforming a first dominant chromaticity (x, y), selected through dominant color extraction (such as a pink), into a second chromaticity (x', y') that satisfies a given law of perception.

Transforming color information into an unrendered color space - in the appended claims, comprises either direct transformation into the unrendered color space, or the benefit derived from using the inverse of a tristimulus primary matrix obtained by transforming into the unrendered color space (for example, (M2)⁻¹, as shown in Fig.8), or any computational equivalent.

Unrendered color space - denotes a standard or device-independent color space, for example one describing the colorimetry of an original image using standard CIE XYZ; ISO RGB as defined in ISO 17321; Photo YCC; or the CIE LAB color space.

Video display - denotes any device or light-producing arrangement, whether an active device requiring energy to produce light, or any transmitting medium conveying image information, such as a window in an office building or an optical waveguide where image information is delivered remotely.

Video signal - denotes the signal or information delivered to the video display control unit, including any audio portion. It is thus contemplated that analysis of video content includes possible analysis of audio content for the audio portion. In general, a video signal may comprise any type of signal, such as radio-frequency signals using any number of known modulation techniques; electrical signals, including analog and quantized analog signals; digital (electrical) signals, such as those using pulse-width modulation, pulse-number modulation, pulse-phase modulation, pulse-code modulation (PCM) and pulse-amplitude modulation; or other signals, such as acoustic, audio and optical signals, all of which can employ digital techniques. Data that is simply placed sequentially among or together with other information, such as packetized information in computer applications, can also be used.

Weighted - refers to any method of assigning preferred status or greater mathematical weight to certain chromaticities, brightnesses or spatial locations, possibly as a function of scene content. Nothing, however, prevents using unity as the weight in order to produce a simple (arithmetic) average.

Pixel weighting function - as described here, need not take the functional form given here (for example, a summation of W over a set of pixels), and includes all algorithms, operators or other calculations leading to the same result.
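As a loose illustration of the "weighted average" and "pixel weighting function" definitions above, the following sketch computes a weighted-average color in which dark pixels (a typical majority-pixel case) receive reduced weight; the luma formula, threshold and weight values are assumptions for demonstration, not values given in the description:

```python
# Minimal sketch of a weighted-average dominant color with a pixel weighting
# function W. Here W down-weights dark (majority) pixels so that a mostly
# dark scene does not drag the ambient color toward black. The Rec.709 luma
# coefficients, threshold and weights are illustrative assumptions.

def weighted_dominant_color(pixels, dark_threshold=40, dark_weight=0.1):
    """pixels: iterable of (r, g, b) tuples; returns the W-weighted average."""
    totals = [0.0, 0.0, 0.0]
    weight_sum = 0.0
    for r, g, b in pixels:
        luma = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec.709 luma
        w = dark_weight if luma < dark_threshold else 1.0
        for i, c in enumerate((r, g, b)):
            totals[i] += w * c
        weight_sum += w
    return tuple(round(t / weight_sum) for t in totals)
```

Setting `dark_weight` to 1.0 recovers the simple arithmetic average, matching the remark under "Weighted" that unity weights are permissible.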

Ambient light derived from video content according to the invention is formed so as to allow, if desired, high fidelity to the chromaticity of the video scene light, while maintaining a high degree of specificity of the degrees of freedom for the ambient light with a low computational load. This allows ambient light sources with small color gamuts and reduced luminance spaces to simulate video scene light from more capable light sources with relatively rich gamuts of color and brightness. Possible light sources for the ambient light include any number of known lighting devices, including LEDs (light-emitting diodes) and related semiconductor emitters; electroluminescent devices, including non-semiconductor types; incandescent lamps, including modified types using halogens or advanced chemistries; gas-discharge lamps, including fluorescent and neon lamps; lasers; light sources that are modulated, for example by means of liquid crystal displays (LCDs) or other light modulators; photoluminescent emitters; or any number of known controllable light sources, including arrays that functionally resemble displays.

This description relates, first of all, to extracting color information from video content, and in particular to extraction methods subject to laws of perception for obtaining dominant or true colors for broadcast as ambient light representing a video image or scene.

Referring now to figure 1, an elementary view of the front surface of a video display D according to the invention is shown for illustrative purposes only. The display D can comprise any of a number of known devices that decode video content from a rendered color space, such as the NTSC, PAL or SECAM broadcast standards, or a rendered RGB space such as Adobe RGB. The display D can comprise optional color information extraction regions R1, R2, R3, R4, R5 and R6, whose boundaries can differ from those shown. The color information extraction regions are predefined arbitrarily and are to be characterized for the purpose of producing characteristic ambient light A8, for example via back-mounted controllable ambient lighting units (not shown) that generate and broadcast ambient light L1, L2, L3, L4, L5 and L6 as shown, for example partially illuminating a wall (not shown) on which the display D is mounted. In an alternative embodiment, the display frame Df itself, as shown, can also comprise ambient lighting units that display light in a similar manner, including outward toward a viewer (not shown). If desired, each extraction region R1-R6 can influence the adjacent ambient light. For example, color information extraction region R4 can influence ambient light L4, as shown.

Referring now to figure 2, a top view (partially schematic and partially in cross-section) of a room or ambient space AO is shown, in which ambient light from multiple ambient light sources is produced using the invention. Arranged in the ambient space AO are seats and tables 7, positioned to allow viewing of the video display D. The ambient space AO also contains a number of ambient lighting units, optionally implemented using the present invention, including, as shown, light speakers 1-4, a sublight SL under the sofa or seats, and a set of special ambient lighting units arranged around the display D, namely center lamps producing ambient light Lx, as shown in figure 1. Each of these ambient lighting units can emit ambient light A8, shown by hatching in figure 2.

Using the present invention, ambient light can be produced from these ambient lighting units with colors or lightnesses derived from the video display D, even though they are not actually broadcast by it. This makes it possible to exploit characteristics of the human eye and visual system. It should be noted that the luminosity function of the human visual system, which determines sensitivity to different visible wavelengths, changes with illumination level.

For example, scotopic or night vision, which relies on rods, is more sensitive to blues and greens, while photopic vision, which uses cones, is better suited to perceiving longer-wavelength light such as reds and yellows. In a darkened home theater environment, these changes in the relative perceived brightness of different colors as a function of light level can be counteracted to some degree by modulating or changing the colors supplied to the video user in the ambient space. This can be done by subtracting light from the ambient lighting units, such as light speakers 1-4, using a light modulator (not shown), or by using an additional component in the lamps, namely a photoluminescent emitter, to further change the light before it is output. The photoluminescent emitter performs a color transformation by absorbing, or being excited by, incoming light from the light source and then re-emitting that light at longer, desired wavelengths. This excitation and re-emission by a photoluminescent emitter, such as a fluorescent pigment, makes it possible to render new colors absent from the original video image or light, and possibly also outside the range of colors or gamut inherent to the display D. This can be useful when the desired luminance of the ambient light Lx is low, for example during very dark scenes, and the desired perceptual level is higher than that normally achievable without light modification.

The production of new colors can provide new and interesting visual effects. An illustrative example is the production of orange light, such as the so-called hunter's orange, for which available fluorescent pigments are well known (see reference [2]). This example involves fluorescent color, as distinct from the general phenomena of fluorescence and related phenomena. Using fluorescent orange or other fluorescent dye classes can be particularly useful in low-light conditions, where the boost in reds and oranges can compensate for the reduced sensitivity of scotopic vision to long wavelengths.

Fluorescent dyes usable in the ambient lighting units can include known dyes from such dye classes as perylenes, naphthalimides, coumarins, thioxanthenes, anthraquinones, thioindigoids, and proprietary dye classes such as those supplied by the Day-Glo Color Corporation, Cleveland, Ohio, USA. Available colors include Apache Yellow, Tigris Yellow, Savannah Yellow, Pocono Yellow, Mohawk Yellow, Potomac Yellow, Marigold Orange, Ottawa Red, Volga Red, Salmon Pink and Columbia Blue. These dye classes can be incorporated into resins such as PA, PET and ABS using known processes.

Fluorescent dyes and materials enhance visual effects because they can be engineered to be considerably brighter than non-fluorescent materials of the same chromaticity. In the last two decades the so-called durability problems of traditional organic pigments used to produce fluorescent colors have largely been solved, as technological advances have produced durable fluorescent pigments that retain their vivid coloration for 7-10 years under sunlight exposure. These pigments are therefore virtually indestructible in a home theater environment, where ultraviolet radiation is minimal.

Alternatively, fluorescent photopigments can be used, which operate simply by absorbing short-wavelength light and re-emitting it at longer wavelengths such as red or orange. Technologically advanced inorganic pigments are now widely available that are excited by visible light, for example blues and violets in the 400-440 nm range.

Goniophotometric and goniochromatic effects can be deployed similarly to produce colors, intensities and light characters that vary with viewing angle. To realize this effect, ambient lighting units 1-4, SL and Lx can use, separately or in combination, known goniophotometric elements (not shown), such as metallic and pearlescent transmissive colorants; iridescent materials using known diffraction or thin-film interference effects, for example using fish-scale essence; guanine flakes; or 2-aminohypoxanthine with a preservative. Diffusers using finely ground mica or other substances can be used, such as pearlescent materials made from oxide layers, bornite or peacock ore; metal, glass or plastic flakes; particulate materials; oil; ground glass and ground plastics.

Referring now to figure 3, a system according to the invention is shown for extracting color information (e.g., a dominant color or true color) and performing the color space transformation that enables driving an ambient light source. In a first step, color information is extracted from a video signal AVS using known methods.

The video signal AVS can comprise known digital data frames or packets, such as those used for MPEG encoding, audio PCM encoding and so on. Known encoding schemes can be used for the data packets, such as program streams with variable-length data packets or transport streams in which the data packets are of equal length, as well as other schemes such as single-program transport streams. Alternatively, the functional steps or blocks shown in this description can be emulated using computer code and other communication standards, including asynchronous protocols.

As a general example, the video signal AVS as shown can undergo video content analysis CA, possibly using known methods for recording and transferring selected content to and from a hard disk HD, possibly using a library of content types or other information stored in a memory MEM, also shown in the drawing. This makes it possible to provide independent, parallel, direct, delayed, continuous, periodic or aperiodic transfer of the selected video content. From this video content, feature extraction FE can be performed as shown, for example to derive color information (such as a dominant color) generally, or image features. This color information is encoded in a rendered color space and is then transformed into an unrendered color space, such as CIE XYZ, using an RUR mapping transformation circuit 10, as shown. RUR here denotes the desired type of transformation, namely rendered - unrendered - rendered, so the RUR mapping transformation circuit 10 also further transforms the color information into a second rendered color space formed so as to allow, as shown, driving the specified ambient light source or sources 88. The RUR transformation is preferred, but other mappings can be used, as long as the ambient lighting production circuit or its equivalent receives information in a second rendered color space that it can use.
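The pipeline just described, feature extraction FE followed by the RUR mapping into a drivable second rendered color space, can be sketched end to end. This is a deliberate simplification under stated assumptions: one extraction region, a single placeholder 3×3 matrix standing in for the full M1 / (M2)⁻¹ chain, and an 8-bit drive value per channel; none of the numbers come from the description:

```python
# Illustrative end-to-end sketch of the FE -> RUR -> ambient drive pipeline.
# The RUR matrix below is a placeholder for the combined (M2)^-1 @ M1
# transform, not the primaries of any real display or lamp.
import numpy as np

RUR = np.array([[0.90, 0.10, 0.00],
                [0.05, 0.90, 0.05],
                [0.00, 0.10, 0.90]])

def frame_to_ambient(region_pixels):
    """region_pixels: N x 3 array of 8-bit RGB; returns 8-bit R'G'B' drive."""
    avg = np.asarray(region_pixels, dtype=float).mean(axis=0)  # FE: average color
    ambient = RUR @ avg                                        # RUR mapping
    return np.clip(np.round(ambient), 0, 255).astype(int)      # drive values
```

In the full system the averaging happens per extraction region R1-R6 and the drive values would pass through the ambient lighting production circuit 18 and lamp drivers rather than being used directly.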

The RUR mapping transformation circuit 10 can be functionally contained in a computer system that uses software to perform the same functions; but in the case of decoding packetized information sent by a data transfer protocol, there can be a memory (not shown) in the circuit 10 that contains, or is updated to contain, information correlating to or providing the coefficients of the rendered video color spaces and the like. This newly created second rendered color space is appropriate and desired for driving the ambient light source 88 (such as shown in figures 1 and 2) and is fed, using known encoding, to an ambient lighting production circuit 18, as shown. The ambient lighting production circuit 18 receives the second rendered color space information from the RUR mapping transformation circuit 10 and then accounts for any input from any user interface and any resident preferences memory (shown together as U2) to develop actual ambient light output control parameters (such as applied voltages), possibly after consulting the ambient lighting (second rendered) color space lookup table LUT, as shown. The ambient light output control parameters produced by the ambient lighting production circuit 18 are fed, as shown, to lamp interface drivers D88 to directly control or feed the ambient light source 88, which can comprise individual ambient lighting units 1-N, such as the previously cited ambient light speakers 1-4 or the center ambient lamps Lx shown in figures 1 and 2.

To reduce the real-time computational load, the color information extracted from the video signal AVS can be abbreviated or limited. Referring now to figure 4, an equation for calculating the average color information of a video extraction region is shown for discussion. As mentioned below (see Fig.18), it is assumed here that the video content in the video signal AVS comprises a series of time-sequential video frames, but this is not required. For each video frame or equivalent temporal block, average or other color information can be extracted from each extraction region (e.g., R4). Each extraction region can be set to a certain size, for example 100 × 376 pixels. Assuming, for example, a frame rate of 25 frames/s, the resulting gross data for the extraction regions R1-R6, before averaging (assuming that only one byte is needed to specify 8-bit color), would be 6 × 100 × 376 × 25, or 5.64 million bytes/s, for each video RGB tristimulus primary. This data flow is very large and would be difficult to handle in the RUR mapping transformation circuit 10, so extraction of an average color for each extraction region R1-R6 can be performed during feature extraction FE. In particular, as shown, one can sum the RGB color channel values (e.g., Rij) for each pixel in each extraction region of m × n pixels and divide by the number of pixels m × n to obtain the average for each RGB primary, for example Ravg for red, as shown. Repeating this summation for each RGB color channel, the average for each extraction region takes the form of a triplet RAVG = |Ravg, Gavg, Bavg|. The same procedure is repeated for all extraction regions R1-R6 and for each RGB color channel. The number and size of the extraction regions can differ from those shown and can be set as desired.
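The per-region averaging just described can be expressed compactly. This is a minimal sketch under the stated assumptions (an m × n extraction region of 8-bit RGB pixels), with an illustrative 2 × 2 region as data:

```python
# Sketch of the averaging in the equation of figure 4: for an extraction
# region of m x n pixels, sum each RGB channel and divide by m*n to obtain
# the triplet R_AVG = |R_avg, G_avg, B_avg|. The region contents below are
# illustrative only.

def region_average(region):
    """region: list of rows, each a list of (r, g, b) pixel tuples."""
    m, n = len(region), len(region[0])
    sums = [0, 0, 0]
    for row in region:
        for px in row:
            for i in range(3):
                sums[i] += px[i]
    return tuple(s / (m * n) for s in sums)

region = [[(10, 20, 30), (30, 40, 50)],
          [(50, 60, 70), (70, 80, 90)]]
print(region_average(region))  # (40.0, 50.0, 60.0)
```

Repeating this per region reduces the 5.64 million bytes/s per primary cited above to a single triplet per region per frame.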

The next step of performing the color mapping transformation in the RUR mapping transformation circuit 10 can be shown and expressed, for illustrative purposes, using known tristimulus primary matrices, as shown in figure 5, where a rendered tristimulus color space with vectors R, G and B is transformed using a tristimulus primary matrix M with elements such as Xr,max, Yr,max and Zr,max, where Xr,max is the tristimulus value of the R primary at maximum output.

The transformation from a rendered color space into an unrendered, device-independent space can be a known linearization specific to the image and/or device, possibly with pixel reconstruction (if needed) and white point selection steps, followed by a matrix conversion. In this case we simply elect to adopt the rendered video output space as the starting point for transformation into unrendered colorimetry. Unrendered images must undergo further transformations to become viewable or printable, and the RUR transformation accordingly involves a transformation into a second rendered color space.

As a first possible step, figures 6 and 7 present matrix equations mapping the rendered video color space, expressed by primaries R, G and B, and the rendered ambient light color space, expressed by primaries R', G' and B', respectively, into the unrendered color space X, Y and Z, where the tristimulus primary matrix M1 transforms the video RGB into unrendered XYZ, and the tristimulus primary matrix M2, as shown, transforms the ambient light source's R'G'B' into the unrendered XYZ color space. Equating the two rendered color spaces RGB and R'G'B', as shown in Fig.8, allows matrix transformation of the primaries RGB and R'G'B' of the rendered (video) color space and the second rendered (ambient) color space into the unrendered color space (the RUR mapping transformation) using the first and second tristimulus primary matrices (M1, M2), and derivation of the transformation of color information into the second rendered color space (R'G'B') by matrix multiplication of the video RGB primaries by the first tristimulus matrix M1 and the inverse of the second tristimulus matrix, (M2)⁻¹. While tristimulus primary matrices for known display devices are readily available, an analogous matrix for an ambient light source can be determined using the white point method, known to those skilled in the art.
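The matrix chain described above (video RGB into unrendered XYZ via M1, then into ambient R'G'B' via the inverse of M2) can be sketched as follows; the matrices shown are placeholders for illustration, not the primaries of any actual display or lamp:

```python
# Sketch of the RUR mapping in matrix form: XYZ = M1 @ RGB for the video
# space and XYZ = M2 @ R'G'B' for the ambient space, hence
# R'G'B' = inv(M2) @ M1 @ RGB. Both matrices here are illustrative.
import numpy as np

M1 = np.array([[0.64, 0.30, 0.15],   # video primaries (placeholder)
               [0.33, 0.60, 0.06],
               [0.03, 0.10, 0.79]])
M2 = np.array([[0.60, 0.32, 0.18],   # ambient source primaries (placeholder)
               [0.35, 0.58, 0.07],
               [0.05, 0.10, 0.75]])

def rur_transform(rgb):
    """Map a rendered video RGB triplet into the ambient R'G'B' space."""
    return np.linalg.inv(M2) @ (M1 @ np.asarray(rgb, dtype=float))
```

A useful sanity check is that both sides agree in the unrendered space: M2 applied to the result reproduces M1 applied to the input.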

Referring now to figures 9 to 11, a method of obtaining a generalized tri-color primary matrix M using the white point method is shown. In figure 9, quantities of the form Sr·Xr represent the tristimulus value of each (ambient light source) primary at maximum output, with Sr representing the white point amplitude and Xr representing the chromaticity of the primary light produced by the (ambient) light source. Using the white point method, the matrix equation shown equates Sr to the vector of white point reference values using the known inverse of the light source color matrix. Figure 11 presents the algebraic manipulation, from which it can be seen that the white point reference values, such as Xw, are the product of the white point amplitudes or luminances and the light source chromaticities. Throughout, the tristimulus value X is set equal to the chromaticity x; the tristimulus value Y is set equal to the chromaticity y; and the tristimulus value Z is set equal to 1-(x+y). The primaries and the white point reference components for the second rendered color space of the ambient light source can be obtained by known methods, for example using a color spectrometer.
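The white point method described above can be sketched as follows. The primary chromaticities and the D65-like white point are hypothetical illustration values; in practice they would come from a color spectrometer:

```python
import numpy as np

# Hypothetical primary chromaticities (x, y) of an ambient light source
primaries = {'R': (0.64, 0.33), 'G': (0.30, 0.60), 'B': (0.15, 0.06)}
white_xy = (0.3127, 0.3290)   # assumed D65-like white point

# Columns are the chromaticities of each primary, per the text:
# X = x, Y = y, Z = 1 - (x + y)
C = np.array([[x, y, 1 - x - y] for x, y in primaries.values()]).T

# Reference white tristimulus vector, normalized so that Yw = 1
xw, yw = white_xy
W = np.array([xw / yw, 1.0, (1 - xw - yw) / yw])

# White point amplitudes S: the inverse source color matrix applied to W
S = np.linalg.solve(C, W)

# Generalized tri-color primary matrix M: each column scaled by its amplitude
M = C * S
```

A sanity check on the construction: driving all three primaries at full amplitude (`M @ [1, 1, 1]`) reproduces the reference white tristimulus vector.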

The same quantities can be found for the first rendered color space of the video. For example, it is known that modern studio monitors follow slightly different standards in North America, Europe and Japan. However, international agreement has been reached on the primaries for high-definition television (HDTV), and these primaries are representative of modern monitors in studio video and computer graphics. This standard is formally known as ITU-R Recommendation BT.709, which contains the required parameters, where the corresponding tri-color primary matrix M for RGB is:

    | 0.640  0.300  0.150 |
M = | 0.330  0.600  0.060 |      (matrix M for ITU-R BT.709)
    | 0.030  0.100  0.790 |

and the white point values are also known.

Turning now to Fig., a system similar to that of figure 3 is presented, further containing a gamma correction step 55 after the feature extraction step FE for broadcasting the ambient light. Alternatively, the gamma correction step 55 may be performed between the steps performed by the RUR mapping circuit 10 and the ambient lighting production circuit 18. It has been found that an optimal gamma value for LED ambient light sources is 1.8, so that a negative gamma correction can be performed to neutralize the typical gamma of 2.5 of the video color space, the exact gamma value being found using known mathematical relationships.
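A minimal sketch of the negative gamma correction described above, using the illustrative gamma figures from the text (2.5 for the video space, 1.8 for the LED source); the function shape is an assumption, not the exact relationship the text leaves to known mathematics:

```python
def gamma_correct(value, video_gamma=2.5, led_gamma=1.8):
    """Undo the assumed video gamma, then re-encode for the LED response.

    `value` is a normalized channel level in [0, 1]. The net effect is
    an exponent of video_gamma / led_gamma (about 1.39 here).
    """
    linear = value ** video_gamma          # decode video signal to linear light
    return linear ** (1.0 / led_gamma)     # encode for the LED ambient source
```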

Generally, the RUR mapping circuit 10, which may be a functional block implemented on any suitable known software platform, performs the overall RUR transformation shown in Fig., where, as the diagram shows, a video signal AVS containing a rendered color space, such as video RGB, is converted into an unrendered color space such as CIE XYZ, and then into a second rendered color space (the RGB of the ambient light source). After this RUR transformation the ambient light sources 88, as shown, can be driven following additional signal processing.

Fig. shows process steps for obtaining the transformation matrix coefficients for the ambient light source used in the invention; these steps include, as shown, driving the ambient light unit (or units) and checking the linearity of the output, as is well known to those skilled in the art. If the primaries of the ambient light source are stable (left branch, shown as "stable primaries"), the transformation matrix coefficients can be obtained using a color spectrometer; whereas if the primaries of the ambient light source are unstable (right branch, shown as "unstable primaries"), the previously mentioned gamma correction can be reset to its initial state (shown as "resetting the gamma curve to the initial state").

In the general case it is desirable, but not required, to extract the color information of every pixel in an extraction region such as R4; instead, if desired, a poll of selected pixels can be performed, allowing a faster estimate of the average color or faster production of a color characteristic of the region. Fig. shows process steps for video extraction and ambient light reproduction using the invention; these steps include: [1] preparing a colorimetric estimate of the video reproduction (from the rendered color space, such as video RGB); [2] transforming into an unrendered color space; and [3] transforming the colorimetric estimate for ambient light reproduction (the second rendered color space, such as LED RGB).

It has been found that the data bitstream required to support extraction and processing of video content (e.g., of a dominant color) from frames (see Fig. below) can be reduced, according to the invention, by judicious subsampling of frames. Turning now to Fig., frame extraction according to the invention is shown. A series of individual successive frames F, namely frames F1, F2, F3 and so on, such as the individual interlaced or progressive frames specified by NTSC, PAL or SECAM, is shown. By performing content analysis and/or feature extraction, such as dominant color extraction, on selected successive frames, such as frames F1 and FN, the data load or overhead can be reduced while maintaining acceptable responsiveness, realism and fidelity of the ambient light source. It has been found that N=10 gives good results, namely that subsampling at the rate of 1 frame out of 10 successive frames can be effective. This provides a low-overhead refresh period P between extracted frames, during which interframe interpolation can adequately approximate the change of color over time on the display D. The selected frames F1 and FN are extracted as shown ("extraction"), and intermediate interpolated values for the chromatic data, shown as G2, G3, G4, provide the color information necessary to drive the previously mentioned ambient light source 88. This removes the need to compute or carry the same color information for every frame 2 through N-1. The interpolated values can be determined linearly, for example by spreading the total difference in color information between the selected frames F1 and FN across the interpolated frames G.
Alternatively, a function can spread the difference in color information between the selected frames F1 and FN in another manner, so as to provide a higher-order approximation of the time variation of the extracted color information. The interpolation results can be used by accessing frame F in advance to influence the interpolated frames (as in a DVD player), or, alternatively, interpolation can be used to influence future interpolated frames without advance access to frame F (for example, in decoding applications for broadcast).
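The linear interframe interpolation described above can be sketched as follows. The per-channel frame colors are hypothetical, and linear spreading of the difference between extracted frames F1 and FN is assumed:

```python
def interpolate_chromaticity(c1, cN, num_intermediate):
    """Linearly interpolate per-channel color information between
    extracted frames F1 and FN for intermediate frames G2..G(N-1)."""
    frames = []
    for k in range(1, num_intermediate + 1):
        t = k / (num_intermediate + 1)   # fractional position inside period P
        frames.append(tuple(a + (b - a) * t for a, b in zip(c1, cN)))
    return frames

# Dominant colors of extracted frames F1 and F10 (hypothetical RGB values);
# with N=10 there are 8 interpolated frames G2..G9 in between.
g = interpolate_chromaticity((0.2, 0.4, 0.8), (0.4, 0.2, 0.6), 8)
```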

Fig. shows process steps for an abridged chromatic assessment according to the invention. A higher-order analysis of frame extraction can command larger periods P and larger values of N than would otherwise be possible. During frame extraction, or during the polling of selected pixels in extraction regions Rx, an abridged chromatic assessment can be conducted, as shown, leading either to a delay in extracting the next frame, as shown at left, or to initiating a full frame extraction, as shown at right. In either case interpolation proceeds ("interpolation"), and delaying the next frame extraction results in the chromatic values in use being held or incremented. This can provide still more efficient operation in terms of bitstream overhead or bandwidth.

Fig. presents the top of figures 3 and 12, where an alternative extraction step is shown whereby a frame decoder FD is used, allowing, as shown at step 33, regional information to be extracted from extraction regions (e.g., R1). A further process or component step 35 includes assessing the chrominance difference and, using that information, as shown, setting the frame extraction rate. The next process step of performing output calculations 00, such as averaging (as in figure 4) or dominant color extraction, discussed below, is performed as shown, prior to transferring the data to the previously described ambient lighting production circuit 18.

As shown in Fig., the general process steps for ambient light color information extraction and processing according to the invention include acquiring a video signal AVS; extracting regional (color) information from selected video frames (for example, the previously mentioned frames F1 and FN); interpolating between the selected video frames; performing the RUR mapping; optional gamma correction; and using this information to drive the ambient light source (88). As shown in Fig., two additional process steps can be inserted after the regional extraction from selected frames: first, an assessment of the chrominance difference between the selected frames F1 and FN can be performed and, depending on a predefined criterion, a new frame extraction rate can be set, as indicated. Thus, if the chrominance difference between successive frames F1 and FN is large or growing rapidly (e.g., a large first derivative), or satisfies some other criterion, such as one based on the history of chromatic differences, the frame extraction rate can be increased, thereby decreasing the refresh period P. If, however, the chrominance difference between successive frames F1 and FN is small and stable, or not growing rapidly (e.g., a low or zero absolute first derivative), or satisfies some other criterion, such as one based on the history of chromatic differences, the required data bitstream can be conserved and the extraction rate lowered, thereby increasing the refresh period P.
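A sketch of the adaptive extraction rate described above. The chrominance-difference thresholds and the halving/doubling of the period P are illustrative assumptions, not values given in the text:

```python
def update_extraction_period(delta_chroma, period,
                             min_period=2, max_period=30,
                             high=0.10, low=0.02):
    """Adjust the subsampling period P from the chrominance difference
    between consecutive extracted frames F1 and FN.

    All thresholds and step sizes here are hypothetical tuning values.
    """
    if delta_chroma > high:        # large/rapid change: extract more often
        return max(min_period, period // 2)
    if delta_chroma < low:         # stable scene: save bitstream, extract less
        return min(max_period, period * 2)
    return period                  # otherwise keep the current period P
```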

Turning now to Fig., a diagram of the overall process according to one aspect of the invention is shown. As shown here, as an optional step for possibly simplifying the computational load, [1] the rendered color space corresponding to the video content is quantized ("quantization of the color space (QCS)"), for example using the methods given below; then [2] a dominant color (or a palette of colors) is chosen ("dominant color extraction (DCE)"); and [3] a color mapping transformation is performed, such as the RUR mapping (10) ("mapping transformation (MT) to R'G'B'"), to improve the fidelity, range and suitability of the ambient light produced.

The optional quantization of the color space can be likened to reducing the number of possible color states and/or pixels to be considered, and can be performed by various methods. As an example, Fig. schematically shows one possible method of quantizing the pixel chromaticities of the video content. Here, illustrative values of the video primary R range from 1 to 16, and each of these values of the primary R is arbitrarily mapped to the assigned color AC, as shown. Thus, whenever any red pixel value or chromaticity from 1 to 16 is encountered in the video, the assigned color AC is substituted, yielding, for the red primary alone, a 16-fold reduction in the number of colors needed to characterize the video. For all three primaries, such a reduction of the possible color states can in this example produce a 16×16×16, or 4096-fold, reduction in the number of colors used for computation. This can be especially useful for reducing the computational load in determining the dominant color in many video systems, for example those with 8-bit color, which have 256×256×256, or 16.78 million, possible color states.
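The 16-fold per-primary reduction described above can be sketched as simple binning of 8-bit channel values. Mapping each bin to its center value is an illustrative choice for the assigned color AC:

```python
def quantize_channel(value, bin_size=16):
    """Map an 8-bit channel value (0..255) to the assigned color AC at
    the center of its bin, collapsing 256 levels to 16 per primary."""
    return (value // bin_size) * bin_size + bin_size // 2

def quantize_pixel(rgb, bin_size=16):
    """Quantize all three primaries: 16.78M states -> 4096 states."""
    return tuple(quantize_channel(v, bin_size) for v in rgb)
```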

Another method of quantizing the video color space is presented in Fig., which schematically shows another example of quantizing a rendered color space by binning the chromaticities of multiple pixels Pi (e.g., 16, as shown) into a superpixel XP. Binning is a process whereby adjacent pixels are added together (by mathematical or computational means) to form a superpixel, which is itself used for further computation or representation. Thus, in a video format that typically has, say, 0.75 million pixels, the number of superpixels chosen to represent the video content can reduce the pixel count for computation to 0.05 million, or to any other desired smaller number.
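The binning of pixels into superpixels described above can be sketched as block averaging. The 4×4 block size and scalar pixel values are illustrative simplifications:

```python
def to_superpixels(frame, block=4):
    """Bin a frame (2-D list of scalar pixel values) into block x block
    superpixels XP by averaging, cutting the pixel count by block**2."""
    h, w = len(frame), len(frame[0])
    out = []
    for i in range(0, h - h % block, block):
        row = []
        for j in range(0, w - w % block, block):
            vals = [frame[i + di][j + dj]
                    for di in range(block) for dj in range(block)]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out
```

The same idea extends to vector-valued (RGB) pixels by averaging each channel; shaping the block to follow an image feature, as discussed next, would replace the fixed grid with a feature mask.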

The number, size, orientation, shape or location of these superpixels XP can change as a function of the video content. For example, when during feature extraction FE it is advantageous to ensure that the superpixel XP is drawn only from an image feature, rather than from a wider region or the background, the superpixel XP can be formed accordingly. Fig. schematically shows a binning process similar to that of the previous figure, but where the size, orientation, shape or location of the superpixel can be formed, as shown, in conformity with an image feature 38. The image feature 38 shown is jagged or irregular, with no straight horizontal or vertical boundaries. As shown, the superpixel XP is chosen to mimic or emulate the shape of the image feature. Besides a tailored shape, the location, size and orientation of such superpixels can be influenced by the image feature 38 using known pixel-level computation techniques.

Quantization can take an extracted pixel chromaticity and change it into an assigned color (e.g., assigned color AC). These assigned colors can be assigned in any manner, including the use of preferred color vectors. Thus, preferred color vectors can be assigned to at least some of the video image pixel chromaticities, rather than an arbitrary or uniform set of assigned colors.

Fig. shows regional color vectors and their color coordinates on a standard CIE x-y chromaticity diagram or color map. This map shows all known colors, or perceivable colors at maximum luminosity, as a function of the chromaticity coordinates x and y, and for reference shows wavelengths in nanometers and the white point of a standard CIE illuminant. Three regional color vectors V are shown on this map, and it can be seen that one color vector V lies outside the color gamuts obtainable under the PAL/SECAM, NTSC and Adobe RGB color production standards (gamuts shown).

For clarity, Fig. shows a close-up of the CIE diagram of the previous figure and additionally shows pixel chromaticities Cp and how they are assigned to the regional color vectors V. The criteria for assigning a regional color vector can vary, and can include computing the Euclidean or some other distance from a particular color vector V using known computational techniques. A color vector V lying outside the rendered color space or gamut of the display system makes it possible for a preferred color, easily produced by the ambient lighting system or light source 88, to become one of the assigned colors used in quantizing the rendered (video) color space.
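The Euclidean-distance assignment described above can be sketched as a nearest-vector search in the CIE x-y plane. The regional color vectors V listed here are hypothetical illustration values:

```python
def assign_color(pixel_xy, color_vectors):
    """Assign a pixel chromaticity (x, y) to the nearest regional color
    vector V by squared Euclidean distance in the CIE x-y plane."""
    return min(color_vectors,
               key=lambda v: (v[0] - pixel_xy[0]) ** 2
                           + (v[1] - pixel_xy[1]) ** 2)

# Hypothetical regional color vectors V on the chromaticity diagram
V = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]
```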

Once the distribution of assigned colors has been produced using one or more of the above methods, the next step is to extract a dominant color from the distribution of assigned colors by choosing any of: [a] the mode of the assigned colors; [b] the median of the assigned colors; [c] a weighted average by chromaticity of the assigned colors; or [d] a weighted average using a pixel weighting function.

For example, to select the assigned color appearing with the greatest frequency, a histogram method can be chosen. Fig. shows a histogram giving the assigned pixel color or colors (assigned colors) that appear most often (see the ordinate, percentage of pixels), namely the mode of the distribution of assigned colors. This mode, or most frequently occurring assigned color, can be chosen as the dominant color DC (shown) for use or emulation by the ambient lighting system.
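The histogram (mode) method described above can be sketched as a frequency count over the assigned colors:

```python
from collections import Counter

def dominant_color_mode(assigned_pixels):
    """Histogram method: the assigned color occurring most often
    (the mode of the distribution) is taken as the dominant color DC."""
    return Counter(assigned_pixels).most_common(1)[0][0]
```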

Similarly, the median of the distribution of assigned colors can be chosen as the dominant color DC, or used to help inform the choice of the dominant color. Fig. schematically shows the median of the distribution of assigned colors, namely the middle value (interpolated if the number of assigned colors is even), chosen as the dominant color DC.

Alternatively, a summation over the assigned colors can be performed using a weighted average, in order to influence the choice of the dominant color(s), possibly to better match the color levels of the ambient lighting system. Fig. shows the mathematical summation for a weighted average by chromaticity of the assigned colors. For clarity a single variable R is shown, but any number of dimensions or coordinates (e.g., the CIE coordinates x and y) can be used. The chromatic variable R is summed, as shown, over the pixel coordinates (or superpixel coordinates, if required) i and j, which in this example run from 1 to n and m respectively. The chromatic variable R is multiplied throughout the summation by a pixel weighting function W with indices i and j, and the result is divided by the number of pixels n×m to obtain the weighted average.

A similar weighted average using a pixel weighting function is shown in Fig., which is similar to the previous figure except that, as shown here, W is now also a function of the pixel locations i and j, permitting a spatial dominance function. By weighting the central pixel locations, or any other portion of the display D, those locations can be emphasized in choosing or extracting the dominant color DC, as discussed below.
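The weighted average with a location-dependent pixel weighting function W(i, j, R) can be sketched as follows. Note one deliberate deviation: the text normalizes by the pixel count n×m, while this sketch normalizes by the sum of the weights, so that W need not be normalized in advance:

```python
def weighted_average_color(frame, weight):
    """Weighted average of a chromatic variable R over pixel locations
    (i, j), with weight(i, j, r) playing the role of W."""
    total, total_w = 0.0, 0.0
    for i, row in enumerate(frame):
        for j, r in enumerate(row):
            w = weight(i, j, r)
            total += w * r
            total_w += w
    return total / total_w if total_w else 0.0

# Example spatial dominance function: weight central pixels more heavily
# than the border (the factor of 2 is an illustrative choice).
def center_weight(i, j, r, n=4, m=4):
    return 2.0 if 0 < i < n - 1 and 0 < j < m - 1 else 1.0
```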

The weighted summation operations can be performed as indicated at step 33, "regional information extraction", above, and W can be chosen and stored by any known method. The pixel weighting function W can be any function or operator and may, for example, equal one for inclusion, or zero for exclusion, of specific pixel locations. Image features can be recognized using known methods, and W can be changed accordingly to serve a larger purpose, as shown below in Fig.

Once an assigned color has been chosen as dominant using the above methods or any equivalent, a more accurate assessment of the colors appropriate for expression by the ambient lighting system can be made, because far fewer processing steps are then required than would otherwise be needed to take all chromaticities and/or all video pixels into account. Fig. schematically shows an example of establishing a color of interest within the distribution of assigned colors, followed by extraction of the assigned pixel chromaticities to obtain a true dominant color for designation as the dominant color. As can be seen, pixel chromaticities Cp are assigned to the assigned colors AC; the assigned color shown at the bottom of the distribution is not chosen as dominant, while the assigned color at the top is deemed dominant (DC) and, as shown, is chosen as the color of interest COI. The pixels assigned (or at least some of them) to the assigned color AC deemed the color of interest COI can then optionally be examined further, either by reading their chromaticities directly (for example using averaging, as shown in figure 4) or by performing the dominant color extraction steps again on this smaller data set for this specific purpose, obtaining a better rendition of the dominant color, shown here as the true dominant color TDC. Any processing steps needed for this can be performed using the steps and/or components described above, or using a separate true color selector, which may be a known program, subroutine, task circuit, or an equivalent.

The application of the laws of perception is discussed below, but in the general case, as schematically shown in Fig., the dominant color extraction according to the invention can be performed several times, or in parallel, to provide a palette of colors, where the dominant color DC may comprise dominant colors DC1 + DC2 + DC3, as shown. This palette can be the result of applying the methods given here to produce, using the laws of perception, the above set of dominant colors.

As mentioned above in connection with Fig., the pixel weighting function or its equivalent can provide weighting by pixel location, allowing a particular region of the display to be considered or emphasized specifically. Fig. shows an elementary view of the front surface of the video display of figure 1 and an example of unequal weighting of pixels Pi in a preferred spatial region. For example, the central region of the display can be weighted using a numerically large weighting function W, while an extraction region (or any region, for example the scene background), as shown, can be weighted using a numerically small weighting function w.

Such weighting or emphasis can be applied to an image feature 38, as shown in Fig., which gives an elementary view of the front surface of the video display of the previous figure, where the image feature 38 (a fish) can be selected using known methods at the feature extraction step FE (see figures 3 and 12). This image feature 38 can be the only part of the video content used, or only one part of the content used, during the dominant color extraction DCE shown and described above.

Turning now to Fig., it can be seen that it is also possible for the dominant color chosen for a frame to be obtained at least in part from at least one dominant color of a preceding frame. The frames F1, F2, F3 and F4 shown in the diagram undergo dominant color extraction DCE, as shown, yielding dominant colors DC1, DC2, DC3 and DC4 respectively, whereby a computation can establish the dominant color for a frame (shown here as DC4) as a function of the dominant colors DC1, DC2 and DC3 (DC4 = F(DC1, DC2, DC3)). This opens the possibility of an abbreviated procedure for choosing the dominant color DC4 for frame F4, or of better informing the step at which the dominant colors chosen for the preceding frames F1, F2, F3 help influence the choice of the dominant color DC4. This abbreviated procedure is shown in Fig., where, to reduce the computational load, a colorimetric estimate is used for a preliminary extraction of the dominant color DC4*, and the next step then adds the dominant colors extracted from the preceding frame (or frames), which helps prepare the choice of DC4 ("preparing DC4 using the abbreviated procedure"). This procedure can give good results, as described below.
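The dependence DC4 = F(DC1, DC2, DC3) described above can be sketched as a blend of the preliminary extraction DC4* with the average of the preceding dominant colors. The blend factor alpha and the averaging of history are illustrative assumptions about the form of F:

```python
def temporal_dominant_color(history, candidate, alpha=0.7):
    """Bias the current frame's preliminary dominant color (DC4*, the
    `candidate`) toward the dominant colors of preceding frames
    (DC1..DC3, the `history`); alpha is a hypothetical blend factor."""
    if not history:
        return candidate
    avg_prev = tuple(sum(c[k] for c in history) / len(history)
                     for k in range(3))
    return tuple(alpha * candidate[k] + (1 - alpha) * avg_prev[k]
                 for k in range(3))
```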

Turning now to Fig., an elementary view of the front surface of a video display showing scene content, including a possible newly appearing feature, is presented to show the need for darkness support and other perceptual prerogatives in determining the dominant color according to the invention. For the reasons given above, dominant color extraction often produces results inconsistent with the desired perception of the output. Fig. schematically presents a dark or night scene characterized by a particular scene feature V111 (for example, a green fir tree). When dominant color extraction is used without regard for the laws of perception, the problem is often aggravated: the color content of the scene, or of a given frame, frequently depends too strongly on the point of perception, and the broadcast ambient colors appear too bright, rather than subtle or appropriate to the content of a dark scene. In the example shown, a large number or majority of pixels, such as the majority pixels MP, as shown, form a large part of the frame image, and these majority pixels MP have, on average, low luminance or brightness. In this example the preferred effect may be for darkness to pervade the broadcast ambient light, and the colors a designer would prefer to broadcast as ambient light are often the chromaticities of a single scene object, such as a tree, in particular scene feature V111, not chromaticities derived largely from the majority pixels MP, which in this illustrative example depict darkness of low average brightness; the nominal color, as rendered in ambient light, could appear unnatural.

Ways of achieving the above include applying a law of perception implemented as darkness support, discussed below, whereby, when a dark scene is detected and these majority pixels MP are identified, the MP are either excluded from consideration in dominant color extraction or given reduced weight relative to the other pixels forming scene features, such as scene feature V111. This requires recognizing scene elements using scene content analysis CA (see Fig.) and then applying special processing to the various other scene elements, for example a dark background or a particular scene feature. Applying the laws of perception may also include removing scene elements undesirable for dominant color extraction, such as screen spots or artifacts, and/or may include detecting image features, such as scene feature V111, by feature recognition (for example, feature extraction FE as in figures 3 and 12, or its functional equivalent), as discussed in connection with Fig.

In addition, a new scene feature, for example V999 (a lightning strike or flash of light), can outweigh in importance (or coexist with) the chromaticity supplied by the evolving basic color of scene feature V111 obtained by the above methods.

Similarly, bright scenes, bright whites, greyish scenes, or scenes of uniformly high brightness can benefit from applying the laws of perception. Turning now to Fig., an elementary view of the front surface of a video display showing scene content is presented to illustrate dominant color extraction with color support. Fig. presents a relatively bright scene with a somewhat self-similar region in the form of scene feature V333, which might represent the clear or white spray of a waterfall. This scene feature V333 can be mainly grey or white and can therefore be regarded as comprising the majority pixels MP, as shown, while another scene feature V888, such as blue sky, contains none of the majority pixels and may be preferred over the majority pixels MP for dominant color extraction; that is, the ambient lighting effects designer may in this example prefer to broadcast the blue, not the white or grey, especially if scene feature V888 has newly appeared or contains a color preferred for ambient light broadcast (such as sky blue). One problem of the prior art is that dominant color extraction sometimes leads to underestimating color and to the predominance of bright or highly luminous whites, greys and other insufficiently saturated colors. To address this and provide color support, a law of perception, or a set of laws of perception, can be applied, for example, to estimate the scene brightness and to reduce or eliminate the influence or weighting of the white/grey majority pixels MP while strengthening the influence of other scene features, such as the blue sky V888.

Turning to Fig., three illustrative categories for classifying the laws of perception according to this invention are schematically shown. As shown, the laws of perception for dominant color selection can comprise any or all of the following: simple chromaticity transformation (SCT); pixel weighting as a function of scene content (PF8); and extended extraction/search (EE8). These categories are given only as illustrations, and those skilled in the art will be able to use the principles set out here to develop similar alternative schemes.

Turning now to Figs. through 43, examples of specific techniques associated with the use of the above groups of laws of perception are given.

First, simple chromaticity transformation SCT can yield a number of methods, each of which seeks to replace or transform an initially intended dominant color into a different chromaticity. In particular, a specific chromaticity (x, y) produced by dominant color extraction can be replaced, in any desired variant, by a transformed chromaticity (x', y'), as shown in Fig., which schematically presents the simple transformation SCT as a functional operator.

If, for example, feature extraction FE yields a specific dominant color (say, brown) for ambient light broadcast, and the closest match to this dominant color in the color space of the ambient light source 88 is a chromaticity (x, y) that has, say, a pinkish cast (this closest matching color being impractical from the standpoint of perception), then a transformation or substitution to a chromaticity (x', y') can be performed, such as a color created from orange and green ambient light by the ambient lighting production circuit 18 or its equivalent, as mentioned earlier. Such transformations can take the form of a "color to color" mapping, which can be represented as a lookup table (LUT) or implemented as machine code, software, a data file, an algorithm, or a functional operator. Since a law of perception of this type needs no explicit content analysis, it is called a simple transformation.

Simple chromaticity transformation SCT can employ laws of perception that favor broadcasting preferred chromaticities more of the time than would otherwise occur. If, for example, a specific blue is preferred or desirable, it can be the object of a simple chromaticity transformation SCT that favors it by mapping a large number of similar blue chromaticities to this specific blue. The invention can also practically implement this approach when the simple transformation is used to choose a color found in the second rendered color space of the ambient light source 88.

Also according to the invention, scene content analysis CA can be used to add functionality to the pixel weighting function W so that the laws of perception can be applied. Fig. shows a possible functional form for such a pixel weighting function. The pixel weighting function W can be a function of many variables, including any or all of: the spatial location of the pixel on the video display, indexed for example by indices i and j; the chromaticity, for example a phosphor luminance level, or the value of the primary R (where R can be a vector representing R, G and B), or the chromaticity variables x and y; and the luminance L (or its equivalent), as shown. As a result of feature extraction FE and content analysis CA, the values of the pixel weighting function W can be set so as to implement the laws of perception. Since the pixel weighting function W can be a functional operator, it can be set to reduce (or, if necessary, eliminate) any influence of selected pixels, such as those representing screen spots or screen artifacts, or those deemed by content analysis to be majority pixels MP; scene content such as water or darkness can thereby be given low weight or no weight, to comply with a law of perception.

Turning now to Fig., a number of possible steps for dominant color extraction with an average computed using the pixel weighting function according to the invention, implementing two possible illustrative laws of perception, are schematically shown. The main step, called pixel weighting as a function of scene content (PF8), can comprise many more possible functions; two are shown using arrows.

As shown in the left part of Fig., to implement the stated law of perception for dark support (as discussed in connection with Fig.), the scene content is analysed. One possible step, not necessarily the first, is to assess scene brightness, for example by computing, for any or all pixels, or for the distribution of assigned colors, the total or average luminance per pixel. In this particular example the overall scene brightness is found to be low (this step is omitted for clarity), and one possible resulting step is, as shown, to reduce the brightness of the ambient light so that the ambient light better matches the darkness of the scene. Another possible step is to exclude or reduce the weight given by the pixel weighting function to very bright pixels, shown as truncation/weight reduction for bright/colored pixels. The luminance threshold used to decide what counts as a bright or colored pixel can vary, can be set as a fixed threshold, or can be a function of scene content, scene history and user preferences. For example, all bright or colored pixels can have their values of W reduced threefold, to reduce the brightness of the ambient light whichever dominant color is selected. The same purpose can be achieved by a step of reducing ambient light brightness so as to reduce all pixel brightnesses equally, through a corresponding additional reduction of the pixel weighting function W. Alternatively, the pixel weighting function W can be reduced by a separate function that is itself a function of the brightness of the particular pixel, such as a factor of 1/L², where L is the luminance.
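A minimal sketch of the dark-support steps above follows; the darkness threshold, brightness threshold and three-fold reduction are the text's illustrative examples or outright assumptions, not fixed parameters of the method.

```python
import numpy as np

def dark_support_weights(luminance, bright_threshold=200.0, reduction=3.0):
    """Reduce the pixel weighting function W for bright pixels in a dark scene.

    luminance: (H, W) array of per-pixel luminance values (0-255 assumed).
    Returns (weights, scene_is_dark), where weights is an (H, W) array.
    """
    weights = np.ones_like(luminance, dtype=float)
    # Assess overall scene brightness (illustrative darkness threshold).
    scene_is_dark = luminance.mean() < 64.0
    if scene_is_dark:
        # Truncation/weight reduction for bright/colored pixels:
        # divide their weight W by three, per the text's example.
        bright = luminance > bright_threshold
        weights[bright] /= reduction
    return weights, scene_is_dark
```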

Another possible step in dark support is a possible selection of a color of interest COI from the bright/colored pixels, namely the above-described process in which a color of interest is established from a subset of pixels in the video content that are bright and possibly highly saturated (colored), for example from feature V111 on Fig. In particular, some color can be selected for further analysis in a manner similar to that discussed above and shown in Fig., whether by establishing a true color for a selected assigned color, or by taking a color of interest from among the pixel colors, which itself becomes part of a distribution of assigned colors for further analysis, for example by repeating dominant color selection for the given colors of interest (for example, finding a representative green for tree V111). This can lead to another possible step shown (a possible extended extraction), discussed below, and, as shown, to a dominant color choice that can result from performing extended dominant color extraction over the distribution of colors of interest obtained from the previous dominant color selection.

As shown in the right part of Fig., to implement the stated color-support law of perception (this color support was discussed in connection with Fig.), content analysis of the scene is again performed. One possible, but not necessarily first, step is to assess scene brightness, for example by computing, for any or all pixels, or for the distribution of assigned colors, the total or average luminance per pixel, as before. In this example a high overall scene brightness is detected. Another possible step is to exclude or reduce the weight given by the pixel weighting function W to very bright, white, grey or luminous pixels, shown as truncation/weight reduction for bright/colored pixels. This can prevent selection of a dominant color that is weak or excessively bright, which could be desaturated, or too white or too grey. For example, the pixels representing clouds V333 on Fig. can be excluded from the pixel weighting function W by making their contribution negligible or zero. A dominant color or color of interest can then be chosen, shown as selection of COI from the remaining chromaticities. As shown, a possible extended extraction can also be performed to assist the dominant color selection step, as shown and discussed below.
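The exclusion of washed-out pixels in the color-support case above might be sketched as follows; the saturation proxy and both thresholds are assumptions chosen only for illustration.

```python
import numpy as np

def color_support_weights(rgb, luminance, min_saturation=0.2, max_luminance=230.0):
    """For a bright scene, zero the weight of washed-out pixels: very bright,
    white or grey pixels whose selection would yield a weak dominant color
    (e.g. the clouds V333 of the example).

    rgb: (N, 3) floats 0-255; luminance: (N,) floats.
    Returns an (N,) array of weights: 1 for usefully colored pixels, else 0.
    """
    mx = rgb.max(axis=1)
    mn = rgb.min(axis=1)
    # Simple saturation proxy: (max - min) / max, near 0 for grey/white pixels.
    sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-9), 0.0)
    keep = (sat >= min_saturation) & (luminance <= max_luminance)
    return keep.astype(float)
```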

The extended extraction/search step EE8 mentioned above and shown in Fig. can be any type of process performed after an initial dominant color extraction process, such as applying the laws of perception to narrow down a set of candidate dominant colors. Fig. schematically shows a number of possible steps for selecting a dominant color using an average color/luminance computed with the pixel weighting function for extended dominant color extraction, according to the invention, in order to implement two possible illustrative laws of perception. Two examples of extended extraction are shown: a static-support law of perception and a dynamic-support law of perception. As shown in the left part of the figure, one possible process supporting a law of perception can include a step of identifying, and then truncating/reducing the weighting of, majority pixels. This can include using a content analysis step to identify majority pixels MP, as shown in Figs. and 38, using edge analysis, shape recognition or other video content analysis methods. Then the pixel weighting function W is reduced or set to zero for the pixels deemed majority pixels MP, as discussed above.

Then, in a next possible step (selection of COI from the remaining chromaticities (for example, by the histogram method)), extended dominant color extraction is performed on the pixels that are not majority pixels MP, for example the previously mentioned dominant color selection from pixel chromaticities or from a distribution of assigned colors, by extracting any of: [a] a mode (e.g., by the histogram method); [b] a median; [c] a weighted average by chromaticity; or [d] a weighted average using the pixel weighting function, of the pixel chromaticities or assigned colors. This can amount to a functional repeat of dominant color selection after applying a law of perception, such as the weight reduction given to the majority pixels. From this dominant color selection process a final step can be performed, namely choosing a dominant color for broadcast as ambient light.
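The steps above, selecting a mode by the histogram method over the non-majority pixels, might be sketched as follows; the crude per-channel quantization into assigned colors is an assumption made purely for illustration.

```python
import numpy as np
from collections import Counter

def extended_dominant_color(pixels, majority_mask, levels=8):
    """Extended dominant color extraction: histogram mode over non-MP pixels.

    pixels: (N, 3) array of RGB values 0-255; majority_mask: (N,) bool array
    marking majority pixels MP, whose weight is set to zero (i.e. excluded).
    Returns the dominant assigned color as an (R, G, B) tuple.
    """
    remaining = pixels[~majority_mask]
    # Quantize each channel into `levels` bins (distribution of assigned colors AC).
    step = 256 // levels
    assigned = (remaining // step) * step + step // 2
    # Mode by the histogram method: the most frequent assigned color wins.
    counts = Counter(map(tuple, assigned.tolist()))
    return counts.most_common(1)[0][0]
```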

Another possible law of perception is a law of perception for dynamic support, as shown on the right side of Fig. The first two steps shown are identical to those for static support on the left side. The third step is the identification of newly introduced scene features (for example, lightning V999), and performing dominant color selection on the newly introduced scene features, as shown. A fourth possible step is, as shown, to choose chromaticities from either or both of the previous steps for broadcast as ambient light; that is, the law of perception can include taking either or both of the results of dominant color extraction over the newly introduced scene features and of dominant color selection from the remaining chromaticities obtained after reducing or excluding the influence of majority pixels MP. Thus, for example, the newly introduced lightning V999 and tree V111 are more likely to contribute to obtaining one or more dominant colors DC for broadcast as ambient light than with direct dominant color selection without the law of perception.

Nothing in this application of the laws of perception prevents prior quantization of the color space, as described above. These methods can also be repeated for selected scene features, or a further search for preferred chromaticities for broadcast as ambient color can be performed.

As a further example, consider a specific illustrative scenario or video content containing three background scene features and one newly introduced feature. The background shows sand, sky and sun. The scene is evaluated using content analysis. Shades of sand are then found to make up 47% of the image pixels. A law of perception is applied whereby the pixels with the color of sand, being identified as majority pixels, are given zero influence by the pixel weighting function W for as long as other major scene elements are present. For extended extraction, the sky is selected as a color of interest COI and a resulting blue is established, selected using the methods described above. A true dominant color extraction process is then started (see Fig.) to obtain a true color representing the actual pixel color that characterizes the sky. This process is updated on a frame-by-frame basis (see Figs. and 17). The sun, in turn, is recognized by feature extraction FE, and using a simple chromaticity transform a more pleasant yellowish-white is selected instead of the glaring white inherent in the video color information. When the number of pixels with shades of sand falls below a certain numerical threshold, another law of perception allows all three features to be set as dominant colors, which can be broadcast as ambient light either separately, depending on pixel locations (for example, by extraction regions such as R1, R2, etc.), or together. Then a newly introduced feature (a white ship), under another law of perception that highlights new content, causes a white-based dominant color selection for the vessel, which is broadcast as white for as long as the ship has not left the video scene.
When the ship leaves the scene, one law of perception, which considers newly appeared content to be no longer controlling once the number of pixels representing it falls below a certain percentage (or below a fraction not attributable to features already rendered (sand, sky, sun)), allows these three background features to be re-established for broadcast as ambient light through their respective dominant colors. When the number of pixels with shades of sand rises again, their influence is again suppressed by zeroing their pixel weighting function W. However, another law of perception allows the pixel weighting function W for sand-colored pixels to be restored once the other two background features (the sky and the sun) are no longer present, and to be reduced again in the presence of a newly introduced scene feature. When a red snake appears, content analysis attributes 11% of the pixels to this feature. The influence of the sand-colored pixels on dominant color extraction is again excluded, and feature extraction applied to the snake yields a color of interest COI, from which extended extraction and/or an optional true color selection process refines the dominant color selected to represent the color of the snake for broadcast as ambient light.

From the foregoing description it is easy to understand that, without mechanisms for altering dominant color selection in accordance with the laws of perception, the selected dominant color could present an ever-present, time-varying pale bluish-white shade that does not represent the scene content and holds little entertainment or informational value for the viewer. Applying the laws of perception allows for parameter specificity and, once implemented, produces the effect of skilful choreography. The results of applying the laws of perception to dominant color selection can be used, as proposed previously, so that the resulting color information is made available to ambient light source 88 in the second rendered color space.

In this way, the ambient light produced at L3 to emulate extraction region R3, as shown in figure 1, can have a chromaticity that perceptually extends the phenomena in that region, such as the moving fish shown. This can multiply the visual experience and provide hues that are appropriate and not garishly bright or poorly matched.

In general, ambient light source 88 can embody various diffuser effects to produce light mixing and translucence or other phenomena, for example by using lamp structures with frosted or glazed surfaces; ribbed glass or plastic; or apertured structures, for example using metal frames surrounding an individual light source. To provide the desired effects, any number of known diffusing or scattering materials or phenomena can be used, including those obtained by exploiting scattering from small suspended particles; frosted plastics or resins; preparations using colloids, emulsions or globules of 1-5 µm or less, for example less than 1 µm, including long-life organic mixtures; gels; and sols, whose production and fabrication are well known to those skilled in the art. Scattering phenomena can be engineered to include Rayleigh scattering for visible wavelengths, for example to produce a blue enhancement of blue in the ambient light. The colors produced can be regionally defined, such as an overall bluish tint in certain areas, or regional tints, for example a top section producing blue light (ambient light L1 or L2).

An ambient light lamp can also be fitted with a goniophotometric element, such as a cylindrical prism or lens, which can be formed within, integral to, or inserted into the lamp structure. This can allow special effects in which the character of the light produced changes as a function of the viewer's position. Other optical shapes and forms can be used, including rectangular, triangular or irregularly shaped prisms or shapes, and they can be placed upon or integral to an ambient light unit or units. The result is that, rather than producing an isotropic output, the effect obtained can be varied indefinitely, for example with bands of interesting light cast on surrounding walls, objects and surfaces near the ambient light source, producing a kind of light show in a darkened room as scene elements, color and intensity change on a video display unit. The effect can be a theatrical lighting element that changes the character of the light greatly as a function of viewer position, for example showing bluish sparkle, then red light, as the viewer rises from a chair or shifts position while watching a movie in a home theater. The number and type of conformational elements that can be used is nearly unlimited, including pieces of plastic or glass, and optical effects produced by scoring and mildly destructive fabrication techniques. Ambient light lamps can be made unique, and even interchangeable, for different theatrical effects. These effects can be modulated, for example by changing the amount of light allowed to pass through the goniophotometric element, or by illuminating different portions (for example, using sub-lamps or groups of LEDs) of the ambient light unit.

Of course, the signal AVS may be a digital data stream and contain synchronization bits and concatenation bits; parity bits; error codes; interleaving; special modulation; packet headers; and required metadata such as a description of the ambient lighting effect (for example, "storm"; "sunrise"), and those skilled in the art will realize that the functional steps given here are merely illustrative and do not include, for clarity, conventional steps or data.

The user interface and preferences memory shown in figures 3 and 12 can be used to change preferences regarding system behaviour, such as changing the degree of color fidelity to the video content of video display D to a desired level; changing flamboyance, including the extent to which any fluorescent colors or out-of-gamut colors are broadcast into ambient space, or how quickly or strongly the ambient light responds to changes in the video content, for example by exaggerated changes in intensity or other changes in the quality of the light script command content. This can include advanced content analysis that can produce subdued tones for movies or content of a certain character. Video content containing many dark scenes can influence the behaviour of ambient light source 88, causing a dimming of the broadcast ambient light, while flamboyant or bright tones can be used for certain other content, such as scenes with much skin tone or bright scenes (a sunny beach, a tiger on the savannah).

The description given here is intended to enable those skilled in the art to practise the invention. Many configurations are possible using these teachings, and the configurations and arrangements given here are only illustrative. Not all of the features taught need be implemented in practice: for example, the special transformation into the second rendered color space can be omitted from the proposed principles without departing from the scope of the invention, in particular if the two rendered color spaces RGB and R'G'B' are similar or identical. In practice, the methods described in the claims could be implemented as part of a larger system, such as an entertainment center or home theater.

It is well known that the functions and calculations illustrated here can be functionally reproduced or emulated using software or machine code, and those skilled in the art will be able to use these teachings regardless of the manner in which the encoding and decoding proposed here are managed. This is particularly true when one considers that it is not strictly necessary to decode video frames in order to process pixel-level statistics, as proposed here.

Those skilled in the art, based on these teachings, will be able to modify the apparatus and methods described in the claims, for example by rearranging steps or data structures to suit particular applications, and to create systems bearing only some resemblance to the examples chosen here for illustrative purposes.

The invention disclosed using the above examples can be practised using only some of the features mentioned above. Also, nothing proposed in the claims precludes the addition of other structures or functional elements.

Obviously, numerous modifications and variations of the present invention are possible in light of the above teachings. It should therefore be understood that, within the scope of the appended claims, the invention may be practised otherwise than as specifically described or suggested here.

1. A method of dominant color extraction from video content encoded in a rendered color space (RGB) to produce, using laws of perception, a dominant color (DC) for emulation by an ambient light source (88), the method comprising:
[1] performing dominant color extraction from pixel chromaticities (Cp) of the video content in the rendered color space to produce a dominant color, by extracting any of: [a] a mode of the pixel chromaticities; [b] a median of the pixel chromaticities; [c] a weighted average by chromaticity of the pixel chromaticities; [d] a weighted average of the pixel chromaticities using a pixel weighting function (W) that is a function of any of: pixel location (i, j), chromaticity (x, y, R) and luminance (L);
[2] deriving the chromaticity of the dominant color in accordance with a law of perception, the law of perception being selected from any of: [a] a simple chromaticity transform (SCT); [b] a weighted average using a pixel weighting function (PF8) additionally formulated so as to represent the influence of scene content obtained by assessing chromaticity or luminance for a plurality of pixels in the video content; [c] extended dominant color extraction (EE8) using a weighted average, where the pixel weighting function is formulated as a function of scene content obtained by assessing chromaticity or luminance for a plurality of pixels in the video content, and the pixel weighting function is additionally formulated such that the weighting is at least reduced for majority pixels (MP); and
[3] transforming the dominant color from the rendered color space into a second rendered color space (R'G'B') formed so as to allow driving of the ambient light source.

2. The method according to claim 1, in which the simple chromaticity transform selects a chromaticity found in the second rendered color space.

3. The method according to claim 1, in which the pixel weighting function is formulated to provide dark support by: [4] assessing the video content to establish that the scene brightness in the scene content is low; and [5] performing any of: [a] using the pixel weighting function additionally formulated to reduce the weighting of bright pixels; and [b] broadcasting a dominant color derived using a reduced luminance relative to the luminance that would otherwise be produced.

4. The method according to claim 1, in which the pixel weighting function is formulated to provide color support by: [6] assessing the video content to establish that the scene brightness found in the scene content is high; and then [7] performing any of: [a] using the pixel weighting function additionally formulated to reduce the weighting of bright pixels; and [b] step [2][c].

5. The method according to claim 1, in which the extended dominant color extraction is repeated separately for different scene features (J8, V111, V999) in the video content, forming a plurality of dominant colors (DC1, DC2, DC3), and step [1] is repeated with each of the plurality of dominant colors designated as a pixel chromaticity.

6. The method according to claim 1, in which, after performing the extended dominant color extraction, step [1] is repeated separately for pixel chromaticities in a newly introduced scene feature (V999).

7. The method according to claim 1, additionally comprising, before step [1], quantizing at least some of the pixel chromaticities (Cp) of the video content in the rendered color space to form a distribution of assigned colors (AC), and, during step [1], deriving at least some of the pixel chromaticities from the distribution of assigned colors.

8. The method according to claim 7, in which the quantizing comprises storing the pixel chromaticities in at least one superpixel (XP).

9. The method according to claim 7, in which at least one of the assigned colors is a regional color vector (V) that is not necessarily in the rendered color space.

10. The method according to claim 9, in which the regional color vector is in the second rendered color space.

11. The method according to claim 7, additionally comprising establishing at least one color of interest (COI) in the distribution of assigned colors, and extracting the assigned pixel chromaticities to derive a true dominant color (TDC) to be designated as the dominant color.

12. The method according to claim 1, in which the dominant color comprises a palette of dominant colors (DC1+DC2+DC3), each obtained using the specified method.

13. A method of dominant color extraction from video content encoded in a rendered color space (RGB) to produce, using laws of perception, a dominant color (DC) for emulation by an ambient light source (88), the method comprising:
[0] quantizing at least some pixel chromaticities (Cp) of the video content in the rendered color space to form a distribution of assigned colors (AC);
[1] performing dominant color extraction from the distribution of assigned colors to produce a dominant color, by extracting any of: [a] a mode of the distribution of assigned colors; [b] a median of the distribution of assigned colors; [c] a weighted average by chromaticity of the distribution of assigned colors; [d] a weighted average of the distribution of assigned colors using a pixel weighting function (W) that is a function of any of: pixel location (i, j), chromaticity (x, y, R) and luminance (L);
[2] deriving the chromaticity of the dominant color in accordance with a law of perception, the law of perception being selected from any of: [a] a simple chromaticity transform (SCT); [b] a weighted average using a pixel weighting function (PF8) additionally formulated so as to represent the influence of scene content obtained by assessing chromaticity or luminance for a plurality of pixels in the video content; [c] extended dominant color extraction (EE8) using a weighted average, where the pixel weighting function is formulated as a function of scene content obtained by assessing chromaticity or luminance for a plurality of pixels in the video content, and the pixel weighting function is additionally formulated such that the weighting is at least reduced for assigned colors attributed to majority pixels (MP); and
[3] transforming the dominant color from the rendered color space into a second rendered color space (R'G'B') formed so as to allow driving of the ambient light source.

14. The method according to claim 13, in which the simple chromaticity transform selects a chromaticity found in the second rendered color space.

15. The method according to claim 13, in which the pixel weighting function is formulated to provide dark support by: [4] assessing the video content to establish that the scene brightness in the scene content is low; and [5] performing any of: [a] using the pixel weighting function additionally formulated to reduce the weighting of assigned colors attributed to bright pixels; and [b] broadcasting a dominant color derived using a reduced luminance relative to the luminance that would otherwise be produced.

16. The method according to claim 13, in which the pixel weighting function is additionally formulated to provide color support by: [6] assessing the video content to establish that the scene brightness in the scene content is high; and [7] performing any of: [a] using the pixel weighting function additionally formulated to reduce the weighting of assigned colors attributed to bright pixels; and [b] step [2][c].

17. The method according to claim 13, in which the extended dominant color extraction is repeated separately for different scene features (J8, V111, V999) in the video content, forming a plurality of dominant colors (DC1, DC2, DC3), and step [1] is repeated with each of the plurality of dominant colors designated as an assigned color.

18. The method according to claim 13, in which, after performing the extended dominant color extraction, step [1] is repeated separately for assigned colors corresponding to pixels representing a newly introduced scene feature (V999).

19. The method according to claim 13, additionally comprising establishing at least one color of interest (COI) in the distribution of assigned colors, and extracting the corresponding source pixel chromaticities to derive a true dominant color (TDC) to be designated as the dominant color.

20. A method of dominant color extraction from video content encoded in a rendered color space (RGB) to produce, using laws of perception, a dominant color (DC) for emulation by an ambient light source (88), the method comprising:
[0] decoding the video content in the rendered color space into a plurality of frames, and quantizing at least some pixel chromaticities (Cp) of the video content in the rendered color space to form a distribution of assigned colors (AC);
[1] performing dominant color extraction from the distribution of assigned colors to produce a dominant color, by extracting any of: [a] a mode of the distribution of assigned colors; [b] a median of the distribution of assigned colors; [c] a weighted average by chromaticity of the distribution of assigned colors; [d] a weighted average of the distribution of assigned colors using a pixel weighting function (W) that is a function of any of: pixel location (i, j), chromaticity (x, y, R) and luminance (L);
[2] deriving the chromaticity of the dominant color in accordance with a law of perception, the law of perception being selected from any of: [a] a simple chromaticity transform (SCT); [b] a weighted average using a pixel weighting function (PF8) additionally formulated so as to represent the influence of scene content obtained by assessing chromaticity or luminance for a plurality of pixels in the video content; [c] extended dominant color extraction (EE8) using a weighted average, where the pixel weighting function is formulated as a function of scene content obtained by assessing chromaticity or luminance for a plurality of pixels in the video content, and the pixel weighting function is additionally formulated such that the weighting is at least reduced for assigned colors attributed to majority pixels (MP); and
[3a] transforming the dominant color from the rendered color space into an unrendered color space (XYZ);
[3b] transforming the dominant color from the unrendered color space into the second rendered color space; followed by
[3c] matrix transformation of the primaries (RGB, R'G'B') of the rendered color space and the second rendered color space to the unrendered color space using first and second tristimulus primary matrices (M1, M2), and deriving a transformation of the color information into the second rendered color space (R'G'B') by matrix multiplication of the primaries of the rendered color space, the first tristimulus matrix, and the inverse of the second tristimulus matrix (M2)⁻¹.
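The two-matrix transformation of steps [3a]-[3c] can be sketched numerically as follows. The matrix values below are placeholders, not taken from the patent: a real video display and ambient source would supply their own measured tristimulus primary matrices M1 and M2.

```python
import numpy as np

# Hypothetical tristimulus primary matrices: columns are the XYZ
# coordinates of the R, G, B primaries of each rendered color space.
M1 = np.array([[0.41, 0.36, 0.18],   # video display RGB -> XYZ
               [0.21, 0.72, 0.07],
               [0.02, 0.12, 0.95]])
M2 = np.array([[0.49, 0.31, 0.20],   # ambient source R'G'B' -> XYZ
               [0.18, 0.81, 0.01],
               [0.00, 0.01, 0.99]])

def to_ambient_space(rgb):
    """Map a dominant color from the display's rendered color space RGB
    into the ambient source's second rendered color space R'G'B':
    R'G'B' = (M2)^-1 @ M1 @ RGB, via the unrendered space XYZ."""
    xyz = M1 @ np.asarray(rgb, dtype=float)   # step [3a]: RGB -> XYZ
    return np.linalg.solve(M2, xyz)           # steps [3b]/[3c]: XYZ -> R'G'B'
```

Solving against M2 rather than explicitly inverting it is a standard numerical nicety; the result is the same (M2)⁻¹ M1 product applied to the color.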



 

Same patents:

FIELD: information technologies.

SUBSTANCE: invention concerns systems of coding/decoding of the squeezed image with the use of orthogonal transformation and forecasting/neutralisation of a motion on the basis of resolving ability of builders of colour and colour space of an input picture signal. The device (10) codings of the information of the image the forecastings (23) block with interior coding is offered is intended for an adaptive dimensional change of the block at generating of the predicted image, on the basis of the signal of a format of chromaticity specifying, whether is resolving ability of builders of colour one of the format 4:2:0, a format 4:2:2 and a format 4:4:4, and a signal of the colour space specifying, whether the colour space one of YCbCr, RGB and XYZ is. The block (14) orthogonal transformations and the quantization block (15) are intended for change of a procedure of orthogonal transformation and quantization procedure according to a signal of a format of chromaticity and a signal of colour space. The block (16) of return coding codes a signal of a format of chromaticity and a signal of colour space for insert of the coded signals gained, thus, in the squeezed information of the image.

EFFECT: increase of image coding and decoding efficiency.

125 cl, 12 dwg, 1 tbl

FIELD: information technologies.

SUBSTANCE: device and method are suggested which are intended for effective correction of wrong colour, such as purple fringe, created as a result of chromatic aberration, and for generating and output of high quality image data. Pixel with saturated white colour is detected from image data, at that in the area around detected pixel having saturated white colour the pixel of wrong colour and pixels having colour corresponding to wrong colour such as purple fringe are detected out of specified area. Detected pixels are determined as wrong colour pixels, and correction processing on the base of surrounding pixels values is performed over detected wrong colour pixels.

EFFECT: design of image processing device which allows to detect effectively an area of wrong colour.

25 cl, 22 dwg

FIELD: physics.

SUBSTANCE: invention concerns image processing technology, particularly YCbCr-format colour image data coding/decoding to smaller data volume by finding correlation between Cb and Cr chroma signal components of colour image data. The invention claims colour image coding method involving stages of: chroma signal component conversion in each of two or more mutual prediction modes; cost calculation for conversion values in each of two or more mutual prediction modes with the help of cost function defined preliminarily; selection of one or more mutual prediction modes on the basis of calculation result and conversion value output for the selected mutual prediction mode; entropic coding of output conversion values, where preliminarily defined cost function is selected out of cost function defining distortion in dependence of transfer rate, function of absolute subtract value amount, function of absolute converted subtract, function of square subtract sum and function of average absolute subtract.

EFFECT: increased efficiency of image coding.

88 cl, 23 dwg
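The mode-selection stage above can be sketched directly: each candidate prediction mode yields residual values, a predefined cost function scores them, and the cheapest mode is kept. The two cost functions shown and the candidate mode names are assumptions; the abstract lists several more cost functions.

```python
# Minimal sketch of cost-based prediction-mode selection.

def sad(residuals):               # sum of absolute differences
    return sum(abs(r) for r in residuals)

def ssd(residuals):               # sum of squared differences
    return sum(r * r for r in residuals)

def select_mode(residuals_per_mode, cost=sad):
    """Return (best_mode, residuals) minimising the chosen cost function."""
    best = min(residuals_per_mode, key=lambda m: cost(residuals_per_mode[m]))
    return best, residuals_per_mode[best]

# Cr residuals after predicting from Cb under two hypothetical modes:
candidates = {"identity": [3, -2, 4], "scaled": [1, 0, -1]}
mode, res = select_mode(candidates)
print(mode)  # "scaled" (SAD 2 beats SAD 9)
```

Only the winning mode's residuals would then go to the entropy coder.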

FIELD: image processing systems, in particular, methods and systems for encoding and decoding images.

SUBSTANCE: in accordance with the invention, an input image is divided into several image blocks (600) containing several image elements (610); the image blocks (600) are then encoded to form encoded block representations (700), each of which contains a colour code word (710), an intensity code word (720) and a series of intensity representations (730). The colour code word (710) is a representation of the colours of the elements (610) of the image block (600). The intensity code word (720) is a representation of a set of several intensity modifiers for modifying the intensity of the elements (610) in the image block (600), and the series (730) of representations includes an intensity representation for each element (610) in the image block (600), identifying one of the intensity modifiers in the set. During decoding, the colour and intensity code words (710, 720) and the intensity representations (730) are used to generate a decoded representation of the elements (610) in the image block (600).

EFFECT: increased efficiency of image processing and encoding/decoding, suited to mobile devices with low memory volume and performance.

9 cl, 21 dwg, 3 tbl
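The decoding path just described can be sketched under assumed bit semantics: one base colour per block (the colour code word), a table of intensity modifiers selected by the intensity code word, and one modifier index per image element. The modifier tables and clamping are invented for illustration; the structure (710/720/730) follows the abstract.

```python
# Sketch of block decoding from a colour code word, an intensity
# code word and per-element intensity representations.

MODIFIER_SETS = {0: (-8, -2, 2, 8), 1: (-24, -6, 6, 24)}  # assumed values

def clamp(v):
    return max(0, min(255, v))

def decode_block(color_codeword, intensity_codeword, indices):
    """Reconstruct the elements of one block from its encoded representation."""
    modifiers = MODIFIER_SETS[intensity_codeword]
    return [tuple(clamp(c + modifiers[i]) for c in color_codeword)
            for i in indices]

pixels = decode_block((100, 120, 140), 0, [0, 1, 2, 3])
print(pixels)  # base colour shifted by -8, -2, +2, +8 per element
```

Note how compact the representation is: one colour, one table selector and two bits per element, which is what makes the scheme attractive for memory-constrained decoding.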

FIELD: method and device for video encoding and decoding which is scalable across color space.

SUBSTANCE: in the method, the encoder may inform the decoder about the position of the brightness data in the bit stream, and the decoder may transform a colour image into a halftone image when necessary. In accordance with the invention, brightness data from all macroblocks contained in a section are inserted serially into the bit stream; chromaticity data from all macroblocks contained in the section are inserted serially into the bit stream after the inserted brightness data; and the bit stream containing the inserted brightness and chromaticity data is transmitted.

EFFECT: creation of method for video encoding and decoding which is scalable across color space.

4 cl, 12 dwg

FIELD: engineering of systems for analyzing digital images, and, in particular, systems for showing hidden objects on digital images.

SUBSTANCE: in accordance with the invention, a method is claimed for the visual display of a first object hidden by a second object, where the first object has a colour contrasting with the colour of the second object, and the second object is made of a material that lets visible light pass through it, the amount of visible light passing through the second object being insufficient for the first object to be visible to the human eye. The method includes producing a digital image of the first and second objects using a visible-light sensor. The digital image data received by the computer system contain both data of the first object and data of the second object, where both contain colour information, and the contrast between the first and second objects amounts to approximately 10% of full scale, so that on a 256-level colour scale the difference equals approximately 25 levels. The data of the second object are then filtered, after which the values associated with the data of the first object are increased until those values become discernible when reproduced on a display.

EFFECT: creation of the method for showing hidden objects in digital image without affecting it with special signals.

3 cl, 6 dwg
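The amplify-until-discernible step above can be sketched as follows. This is an illustrative sketch, not the claimed method verbatim: "filtering the second object" is reduced here to subtracting its estimated uniform level, and the integer gain loop is an assumption; the ~25-level target on a 256-level scale comes from the abstract.

```python
# Amplify deviations from a covering background until they reach
# a target contrast (~10% of full scale, i.e. ~25 of 256 levels).

def reveal(values, background_level, target_contrast=25):
    """Scale up deviations from the background until they are discernible."""
    deviations = [v - background_level for v in values]
    gain = 1
    while max(abs(d) for d in deviations) * gain < target_contrast:
        gain += 1
    return [min(255, max(0, background_level + d * gain)) for d in deviations]

# A faint object only 5 levels above a background of 200:
print(reveal([200, 205, 200], 200))  # [200, 225, 200]
```

The 5-level deviation is multiplied until it crosses the 25-level threshold, making the hidden object visible on a display.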

FIELD: radio communications; color television sets.

SUBSTANCE: the novelty is that the proposed colour television set, which has a radio channel unit, horizontal sweep unit, vertical sweep unit, chrominance unit, sound accompaniment unit and colour picture tube, is additionally provided with three identical line-doubling channels, a pulse generator and a switch; a second set of three planar cathodes mounted above the first set and a second set of three cathode heaters are introduced into its colour picture tube. The reproduced frame has 1156 active lines and 1 664 640.5 resolving elements.

EFFECT: enhanced resolving power.

1 cl, 5 dwg, 1 tbl

The invention relates to colour television techniques and can be used in the SECAM decoder of colour TV sets and video devices.

FIELD: information technologies.

SUBSTANCE: a method and device are proposed for stabilising an image comprising a set of frames: motion vectors are estimated at the frame level for each frame, and the motion vectors are adaptively integrated to yield, for each frame, a motion vector to be used for image stabilisation. A copy of the reference image of a frame is shifted by the corresponding adaptively integrated motion vector. In one embodiment of the invention, the perimeter of the image data unit is supplemented with an adjustment margin to be used for image compensation; in another variant, the vertical and horizontal components are handled independently, using motion estimation schemes related to the MPEG-4 coder, applied for estimating vectors at the macroblock level, and histograms.

EFFECT: ability to remove unstable motion while preserving natural motion such as panning of the filmed scene, with reduced requirements for additional specialised schemes and reduced growth of computational complexity.

26 cl, 4 dwg
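The integrate-and-shift loop above can be sketched minimally, assuming exponential smoothing as the "adaptive integration" (the abstract does not specify the integration rule, so the damping factor and the list-based row shift are assumptions):

```python
# Sketch: accumulate per-frame motion vectors into stabilisation
# vectors, then shift a frame copy by the accumulated amount.

def integrate_motion(vectors, damping=0.9):
    """Adaptively integrate per-frame motion vectors (exponential decay)."""
    acc_x = acc_y = 0.0
    out = []
    for vx, vy in vectors:
        acc_x = damping * acc_x + vx
        acc_y = damping * acc_y + vy
        out.append((round(acc_x), round(acc_y)))
    return out

def shift_row(row, dx, fill=0):
    """Shift one image row horizontally, padding with a fill value
    (the 'adjustment margin' around the frame perimeter)."""
    if dx > 0:
        return [fill] * dx + row[:-dx]
    if dx < 0:
        return row[-dx:] + [fill] * (-dx)
    return row

print(integrate_motion([(2, 0), (1, 0), (-1, 0)]))
```

The damping keeps slow, deliberate pans in the output while jitter is smoothed away; the fill value stands in for the abstract's perimeter margin.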

FIELD: physics, processing of images.

SUBSTANCE: the invention refers to the sphere of liquid-crystal display (LCD) control, directly concerning a method of automatic estimation of the correctness of LCD data display. The desired result is attained by importing LCD images into a PC memory with the help of a web camera as part of the testing activities. The PC generates a neural network specially trained for each LCD type. Before testing proper begins, the transmitted LCD image undergoes digital filtering. After that the polychrome image is converted into a monochrome (black-and-white) one under an adaptive algorithm, and the connected regions of the resulting monochrome image are identified and analysed. Based on the connected-region analysis, a set of morphometric characteristics is specified and used for classification of regions by configuration, position and orientation. Then, relying on the specified morphometric characteristics, a programme-controlled estimate of the LCD dimensions and position is carried out relative to the plane point taken as the coordinate-system zero; this information is subsequently taken into consideration when the image imported into the PC memory is processed. Comparison is carried out automatically with the help of the neural network stored in the PC memory and specially trained (for every LCD type) to identify the areas of the LCD work-surface image that contain the pre-specified connected-region morphometric characteristics.
Digital filtering of the transmitted image, followed by conversion of the polychrome image into a monochrome (black-and-white) one under an adaptive algorithm, together with identification and analysis of the image's connected regions and specification of their morphometric characteristics, eliminates interference from LCD lighting irregularity with the testing and enables a programme-controlled estimate of the LCD dimensions and position relative to the coordinate-system zero regardless of occasional turn or displacement of the LCD. Combined with the use of a trainable neural network for comparing the tested and reference images in terms of morphometric characteristics, this ensures high-quality automatic testing of various LCD types without modification of the existing programme code, ultimately resulting in enhanced testing quality and extended functional capabilities of the technique.

EFFECT: extended functional capabilities of the technique combined with enhanced testing quality, simplified control and reduced cost.

5 dwg

FIELD: physics; image processing.

SUBSTANCE: the present invention pertains to a device for recording images and a method of processing the recorded images, and can be used in a camera which uses a CMOS-type solid-state image recording element. The proposed device comprises an image recording medium with a recording surface consisting of pixels arranged in a matrix, meant for output of the image formed on the recording surface. The device also has a motion detecting apparatus for detecting displacement values after recording the image in each unit into which the recording surface is divided horizontally and/or vertically; a control apparatus for controlling the exposure time of the recording medium in each unit, based on the detection result obtained from the motion detecting apparatus; and a signal level correction apparatus for correcting and outputting from each unit the signal level of the recorded image, which varies in accordance with the exposure time control.

EFFECT: improved quality of recorded image.

7 cl, 9 dwg

FIELD: physics, processing of images.

SUBSTANCE: the invention concerns image processing technology, in particular lossless coding and decoding of video information. A method of lossless video coding is proposed in which, for each of a set of image elements in an MxN block of image elements for which prediction is carried out, at least one image element is determined within the MxN block which is the closest to the predicted image element in the prediction direction determined by the coding mode; the value of the predicted image element is predicted using the value of the said at least one image element closest to it in the prediction direction; the difference between the actual value of the image element and its predicted value is determined; and entropy coding of that difference is carried out.

EFFECT: creation of a lossless video coding/decoding system in which, when intra prediction of a block of a predetermined size is performed, the compression ratio increases through the use of image elements within the predicted block.

15 cl, 16 dwg, 2 tbl
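The predict-difference-reconstruct cycle above can be sketched for one assumed prediction direction (horizontal: each element predicted from its nearest left neighbour inside the block). The 128 fallback for the first column is an assumption; the key property, that the round trip is exact, is what makes the scheme lossless.

```python
# Lossless horizontal intra prediction: only residuals are coded,
# and reconstruction recovers the block exactly.

def predict_residuals(block):
    """Residual = value minus left neighbour (or an assumed 128 default)."""
    return [[row[x] - (row[x - 1] if x else 128) for x in range(len(row))]
            for row in block]

def reconstruct(residuals):
    out = []
    for rrow in residuals:
        row = []
        for x, r in enumerate(rrow):
            row.append(r + (row[x - 1] if x else 128))
        out.append(row)
    return out

block = [[130, 131, 131], [128, 127, 129]]
assert reconstruct(predict_residuals(block)) == block  # exact round trip
```

In the claimed method the predictor can lie in any coding-mode-defined direction; horizontal is just the simplest instance, and the small residuals are what the entropy coder then compresses.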

FIELD: coding systems.

SUBSTANCE: a system is proposed for coding video data, comprising a first coding module, a second coding module and a header information generating module. The first coding module codes input video data in accordance with a predetermined syntax and generates a first bit stream. The second coding module codes the input video data in accordance with another syntax, differing from the first, and generates a second bit stream. The header information generating module receives the first or second bit stream and adds header information which includes information on the syntax type, indicating which syntax is used for coding the first or second bit stream.

EFFECT: scalable coding.

27 cl, 10 dwg

FIELD: information technologies.

SUBSTANCE: the invention refers to video signal coding and decoding systems based on weighted prediction. The proposed video signal decoding procedure includes the following stages: a coded video signal bit stream is received, and the coded video signal is then decoded. The bit stream contains a first section containing a reference block used to restore the current block in the enhancement layer, and a second section containing information specifying whether the weighted prediction factor applied to the reference block is formed using information on the base layer corresponding to the enhancement layer.

EFFECT: higher video signal coding/decoding efficiency by means of reducing the error between the compressed current block and the predicted image.

12 cl, 16 dwg
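Weighted prediction as outlined above can be sketched in two lines: the reference block is scaled by a weight and offset before serving as the predictor, and only the (hopefully small) residual is coded. The weight, offset and sample values here are invented; whether the weight derives from base-layer information is exactly what the abstract's flag signals.

```python
# Minimal weighted-prediction sketch over flat lists of samples.

def weighted_prediction(reference, weight, offset):
    """Form the predictor for the current block from a reference block."""
    return [min(255, max(0, round(weight * p + offset))) for p in reference]

def residual(current, predictor):
    return [c - p for c, p in zip(current, predictor)]

ref = [100, 110, 120]
pred = weighted_prediction(ref, weight=0.5, offset=64)
print(residual([115, 120, 125], pred))  # [1, 1, 1]
```

A well-chosen weight flattens the residual, which is the efficiency gain the EFFECT clause refers to.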

FIELD: information technology.

SUBSTANCE: the video decoding method supports a fine granularity scalability (FGS) algorithm. The method involves: obtaining a predicted image for the current frame using a motion vector estimated with a predetermined accuracy; quantisation of the difference between the current frame and the predicted image; inverse quantisation of the difference and formation of a reconstructed image of the current frame; motion compensation on the FGS-level reference frame and on the main-level reference frame using the estimated motion vector; calculation of the difference between the motion-compensated FGS-level reference frame and the motion-compensated main-level reference frame; subtraction of the reconstructed image and the calculated remainder from the current frame; and coding of the subtraction result.

EFFECT: reduced volume of calculations required for a multi-level progressive fine granularity scalability (PFGS) algorithm.

49 cl, 12 dwg

FIELD: physics, image processing.

SUBSTANCE: the invention claims a video image coding method involving coding of video image frames by a first video image coding scheme; interlayer filtration of the frames coded by the first scheme; coding of video image frames by a second video image coding scheme with reference to the frames subjected to interlayer filtration; and generation of a bit stream consisting of frames coded by the first and second schemes. The first video image coding scheme is the Advanced Video Coding (AVC) scheme, the second is a wavelet coding scheme, and the interlayer filtration includes upsampling of the AVC-coded frames by a wavelet filter and subsampling of the upsampled frames by an MPEG filter.

EFFECT: improved efficiency of multilayer video image coding.

44 cl, 13 dwg

FIELD: physics, video technics.

SUBSTANCE: the invention concerns video signal encoding/decoding, particularly adaptive choice of a context model for entropy encoding, and a video decoder. The invention claims a method of encoding a residual prediction flag indicating prediction of residual data for an enhancement-layer unit of a multilayer video signal, based on residual data from the bottom-layer unit correlating with the residual data of the enhancement-layer unit. The method involves the stages of calculating the residual data energy for the bottom-layer unit, determining the residual prediction flag encoding method according to that energy, and encoding the residual prediction flag by the determined encoding method.

EFFECT: method and device for efficient flag compression.

66 cl, 17 dwg
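The energy-driven selection rule above can be sketched directly; the energy threshold and the two coder names are invented for illustration, but the flow (compute bottom-layer residual energy, pick the flag-encoding method from it) follows the abstract.

```python
# Sketch: choosing how to encode the residual prediction flag
# from the energy of the bottom-layer residual data.

def residual_energy(residuals):
    return sum(r * r for r in residuals)

def choose_flag_coder(residuals, threshold=16):
    """Low energy: the flag is highly predictable, so it can be
    inferred/skipped; high energy: use a context-adaptive model."""
    return "skip" if residual_energy(residuals) < threshold else "context"

print(choose_flag_coder([1, 1, 0]))   # "skip"
print(choose_flag_coder([5, -4, 3]))  # "context"
```

Spending fewer bits on flags that the energy already makes predictable is the compression gain the EFFECT clause claims.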

FIELD: physics, video technology.

SUBSTANCE: the invention concerns video signal processing, coding and decoding technology, particularly a method of cutting video frames with the help of a frame identifier. According to the invention, the method involves defining, by the coding/decoding rules, a frame identifier intended for cutting the video signal stream. If the encoded video signal bit stream is cut, the said identifier is inserted into the bit stream or used to replace the initial identifier which serves as a start flag. Bits indicating whether the cut video signal bit stream can be decoded and displayed correctly are not required in an uncut bit stream; if a certain point is cut, the said identifier appears at that point to indicate a problem which would impair correct image decoding due to separation of the reference data used for prediction.

EFFECT: method of frame cutting with the help of frame identifier without leaving flag bits in uncut bit stream.

9 cl, 3 ex

FIELD: systems for recognizing and tracking objects.

SUBSTANCE: the system has matrix sensors, each of which performs the functions of a first-type sensor, capable of detecting the presence of an object in the sensor's working zone and determining its position, and of a second-type sensor, capable of using the object position determined by the first-type sensor for identification or recognition of the object, and of focusing or operating with greater resolution than the first-type sensor.

EFFECT: higher efficiency.

16 cl, 12 dwg
