Method and system for illumination compensation and transition for video coding and processing

FIELD: physics, video.

SUBSTANCE: the invention relates to image-processing means. The method includes creating a plurality of picture frames and associated prediction reference frames; for each frame and associated prediction reference frame, calculating an intensity value and a colour value in a first colour domain; for each frame and associated prediction reference frame, calculating weighted prediction gains; if said gains are non-negative, determining that a global transition with zero offset occurs in a second colour domain; and if not all of said gains are non-negative, determining that a global transition with a gradual change in illumination does not occur.

EFFECT: increased efficiency of an image-display means in video encoding and processing.

28 cl, 16 dwg

 

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to U.S. Provisional Patent Application No. 61/380,111, filed September 3, 2010, which is hereby incorporated by reference in its entirety for all purposes.

TECHNICAL FIELD

[0002] The present invention relates generally to image processing. In particular, one embodiment of the present invention relates to deriving illumination-compensation parameters and detecting the predominant types of illumination transitions in video coding and processing.

BACKGROUND OF THE INVENTION

[0003] Transitions from one picture to the next picture of a video signal are often characterized by some motion in the new picture relative to the previous one, by the appearance or disappearance of an object or part of an object, or by the start of a new scene, all of which may make the previously coded, processed, or original picture less suitable for use as a reference picture for prediction. Most of these events can be modeled using motion compensation (a special case of inter-frame prediction) and intra-frame prediction. Motion compensation is used to predict sample values in the current picture from sample values in one or more previously coded pictures. More specifically, in block-based motion compensation a block of samples in the current picture is predicted from a block of samples in some already decoded reference picture. The latter is known as the prediction block. The prediction block can be as simple as the block located at the same position in a previously coded picture, which corresponds to a zero motion vector. To account for motion, however, a motion vector is transmitted that instructs the decoder to use a differently displaced block that more closely matches the block being predicted. The motion model can be as simple as translational, where the motion parameters comprise horizontal and vertical displacement vectors, or as complex as affine or perspective transformation models that require six or eight motion parameters. More complex motion-compensation schemes can also produce prediction blocks that are combinations of several blocks corresponding to different motion parameters. However, video signals may also contain global or local illumination changes that cannot be modeled effectively using motion compensation (inter-frame prediction) or intra-frame prediction.
Such illumination changes usually appear as gradual illumination changes (fades), cross-fade transitions, flashes, or other local luminance changes, which can be caused, for example, by the presence of multiple light sources. Weighted prediction (WP), i.e., illumination compensation, can benefit prediction efficiency for fades, cross-fades, flashes, and other local illumination variations. Weighted prediction comprises weighting (multiplying) the samples of the color components, e.g., luma and/or chroma samples, by a gain, followed by adding an offset. It should be noted that in this disclosure the terms "color parameters" or "color components" may be used to refer to the individual components that make up a color domain or space. It should also be noted that in some domains or spaces the color components may include intensity-related components and color-related components. The intensity-related components may include one or more of the following quantities: a luma value or a luminance value; the color-related components may include one or more of the following quantities: a chroma value or a chrominance value. State-of-the-art codecs, such as H.264/AVC, support weighted prediction of samples that may lie in one of several possible color spaces/domains. Weighted prediction is also suitable for temporal pre-filtering or post-filtering, which can be used to reduce sensor noise, compression or other artifacts, and, among others, temporal flicker/inconsistencies. In practice, the image-processing operations responsible for changing the illumination conditions do not necessarily take place in the domain used for compression or image processing, such as YCbCr.
Intuition based on experimental data suggests that these operations are usually carried out in another domain, typically the sRGB domain (sRGB is a widely used color space for PCs and digital cameras), which, moreover, corresponds more closely to how color information is perceived by humans. It should be noted that there are many possible formulas for converting between the YCbCr and RGB color spaces. Note also that, before or after the operations that created the illumination change, the RGB values may have been subjected to gamma correction. In addition to illumination changes created by processing (mainly fades and cross-fades), there are also luminance changes that are part of the content, such as, among others, global or local flashes, the changing illumination of flickering lights and light fixtures, and changing natural light.
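The basic weighted-prediction operation described above (gain, then offset, then clipping to the valid sample range) can be sketched as follows; the function name, the clipping convention, and the example values are illustrative assumptions, not part of the disclosure:

```python
def weighted_pred(p, w, f, bit_depth=8):
    """Illustrative WP of a single reference sample p: multiply by the
    gain w, add the offset f, and clip to the valid sample range."""
    max_val = (1 << bit_depth) - 1           # 255 for 8-bit content
    v = round(w * p + f)
    return min(max(v, 0), max_val)

# A fade-compensated prediction: reference sample 100, gain 0.5, offset 10.
pred = weighted_pred(100, 0.5, 10)
```

With gain 1 and offset 0 this degenerates to ordinary (unweighted) prediction.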

[0002] Weighted prediction parameters, e.g., a gain w and an offset f, which are used to compensate the illumination of samples s on the basis of samples p, result from some weighted-prediction parameter search/estimation (WP search). In its most direct, and most expensive, form one can use an exhaustive search scheme that considers all possible combinations of gains and offsets within a limited search window, in a fashion analogous to the full-search method for motion estimation; it computes the difference/similarity of the illumination-compensated reference signal to the original signal and selects the illumination-compensation parameters that minimize the mismatch. The search should also take motion estimation and compensation into account. However, a full search requires a large amount of computation. Many methods have been proposed for estimating weighted parameters that determine the "optimal" gain and offset for a given block, region, or even an entire frame (with global illumination compensation). See, for example, K. Kamikura et al., "Global Brightness-Variation Compensation for Video Coding", IEEE Transactions on Circuits and Systems for Video Technology, vol. 8, no. 8, December 1998, pp. 988-1000, which describes a global illumination-change compensation scheme designed to improve coding efficiency for video scenes with global brightness changes caused by fade-in/fade-out, camera-iris adjustment, flicker, brightness changes, etc. See also Y. Kikuchi, T. Chujoh, "Interpolation coefficient adaptation in multi-frame interpolative prediction", Joint Video Team of ISO/IEC MPEG & ITU-T VCEG, JVT-C103, Mar. 2002, and H. Kato, Y. Nakajima, "Weighting factor determination algorithm for H.264/MPEG-4 AVC weighted prediction," Proc. IEEE 6th Workshop on Multimedia Signal Proc., Siena, Italy, Oct. 2004.
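A minimal sketch of the exhaustive scheme described above; the candidate grids, the one-dimensional signals, and the SAD criterion are illustrative assumptions:

```python
def full_wp_search(cur, ref, gains, offsets):
    """Exhaustive weighted-prediction search: try every (w, f) pair and
    keep the one minimizing the sum of absolute differences (SAD) between
    the current samples and the compensated reference samples."""
    best = (None, None, float("inf"))
    for w in gains:
        for f in offsets:
            sad = sum(abs(c - (w * r + f)) for c, r in zip(cur, ref))
            if sad < best[2]:
                best = (w, f, sad)
    return best

cur = [20, 40, 60, 80]            # current-frame samples
ref = [40, 80, 120, 160]          # reference samples; cur = 0.5 * ref exactly
w, f, sad = full_wp_search(cur, ref,
                           gains=[x / 4 for x in range(1, 9)],   # 0.25 .. 2.0
                           offsets=range(-8, 9))
```

The nested loops make the quadratic cost of the full search explicit, which is exactly why the estimation methods cited above were developed.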

[0003] To achieve the best possible results, the methods discussed in the above references must be applied separately for each color component. The weighted-parameter search may also benefit from motion estimation and compensation. See, for example, J. M. Boyce, "Weighted prediction in the H.264/MPEG-4 AVC video coding standard," Proc. IEEE International Symposium on Circuits and Systems, Vancouver, Canada, May 2004, vol. 3, pp. 789-792. In the H.264/AVC standard, motion-compensated samples undergo weighted prediction, which yields the final predicted samples. Hence, in the WP search there is interdependence between the motion vectors and the weighting parameters. To address this, multiple iterations are performed as follows: using some simple algorithm, initial values of the weighted parameters are obtained and used to scale and offset the reference frame. The scaled and offset reference frame is then used for motion estimation, which yields motion vectors. In an alternative embodiment, the scaling and offsetting of the samples may be incorporated into the motion-estimation stage. In that implementation, scaling and offsetting are applied on the fly, and a separate stage that creates a weighted reference picture is unnecessary. In the second iteration, these motion vectors are used during the WP search so that the WP parameters for each block are derived from its motion-compensated prediction block in the reference. This again should generate a scaled and offset reference frame that undergoes motion estimation, or a single motion-estimation stage that takes the already obtained scale factor and offset into account. These algorithms usually terminate when the WP parameters converge.
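The iteration described above can be illustrated with a toy one-dimensional example; the DC-based gain re-estimate, the brute-force displacement search, and the signals are assumptions made purely for illustration:

```python
def iterative_wp_me(cur, ref, iters=3):
    """Alternate a brute-force 1-D 'motion search' against the weighted
    reference with a DC-based re-estimate of the WP gain, mirroring the
    two-step iteration described in the text."""
    n = len(cur)
    w, f, d = 1.0, 0.0, 0
    for _ in range(iters):
        # Motion estimation against the scaled-and-offset reference.
        best_d, best_sad = d, float("inf")
        for cand in range(len(ref) - n + 1):
            sad = sum(abs(c - (w * r + f))
                      for c, r in zip(cur, ref[cand:cand + n]))
            if sad < best_sad:
                best_d, best_sad = cand, sad
        d = best_d
        # WP search using the motion-compensated reference block.
        block = ref[d:d + n]
        mean_ref = sum(block) / n
        w = (sum(cur) / n) / mean_ref if mean_ref else 1.0
    return w, f, d

# The current block is a half-brightness copy of ref[2:5].
w, f, d = iterative_wp_me([10, 20, 30], [100, 100, 20, 40, 60, 100])
```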

[0004] As indicated above, state-of-the-art methods of illumination compensation may require large computational resources, which leads to reduced performance of an image-display device or to a high cost of the hardware intended to ensure the specified performance. Note that although the terms "frame" and "picture" may be used interchangeably, this interchangeable use should not be interpreted as excluding interlaced content such as field pictures. The ideas in this disclosure are applicable both to progressive-scan frames and to interlaced field pictures (top or bottom).

BRIEF DESCRIPTION OF THE DRAWINGS

[0005] The accompanying drawings, which are incorporated herein and constitute a part of this disclosure, illustrate one or more embodiments of the present disclosure and, together with the detailed description and examples, serve to explain the principles and implementations of this disclosure.

[0006] FIG. 1 shows a flowchart of an algorithm for determining local or global illumination changes.

[0007] FIG. 2 shows a flowchart for estimating WP color parameters for gradual illumination changes.

[0008] FIG. 3 shows a flowchart of an algorithm for obtaining the color parameters in the case where the gain is not equal to one and the offset is non-zero.

[0009] FIG. 4 shows a flowchart of an algorithm for obtaining WP color parameters for flashes.

[0010] FIG. 5 shows a flowchart of an algorithm for obtaining WP parameters for gradual illumination changes in the linear domain.

[0011] FIG. 6 shows a scenario of the end of a gradual illumination change, where the DC value decreases up to frame m+2.

[0012] FIG. 7 shows a scenario of the end of a gradual illumination change, where the DC value increases up to frame m+2.

[0013] FIG. 8 shows a scenario of the beginning of a gradual illumination change, where the DC value increases from frame m+2.

[0014] FIG. 9 shows a scenario of the beginning of a gradual illumination change, where the DC value decreases from frame m+2.

[0015] FIG. 10 shows a flowchart of an algorithm for detecting gradual illumination changes using illumination correlations.

[0016] FIG. 11 shows a flowchart of a low-complexity transition-detector algorithm.

[0017] FIG. 12 shows an example of a global illumination change with a saturated background and a gradual fade-in from black.

[0018] FIG. 13 shows a general example of a global illumination change with a saturated background.

[0019] FIG. 14 shows a flowchart of a WP parameter search algorithm based on labeled sets.

[0020] FIG. 15 shows a flowchart of a WP search algorithm using iterative elimination.

[0021] FIG. 16 shows a block diagram of a hybrid video decoder.

DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS OF THE INVENTION

[0022] As indicated in the examples described below, one embodiment of the invention is a method of detecting a gradual illumination change and determining the global or local nature of the gradual illumination change during a transition from one picture to the next picture of a video signal, where the method comprises the steps of:

creating a plurality of picture frames and associated prediction reference frames;

for each frame and associated prediction reference frame, calculating, in a first color domain, one or more intensity-related values and one or more color-related values;

for each frame and associated prediction reference frame, calculating weighted prediction gains for each color-component value in the first color domain;

if all the weighted prediction gains are non-negative and largely similar to one another, determining that a predominantly global transition with zero offset occurs in a second color domain; and

if not all of the weighted prediction gains are non-negative and largely similar to one another, determining that at least one of the following events occurs: a global fade transition; a global fade transition with zero offset; or a global fade transition with zero offset for the second color domain.

[0023] As further indicated in the examples described below, another embodiment of the invention is a method of detecting a gradual illumination change and determining the global or local nature of the gradual illumination change during a transition from one picture to the next picture of a video signal, where the method comprises the steps of:

creating a plurality of picture frames and associated prediction reference frames;

for each frame and associated prediction reference frame, calculating, in a first color domain, one or more intensity-related values and one or more color-related values;

for each frame and associated prediction reference frame, calculating weighted prediction gains for each color-component value in the first color domain;

for each frame and associated prediction reference frame, estimating intensity-related weighted prediction parameters;

for each frame and associated prediction reference frame, calculating weighted prediction gains based on the calculated (e.g., computed) intensity-related and color-related values and the estimated intensity-related weighted prediction parameters;

if all the weighted prediction gains are non-negative and largely similar to one another, determining that a predominantly global transition with zero offset occurs in the second color domain; and

if not all of the weighted prediction gains are non-negative and largely similar to one another, checking whether a local transition occurs.

[0024] Another embodiment of the invention is a method of determining the global or local nature of a fade during a transition from one picture to the next picture of a video signal, where the method comprises the steps of:

creating a plurality of picture frames and associated prediction reference frames;

for each frame and associated prediction reference frame, calculating (e.g., computing) intensity-related and color-related values in a first color domain;

for each frame and associated prediction reference frame, calculating weighted prediction gains for each intensity-related and color-related value in the first color domain;

for each frame and associated prediction reference frame, comparing the weighted prediction gain values with one another;

if all the weighted prediction gains are non-negative and largely similar to one another, determining that the gradual illumination change is global; and

if not all of the weighted prediction gains are non-negative and largely similar to one another, determining that the gradual illumination change is local.
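The global/local decision above reduces to a simple test on the per-component gains; a sketch (the similarity tolerance is an assumed, illustrative threshold):

```python
def classify_illumination_change(gains, tol=0.1):
    """Deem the gradual illumination change global when every WP gain is
    non-negative and the gains are largely similar to one another;
    otherwise deem it local."""
    similar = max(gains) - min(gains) <= tol
    if all(g >= 0 for g in gains) and similar:
        return "global"
    return "local"
```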

[0025] Another embodiment of the invention is a method of calculating weighted prediction parameters for second and third color components on the basis of a color-space conversion and the weighted prediction parameters of a first color component in the event of a picture transition, where the method comprises the steps of:

calculating a weighted prediction gain and offset for the first color component; and

based on the weighted prediction gain and offset values of the first color component, calculating weighted prediction gains and offsets for the second and third color components as a function of the color-conversion offsets, the color-conversion matrix coefficients, and the weighted prediction gain and offset of the first color component.
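One way to see the dependence is to assume a linear conversion c = M*x + t from the source components x to the coding components c, and a transition that scales every source component as x' = w*x + f. Then c' = w*c + (1 - w)*t + M*[f, f, f], which yields per-component WP parameters. The sketch below (the function name and the BT.601-style example values are assumptions) computes the parameters of all three converted components this way:

```python
def derive_wp_params(w, f, M, t):
    """Per-component WP gains/offsets in the converted domain c = M*x + t,
    assuming the source-domain transition x' = w*x + f is applied
    uniformly to all three source components."""
    gains = [w, w, w]
    offsets = [(1 - w) * t[i] + f * sum(M[i]) for i in range(3)]
    return gains, offsets

# Digital BT.601-style conversion (cf. Equation 5): rows of M and offsets t.
M = [[0.257, 0.504, 0.098],
     [-0.148, -0.291, 0.439],
     [0.439, -0.368, -0.071]]
t = [16, 128, 128]
gains, offsets = derive_wp_params(0.5, 0.0, M, t)   # half-brightness fade
```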

[0026] Another embodiment of the invention is a method of calculating color-related parameters based on intensity-related parameters and color-space conversion information in the event of an image flash, where the method comprises the steps of:

calculating an intensity-related gain and offset;

if the intensity-related gain is not equal to one and the intensity-related offset is non-zero, setting the color-related gains equal to the intensity-related gain, and calculating the color-related offsets as a function of the color-format-conversion offsets, the intensity-related gain, the intensity-related offset, and the color-conversion matrix coefficients;

if the intensity-related gain is not equal to one and the intensity-related offset is zero, setting the color-related gains equal to the intensity-related gain, and calculating the color-related offsets as a function of the color-format-conversion offsets, the intensity-related gain, and the color-conversion matrix coefficients; and

if the intensity-related gain is equal or close to one, setting the color-related gains to 1, and calculating the color-related offsets as a function of the intensity-related offset and the color-conversion matrix coefficients.
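Under the same uniform-transition model of a linear conversion c = M*x + t (first row carrying the intensity), the intensity row can be inverted to recover the source-domain offset, after which the two color rows give the color-related offsets; the three cases in the text then fall out of one formula. The back-mapping through the intensity row sum, the tolerance, and the example values are illustrative assumptions:

```python
def color_params_from_intensity(w_y, f_y, M, t, tol=1e-3):
    """Derive color-related WP gains/offsets from the intensity-related
    gain w_y and offset f_y for a conversion c = M*x + t whose first row
    is the intensity component.  Inverting the intensity row gives the
    source-domain offset; the color rows then give the color offsets."""
    s_y = sum(M[0])                    # intensity row sum
    if abs(w_y - 1.0) <= tol:
        w_y = 1.0                      # case 3: unit color-related gains
    f_src = (f_y - (1 - w_y) * t[0]) / s_y
    gains = [w_y, w_y]
    offsets = [(1 - w_y) * t[i] + f_src * sum(M[i]) for i in (1, 2)]
    return gains, offsets

# Digital BT.601-style conversion (cf. Equation 5).
M = [[0.257, 0.504, 0.098],
     [-0.148, -0.291, 0.439],
     [0.439, -0.368, -0.071]]
t = [16, 128, 128]
```

Note that the color rows of typical RGB-to-YCbCr matrices sum to (nearly) zero, so in case 3 the color-related offsets come out close to zero.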

[0027] Another embodiment of the invention is a method of detecting a gradual illumination change during a transition from one scene to the next scene of a video signal, where the method comprises the following steps:

step A: providing a plurality of frames of a video signal;

step B: selecting a current frame from the plurality of frames;

step C: computing a set of properties for one or more color-space components of the current frame, based on frame values for one or more color components of frames preceding the current frame and frames following the current frame;

step D: computing a set of properties for one or more color-space components of the frame preceding the current frame, based on frame values for one or more color components of frames preceding that previous frame and frames following that previous frame; and

step E: comparing the set of properties for one or more color-space components of the current frame with the set of properties for one or more color-space components of the preceding frame to determine whether the current frame is an end frame of a gradual illumination change with increasing or decreasing frame value, or whether the frame preceding the current frame is a start frame of a gradual illumination change with increasing or decreasing frame value.
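A much-simplified sketch of steps A through E, using only the average intensity (DC) of each frame as the frame "property"; the strictly-monotone-run heuristic and the sample data are assumptions for illustration:

```python
def fade_frame_type(dc, k):
    """Classify frame k of a per-frame DC sequence: 'end' if a strictly
    monotone DC run stops at k, 'start' if such a run begins at k,
    otherwise 'none'."""
    rise_in = k >= 1 and dc[k] > dc[k - 1]
    rise_out = k + 1 < len(dc) and dc[k + 1] > dc[k]
    fall_in = k >= 1 and dc[k] < dc[k - 1]
    fall_out = k + 1 < len(dc) and dc[k + 1] < dc[k]
    if (rise_in and not rise_out) or (fall_in and not fall_out):
        return "end"
    if (rise_out and not rise_in) or (fall_out and not fall_in):
        return "start"
    return "none"

dc = [50, 50, 60, 70, 80, 80]   # a fade-in between frames 1 and 4
```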

[0028] Another embodiment of the invention is a transition-detection method comprising the steps of:

creating a plurality of frames of a video sequence and associated bidirectional prediction reference frames; and

determining whether a transition occurs based on average gains calculated for the color-space components of a first color domain of the current frame and its associated bidirectional prediction reference frames, and on average gains calculated for the color-space components of a second color domain of the current frame and its associated bidirectional prediction reference frames.

[0029] Another embodiment of the invention is a method of determining weighted parameters in the presence of gradual illumination changes, where the method comprises the following steps:

step A: creating a plurality of picture frames and associated prediction reference frames;

step B: selecting a color component;

step C: for each frame and associated prediction reference frame, determining the saturated regions for the selected color component within the frame and the associated prediction reference frame;

step D: for each frame and associated prediction reference frame, determining whether both frames contain large regions with saturated values for the selected color component, and if they do not both contain large regions with saturated values, proceeding to step H;

step E: for each frame and associated prediction reference frame, determining the saturated regions shared by the frame and the associated prediction reference frame;

step F: for each frame and associated prediction reference frame, determining whether the two frames share large regions with saturated values for the selected color component, and if there are no shared large regions with saturated values, proceeding to step H;

step G: for each frame and associated prediction reference frame, calculating weighted prediction gains and offsets based on the shared, and optionally normalized to the same number of pixels, large regions with saturated values; and

step H: for each frame and associated prediction reference frame, calculating weighted prediction gains and offsets based on the entire frame.

[0030] Another embodiment of the invention is a method of determining weighted parameters in the presence of gradual illumination changes, where the method comprises the following steps:

step A: creating a plurality of picture frames and associated prediction reference frames containing color sample data;

step B: selecting a color component;

step C: for each frame, setting a current lowest saturation value and a current highest saturation value for the selected color component, based on the color domain selected for the color sample data;

step D: for each associated prediction reference frame, setting a current lowest reference saturation value and a current highest reference saturation value for the selected color component, based on the color domain selected for the color sample data;

step E: for each frame and associated prediction reference frame, estimating weighted prediction parameters based on the current lowest saturation value, the current highest saturation value, the current lowest reference saturation value, and the current highest reference saturation value;

step F: for each associated prediction reference frame, calculating an updated current lowest reference saturation value and an updated current highest reference saturation value on the basis of the estimated weighted prediction parameters;

step G: setting the current lowest reference saturation value equal to the updated current lowest reference saturation value, and setting the current highest reference saturation value equal to the updated current highest reference saturation value; and

step H: repeating steps D through G in successive iterations while the weighted prediction parameters of the current iteration differ from the weighted prediction parameters of the immediately preceding iteration by more than a selected value, unless the number of iterations exceeds a selected iteration-count value.
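A sketch of the iteration above for one component, using a least-squares (w, f) fit over the reference samples that lie inside the current reference saturation bounds; the bound-update rule, thresholds, and sample data are illustrative assumptions:

```python
def iterative_wp_with_clipping(cur, ref, lo=0, hi=255, max_iter=10, eps=1e-4):
    """Estimate (w, f) by least squares over samples whose reference
    values lie inside the current reference saturation bounds, then map
    the bounds through the estimated (w, f) and repeat until the
    parameters stop changing."""
    r_lo, r_hi = lo, hi
    w_prev, f_prev = None, None
    w, f = 1.0, 0.0
    for _ in range(max_iter):
        pts = [(r, c) for r, c in zip(ref, cur) if r_lo <= r <= r_hi]
        n = len(pts)
        sr = sum(r for r, _ in pts)
        sc = sum(c for _, c in pts)
        srr = sum(r * r for r, _ in pts)
        src = sum(r * c for r, c in pts)
        den = n * srr - sr * sr
        w = (n * src - sr * sc) / den if den else 1.0
        f = (sc - w * sr) / n
        if w_prev is not None and abs(w - w_prev) < eps and abs(f - f_prev) < eps:
            break
        w_prev, f_prev = w, f
        # Reference values that would map outside [lo, hi] in the current
        # frame are treated as potentially clipped and excluded next pass.
        if w > 0:
            r_lo = max(lo, (lo - f) / w)
            r_hi = min(hi, (hi - f) / w)
    return w, f

ref = [0, 50, 100, 150, 200]
cur = [round(0.5 * r + 10) for r in ref]     # an exact w=0.5, f=10 fade
w, f = iterative_wp_with_clipping(cur, ref)
```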

[0031] Another embodiment of the invention is a method of estimating weighted prediction gains and offsets for an editing transition from one picture to the next picture of a video signal, where the method comprises the steps of:

providing a current frame and an associated prediction reference frame of a picture in the video signal;

for each current frame and associated prediction reference frame, calculating the values of the color components in a color domain, where A_m denotes the first color component of the current frame, B_m denotes the second color component of the current frame, and C_m denotes the third color component of the current frame, and where A_{m+1} denotes the first color component of the associated prediction reference frame, B_{m+1} denotes the second color component of the associated prediction reference frame, and C_{m+1} denotes the third color component of the associated prediction reference frame;

setting the weighted prediction gains equal to each other for all color components, where w denotes the weighted prediction gain value shared by all color components;

setting the weighted prediction offsets equal to each other for two of the color components, where f_A denotes the offset for the first color component and f_C denotes the offset value shared by the two remaining color components; and solving the resulting formulas for the weighted prediction gain w and the weighted prediction offsets f_A and f_C.
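With these constraints, the three component equations A_m = w*A_{m+1} + f_A, B_m = w*B_{m+1} + f_C, and C_m = w*C_{m+1} + f_C form a system of three equations in three unknowns with a closed-form solution; a sketch (requires B_{m+1} != C_{m+1}; the function name is assumed):

```python
def solve_shared_gain(A_m, B_m, C_m, A_r, B_r, C_r):
    """Closed-form solution of the system above; the *_r arguments are
    the reference-frame components A_{m+1}, B_{m+1}, C_{m+1}."""
    w = (B_m - C_m) / (B_r - C_r)    # from the two shared-offset equations
    f_C = B_m - w * B_r
    f_A = A_m - w * A_r
    return w, f_A, f_C
```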

[0032] Another variant embodiment of the invention is a method of estimating weighted prediction gains and offsets for an editing transition from one picture to the next picture of a video signal, where the method comprises the steps of:

providing a current frame and an associated prediction reference frame of a picture in the video signal;

for each current frame and associated prediction reference frame, calculating the values of the color components in a color domain, where A_m denotes the first color component of the current frame, B_m denotes the second color component of the current frame, and C_m denotes the third color component of the current frame, and where A_{m+1} denotes the first color component of the associated prediction reference frame, B_{m+1} denotes the second color component of the associated prediction reference frame, and C_{m+1} denotes the third color component of the associated prediction reference frame;

setting the weighted prediction offsets equal to each other for all color components, where f denotes the offset value shared by all color components;

setting the weighted prediction gains equal to each other for two of the color components, where w_A denotes the weighted prediction gain for the first color component and w_C denotes the gain value shared by the two remaining color components; and

solving the resulting formulas for the weighted prediction gains w_A and w_C and the weighted prediction offset f.
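Similarly, the system A_m = w_A*A_{m+1} + f, B_m = w_C*B_{m+1} + f, C_m = w_C*C_{m+1} + f has the closed-form solution sketched below (requires B_{m+1} != C_{m+1} and A_{m+1} != 0; the function name is assumed):

```python
def solve_shared_offset(A_m, B_m, C_m, A_r, B_r, C_r):
    """Closed-form solution for the shared offset f and the gains
    w_A, w_C; the *_r arguments are the reference-frame components."""
    w_C = (B_m - C_m) / (B_r - C_r)
    f = B_m - w_C * B_r
    w_A = (A_m - f) / A_r
    return w_A, w_C, f
```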

[0033] Another embodiment of the invention is a method of converting weighted prediction parameters from a first color domain to a second color domain, where the conversion from the first color domain to the second color domain is not linear, and where the method comprises the step of: calculating the weighted prediction parameters in the second color domain for one or more frames in the second domain on the basis of the expression for the conversion from the first color domain to the second color domain.

OVERVIEW

[0034] This disclosure describes methods and embodiments of the invention that relate to illumination compensation based on color-space considerations. In particular, these methods and embodiments are directed to illumination compensation that may be incorporated into devices and software that use video encoding and decoding for two-dimensional and three-dimensional applications. Such devices may include video disc players, wireless broadcast systems, Internet television (IPTV) systems, or other similar devices.

[0035] The most widely used color space for reproducing video images is the RGB color space. Each component represents the intensity of one primary color, in this case the red, green, and blue primaries. The RGB domain is very effective at conveying color information. In addition, RGB values are subjected to gamma correction before being sent to the display or graphics card. The gamma-correction operation can be summarized as shown below in Equation 1:

R' = a_γ · R^γ + b_γ,  G' = a_γ · G^γ + b_γ,  B' = a_γ · B^γ + b_γ. (1)

Here γ, a_γ, and b_γ are the gamma parameters: the gain a_γ controls the contrast, and the offset b_γ controls the black level and intensity. In the remainder of this disclosure it is assumed that a_γ = 1 and b_γ = 0. Although the gamma exponents of the individual components may vary, in the remainder of this disclosure the same γ is assumed for all components to simplify the notation and analysis. It should be understood, however, that many of the methods described in this disclosure are applicable even when the exponents differ. For the basic case, which has the greatest practical significance, the gains and offsets applied with gamma correction are likewise assumed to agree across all color channels.
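With a_γ = 1 and b_γ = 0, Equation 1 reduces to a pure power law; a sketch for one normalized component (the default exponent 1/2.2, a typical encoding gamma, is an illustrative assumption):

```python
def gamma_correct(v, gamma=1.0 / 2.2, gain=1.0, offset=0.0):
    """Equation 1 for one normalized component v in [0, 1]:
    v' = gain * v**gamma + offset."""
    return gain * v ** gamma + offset
```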

[0036] The RGB color space is effective for presenting and displaying colors, but it is not always efficient for video compression, because significant correlation may exist between the color channels. To improve compression performance, television broadcasting systems have proposed and standardized several transforms that decorrelate the RGB signal into a new color space, where one component concentrates the luminance information and is often denoted Y (Y' if it is gamma-corrected), and the other two components mostly carry color information. These transforms include the analog conversion of the American NTSC standard, Y'I'Q'; the analog conversion of the European EBU standard, Y'U'V'; and various varieties of the digital Y'CbCr conversion. See, for example, A. Ford and A. Roberts, "Colour Space Conversions". The most widely used versions of these transforms, and issues that may arise in their application, are described below. It should be noted that the conversions and equations described below are presented for illustrative purposes and should not be interpreted as limiting the invention to the specific transforms and equations described. Those skilled in the art will understand that other transforms and equations also fall within the scope of the invention.

[0037] The international standard for coding digital video at standard-definition (SD) resolution is ITU-R Recommendation BT.601. The nonlinear coding matrix for converting from analog R'G'B' to the analog Y'CbCr space is shown below in Equation 2:

Y' = 0.299·R' + 0.587·G' + 0.114·B'
Cb = -0.169·R' - 0.331·G' + 0.500·B'
Cr = 0.500·R' - 0.419·G' - 0.081·B'. (2)

[0038] Equation 2 holds for representations of the RGB color channels in the range from 0 to 1, R', G', B' ∈ [0, 1], and leads to "analog" values that take the ranges Y' ∈ [0, 1] and Cb, Cr ∈ [-0.5, 0.5]. In digital computers the RGB color components are usually represented as unsigned N-bit integers, e.g., 8 bits, allowing values from 0 to 255. Let N = 8. The conversion of the analog values into digital values (which is essentially a quantization process) can be performed in accordance with the ITU and SMPTE specifications by the conversion shown below in Equation 3:

Y'_d = round(219·Y') + 16
Cb_d = round(224·Cb) + 128
Cr_d = round(224·Cr) + 128    (3)

[0039] Using equations 2 and 3, the 8-bit representation of the Y'CbCr components is obtained as shown below in equation 4:

Y' = 16 + round(65.481·R' + 128.553·G' + 24.966·B')
Cb = 128 + round(-37.797·R' - 74.203·G' + 112.000·B')
Cr = 128 + round(112.000·R' - 93.786·G' - 18.214·B')    (4)

[0040] In the 8-bit digital representation the RGB components are in the range from 0 to 255. When the transform to the Y'CbCr space is applied, this dynamic range can become limited. Using the transform according to equation 5 below, the effective ranges become [16, 235] for Y' and [16, 240] for Cb and Cr:

Y' = 16 + round(0.257·R + 0.504·G + 0.098·B)
Cb = 128 + round(-0.148·R - 0.291·G + 0.439·B)
Cr = 128 + round(0.439·R - 0.368·G - 0.071·B)    (5)

[0041] The expression given in equation 5 can be further simplified to allow rapid computation with only integer SIMD operations, as shown in equation 6:

Y' = 16 + ((66·R + 129·G + 25·B + 128) >> 8)
Cb = 128 + ((-38·R - 74·G + 112·B + 128) >> 8)
Cr = 128 + ((112·R - 94·G - 18·B + 128) >> 8)    (6)

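As a concrete illustration, the integer-only mapping of equation 6 can be sketched as follows (a minimal example; the "+ 128" terms and the shift by 8 bits implement round-to-nearest division by 256, and the coefficient values shown are the common fixed-point approximation assumed here rather than a quotation from this disclosure):

```python
def rgb_to_ycbcr_bt601(r, g, b):
    """Integer-only 8-bit RGB -> Y'CbCr (BT.601, studio range).

    Approximates equation 5 with fixed-point arithmetic: the float
    coefficients are scaled by 256 and the '+ 128' term rounds the
    final right shift by 8 bits.
    """
    y  =  16 + ((  66 * r + 129 * g +  25 * b + 128) >> 8)
    cb = 128 + (( -38 * r -  74 * g + 112 * b + 128) >> 8)
    cr = 128 + (( 112 * r -  94 * g -  18 * b + 128) >> 8)
    return y, cb, cr

# Full white maps to the top of the studio range, black to the bottom.
print(rgb_to_ycbcr_bt601(255, 255, 255))  # (235, 128, 128)
print(rgb_to_ycbcr_bt601(0, 0, 0))        # (16, 128, 128)
```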
[0042] In the derivation of equations 5 and 6 the luminance component carries an offset of 16. In addition to equation 3, which is recommended by the ITU and SMPTE, the JFIF standard of the JPEG group has standardized an alternative equation for analog-to-digital conversion, intended mainly for image compression, for which it is desirable to preserve most of the dynamic range. The alternative equation is shown below in equation 7:

Y' = round(0.299·R + 0.587·G + 0.114·B)
Cb = 128 + round(-0.169·R - 0.331·G + 0.500·B)
Cr = 128 + round(0.500·R - 0.419·G - 0.081·B)    (7)

[0043] The transform shown in equation 7 retains most of the dynamic range of the original analog signal. In addition, the luma is no longer confined to values greater than or equal to 16 and less than or equal to 235. The chroma values also have a greater dynamic range. Note, however, the absence of an offset for the value Y'. Note also that this analysis and the methods of the present invention are not restricted to color spaces discretized with 8 bits. Content with higher bit depths has been, and continues to become, available for applications such as, among others, digital cinema. In such cases the offsets of, for example, the transforms are scaled to the corresponding bit depth. In addition, it is not necessary that the transformed color space retain the same dynamic range as the original color space. Finally, the dynamic range (or bit depth) may differ between components.
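The full-range JFIF-style mapping of equation 7 can be sketched similarly (coefficients as given in the JFIF specification; a minimal illustration):

```python
def rgb_to_ycbcr_jfif(r, g, b):
    """Full-range 8-bit RGB -> Y'CbCr per the JFIF specification.

    Unlike the ITU/SMPTE mapping, luma spans the full 0..255 range and
    carries no +16 offset; chroma keeps its 128 offset.
    """
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return round(y), round(cb), round(cr)

print(rgb_to_ycbcr_jfif(255, 255, 255))  # (255, 128, 128)
print(rgb_to_ycbcr_jfif(0, 0, 0))        # (0, 128, 128)
```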

[0044] The international standard for the coding of high-definition (HD) digital video is ITU standard BT.709. The nonlinear coding matrix is shown below in equation 8:

Y' = 0.2126·R' + 0.7152·G' + 0.0722·B'
Cb = -0.1146·R' - 0.3854·G' + 0.5000·B'
Cr = 0.5000·R' - 0.4542·G' - 0.0458·B'    (8)

[0045] The formula shown in equation 8 holds for representations of the RGB color channels in the range from 0 to 1 and leads to "analog" values Y', Cb, and Cr, with Y' taking values in [0, 1] and Cb and Cr taking values in [-0.5, 0.5].

[0046] Using equation 5 as a basis, the conversion from RGB to Y'CbCr can be generalized as shown below in equation 9:

Y' = c00·R + c01·G + c02·B + d0
Cb = c10·R + c11·G + c12·B + d1
Cr = c20·R + c21·G + c22·B + d2    (9)

[0047] The above expression is well suited to modeling illumination changes. It should be noted that, unless specified otherwise, the notations Y'CbCr and RGB denote gamma-corrected values. When RGB is used in the same expression or context as R'G'B', the former notation refers to linear values and the latter denotes gamma-corrected values.

[0048] Given the above description of the RGB to Y'CbCr conversion, the illumination changes addressed by some embodiments of the present invention can be categorized as described below.

[0049] A fade-in (gradual appearance of the image) is a global illumination change that begins with a frame devoid of content (usually blank or monochrome) and ends at the start of a new scene. If the initial color is black, for example Y' values less than 17 and Cb, Cr values close to 128 according to standard BT.601 for an 8-bit color space, then a fade-in from black has taken place; if the color of the initial frame is white, for example Y' values greater than 234 and Cb, Cr values close to 128 according to standard BT.601 for an 8-bit color space, then a fade-in from white has taken place. The frames between the starting and ending frames (the extreme frames) can be modeled as linear combinations of the two extreme frames. Nevertheless, a fade-in can also be predicted unidirectionally: depending on whether the initial frame is light or dark, a frame preceding the last frame will be either a lighter or a darker version of the last frame. Let DC denote the average value over a set of samples. Let the first frame be frame n and the last frame be frame n+N. Let Y'_n denote the DC luma value of frame n; Cb_n, Cr_n, R_n, G_n, and B_n are defined in a similar manner. A global illumination change is modeled with a transition gain w_cmp and offset f_cmp for each component cmp of the color space. For a fade-in, the expressions given in equations 10 and 11 can be used:

Y'_m = w_Y·Y'_m+k + f_Y
Cb_m = w_Cb·Cb_m+k + f_Cb
Cr_m = w_Cr·Cr_m+k + f_Cr    (10)

R_m = w_R·R_m+k + f_R
G_m = w_G·G_m+k + f_G
B_m = w_B·B_m+k + f_B    (11)

[0050] The term k is defined as k ≥ 1 or k ≤ -1 (i.e., k is any nonzero integer). The model for the fade-in, based on equation 11, is shown below in equation 12:

R_m = w·R_m+k + f_R
G_m = w·G_m+k + f_G
B_m = w·B_m+k + f_B    (12)
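To make the fade-in model concrete, the short sketch below (with purely synthetic frame DC values) builds a linear fade-in from black and shows that the DC of any frame is a scaled version of the DC of a later frame, i.e., a gain-only instance of the model above:

```python
# Synthetic fade-in from black over N frames: frame j carries (j / N) of
# the final frame's content, so its DC (average) luma scales the same way.
N = 8
final_dc = 200.0
dcs = [final_dc * j / N for j in range(N + 1)]

# Predicting frame m from reference m+k implies a pure gain w = DC_m / DC_m+k.
m, k = 4, 1
w = dcs[m] / dcs[m + k]
print(w)   # 0.8, i.e. m / (m + k) for a fade that starts at frame 0
```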

[0051] Based on equation 12, it can be assumed that a change in the RGB color space can be modeled with the gain w alone; the offset f is often taken to be zero. Note that modeling the fade-in as in equation 10 corresponds to the offset-and-gain weighted prediction implemented, for example, in the H.264 video coding standard. Let R denote a discrete value in the prediction block and f denote the final predicted value. Let also w denote the gain, o the offset, and logWD a term that controls the mathematical precision. Weighted prediction can be implemented as shown below in equation 13:

f = ((R·w + 2^(logWD-1)) >> logWD) + o    (13)
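This weighted prediction operation, a gain w interpreted at fixed-point precision logWD followed by an offset o, can be sketched as below (a minimal illustration of the H.264/AVC-style formula; the final clipping to the valid sample range is omitted, and the sample values are arbitrary):

```python
def weighted_pred(r, w, o, log_wd):
    """Explicit weighted prediction of one reference sample r.

    w is an integer gain interpreted as w / 2**log_wd, o is an integer
    offset; the added 2**(log_wd - 1) term rounds the right shift.
    Clipping of the result to the valid sample range is omitted here.
    """
    if log_wd >= 1:
        return ((r * w + (1 << (log_wd - 1))) >> log_wd) + o
    return r * w + o

# Gain 0.5 (w = 16 at log_wd = 5) and offset 2 applied to sample 100.
print(weighted_pred(100, 16, 2, 5))  # 52
```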

[0052] A fade-out (gradual disappearance of the image) is a global illumination change that begins with a frame representing the end of a scene and ends with a frame devoid of content (usually blank or monochrome). If the final color is black, a fade-out to black has taken place; if the final color is white, a fade-out to white has taken place. The frames between the starting and ending frames (the extreme frames) can be modeled as linear combinations of the two extreme frames. A fade-out can also be predicted unidirectionally: depending on whether the final frame is light or dark, a frame preceding the last frame will be either a darker or a lighter version of the last frame.

[0053] A cross-fade is a global illumination change in which the initial frame belongs to one scene and the final frame belongs to the next scene. The frames between the starting and ending frames (the extreme frames) can be modeled as linear combinations of the two extreme frames. However, unlike fade-ins and fade-outs, cross-fades cannot be predicted effectively from a single direction. The reason is that an intermediate frame in a cross-fade is, by definition, a mixture of two frames that belong to two very different scenes. This mixture is in most cases linear, usually takes place in the RGB color space, and can be modeled as a weighted average of the two discrete values, one from each prediction direction. Equation 14 below shows a model that can be used for cross-fades:

cmp_m = w_p·cmp_m+p + w_q·cmp_m+q + f    (14)

[0054] It is assumed that the gains sum to one: w_p + w_q = 1. In addition, the offset f is often assumed to be zero. The parameter p is defined as less than or equal to -1, for example p ≤ -1, while the parameter q is defined as greater than or equal to 1, for example q ≥ 1.
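The bi-predictive mixture above can be sketched numerically as follows (the DC values and weights are synthetic, chosen only to illustrate the model with w_p + w_q = 1 and f = 0):

```python
# DC luma of references inside a cross-fade: one frame from the outgoing
# scene (offset p <= -1) and one from the incoming scene (offset q >= 1).
dc_past, dc_future = 60.0, 180.0
w_p = 0.25                 # weight of the outgoing scene
w_q = 1.0 - w_p            # weights are constrained to sum to one

# Linear mixture with zero offset models the in-between frame's DC.
dc_current = w_p * dc_past + w_q * dc_future
print(dc_current)          # 150.0
```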

[0055] Flashes and mixed illumination changes are illumination changes that do not fit into either of the previous categories, namely gradual illumination changes and cross-fades, which are often artificial (synthetic) illumination changes imposed through image processing operations. Flashes are illumination changes of mostly natural origin, while mixed illumination changes can be of the artificial type (for example, inserted during post-processing) or of the natural type. Note, however, that when considering computer-animated content the distinction between the artificial and natural types blurs. The duration of these illumination changes varies and can often be as short as a single frame, as, for example, in the case of a flash. In addition, they often affect only part of the picture and cannot easily be modeled by global illumination compensation (global weighted prediction parameters). An additional complicating factor is that their behavior depends on the content and on the light source and rarely lends itself to simple modeling. For example, establishing relationships between the changes of the color space components may require arduous effort. Information about the nature of the light source and the distance of the object from the sensor can help establish these relationships. If, for example, the light source has a specific color, such as red, the components that carry information about red will be affected more than the components carrying other colors. By comparison, for gradual illumination changes and cross-fade transitions certain assumptions can be made that can form the basis of illumination compensation algorithms. Some examples of mixed illumination changes and flashes follow.

[0056] Mixed illumination changes can be synthetic or natural. Synthetic changes include local equivalents of fade-ins, fade-outs, and cross-fades. For example, news programs may quite commonly contain localized illumination changes, such as, among others, those due to the insertion of a logo or an image segment. A scene can be affected by an abundance of natural illumination changes: (a) movement and/or shift (in intensity or even color temperature) of internal light sources (for example, lamps); (b) movement or, again, shift (for example, the sun breaking through the clouds) of external light sources (the sun, light projectors, etc.); (c) the presence of several light sources acting simultaneously on an object; (d) reflection from a light source onto the camera or the object; (e) the shadowing of an object by some other, possibly moving, object that occludes the light source, or by movement of the light source itself; (f) highly dynamic events, such as an explosion, that give rise to multiple light sources, which may also affect particular colors; (g) the presence of transparent and moving substances, such as, among others, water, that affect the direction and intensity of the light coming from a light source, both in time and in space. Depending on the light intensity, the color information may be preserved, although there are cases where this statement does not hold.

[0057] A flash lasts a few frames, sometimes only a single frame, and involves local or global illumination changes. The gains and offsets of the color components, for example in the RGB color space, are not necessarily correlated. If the light source is white, all the gains can be modeled as having approximately the same value. However, it is also possible that the light source has a dominant color. In addition, even if the light source is white, it can affect the color saturation, because dim light tends to wash out all the color information in a scene. A similar argument, however, holds for very bright light, which has a similar effect. Therefore, the assumption that color information is preserved, which is more or less faithful for gradual illumination changes, is not necessarily true in the case of flashes.

[0058] Localized, moving, and directional light sources, such as flashing lights, cause local illumination changes and can often saturate a color or change the color components, since they introduce color information that was not present in the previous frame. Color information is therefore not necessarily preserved.

[0059] This disclosure describes several embodiments according to the present invention that relate to illumination compensation with color space considerations. Note that the presentation of examples using specific color spaces should not be interpreted as limiting the embodiments of the invention to those color spaces. In particular, descriptions of embodiments of the invention using color spaces with luma and chroma components are presented for descriptive purposes, and those skilled in the art will understand that these examples can be extended to other color spaces. The embodiments of the invention and their examples are briefly described below and are presented in more detail under the section headings indicated below.

[0060] One of the embodiments of the invention includes a method of detecting a gradual illumination change and determining the nature of the illumination change: whether it is local or global. This method is also suitable for carrying out the operation according to the embodiment of the invention described in the next paragraph. The method requires a number of global parameters, such as, among others, the DC values of luma and chroma or the shapes of histograms, and exploits the relationships between different color spaces (RGB versus YCbCr) to determine whether a gradual illumination change is present and whether it is local or global.

[0061] Another embodiment of the invention includes a method of deriving the missing chroma parameters for illumination compensation given the luma parameters and information about the color space conversion. If the luma parameters are known on a local basis (blocks or regions), the method allows the chroma parameters to be obtained on a local basis as well, despite the fact that the available chroma information is a priori limited and global in scope. The method yields local and global illumination compensation parameters for both the luma and the chroma components without the need to resort to a full search over all components. Various embodiments of the invention for each type of illumination change are described below. An additional embodiment of the invention may implement this method as part of a video signal decoding system. Other variants of the invention can improve the estimation of the chroma parameters by incorporating segmentation into the derivation of the local parameters. Segmentation can benefit from motion compensation and is useful for tracking objects in a scene that share similar color information. The chroma parameters can thus benefit from initialization values that increase the chances of obtaining valid missing chroma parameters and speed up the process.

[0062] Another embodiment of the invention includes a method of obtaining illumination compensation parameters for cases where the illumination changes occur in a color space representation other than the representation (domain) used for compression, which can include cases where the conversion matrices/equations are not linear. One of the purposes is the determination of the parameters in one domain given available knowledge of the values, or characteristics, of the parameters in another domain. For example, it is possible that gradual illumination changes and cross-fades were generated by processing the discrete values directly in the linear or gamma domain of the RGB 4:4:4 color space, while the prediction used for compression takes place, for example, in the gamma or logarithmic domain of the YCbCr 4:2:0 color space. This can be extended to applications that involve the XYZ color space, wide color gamut (WCG) spaces, or any combination of existing and future color spaces. Similar issues may arise in scalable video coding, where one layer (the base layer, BL) contains content in one color space, for example a low dynamic range (LDR) version of the frame, and the enhancement layer (EL) contains content in another color space, for example a visual dynamic range (VDR) version of the frame. Another application of this embodiment of the invention has to do with pairs of stereoscopic images. In a stereoscopic camera pair, each camera may have different color space characteristics, which may be the result of poor calibration, but may also result from the fact that no two lenses and no two camera sensors have identical transfer characteristics. As a result, the images captured by each of the cameras can exhibit color shifts. Therefore, the illumination compensation parameters obtained for one image may not be directly applicable to the other image.
However, when knowledge of the transform that relates the color space coordinates of one camera to those of the other camera is available (through calibration), this variant of the invention can be used to obtain the parameters for the second image. This variant of the invention can thus be used in a scalable coding system where the BL encodes the first image and the EL encodes the second image of a stereoscopic image pair. A method for deriving the parameters for these and similar cases is further described below.

[0063] Still another embodiment of the invention includes the detection of gradual illumination changes and cross-fades using the correlations of the illumination changes. Low-complexity methods are described below that can be used to distinguish sequences containing gradual illumination changes using the parameters extracted by the search for illumination compensation parameters.

[0064] Another embodiment of the invention includes obtaining the illumination compensation parameters with compensation for clipping and saturation. Methods are described below that take account of the fact that the discrete values of the luma and chroma components are clipped and saturate at predefined values. Operations such as clipping and saturation violate many of the standard assumptions that enable weighted prediction to work the way it is supposed to. The basis of these methods is the categorization of discrete values into those that are far from the boundaries and those that are close to the boundaries. Additional variants of the invention are also identified that use segmentation and motion compensation to improve the tracking and derivation of the above-mentioned regions/categories.

[0065] Other embodiments of the invention include low-complexity methods of estimating the illumination compensation parameters.

[0066] A method of detecting a gradual illumination change and determining the nature of the gradual illumination change (global or local) is now described. Modern codecs, such as H.264/AVC, usually operate in the Y'CbCr domain because of its very good decorrelation properties. Note, however, that in the case of H.264/AVC the input and decoded images can be in any of the following color spaces: Y only (grayscale), Y'CbCr or YCgCo, RGB, and other unspecified monochrome or tri-stimulus color spaces (for example, YZX, also known as XYZ). However, before display the decompressed video content is converted back to the original RGB domain. The RGB domain is closer to the operation of the human visual system, and cross-fades and gradual illumination changes, as described above, are better understood when studied in that domain. The post-production software that is often used to create synthetic gradual illumination changes most likely also operates in RGB. Note that the illustrative embodiments of the invention in this disclosure use the RGB and YCbCr domains. However, the methods disclosed in the present disclosure can be applied to any given combination of color spaces (for example, XYZ in combination with YCgCo) as long as the conversion matrices/algorithms are known. Note that, for simplicity of notation, in the remainder of this disclosure YCbCr is used in place of Y'CbCr.

[0067] During a transition with a gradual illumination change it is assumed that the color information is preserved while the illumination changes. This leads to a common gain w for all components and is a consequence of Grassmann's second law of color mixture. Equation 12 can be further simplified under the assumption that the (small) offset is constant across all components, as shown below in equation 15:

R_m = w·R_m+k + f
G_m = w·G_m+k + f
B_m = w·B_m+k + f    (15)

[0068] Let Y'_m, Cb_m, and Cr_m denote the Y'CbCr components of frame m. Equation 16 below is obtained by combining equation 15 and equation 9:

Y'_m = w·(Y'_m+k - d0) + d0 + f·(c00 + c01 + c02)
Cb_m = w·(Cb_m+k - d1) + d1 + f·(c10 + c11 + c12)
Cr_m = w·(Cr_m+k - d2) + d2 + f·(c20 + c21 + c22)    (16)

[0069] For the widely used RGB to Y'CbCr transform matrices according to ITU standard BT.601, equation 16 can be simplified to equation 17, as shown below:

Y'_m = w·(Y'_m+k - d0) + d0 + f
Cb_m = w·(Cb_m+k - d1) + d1
Cr_m = w·(Cr_m+k - d2) + d2    (17)

[0070] Equation 17 above is quite intuitive, since it shows that a constant offset in all components of the RGB domain translates into a single offset for the Y (luma) component in the Y'CbCr domain. Also, depending on how the analog quantities are quantized to their digital equivalents, two cases can be obtained. One case, which uses the ITU/SMPTE analog-to-digital conversion (equation 3), yields equation 18 below:

Y'_m = w·Y'_m+k + (1 - w)·16 + f
Cb_m = w·Cb_m+k + (1 - w)·128
Cr_m = w·Cr_m+k + (1 - w)·128    (18)

The other case, which uses the analog-to-digital conversion of the JPEG group's JFIF format (equation 7), is shown below in equation 19:

Y'_m = w·Y'_m+k + f
Cb_m = w·Cb_m+k + (1 - w)·128
Cr_m = w·Cr_m+k + (1 - w)·128    (19)
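The gain-only form of these relationships can be checked numerically. The sketch below applies an RGB-domain fade with gain w and zero offset to a sample and verifies that, after an unrounded ITU/SMPTE-style conversion with the equation 5 coefficients, each Y'CbCr component keeps the same gain w around its format offset, as in equation 18 with f = 0 (the sample values are arbitrary):

```python
def to_ycbcr(r, g, b):
    # Equation 5 coefficients without rounding, to keep the check exact.
    y  =  16 + 0.257 * r + 0.504 * g + 0.098 * b
    cb = 128 - 0.148 * r - 0.291 * g + 0.439 * b
    cr = 128 + 0.439 * r - 0.368 * g - 0.071 * b
    return y, cb, cr

w = 0.6                              # RGB-domain fade gain, zero offset
ref = (120.0, 80.0, 200.0)           # reference-frame RGB sample
cur = tuple(w * c for c in ref)      # gain-only fade applied in RGB

y_r, cb_r, cr_r = to_ycbcr(*ref)
y_c, cb_c, cr_c = to_ycbcr(*cur)

# Each Y'CbCr component keeps the gain w around its own offset (16 or 128).
print(abs(y_c - (w * y_r + (1 - w) * 16)) < 1e-9)     # True
print(abs(cb_c - (w * cb_r + (1 - w) * 128)) < 1e-9)  # True
```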

[0071] The above expressions give rise to the algorithms (described below) that translate the gain and offset of the Y component into gains and offsets for the Cb and Cr components. Certain determinations, such as the classification of a change as local or global, can help in extracting more information about the underlying gradual illumination change without resorting to a time-consuming weighted prediction search. Setting f = 0, the operation shown in equation 17 can be decomposed to obtain the expression shown below in equation 20:

w = (Y'_m - d0)/(Y'_m+k - d0) = (Cb_m - d1)/(Cb_m+k - d1) = (Cr_m - d2)/(Cr_m+k - d2)    (20)

Assuming that the gain factors for each component can be unequal, we obtain the following equation 21:

w_Y = (Y'_m - d0)/(Y'_m+k - d0)
w_Cb = (Cb_m - d1)/(Cb_m+k - d1)
w_Cr = (Cr_m - d2)/(Cr_m+k - d2)    (21)

[0072] In some embodiments the term d0 is equal to either 0 or 16, while for most practical purposes the terms d1 and d2 are equal to 128. These terms take different values when considering content with bit depths greater than 8. Note that the embodiments of the present invention are applicable to content of arbitrary bit depth. Note also that, although Y'_m and Y'_m+k are constrained to values no smaller than d0, a similar statement is not true for the chroma components relative to d1 and d2. In fact, since only w_Y is guaranteed to be nonnegative, negative values of w_Cb and w_Cr are possible. Based on this observation, a test can be used to check whether the assumptions made up to this point are satisfied. Until now it has been assumed that (a) f = 0, (b) the gradual illumination change occurs in the RGB domain, and (c) the gradual illumination change is modeled by equation 15.

[0073] The following test is then used to determine whether the above assumptions are satisfied:

(a) For the current frame m and the prediction reference frame m+k, calculate the average values of the luma and chroma components and denote them Y'_m, Y'_m+k, Cb_m, Cb_m+k, Cr_m, and Cr_m+k. In an alternative embodiment of the invention, histogram information can be used instead of the DC values. The largest and second-largest peaks in the histogram, or a combination of them, can be used, as they are less prone to outliers than consideration of the DC values alone.

(b) Calculate the gains w_Y, w_Cb, and w_Cr with equation 21 and compare them. If they are all nonnegative and sufficiently similar, determine that the assumptions made above are satisfied for the evaluated frame pair, frames m and m+k. Similarity of the gains can be established by checking whether the coefficients are within 5 to 10% of one another. This can be done by mapping the coefficients onto a scale between 0 and 1 using a logarithmic function and checking whether the gain differences are less than 0.05 or 0.1.
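The two-step test above can be sketched as follows (a simplified frame-level illustration; the helper function, the sample DC values, and the 0.1 log-difference threshold are illustrative assumptions, with the BT.601-style offsets d0 = 16 and d1 = d2 = 128):

```python
import math

def global_fade_test(dc_cur, dc_ref, d=(16, 128, 128), tol=0.1):
    """Step (a) inputs: DC (average) Y', Cb, Cr of the current frame and
    of its prediction reference. Step (b): form the per-component gains
    of equation 21 and require them to be positive and similar (their
    logarithms within `tol` of one another)."""
    gains = [(c - off) / (r - off) for c, r, off in zip(dc_cur, dc_ref, d)]
    if any(g <= 0 for g in gains):
        return False                    # a negative gain violates the model
    logs = [math.log(g) for g in gains]
    return max(logs) - min(logs) < tol  # similar gains: global RGB fade

# All three components scaled by the same factor 0.8: the test passes.
print(global_fade_test((176.0, 120.0, 140.0), (216.0, 118.0, 143.0)))  # True
```

A chroma gain of opposite sign to the luma gain, for example, makes the test fail and flags the change as local or as not originating in the RGB domain.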

[0074] The above test is run for each frame in the frame sequence, starting with the first. Alternatively, it can be used only for frames that are marked as containing illumination changes by some external means, such as a scene classifier or a preprocessor. If the above test determines that the assumptions are satisfied, which means that (a) a global gradual illumination change transition (b) with zero offset (c) originally occurring in the RGB domain is taking place, then, optionally, the chroma parameters can be obtained using only the global weighted prediction parameters (as described directly below), and a time-consuming local weighted prediction search can be avoided. If these assumptions are not satisfied, one may suspect that the illumination change either is not global, or is not taking place in the RGB domain, or has a nonzero offset. In this case the illumination change can optionally be addressed on a local basis. The algorithm for determining whether an illumination change is local or global is shown in FIG. 1.

[0075] A negative result of the above test may also indicate the absence of any transition with a gradual illumination change. The above algorithm can therefore also serve as a detector of gradual illumination changes or as a component of a gradual illumination change detection scheme. In addition, if there is external knowledge that a frame contains a gradual illumination change, the result can help classify it as a local rather than a global gradual illumination change. Additional embodiments of the invention are possible by modifying the algorithm shown in FIG. 1 in the following ways: (a) the type of gradual illumination change may be determined by existing alternative methods; (b) existing methods can be used to obtain the gains in step (b) of the above test; and (c) other methods (for example, DC methods, histogram methods, iterative methods, motion compensation, etc.) can estimate the chroma parameters without resorting to a simple WP search for chroma. Note that the chroma estimates can provide a seed for an iterative WP search algorithm, supplying initial values and thus reducing the number of iterations.

[0076] FIG. 1 shows that the computation of the local WP parameters for luma can be performed before the calculation and comparison of the gains shown above in equation 21. However, this is only optional. In an alternative embodiment of the invention, the gains shown in equation 21 can be calculated from the global DC values for chroma and luma alone. These gains can then be compared to classify the gradual illumination change as local or global: for example, if the gains are nonnegative and similar, the gradual illumination change is a global gradual illumination change, and the WP parameters (for luma and chroma) can be estimated without a local search. If, however, the gains are negative and/or not similar, a search for local WP parameters for both chroma and luma is optionally required.

Derivation of missing chroma parameters for illumination compensation

[0077] In this embodiment, improved illumination compensation algorithms are used when a priori knowledge of certain parameters exists. It is assumed that the following statistics are known in advance: (a) the gains and offsets of the weighted prediction of the luma component (Y) per block (for example, 8×8 pixels), per region (for example, a series of consecutive macroblocks), or at the picture level; and (b) information regarding the color space conversion. Note that the blocks or regions can be overlapping. The goal is to obtain the missing chroma weighted prediction parameters for both global and local (for example, block-level) illumination compensation. This method reduces computational complexity by searching for the local weighted prediction parameters on a single color space component instead of all three components. Besides speeding up the estimation of the missing chroma parameters, the method can also serve as a component of a weighted prediction scheme in the decoder: in that scenario, the compressed bit stream carries only the above information (the Y parameters and only the global DC parameters for the chroma components), and the decoder uses the proposed method to infer the missing chroma parameters (in a manner similar to implicit weighted prediction in H.264/AVC). In another embodiment, the encoder may also transmit to the decoder information about the ratio of luma to chroma.

[0078] Attention is first turned to the situation involving a gradual illumination change. We start with the case where the luma WP gains and offsets are estimated on a block basis (for example, blocks of 8×8 pixels) or on a region basis, and the compression system lacks the WP parameters for chroma. If the WP parameters for chroma are not available, compression efficiency suffers. A commonly used but suboptimal solution to this problem is the use of default weighted prediction parameters for the chroma components: unity gain and zero offset for the weighted prediction of chroma. Another fairly straightforward solution is to use the luma gain as the chroma gain as well. Both of the above solutions, however, can be inaccurate. The following paragraphs present and describe an algorithm that converts the luma WP parameters into chroma WP parameters for effective weighted prediction of fade-ins and fade-outs.

[0079] The following describes an algorithm for estimating the chroma weighted prediction parameters given the global and local luma (Y) parameters and the color space conversion information. This algorithm can be run if the assumptions are satisfied, as described above in the section on determining the nature of a gradual illumination change. This, however, is optional and need not be binding for this method. Suppose that for a pixel block, region, or frame m the luma gain w_Y and offset f_Y are provided, given prediction from frame m+k. The aim is to estimate the chroma parameters w_Cb, f_Cb, w_Cr, and f_Cr of equation 10. There are three main cases:

(a) the gain w_Y is not equal to one, and the offset f_Y is nonzero;

(b) the gain w_Y is not equal to one, and the offset f_Y is equal to zero;

(c) the gain w_Y is equal to one, and the offset f_Y is nonzero.

[0080] FIG. 2 is a flow chart for estimating the WP parameters in the three cases described directly above. Details pertaining to the estimation algorithm for each of these three cases follow.

[0081] In the case where the gain w_Y is not equal to one and the offset f_Y is nonzero (case (a)), using equation 17 ensures that the chroma gains are equal to the luma gain, as shown below in equation 22:

w_Cb = w_Cr = w_Y    (22)

[0082] Equation 17 requires that the gain in the RGB domain be equal to the luma gain w_Y, and that the offset in the Y'CbCr domain be equal to the offset in the RGB domain plus a modifier based on the gain and the offset d0 of the luma component: f_Y = f + (1 - w_Y)·d0. This allows the offset f in the RGB domain to be computed as f = f_Y - (1 - w_Y)·d0. For the offsets of the chroma components, equation 16 yields equation 23 below:

f_Cb = (1 - w_Y)·d1 + f·(c10 + c11 + c12)
f_Cr = (1 - w_Y)·d2 + f·(c20 + c21 + c22),  where f = f_Y - (1 - w_Y)·d0    (23)

[0083] Note that the chroma offsets are computed as functions of the color format conversion offsets d0, d1, and d2, the luma gain w_Y, the luma offset f_Y, and the coefficients of the RGB to Y'CbCr transform matrix.

[0084] Now consider a more significant and, in practice, more complex variation of this case. It can be assumed that the transition with the gradual illumination change is performed in the RGB domain using gain factors only, which tends to be fairly common practice. In this variation the offset in the RGB domain is equal to zero, and it is further assumed that the offset of the color format conversion is nonzero: d0 ≠ 0. This means that, for the algorithm to be declared applicable, the following expression is evaluated: f_Y ≈ (1 - w_Y)·d0. If the two values are sufficiently close, the model is satisfied, and the chroma offsets can be computed as shown below in equation 24:

f_Cb = (1 - w_Y)·d1
f_Cr = (1 - w_Y)·d2    (24)

[0085] If, on the other hand, the previous expression does not hold, that is, f_Y differs from (1 - w_Y)·d0 beyond the tolerance, then something is wrong either with the assumptions or with the mechanism that supplied w_Y and f_Y. In this case one of the following options can be considered:

(a) Decide in favor of w_Y being the more credible of the two parameters. Replace f_Y with a value computed as f_Y = (1 - w_Y)·d0. Check the new offset for satisfaction of some reasonable constraints, such as sufficient proximity to the originally supplied f_Y, for example, for 8-bit precision, within some predefined interval. Note, however, that these limits may also be content-dependent and can be adaptive. If the constraints are satisfied, declare the parameters valid.

(b) Decide that the offset is the more credible of the two parameters. Replace the gain with a value recalculated from the offset. Check the new gain against reasonable constraints, such as lying within a given range. Note that, although the use of negative gains is permissible, the gains for gradual illumination changes are likely to be non-negative. If the constraints are satisfied, declare the parameters valid.

(c) Decide that the gain is the more credible of the two parameters and set the offset to zero.

(d) Decide that the offset is the more credible of the two parameters and set the gain to one.

[0086] For each of the above four possibilities, the method described above, consisting of equation 22, equation 23 and equation 24, is applied to obtain the chroma gains and offsets. Depending on complexity considerations, one may test a single one of the four possibilities, for example, the one considered most likely on the basis of prior analysis or other external knowledge; more than one but not all four, for reasons similar to those for the single case; or all of them, in parallel or sequentially, with the parameters of one of these options selected according to some criterion. Such criteria may include minimizing or maximizing some metric (e.g., a difference metric) or the satisfaction of some correctness checks (for example, whether the obtained parameters lie within reasonable limits).

[0087] The algorithm for obtaining the chroma parameters is shown in FIG. 3 for the case where the gain is not equal to one and the offset is non-zero (case (a)). Note that the algorithms of FIG. 3 can be applied locally (e.g., on a block basis) or globally (per frame). If the luma parameter information is available on a local basis then, to provide the best possible performance, it is preferable to estimate the chroma component parameters on a local basis as well. Note that not all codecs and pre-processing/post-processing schemes that use motion compensation, including H.264/AVC, support signaling of WP parameters on a local basis. However, in the case of H.264, local parameters can be used through certain features of the codec: for example, multiple references for motion-compensated prediction and reference picture list modification. These features allow a frame to use up to 16 different sets of WP parameters per prediction list for motion-compensated prediction. Thus, in one additional embodiment of the invention, the local WP parameters of luma and chroma are processed so that up to the 16 most significant sets per prediction list are selected and transmitted in the bit stream, using reference picture list modification signaling, for use in weighted motion-compensated prediction. Note that P-coded frames use single-list prediction, while B-coded frames support both single-list motion-compensated prediction and bi-prediction with motion compensation from two lists.

[0088] For the case where the gain is not equal to one and the offset is zero (case (b)), the chroma gains are identical to the gains calculated for the case where the gain is not equal to one and the offset is non-zero. The offsets for the chroma components of Y'CbCr take the form shown below in equation 25 for a zero luma offset:

. (25)

Again, the chroma offsets are computed as functions of the color-format conversion offsets, the luma gain, and the coefficients of the RGB to Y'CbCr transformation matrix.

[0089] For the case where the gain is equal to one and the offset is non-zero (case (c)), all chroma gains are equal to 1.0. The offsets for the chroma components of Y'CbCr take the form shown below in equation 26 for a given luma offset:

. (26)

The chroma offsets are computed as functions of the luma offset and the coefficients of the RGB to Y'CbCr transformation matrix.

[0090] Note that in the RGB to Y'CbCr transformations of greatest practical value, the sums of the rows of the transformation matrix that form the chroma components are equal to zero, as shown below in equation 27:

. (27)

[0091] Consequently, when the gain is equal to one and the offset is non-zero (case (c)), equation 27 indicates that both chroma offsets will be zero. Given the ITU/SMPTE analog-to-digital conversion (equation 3) or the JFIF conversion (equation 7), the following simplifications are obtained for the previous three cases:

Case (a): the chroma offsets are written as:.

Case (b): the chroma offsets are written as:.

Case (c): the chroma offsets are written as:.
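The three simplifications can be sketched numerically. As an assumption (not the patent's literal equations), an 8-bit conversion is taken in which the chroma components carry a format offset of 128 and the chroma rows of the RGB to Y'CbCr matrix sum to zero; a fade modeled in RGB by a gain w then yields a chroma gain of w with offset (1−w)·128 in cases (a) and (b), and zero chroma offsets in case (c):

```python
def chroma_wp_params(case, w_y, d_c=128):
    """Sketch of the chroma weighted-prediction parameters for the three
    cases, assuming an 8-bit conversion whose chroma rows sum to zero and
    whose chroma format offset is d_c (128 for ITU/SMPTE and JFIF).
    w_y is the luma gain.  Illustrative reconstruction, not the patent's
    literal formulas."""
    if case in ("a", "b"):            # gain != 1, offset non-zero or zero
        gain_c = w_y                  # chroma gain equals the luma gain
        offset_c = (1.0 - w_y) * d_c  # re-centers the 128 chroma bias
    elif case == "c":                 # gain == 1, offset-only change
        gain_c = 1.0
        offset_c = 0.0                # zero-sum chroma rows cancel the offset
    else:
        raise ValueError("case must be 'a', 'b' or 'c'")
    return gain_c, offset_c
```

For example, a half-brightness fade (w = 0.5) gives chroma offset 64, so a neutral chroma sample stays neutral: 0.5·128 + 64 = 128.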

[0092] Attention now turns to situations that involve transitions. Transitions are modeled using equation 14, further simplified under the assumption that all offsets are zero and with the reference indices renamed, as shown below in equation 28:

. (28)

[0093] It is assumed that w1+w2=1. Combining equation 28 with equation 9 yields the expressions shown below in equation 29 and equation 30:

, (29)

, (30)

which combine to yield equation 31:

. (31)
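The chain from equation 28 to equation 31 can be reconstructed as follows, assuming a generic affine conversion y = Mx + d between the RGB vector x and the Y'CbCr vector y, consistent with the form of equation 9 (the notation here is illustrative):

```latex
% Transition model (eq. 28): x_m = w_1 x_{r_1} + w_2 x_{r_2}, with w_1 + w_2 = 1.
\begin{aligned}
\mathbf{y}_m &= \mathbf{M}\mathbf{x}_m + \mathbf{d}
  = \mathbf{M}\bigl(w_1\mathbf{x}_{r_1} + w_2\mathbf{x}_{r_2}\bigr) + (w_1 + w_2)\,\mathbf{d} \\
 &= w_1\bigl(\mathbf{M}\mathbf{x}_{r_1} + \mathbf{d}\bigr)
  + w_2\bigl(\mathbf{M}\mathbf{x}_{r_2} + \mathbf{d}\bigr)
  = w_1\,\mathbf{y}_{r_1} + w_2\,\mathbf{y}_{r_2}.
\end{aligned}
```

The conversion offset d can be split across the two terms only because w1+w2=1, which is why the same gains carry over to the second color space even though the conversion offsets are non-zero.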

[0094] Note that, although the offsets can be non-zero, the condition w1+w2=1 is sufficient to ensure that a weighted linear combination in the RGB domain is also a weighted linear combination in the Y'CbCr domain. Therefore, for bi-directional prediction of transitions, the gains are identical in both color spaces. Thus, if a frame m belongs to an editing transition (cross-fade), the chroma (Cb and Cr) gains for each reference frame are equal to the luma gain. The model in equation 31 relates to the weighted bi-prediction implementation of the H.264 coding standard, which is briefly described below. Let the discrete values in the two prediction blocks, the corresponding gains and offsets, and a term logWD that controls mathematical precision be given, with f denoting the final predicted value. Then weighted prediction can be implemented as shown below in equation 32:

. (32)
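The weighted bi-prediction of equation 32 can be sketched in integer arithmetic as follows. The symbol names f0, f1 (prediction samples), w0, w1 (gains), o0, o1 (offsets) follow common H.264 notation and are illustrative assumptions, as is the 8-bit clipping range:

```python
def weighted_bipred(f0, f1, w0, w1, o0, o1, log_wd, bit_depth=8):
    """Sketch of H.264-style explicit weighted bi-prediction of a single
    sample.  f0, f1: samples from the two prediction blocks; w0, w1:
    integer gains scaled by 2**log_wd; o0, o1: offsets; log_wd controls
    the arithmetic precision."""
    val = ((f0 * w0 + f1 * w1 + (1 << log_wd)) >> (log_wd + 1)) \
          + ((o0 + o1 + 1) >> 1)          # rounded average of the offsets
    return max(0, min((1 << bit_depth) - 1, val))  # clip to sample range
```

With w0 = w1 = 1 << log_wd and zero offsets this reduces to the ordinary rounded average of the two prediction samples.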

[0095] Flashes and local illumination changes (as distinct from gradual illumination changes and transitions):

Modeling flashes and local illumination changes is inherently more complex than modeling gradual illumination changes and transitions. Even if global averages of the Y'CbCr components are available, they will not be very useful for addressing local illumination changes. If a gain other than unity is available for the luma component, methods identical to those discussed above for gradual illumination changes can be used to determine the chroma gains. Otherwise, if the gain is very close to one and the offsets are non-zero, the missing chroma offsets can be obtained by making certain assumptions about those offsets.

[0096] Next, the case where the discrete values in the RGB domain are offset is discussed. Offsets in the RGB domain are modeled by the expression shown below in equation 33:

. (33)

Combining equation 33 with equation 9 yields the expression shown in equation 34:

. (34)

[0097] Since the luma offset is known, this represents a single equation with three unknowns (the RGB-domain offsets), from which the missing chroma offsets for the block must be derived. The task becomes tractable if some simplifying assumptions are made. Four possible solutions are given below:

(a) Suppose that the three RGB offsets are equal. The common RGB offset then equals the luma offset and, for the conversion according to the BT.601 standard in equation 2, the chroma offsets follow (for a matrix whose chroma rows sum to zero, they are zero).

(b) Suppose that only the B component is offset. This implies that the B offset can be obtained from the luma offset, and the chroma offsets are then calculated from it.

(c) Suppose that only the G component is offset. This implies that the G offset can be obtained from the luma offset, and the chroma offsets are then calculated from it.

(d) Suppose that only the R component is offset. This implies that the R offset can be obtained from the luma offset, and the chroma offsets are then calculated from it.

[0098] The above four solutions assume, respectively, that (a) the light source was white, (b) the light source was mostly blue, (c) the light source was mostly green, and (d) the light source was mostly red. During encoding, any one of these solutions can be used when coding the flash, or all of them can be evaluated and the best one selected by minimizing some cost function, for example, a Lagrangian cost function. This method is illustrated in FIG. 4. In one additional embodiment of the invention, the equation with three unknowns can be used for a faster search: the search is conducted over two of the unknowns, and the third unknown is obtained from the equation. This is possible for any pairwise combination of the three unknowns. Another option is also described: the parameters for the first component are obtained by search, and the second component is obtained both by search and by testing one of the above four solutions. If the value for the second component is close to one of the above solutions, the third component is estimated in the manner discussed above; otherwise, a search is conducted for the third component as well. Further acceleration is possible by considering previous decisions, for example, for an already transmitted frame. If frames share common characteristics such as, among others, variance, luma, chroma and/or texture information, this information can be used to speed up the process.
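The four light-source hypotheses can be sketched numerically. The BT.601-style analog matrix below is an assumed stand-in for the patent's equation 2, and the function name is illustrative:

```python
# Assumed BT.601 RGB -> Y'CbCr matrix rows (analog form); a stand-in,
# not the patent's literal equation 2.
M = [(0.299, 0.587, 0.114),    # Y'
     (-0.169, -0.331, 0.500),  # Cb
     (0.500, -0.419, -0.081)]  # Cr

def flash_chroma_offsets(o_y, hypothesis):
    """Given a known luma offset o_y, resolve the under-determined RGB
    offsets (d_R, d_G, d_B) under one of four light-source hypotheses,
    then project them onto the chroma rows (illustrative sketch)."""
    ky, kcb, kcr = M
    if hypothesis == "white":      # equal offsets on all three components
        d = (o_y / sum(ky),) * 3   # luma row sums to 1, so d == o_y
    elif hypothesis == "blue":     # only the B component is offset
        d = (0.0, 0.0, o_y / ky[2])
    elif hypothesis == "green":    # only the G component is offset
        d = (0.0, o_y / ky[1], 0.0)
    elif hypothesis == "red":      # only the R component is offset
        d = (o_y / ky[0], 0.0, 0.0)
    else:
        raise ValueError(hypothesis)
    o_cb = sum(c * x for c, x in zip(kcb, d))
    o_cr = sum(c * x for c, x in zip(kcr, d))
    return o_cb, o_cr
```

Under the white-light hypothesis the chroma offsets vanish (the chroma rows sum to zero), while, for example, a blue flash pushes Cb up and Cr down.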

[0099] Decoder embodiments. The above-described method of estimating the missing chroma parameters from known local or global luma parameters and knowledge of the color space conversion scheme can be used as part of the decoder shown in FIG. 16. In such an embodiment, the compressed bit stream carries the readily available information, and the decoder applies the methods for obtaining the local or global chroma parameters before performing weighted prediction as part of the mismatch compensation module. In one additional embodiment of the invention, instead of fully deriving the missing chroma parameters, they can be used as predictors, with a prediction residual for the WP parameters transmitted in the coded bit stream. The decoder receives the transmitted residual and adds it to the prediction obtained with these methods, thereby recovering the missing chroma parameters and forming the weighted-prediction sample values. Another embodiment of the invention transmits in the bit stream parameters/information on the relationship between luma and chroma, or on the color space in general. This information may consist of a matrix, such as the transformation matrix used to derive the color space components; an equation or transition matrix (e.g., from luma to chroma, or from U to V); gains and offsets of the various components; or any of the parameters of the equations. It can also include characteristic information, such as information on whether any of the assumptions underlying the method described above are satisfied. This information can be used at the decoder, together with the present method and other bit stream information, to obtain the missing parameters of the color space components.

[0100] An additional embodiment of the invention, similar to the one described above, applies to cases where the compressed bit stream does not carry WP parameters and they are derived at the decoder side using the available causal information. See, for example, P. Yin, A. M. Tourapis, J. M. Boyce, "Localized weighted prediction for video coding," Proc. IEEE International Symposium on Circuits and Systems, May 2005, vol. 5, pp. 4365-4368. Instead of searching at the decoder for the parameters of all color components, a method according to one embodiment of the present invention can be used to limit the search to, for example, only the luma component and to assist in deriving, for example, the missing chroma parameters. This embodiment of the invention is valid for both local and global weighted prediction.

[0101] Additional embodiments of the invention. The methods described above can be applied on a local basis to obtain local chroma parameters. This is possible even in codecs such as H.264/AVC, as described earlier, by using reference picture list modification and multiple references for motion-compensated prediction. Improved performance is possible by using a different method for each image region and type of illumination change. For example, one region may be classified as a fade-in and the rest as a flash. The regions may overlap. Each region is then handled by the appropriate algorithm (the method for gradual illumination changes for the first, and the method for local illumination/flash for the others). Segmentation of the image into regions can be facilitated by using any existing algorithm, or by using the chroma or luma information derived with these techniques. For example, DC values or histogram information can be computed on a region/local basis. Segmentation can benefit from motion compensation, and it is useful for tracking objects in the scene that share similar color information. Thus, estimation of the chroma parameters can benefit from initialization values that increase the chance of obtaining the correct missing chroma parameters and speed up their derivation.

[0102] Another embodiment of the invention, obtained by simple rearrangement of certain terms, makes it possible to derive missing luma parameters from known chroma parameters. Because the transformation matrices are quite generic, it is straightforward to modify the equations so as to reformulate the task as finding the missing parameters of any one component given the parameters of the other components and knowledge of the color space conversion matrices/equations. This variant of the invention may also be combined with the decoder embodiments described above.

[0103] The methods according to embodiments of the present invention may also be combined with iterative estimation of the WP parameters, followed by motion estimation and compensation. As described in J. M. Boyce, "Weighted prediction in the H.264/MPEG-4 AVC video coding standard", the initial WP parameters are estimated first, then motion estimation is performed, and then the WP parameters are re-estimated using the motion information obtained in the previous step, followed by another motion estimation and further iterations until a certain criterion is met (for example, among others, a maximum number of iterations). In one additional embodiment of the invention, a method according to one embodiment of the present invention can be used to seed/initialize the first iteration of the specified iterative scheme. To further speed up the computation, it can optionally be used in intermediate iterations as well. Finally, information from previous iterations can be used to decide in advance which of the available cases, for example, case (a) or (b), is correct for the given content, which leads to additional acceleration.

[0104] In one embodiment of the invention, weighted prediction on a local basis using the above-described method is possible if the codec supports signaling of WP gains and offsets at the region, macroblock or block level. Regions or blocks may overlap. For embodiments of the invention that use codecs such as H.264, which supports signaling of WP parameters only at the slice level, varying the WP parameters at a local level is possible by jointly using multiple references for motion-compensated prediction and reference frame reordering/modification. In other embodiments, the above methods can be incorporated into a pre-processing/post-processing module, which can also include motion estimation and compensation (a temporal motion-compensated pre- or post-filter) in combination with weighted prediction.

Obtaining illumination compensation parameters when the illumination changes take place in a color space domain other than the one used during compression and the transformation matrices/equations are not linear.

[0105] In the above descriptions it is assumed that the conversion from the initial color space to the output color space is linear. However, there are cases where this assumption does not hold. For example, consider the case where gradual illumination changes and transitions are created artificially by processing the discrete values directly in the linear RGB color space before applying gamma correction. The same is true when the linear RGB space (or even the gamma-corrected RGB space) is converted to a logarithmic RGB or YCbCr color space. This happens, for example, when coding video content with an extended dynamic range that is represented in a logarithmic space (RGB or YCbCr). The mapping of the WP parameters from the initial color space to the destination color space is modeled as described below. Two embodiments of the invention are described: one for gamma-corrected RGB (R'G'B') and the other for logarithmic RGB space (R"G"B").

[0106] The general equation for gamma correction according to equation 1 can be simplified to equation 35, as shown below:

. (35)

For the logarithmic RGB space, the following equation 36 is obtained:

. (36)

[0107] The following describes the effect of these transformations on the prediction of gradual illumination changes and transitions.

[0108] Gradual illumination change. For gamma-corrected RGB, in the simplest case, a gradual illumination change is modeled by a gain only, as shown below in equation 37:

. (37)

[0109] To calculate the gamma-corrected components from their equivalent linear values, equation 38 can be used, as shown below:

. (38)

Combining equation 35 and equation 38, one obtains the expression shown in equation 39:

. (39)

[0110] The above equation shows that a gradual illumination change in the linear RGB domain, modeled by a gain w, can be modeled in the gamma-corrected R'G'B' domain using the gain w^(1/γ). It can be concluded that gradual illumination changes simulated with a gain in the linear RGB domain can be compensated in the gamma-corrected R'G'B' domain. Note that the use of a single exponent term here does not imply the assumption that the exponent is identical for each component. As described above, since the operations can be performed per component, the described method is applicable to cases where the exponent differs for each component. For example, for the R component the corresponding gain is obtained with the R-component exponent.
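The gain mapping can be reconstructed as follows, assuming a pure power-law gamma x' = x^(1/γ) consistent with the form of equation 35 and a linear-domain fade with gain w:

```latex
% Linear-domain fade R_m = w R_{ref}; assumed gamma form x' = x^{1/\gamma}.
R'_m \;=\; \bigl(w\,R_{\mathrm{ref}}\bigr)^{1/\gamma}
      \;=\; w^{1/\gamma}\,R_{\mathrm{ref}}^{1/\gamma}
      \;=\; w^{1/\gamma}\,R'_{\mathrm{ref}}
```

so a gain w in linear RGB becomes a gain $w^{1/\gamma}$ in the gamma-corrected domain, applied per component when the exponents differ.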

[0111] For the logarithmic RGB space, similarly to the above, the logarithmic components are calculated from their equivalent linear values, as shown below in equation 40:

. (40)

Combining equation 36 and equation 40 yields the expression shown in equation 41:

. (41)

[0112] The above equation 41 shows that a gradual illumination change in the linear RGB domain, modeled by a gain w, can be modeled in the logarithmic space R"G"B" with an offset equal to log w.
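Similarly, assuming a logarithmic mapping x'' = log x consistent with the form of equation 36, the linear-domain gain turns into an additive term:

```latex
% Linear-domain fade R_m = w R_{ref}; assumed logarithmic form x'' = \log x.
R''_m \;=\; \log\bigl(w\,R_{\mathrm{ref}}\bigr)
       \;=\; \log w + \log R_{\mathrm{ref}}
       \;=\; R''_{\mathrm{ref}} + \log w
```

i.e., a unity gain with offset $\log w$, which is why log-domain fades are naturally compensated with offsets rather than gains.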

[0113] Transitions. Now consider an embodiment for gamma-corrected RGB using the simple but practical model of equation 28. Applying equation 37 to this model yields the expression shown in equation 42:

. (42)

[0114] On examining equation 42 above, it becomes evident that the difficulty lies in relating the gamma-corrected components of the two reference frames to the component of the current frame m. Some insight can be gained by analyzing one of the sums that is raised to the power 1/γ. First, it can be simplified as shown below in equation 43:

. (43)

[0115] The right-hand term can be expanded using the binomial series, which is defined as shown in equation 44:

. (44)

Since α is a real number, the binomial coefficient can be calculated as shown in equation 45:

. (45)

[0116] Thus, the expression shown in equation 46 is obtained:

. (46)

[0117] However, the series expansion shown in equation 45 may not be particularly useful in itself. One conclusion is that transitions generated by processing in the linear RGB domain as a weighted average of two reference frames can be compensated in the gamma-corrected R'G'B' domain using a weighted average of the two (gamma-corrected) reference frames plus a non-zero offset. Alternatively, the transitions can be modeled using gains only, but in that case the gains will most often be unequal and their sum will not equal one. If the original gains and the values of R1 and R2 are available, the resulting offset can be calculated with some accuracy. For content with regular behavior and reference frames equidistant from frame m, a simplifying assumption can be made that simplifies the above calculations for the gains and offsets in the RGB domain.

[0118] In addition, note that the above solution is based on expanding the sum by keeping one term as a multiplier outside a summation of infinitesimal quantities in brackets. An alternative expansion is possible in which the other term is the multiplier outside an alternative summation of infinitesimal quantities. Thus, two separate and equally valid solutions (hypotheses) can be obtained for the task of identifying the gains and offsets in the gamma-corrected domain for transitions generated in the linear domain. Averaging the gains and offsets of the two solutions can improve the illumination compensation. This method is illustrated in FIG. 5. Note that the use of a single exponent term does not imply the assumption that the exponent is identical for all components. As described above, since the operations can be performed per component, the described method is also applicable to cases where the exponent differs for each component.

[0119] A solution equivalent to the above can be applied to transitions in the logarithmic space. In this embodiment, the main difference is that instead of modeling the transition primarily through gains, the transition is modeled primarily using offsets, which, as observed, is the correct approach for gradual illumination changes in this space. A straightforward solution consists in setting the offsets of the weighted bi-directional prediction equal to the logarithms of the gains in the source domain (for example, the linear RGB domain).

[0120] Additional embodiments of the invention. Note that similar embodiments of the invention are also possible for conversion from and to other color spaces, e.g., from gamma-corrected RGB to logarithmic RGB, or to a second color space obtained from the first color space by a nonlinear transformation.

[0121] In one embodiment of the invention, weighted prediction on a local basis using the above-described method is possible if the codec supports signaling of WP gains and offsets on a macroblock, block or region basis. Regions or blocks may overlap. For embodiments of the invention that use codecs such as H.264, which supports signaling of WP parameters only at the slice level, the use of different WP parameters at a local level is possible by jointly using multiple references for motion-compensated prediction and reference frame reordering/modification. In other embodiments, the above methods can be incorporated into a pre-processing or post-processing module, which can also include motion estimation and compensation (a temporal motion-compensated pre- or post-filter) in combination with weighted prediction.

[0122] Another embodiment of the invention finds application in scalable video coding, where content is compressed in two layers, a base layer (BL) and an enhancement layer (EL), with the EL often predicted from the BL. In these cases it is possible that the BL is compressed using a first color representation while the EL is compressed using a second color representation which, for example, may be non-linear (a logarithmic space), unlike the first, which may be linear or gamma-corrected. The methods described in this disclosure can be used to convert the WP parameters obtained for one layer to the other layer. This conversion can be implemented both at the encoder and at the decoder, thereby eliminating the need to transmit parameters for two different color representations. Optionally, the residual between the actual EL parameters and the WP parameters predicted (for example, for the EL) from one of the layers (e.g., the BL), after illumination and motion compensation, can be transmitted to the decoder in the EL to assist in reconstructing the actual parameters by adding the residual to the BL-based prediction.

Detection of gradual illumination changes and transitions using correlations between illumination changes.

[0123] This section describes an algorithm useful for detecting both gradual illumination changes and transitions. It performs this detection by detecting the start and end frames of gradual illumination changes and transitions. The algorithm is based on the assumption that gradual illumination changes and transitions are nearly linear and can therefore be modeled by equations such as equation 31 and equation 37. This assumption leads to gains that are proportional to the distance between the current frame and the reference frame (or frames). One possible implementation of the algorithm is presented for the Y' component; note, however, that any other component in the RGB domain, or in any other linear color space domain, can be considered, and gradual illumination changes and editing transitions (cross-fades) can still be differentiated. Under certain conditions this algorithm can be extended to the Cb and Cr components, although their DC values can be unreliable for the purpose of detecting gradual illumination changes and transitions. In another embodiment, the reliability of this detection method can be increased by jointly using one or more color space components. For the purposes of this description, let DC_m denote the average value (DC) of the Y' component of frame m.

[0124] First, listed below are some properties that follow from the assumption that gradual illumination changes and transitions are linear.

A. The DC value of frame m is approximately equal to the average of the DC values of the previous and next frames: DC_m ≈ (DC_{m-1} + DC_{m+1})/2 (A). This follows from equation 31 and equation 37.

B. The DC value of frame m is approximately equal to twice the DC value of the next frame minus the DC value of the frame after it: DC_m ≈ 2·DC_{m+1} − DC_{m+2} (B). This follows from property A above.

C. The DC value of frame m is approximately equal to twice the DC value of the previous frame plus the DC value of the frame after the next frame, divided by three: DC_m ≈ (2·DC_{m-1} + DC_{m+2})/3 (C). This follows from properties A and B. Properties D and E below are the time-reversed counterparts of B and C; their definitions follow.

D. The DC value of frame m is approximately equal to twice the DC value of the previous frame minus the DC value of the frame before it: DC_m ≈ 2·DC_{m-1} − DC_{m-2} (D). This follows from property A.

E. The DC value of frame m is approximately equal to twice the DC value of the next frame plus the DC value of the frame before the previous frame, divided by three: DC_m ≈ (2·DC_{m+1} + DC_{m-2})/3 (E). This follows from properties A and D.

[0125] The following applies the properties described above to the situations depicted in FIG. 6, FIG. 7, FIG. 8 and FIG. 9, and derives the conditions associated with each scenario.

[0126] Condition 1: For the case of the end of a gradual illumination change with decreasing DC value (see FIG. 6), the following inequalities hold: the DC value of frame m+2 predicted by property C is greater than the value predicted by property A, which in turn is greater than the value predicted by property B; and the same holds for the DC values of frame m+1.

[0127] Condition 2: For the case of the end of a gradual illumination change with increasing DC value (see FIG. 7), the following inequalities hold: the DC value of frame m+2 predicted by property C is less than the value predicted by property A, which in turn is less than the value predicted by property B; and the same holds for the DC values of frame m+1.

[0128] Condition 3: For the case of the beginning of a gradual illumination change with increasing DC value (see FIG. 8), the following inequalities hold: the DC value of frame m+2 predicted by property E is greater than the value predicted by property A, which in turn is greater than the value predicted by property D; and the same holds for the DC values of frame m+3.

[0129] Condition 4: For the case of the beginning of a gradual illumination change with decreasing DC value (see FIG. 9), the following inequalities hold: the DC value of frame m+2 predicted by property E is less than the value predicted by property A, which in turn is less than the value predicted by property D; and the same holds for the DC values of frame m+3.
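Under the linearity assumption, the five predictors and the condition tests can be sketched as follows. The exact frames over which each inequality is evaluated are an assumption consistent with the figures; only conditions 1 and 3 are shown, since conditions 2 and 4 simply reverse the inequalities:

```python
# Sketch of the property predictors A-E and two detection conditions;
# dc is a list of per-frame DC values of one color component.
def pred_A(dc, m): return (dc[m - 1] + dc[m + 1]) / 2
def pred_B(dc, m): return 2 * dc[m + 1] - dc[m + 2]
def pred_C(dc, m): return (2 * dc[m - 1] + dc[m + 2]) / 3
def pred_D(dc, m): return 2 * dc[m - 1] - dc[m - 2]
def pred_E(dc, m): return (2 * dc[m + 1] + dc[m - 2]) / 3

def cond1(dc, m):
    """End of a fade with decreasing DC: the C-prediction exceeds the
    A-prediction, which exceeds the B-prediction, for frames m+1, m+2."""
    return all(pred_C(dc, k) > pred_A(dc, k) > pred_B(dc, k)
               for k in (m + 1, m + 2))

def cond3(dc, m):
    """Start of a fade with increasing DC: the E-prediction exceeds the
    A-prediction, which exceeds the D-prediction, for frames m+2, m+3."""
    return all(pred_E(dc, k) > pred_A(dc, k) > pred_D(dc, k)
               for k in (m + 2, m + 3))
```

On a synthetic sequence that ramps down and then flattens, cond1 detects the end of the ramp; on a flat sequence the strict inequalities fail, so nothing is flagged.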

[0130] In an alternative embodiment, the above conditions may be tested on an input sequence downsampled in time and space. This can be advantageous for faster computation, for example, in the case of longer gradual illumination change transitions. It may also help to avoid outliers and thus increase the efficiency of the detector. Downsampling along the time axis can also benefit from temporal filtering of the statistics used in the detection algorithm.

[0131] The following describes an algorithm for detecting gradual illumination changes and transitions that follows from the analysis of the preceding situations and conditions. The flow chart of this algorithm is illustrated in FIG. 10 (step numbers refer to the label numbers used in FIG. 10).

(a) Step 101: initialize the frame counter and start parsing the frames. Go to step 102.

(b) Step 102: for the current frame m, calculate the DC values of the components of the Y'CbCr and RGB spaces. Go to step 103.

(c) Step 103: for the current frame m, compute the predictor terms of the properties for the DC values of all RGB components and of the Y' component of the Y'CbCr domain. Go to step 104.

(d) Step 104: select a component of one domain that has not yet been tested.

(e) Step 105: test condition 1 with respect to frames m-1 and m. If the condition is satisfied, mark frame m as the end frame of a gradual illumination change with decreasing DC value.

(f) Step 106: test condition 2 with respect to frames m-1 and m. If the condition is satisfied, mark frame m as the end frame of a gradual illumination change with increasing DC value.

(g) Step 107: test condition 3 with respect to frames m-1 and m. If the condition is satisfied, mark frame m-1 as the start frame of a gradual illumination change with increasing DC value.

(h) Step 108: test condition 4 with respect to frames m-1 and m. If the condition is satisfied, mark frame m-1 as the start frame of a gradual illumination change with decreasing DC value.

(i) Step 109: if there are color space components that have not yet been tested, update the counter (step 114) and go to step 104; otherwise, go to step 110.

(j) Step 110: process the labels: if there are incompatible labels, choose the one with the highest prevalence. The labels of certain components can also be weighted as more important than others (for example, Y' is more important than Cb or Cr), so the selected label may be determined by a weighted vote. Store the final label in memory. Go to step 111.

(k) Step 111: check whether any previous final marking was saved in memory while processing frames before the current frame m. If not, go to step 112. Otherwise, check whether the current and previously saved markings are "compatible" (step 115): for example, if the previous label was "start frame of a gradual illumination change with increasing DC", we can declare that this sequence of frames represents a gradual appearance of the image (fade-in). However, if the current label is "start frame of a gradual illumination change with decreasing DC", then it was a false alarm. Alternatively, if the current label is "end frame of a gradual illumination change with decreasing DC", we can assume that the sequence represents a transition. Go to step 112.

(l) Step 112: determine whether there are more frames to parse. If yes, update the frame counter (step 115) and go to step 102. Otherwise, go to step 113.

(m) Step 113: terminate the parsing of frames.
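The loop structure of FIG. 10 can be sketched as follows. This is a minimal illustration only: the DC value is taken as the plane mean, and simple DC-direction checks stand in for conditions 1-4, which are defined elsewhere in the disclosure and are not reproduced in this excerpt.

```python
def compute_dc(plane):
    """DC (mean) value of one color-component plane, given as a flat list of samples."""
    return sum(plane) / len(plane)

def detect_fade_labels(frames, components=("Y", "R", "G", "B")):
    """Sketch of the FIG. 10 loop. `frames` is a list of dicts: component -> plane.
    Placeholder DC-direction tests stand in for the patent's conditions 1-4."""
    labels = {}
    for m in range(1, len(frames)):            # frame loop (steps 101/112/115)
        for c in components:                   # component loop (steps 104/109/114)
            dc_prev = compute_dc(frames[m - 1][c])   # step 102: DC values
            dc_cur = compute_dc(frames[m][c])
            if dc_cur > dc_prev:               # placeholder for conditions 1/3
                labels.setdefault(m, []).append((c, "increasing DC"))
            elif dc_cur < dc_prev:             # placeholder for conditions 2/4
                labels.setdefault(m, []).append((c, "decreasing DC"))
    return labels
```

The per-frame label lists would then be reconciled as in steps 110-111 (majority vote, possibly weighting Y' more heavily than Cb or Cr).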

[0132] Note that in one additional embodiment of the invention, instead of considering frame DC values, the shape of the frame histogram or a combination of the highest values in the histogram can be considered. To limit computational complexity, temporal and/or spatial downsampling can also be used. The downsampling may vary per color component: components of the color space that are known to be more susceptible to gradual illumination changes can benefit from a smaller subsampling factor. When more than one component is used for detection, processing can be further accelerated by adopting a hierarchy of decisions: the components can be ordered according to how much they contribute to the detection of gradual illumination changes. If the first component is processed in this way and the result is negative, then checking the other components is unnecessary. In another embodiment, this method can also be applied on a region basis, optionally improved by segmentation, to detect local gradual illumination changes and transitions. In this case, statistical indicators such as the DC value or even the histogram should be calculated for the given region. In another embodiment, the methods described above can be incorporated into a pre-processing/post-processing module, which can also include motion estimation and compensation (a temporal motion-compensated pre- or post-filter) in combination with weighted prediction.

[0133] Detection of transitions with low complexity. The classification of a frame as a frame of a gradual illumination change can be handled by a third-party algorithm or by the algorithm shown in FIG. 11 and described below:

(a) For the current frame m, compute the average values of the components of the RGB and Y'CbCr color spaces. Also compute the average values of the components of the RGB and Y'CbCr color spaces for the reference frames used for bi-directional prediction. These average values can be calculated as DC values over the entire frame or over some region, or from a histogram (on the assumption that multiple histograms are calculated for each frame).

(b) Confirm that the average gains from the two reference frames sum to 1. Also confirm that the magnitude of each gain is inversely proportional to the distance of its reference frame from the current frame; for equidistant reference frames, for example, the expectation is that each gain is equal to one half.

(c) Finally, test whether equation 28 and equation 31 are satisfied after substituting the values from the previous two steps. If the gains of the chrominance components are unknown (as in this case), assign them the gain values derived for the luma component, for both color spaces.

[0134] If the above tests are satisfied, the frame is declared to be a transition frame, and the weights of the chroma components of the Y'CbCr domain are set as described above.
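The consistency check of steps (a)-(b) can be sketched as follows. This is an illustrative assumption, not a reproduction of equations 28 and 31: the bi-prediction gains are solved from the DC values under the constraint that they sum to 1, then compared with the inverse-distance expectation; the tolerance `tol` is a hypothetical parameter.

```python
def looks_like_cross_fade(dc_cur, dc_ref0, dc_ref1, dist0, dist1, tol=0.05):
    """Low-complexity sketch: solve dc_cur ~= w0*dc_ref0 + w1*dc_ref1 with
    w0 + w1 = 1, then check the gains against the inverse-distance expectation.
    dc_* are frame DC values; dist* are temporal distances to the references."""
    if dc_ref0 == dc_ref1:
        return False  # degenerate: gains not identifiable from DC values alone
    # With w1 = 1 - w0: dc_cur = w0*dc_ref0 + (1 - w0)*dc_ref1.
    w0 = (dc_cur - dc_ref1) / (dc_ref0 - dc_ref1)
    w1 = 1.0 - w0
    # Expected gains are inversely proportional to the temporal distances.
    exp_w0 = dist1 / (dist0 + dist1)
    exp_w1 = dist0 / (dist0 + dist1)
    return abs(w0 - exp_w0) <= tol and abs(w1 - exp_w1) <= tol
```

For equidistant references, a frame whose DC lies halfway between the two reference DCs passes the check (gains of one half each), consistent with the expectation stated in step (b).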

Obtaining illumination compensation parameters under luma clipping and saturation

[0135] The earlier derivation of the quantized digital versions of the Y'CbCr domain shows that the Y'CbCr components may not use the full range [0, 255] available to 8-bit unsigned integers. In fact, for the conversion according to the BT.601 standard (equation 6) and the recommended analog-to-digital conversion (equation 3), the resulting ranges are Y' in [16, 235] and Cb, Cr in [16, 240]. Even when the entire range is used (the JFIF analog-to-digital conversion of equation 7), the values are still clipped and saturate at 0 and 255. Note that although the above analysis used content with 8-bit color depth, the same clipping and saturation issues arise for content with higher bit depths (note that the terms "saturated", "saturation" and "clipping" may be used interchangeably in this description). Content with higher color bit depth, such as, for example, 10- or 12-bit content, suffers from the same difficulties. Embodiments of the present invention apply regardless of the bit depth of the content and of the color space (e.g., XYZ, YCbCr, YCgCo, etc.). The operations of clipping and saturation can be detrimental to prediction efficiency when global weighted prediction is used. In addition, they also complicate the search for the best global and local weighted prediction parameters (gain and offset). The following describes the effect on two widely used weighted prediction search approaches in cases with saturated pixel values.
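The limited code ranges can be verified numerically from a BT.601-style conversion. The coefficients below are the standard BT.601 luma weights and studio-swing scaling (the disclosure's equations 3 and 6 are not reproduced in this excerpt); since the conversion is affine, the extremes occur at the corners of the RGB cube.

```python
from itertools import product

def rgb_to_ycbcr_bt601(r, g, b):
    """r, g, b in [0, 1] -> 8-bit studio-range Y'CbCr (BT.601-style)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    pb = 0.5 * (b - y) / (1.0 - 0.114)   # normalized blue-difference
    pr = 0.5 * (r - y) / (1.0 - 0.299)   # normalized red-difference
    return 16 + 219 * y, 128 + 224 * pb, 128 + 224 * pr

corners = list(product([0.0, 1.0], repeat=3))  # extremes of the RGB cube
ys, cbs, crs = zip(*(rgb_to_ycbcr_bt601(*c) for c in corners))
# Resulting ranges: Y' spans [16, 235]; Cb and Cr span [16, 240]
# (up to floating-point rounding), never the full [0, 255].
```

This confirms that studio-swing Y'CbCr leaves head- and foot-room, so saturation can occur well inside the nominal 8-bit range.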

[0136] FIG. 12 illustrates a case of global illumination change. The background is dark and remains dark: it is clipped at the value 16. In FIG. 12 it is assumed that these numbers represent values of the Y' component. A first part A of an object is slightly visible with the value 32 in frame n and becomes even lighter in intensity, with the value 64, in frame n+1. Another part B of the same object, with the value 32, appears in frame n+1. Let the background section C be equal in area to one half of the frame, and let each of sections A and B be equal in area to one quarter of the frame. It is also assumed that the content in section C is very homogeneous and can therefore be encoded with very few bits. In contrast, sections A and B are textured and therefore harder to encode. This situation reflects the gradual appearance or disappearance of a logo, which is very common in movie trailers.

[0137] For predicting frame n+1 from frame n as the prediction reference frame, global weighted prediction (offset f and gain w) can be used. Estimation of the gain and offset can be carried out by, among others, DC-based methods, methods based on iterative motion compensation, and histogram-based methods. The following presents three existing DC-based methods:

Method 1: the first method assigns the offset the value 0 and computes the gain as the ratio of the DC values, w = DC_{n+1}/DC_n.

Method 2: the second method assigns the gain the value 1 and computes the offset as the DC difference, f = DC_{n+1} - DC_n.

Method 3: the third method assigns the gain a value derived from least-squares minimization and computes the offset as f = DC_{n+1} - w × DC_n. This method results from treating the task as a least-squares minimization, as described in the publications K. Kamikura et al., "Global Brightness-Variation Compensation for Video Coding", and Y. Kikuchi and T. Chujoh, "Interpolation coefficient adaptation in multi-frame interpolative prediction". The DC value is defined as the mean of the pixel values: DC_n = E{x_n}, where x_n is a pixel value in frame n and the operation E{X} represents the mean value of X.
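The three DC-based estimators can be sketched as below for the prediction model predicted = w × reference + f. For method 3, one common least-squares formulation (regression of the current frame on the reference) is used as a stand-in, since the cited papers' exact expressions are not reproduced in this excerpt.

```python
def mean(xs):
    return sum(xs) / len(xs)

def wp_method1(cur, ref):
    """Method 1: offset fixed to 0; gain is the ratio of DC values."""
    return mean(cur) / mean(ref), 0.0

def wp_method2(cur, ref):
    """Method 2: gain fixed to 1; offset is the DC difference."""
    return 1.0, mean(cur) - mean(ref)

def wp_method3(cur, ref):
    """Method 3 (illustrative least-squares fit of cur ~= w*ref + f over
    co-located samples; the offset matches f = DC_cur - w*DC_ref)."""
    mc, mr = mean(cur), mean(ref)
    num = sum((r - mr) * (c - mc) for r, c in zip(ref, cur))
    den = sum((r - mr) ** 2 for r in ref)
    w = num / den if den else 1.0
    return w, mc - w * mr
```

All three reduce to simple per-frame statistics, which is what makes them cheap and, as the following paragraphs show, also what makes them fragile near saturation.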

[0138] The following describes the results of estimating the gain and offset with the methods described above and of using them to compensate illumination changes.

Method 1: using method 1, one obtains w = 1.2 and f = 0. Applying global illumination compensation with these parameters to the reference frame n, sections B and C take the value 19.2 and section A takes the value 38.4. Thus, all sections are predicted incorrectly.

Method 2: using method 2, one obtains w = 1 and f = 4. Applying global illumination compensation with these parameters to the reference frame n, sections B and C take the value 20 and section A takes the value 36. Thus, all sections are predicted incorrectly. Since section A is more difficult to encode, it can be concluded that for this situation the prediction of method 1 outperforms that of method 2.

Method 3: applying global illumination compensation with the parameters obtained by method 3 to the reference frame n, sections B and C take the value 2.66 and section A takes the value 56.1. Thus, all sections are predicted incorrectly. Because sections A and B are more difficult to encode than section C, no conclusion can be drawn for this prediction situation about the effectiveness of method 3 relative to methods 1 and 2.

[0139] From the above analysis it becomes evident that these three DC-based methods cannot handle transitions with values near clipping/saturation. In fact, although the above description considered the gradual appearance of an image from black that is initially saturated at 16, similar conclusions can be drawn for the gradual disappearance of an image to black that saturates at 16 or 0, or to white that saturates at 255 or 240. Similar conclusions hold for higher color bit depths and other color spaces. In general, the estimation of weighted prediction parameters suffers when operating near the saturation points, i.e., the minimum and maximum of the dynamic range of the luma and chroma components. In this description, the saturation values are generalized as a smallest and a largest saturation value, where the smallest is not necessarily zero (zero is still a saturation point, since values are clipped so that they are not less than this value). The following two subsections describe two algorithms that can address the task of global weighted prediction at the edges of gradual illumination changes, where saturation of pixel values usually occurs.

[0140] Algorithm based on labeled sets. This algorithm is based on marking the positions of pixels, for each pixel that:

(a) was saturated in frame n-1 and took an unsaturated value in frame n; for example, the pixel values were saturated at the smallest saturation value or at the largest saturation value, and now they are, respectively, greater than the smallest saturation value or less than the largest saturation value;

(b) was unsaturated in frame n-1 and took a saturated value in frame n; for example, the pixel values were greater than the smallest saturation value or less than the largest saturation value, and now they are saturated, respectively, at the smallest saturation value or at the largest saturation value.

[0141] In FIG. 13, for frame n+k, section C remains saturated, while section B is initially saturated and then takes unsaturated values (in frame n+k). Section A is defined as the set of sections whose values are unsaturated both in the predicted frame and in the prediction reference frame. Section D is defined as the set of sections whose values are unsaturated for a given frame. Note that, in contrast to the illustration shown in FIG. 13, the sections need not be contiguous. As long as they satisfy the above conditions, they may comprise several areas. To generalize this statement to gradual appearances and disappearances of the image from or to black or white (or any other uniform color), let k be any nonzero integer. Then, depending on the situation, frame n can serve either as the current frame or as the prediction reference frame; for frame n+k, the converse holds.
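Per-pixel membership in the sets described above can be computed with vectorized comparisons. This is a sketch for a single component; the set names and bound values (e.g., 16 and 235 for studio-range luma) are illustrative choices matching the surrounding text, not the patent's notation.

```python
import numpy as np

def label_sets(frame_a, frame_b, s_low, s_high):
    """Boolean masks for the labeled sets of paragraphs [0140]-[0141], for two
    frames of one component with saturation bounds s_low, s_high."""
    sat_a = (frame_a <= s_low) | (frame_a >= s_high)
    sat_b = (frame_b <= s_low) | (frame_b >= s_high)
    return {
        "A": ~sat_a & ~sat_b,            # unsaturated in both frames
        "left_sat": sat_a & ~sat_b,      # condition (a): saturated -> unsaturated
        "entered_sat": ~sat_a & sat_b,   # condition (b): unsaturated -> saturated
        "stayed_sat": sat_a & sat_b,     # remains saturated (cf. section C)
    }
```

The masks need not describe contiguous regions, matching the note above that each set may comprise several areas.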

[0142] The WP parameter search algorithm based on labeled sets is shown in FIG. 14 (step numbers refer to the numbered labels shown in the figure) and is described in more detail below:

(a) Step 201: for each pair of predicted frame and reference frame, initialize the frame number and go to step 202;

(b) Step 202: for each pair of predicted frame and reference frame, initialize the reference-frame counter and go to step 203;

(c) Step 203: determine the saturated sections, if any, within each pair of predicted frame and reference frame and go to step 204;

(d) Step 204: determine whether the two frames share large areas with saturated luma values. This can be done by existing methods or, alternatively, by per-pixel testing and classification in the respective frames. If yes, go to step 205; otherwise, go to step 208;

(e) Step 205: determine the unsaturated sections, if any, within each pair of predicted frame and reference frame and go to step 206;

(f) Step 206: determine whether the two frames share large areas with unsaturated luma values. Existing methods of per-pixel testing and classification can be used for this task. If yes, go to step 207; otherwise, go to step 208;

(g) Step 207: apply any WP search method (among others, DC-based methods, histogram-based methods, iterative methods with motion estimation and compensation) to the labeled sets, normalized to the same number of pixels. Reuse the WP parameters from the previous frame to initialize the search for the current frame. Obtain the gain and offset. Go to step 209;

(h) Step 208: apply any WP search method (among others, DC-based methods, histogram-based methods, iterative methods with motion estimation and compensation) to frames n and n+k, normalized to the same number of pixels. Reuse the WP parameters from the previous frame to initialize the search for the current frame. Obtain the gain and offset. Go to step 209;

(i) Step 209: determine whether additional reference frames remain to be analyzed; if so, update the reference-frame counter (step 211) and go to step 203; otherwise, go to step 210;

(j) Step 210: if more pairs of predicted frame and reference frame need to be evaluated, choose a pair, keep in memory the parameters obtained for the current pair of frames, update the frame counter (step 212) and go to step 202; otherwise, terminate the algorithm.
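The decision logic of steps 203-208 can be sketched for one predicted/reference pair as follows. The saturation test, area threshold, and the DC-ratio placeholder search are illustrative assumptions standing in for "existing methods"; the DC (mean) statistic is inherently normalized to the pixel count of each set.

```python
import numpy as np

def find_saturated(frame, s_low=16.0, s_high=235.0):
    """Per-pixel saturation mask for one component (step 203)."""
    return (frame <= s_low) | (frame >= s_high)

def wp_search_dc(cur, ref, init=None):
    """Placeholder DC-based WP search (gain = DC ratio, offset = 0)."""
    return float(cur.mean() / ref.mean()), 0.0

def wp_search_labeled(cur, ref, area_threshold=0.1, prev_params=None):
    """Sketch of FIG. 14 steps 203-208 for one predicted/reference pair."""
    sat_cur, sat_ref = find_saturated(cur), find_saturated(ref)       # step 203
    if min(sat_cur.mean(), sat_ref.mean()) > area_threshold:          # step 204
        uns_cur, uns_ref = ~sat_cur, ~sat_ref                         # step 205
        if min(uns_cur.mean(), uns_ref.mean()) > area_threshold:      # step 206
            # Search only over the unsaturated labeled sets.
            return wp_search_dc(cur[uns_cur], ref[uns_ref], prev_params)  # step 207
    return wp_search_dc(cur, ref, prev_params)                        # step 208
```

Restricting the search to the unsaturated sets keeps clipped background pixels from biasing the gain, which is precisely the failure of the DC-based methods analyzed above.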

[0143] Additional embodiments of the invention. The derivation of saturated and unsaturated regions can benefit from the use of segmentation methods. Motion compensation can be used to track these regions from one frame to the next and thus serves as the seed for forming sections A, B, C and D, which are necessary components of these methods. In an alternative embodiment of the invention, instead of applying the WP parameter estimation methods to the sections as if they were a single collective region, segmentation can categorize them into different objects, and the algorithms can be applied individually to each object/region. In one embodiment of the invention, weighted prediction can be applied on a local basis as described above if the codec supports signaling of gains and offsets at the macroblock, block or region level. Regions or blocks may be overlapping. For embodiments of the invention that use codecs such as H.264, which supports such signaling only at the level of a series of consecutive macroblocks (a slice), other WP parameters can be used at a local level by jointly using multiple references for motion-compensated prediction and reference frame reordering/modification. In another embodiment, the methods described above can be incorporated into a pre-processing/post-processing module, which can also include motion estimation and compensation (a temporal motion-compensated pre- or post-filter) in combination with weighted prediction.

[0144] Algorithm based on iterative exclusion. This algorithm addresses the problem of saturated values in a different way. In addition, it addresses the main drawback of the previous algorithm: high complexity with respect to branch operations. Determining the intersection sets may look straightforward, but it requires conditional branch operations for each pixel. In addition, even after these sets are determined, and depending on the algorithm for estimating the weighted prediction parameters, the DC values still need to be recomputed over the resulting sets. For some WP search methods, such as the complex method 3, this also entails per-pixel summations, which likewise must be restricted to the given set. Therefore, another algorithm, based on histograms and multiple iterations, may be helpful. Note that this algorithm is amenable to use with such weighted-parameter estimation means as, among others, DC-based methods, histogram-based methods, and iterative methods with motion estimation and compensation.

[0145] Suppose that in the algorithm based on iterative exclusion, frame n is the current frame and frame n+k is the reference frame used for weighted prediction. The reference frame may be either an original frame or a decompressed frame reconstructed from the compressed residual. The following parameters can be initialized as follows:

(a) the number of the current iteration, t=0;

(b) the current smallest saturation value for the current frame n;

(c) the current largest saturation value for the current frame n;

(d) the current smallest saturation value for the reference frame n+k;

(e) the current largest saturation value for the reference frame n+k.

[0146] Let the number of the current iteration be t. The algorithm may optionally first compute the histograms of both frames. This operation has low complexity because it avoids per-pixel branching and replaces it with per-pixel memory accesses, which are in general faster, since an array of 256 elements easily fits in the cache of any processor. This optional computation is useful for WP search methods that depend, for example, on histogram matching or on DC values. The next stage comprises the estimation of the weighted parameters (the WP search) that predict frame n using frame n+k as a reference frame. The WP search may consider, among others, DC-based methods, histogram-based methods (e.g., histogram matching), and iterative methods with motion estimation and compensation. In addition, to improve and accelerate the WP search at iteration t, the WP parameters obtained at iteration t-1 can be used.
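The branch-free histogram step above can be sketched directly. `np.bincount` performs exactly one array access per pixel, and per-frame statistics such as the DC value can then be recovered from the 256-entry histogram alone, so later iterations never revisit the pixels.

```python
import numpy as np

def frame_histogram(frame_u8):
    """256-bin histogram of an 8-bit frame: one memory access per pixel,
    no per-pixel branching."""
    return np.bincount(np.asarray(frame_u8).ravel(), minlength=256)

def dc_from_histogram(hist):
    """DC (mean) value recovered from the histogram alone."""
    return float((hist * np.arange(hist.size)).sum() / hist.sum())
```

Restricting a histogram-based WP search to a value range (as in the next paragraph) then amounts to zeroing or ignoring bins outside the current saturation bounds, at a cost independent of the frame size.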

[0147] The WP search may also consider algorithms that use histograms to determine the offset and gain values. For a given frame m, these histograms can now be limited using the current smallest and largest saturation values. In general, the novelty of this method lies in the fact that the WP search is limited at each iteration by considering the currently defined minimum and maximum saturation levels of the current and reference frames.

[0148] After the estimation of the weighted prediction parameters completes, the restriction parameters must be updated. In general, the parameters of the current frame, i.e., its smallest and largest saturation values, remain unchanged across iterations. The parameters of the reference frame, nevertheless, are updated by the algorithm as described in the next paragraph.

[0149] In the update of the reference-frame parameters, let f and w denote the offset and the gain determined by the weighted prediction estimation algorithm. The parameters that will be used at the next iteration can be determined by satisfying the inequalities shown in equation 47:

s_low(n) <= w × x + f <= s_high(n), for all reference values x with s_low^(t+1)(n+k) <= x <= s_high^(t+1)(n+k). (47)

[0150] Let x_{n+k} denote the value of a pixel in the reference frame n+k. Pixel values that would be saturated or clipped by the newly obtained gain and offset are marked as outliers. In general, the update algorithm according to one embodiment of the present invention sets lower and upper bounds for the reference frame so that, when weighted prediction is used to predict the current frame from the reference frame, the resulting pixel values are neither saturated nor clipped. Thus, the new saturation levels for the next iteration t+1 are determined as shown below in equation 48:

s_low^(t+1)(n+k) = max( s_low^(t)(n+k), (s_low(n) - f)/w ), s_high^(t+1)(n+k) = min( s_high^(t)(n+k), (s_high(n) - f)/w ). (48)

[0151] It is then determined whether the gain and offset differ sufficiently from those of the previous iteration; for example, assuming 8-bit content and real-valued gains, whether the absolute difference of the gains is greater than 0.03 and the absolute difference of the offsets is greater than 3. If not, the algorithm converges and the estimation of the weighted parameters is complete. The algorithm also terminates if a maximum number of iterations is reached. Otherwise, the iteration counter is incremented to t+1, and execution returns to the estimation of the weighted parameters. This algorithm is illustrated in FIG. 15.
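The iterative-exclusion loop can be sketched as follows for a single 8-bit component. The estimator is a DC-ratio placeholder with zero offset, and the bound-update clamp is an illustrative reading of the description above (equations 47-48 are not reproduced verbatim in this excerpt); the 0.03/3 convergence thresholds follow the text.

```python
import numpy as np

def wp_iterative_exclusion(cur, ref, s_low=16.0, s_high=235.0, max_iters=10):
    """Sketch of the FIG. 15 loop: estimate WP from in-bounds reference
    samples, then shrink the reference bounds so w*ref + f stays inside
    [s_low, s_high] of the current frame, and repeat until convergence."""
    lo_ref, hi_ref = s_low, s_high      # reference-frame bounds, iteration t
    w, f = 1.0, 0.0
    w_prev = f_prev = None
    for t in range(max_iters):
        mask = (ref >= lo_ref) & (ref <= hi_ref)
        if not mask.any():
            break                        # nothing left inside the bounds
        r, c = ref[mask], cur[mask]
        w = float(c.mean() / r.mean())   # placeholder DC-based estimator
        f = 0.0
        # Convergence: gain change <= 0.03 and offset change <= 3 (see text).
        if w_prev is not None and abs(w - w_prev) <= 0.03 and abs(f - f_prev) <= 3.0:
            break
        w_prev, f_prev = w, f
        # Exclude reference values that the new parameters would clip.
        if w > 0.0:
            lo_ref = max(lo_ref, (s_low - f) / w)
            hi_ref = min(hi_ref, (s_high - f) / w)
    return w, f
```

On a synthetic fade whose bright pixels clip at 235, the first iteration underestimates the gain, and excluding the clipped range recovers the true gain on the next pass.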

[0152] Additional embodiments of the invention. This embodiment of the present invention can also be extended by considering segmented regions that correspond to different objects or content in the scene. After segmentation and derivation of the regions, the method described above can be applied separately to each region. Motion compensation can help in tracking. The method described above considers only one component of the color space. Improved performance may be obtained if the effect of each choice of gain and offset is considered simultaneously over the whole color space. There may be cases in which two of the components are unsaturated while the third has difficulties associated with saturation. More importantly, the impact on the sample values after they undergo color space conversion can also be taken into account: even though sample values may be unsaturated in the source color space used for compression, e.g., the Y'CbCr space, converting the discrete values into another space, e.g., the RGB space for display purposes, or, e.g., inter-layer prediction (in the case of scalable coding with discrete values having extended dynamic range in the enhancement layer) can lead to saturated values in one or more components. The algorithm can take this into account and limit the bounds for the unsaturated components of the color space. In one embodiment of the invention, weighted prediction can be applied on a local basis as described above if the codec supports signaling of gains and offsets at the macroblock or block level. For embodiments of the invention that use codecs such as H.264, which supports such signaling only at the level of a series of consecutive macroblocks (a slice), other WP parameters can be used at a local level by jointly using multiple references for motion-compensated prediction and reference frame reordering/modification. In another embodiment of the invention, the methods described above can be incorporated into a pre-processing/post-processing module, which can also include motion estimation and compensation (a temporal motion-compensated pre- or post-filter) in combination with weighted prediction.

Low-complexity estimation of illumination compensation parameters

[0153] The previously proposed WP parameter estimation methods have a common characteristic: all three operate on a single color component. Hence, the offset and gain for, say, Y' are calculated irrespective of the result for the Cb and Cr components. Similarly, the offset and gain for the G component would be calculated regardless of the R and B components. The discussion above, however, has shown that many relationships exist depending on the type of gradual illumination change. The following problem can be formulated: for the current frame m and frame m+k, where k is any nonzero integer, the DC values of the RGB and Y'CbCr components are known. In addition, a global illumination change is assumed. The goal is to find the gains and offsets to be used for weighted prediction. In general, this leads to three equations with six unknowns (e.g., equation 10 and equation 11), which admit infinitely many solutions. However, if certain constraints are imposed so that the number of unknowns equals the number of equations, a reliable and unique solution can be obtained. The constraints/assumptions can result from a priori knowledge. Multiple solutions can be generated under several sets of assumptions/constraints, and then, by performing a validity/sanity check (discarding solutions with values that are out of bounds or irrational), the best solution can be chosen.

[0154] One solution is possible under the constraint that the gains are equal for all components, e.g., of Y'CbCr, which is usually true for most gradual illumination change scenarios. In addition, the chroma offsets are constrained as shown below in equation 49:

. (49)

Then, to obtain the gain w and the offsets, solve equation 48.
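With the constraint of paragraph [0154] read as "one common gain w across Y'CbCr and one common chroma offset f_C" (an illustrative assumption, since equation 49 is not reproduced in this excerpt), the three DC relations become three linear equations in three unknowns (w, f_Y, f_C) and can be solved directly:

```python
import numpy as np

def solve_constrained_wp(dc_cur, dc_ref):
    """Solve the constrained system
        DC'_Y  = w*DC_Y  + f_Y
        DC'_Cb = w*DC_Cb + f_C
        DC'_Cr = w*DC_Cr + f_C
    for (w, f_Y, f_C). dc_cur, dc_ref are (DC_Y, DC_Cb, DC_Cr) tuples of the
    current and reference frame. Requires DC_Cb != DC_Cr in the reference
    frame for a unique solution; lstsq handles the degenerate case."""
    a = np.array([
        [dc_ref[0], 1.0, 0.0],   # Y row:  w*DC_Y  + f_Y
        [dc_ref[1], 0.0, 1.0],   # Cb row: w*DC_Cb + f_C
        [dc_ref[2], 0.0, 1.0],   # Cr row: w*DC_Cr + f_C
    ])
    b = np.array(dc_cur, dtype=float)
    w, f_y, f_c = np.linalg.lstsq(a, b, rcond=None)[0]
    return float(w), float(f_y), float(f_c)
```

A validity check as described in paragraph [0153] (e.g., rejecting negative gains or out-of-range offsets) would follow the solve.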

[0155] Another embodiment of the invention is possible in the RGB domain (though without limitation to these two domains). The RGB offsets are set equal, while the gains are constrained as shown below in equation 50:

. (50)

An alternative derivation is shown below in equation 51:

. (51)

[0156] The above systems of equations have solutions, and the gains and offsets can be converted to the Y'CbCr domain for use in prediction systems that operate in the Y'CbCr domain. The decision of where to perform the calculations, namely in RGB or in Y'CbCr, depends on a priori knowledge of the sequence: if most of the processing is performed in the RGB domain, it makes sense to estimate in that domain. The converse is true if the processing is carried out in the Y'CbCr domain.
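The conversion of parameters between domains follows from the affine color transform. For any linear conversion Y'CbCr = M·RGB + o (with o = [0, 128, 128] for 8-bit content), a zero-offset RGB gain w maps exactly to the same gain w in Y'CbCr plus component offsets (1 - w)·o, since M(w·rgb) + o = w(M·rgb + o) + (1 - w)o. The BT.601-style matrix below is illustrative.

```python
import numpy as np

# Illustrative BT.601-style conversion: Y'CbCr = M @ RGB + o.
M = np.array([[ 0.299,  0.587,  0.114],
              [-0.169, -0.331,  0.500],
              [ 0.500, -0.419, -0.081]])
o = np.array([0.0, 128.0, 128.0])

rgb = np.array([80.0, 160.0, 40.0])    # an arbitrary unsaturated pixel
w = 0.6                                # RGB fade gain, zero offset

ycbcr_ref = M @ rgb + o
ycbcr_faded = M @ (w * rgb) + o        # fade applied in RGB, then converted
# Equivalent weighted prediction applied directly in Y'CbCr:
ycbcr_wp = w * ycbcr_ref + (1.0 - w) * o
assert np.allclose(ycbcr_faded, ycbcr_wp)
```

This identity is what allows parameters estimated in one domain to be carried over to a prediction system operating in the other, and it also underlies the zero-offset chroma relations used in the detection methods above.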

[0157] Additional embodiments of the invention. The methods described above can be applied globally (to the entire frame or component) or on a local basis, possibly with segmentation that precedes the main method and yields regions with more uniform content characteristics. For example, the DC value can then be calculated on a per-region/local basis. In one embodiment of the invention, weighted prediction can be applied on a local basis as described above if the codec supports signaling of gains and offsets at the macroblock or block level. For embodiments of the invention that use codecs such as H.264, which supports such signaling only at the level of a series of consecutive macroblocks (a slice), other WP parameters can be used at a local level by jointly using multiple references for motion-compensated prediction and reference frame reordering/modification. In another embodiment of the invention, the methods described above can be incorporated into a pre-processing/post-processing module, which can also include motion estimation and compensation (a temporal motion-compensated pre- or post-filter) in combination with weighted prediction.

[0158] In conclusion, according to several embodiments of the invention, the present disclosure considers systems and methods for improving data quality and processing, whether in-loop (as part of the encoding/decoding process) or out-of-loop (at the pre-processing or post-processing stage), such as the extraction and removal of noise from data that can be sampled and multiplexed in a variety of ways. These systems and methods can be applied to existing codecs (encoders and decoders), but also apply to future encoders and decoders, including with modification of core components. Applications include video encoders and Blu-ray players, set-top boxes, software encoders and players, as well as broadcasting and download solutions with more limited bandwidth. Additional applications include video encoders, players, and BD video discs created in the appropriate format, or even content and systems aimed at other applications, such as terrestrial broadcasting, satellite broadcasting, IPTV, and so forth.

[0159] The methods and systems described in this disclosure may be implemented in hardware, software, firmware or a combination thereof. Features described as blocks, modules or components may be implemented together (e.g., in a logic device such as an integrated logic device) or separately (as separate connected logic devices). The software portion of the methods of the present disclosure may comprise a computer-readable medium that includes instructions that, when executed, perform, at least in part, the described methods. The computer-readable medium may comprise, for example, random access memory (RAM) and/or read-only memory (ROM). The instructions can be executed by a processor (e.g., a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or a field-programmable gate array (FPGA)).

[0160] As described herein, an embodiment of the present invention may thus relate to one or more of the illustrative embodiments of the invention that are listed in Table 1 below. Accordingly, the invention may be embodied in any of the forms described herein, including, as non-limiting examples, the following enumerated example embodiments (EEE), which describe the structure, characteristics and functionality of some parts of the present invention.

TABLE 1

NUMBERED ILLUSTRATIVE EMBODIMENTS OF THE INVENTION

EEE1. Method of detecting a gradual illumination change and determining the global or local nature of the gradual illumination change during a transition from one picture to the next picture of a video signal, where the method comprises the steps of:

creating a plurality of picture frames and associated prediction reference frames;

for each frame and associated prediction reference frame, calculating one or more intensity-related values and one or more color-related values in a first color domain;

for each frame and associated prediction reference frame, calculating weighted prediction gains for the value of each component in the first color domain;

if all the weighted prediction gains are non-negative and substantially similar to each other, determining that a predominantly global transition with zero offset occurs in a second color domain; and

if not all the weighted prediction gains are non-negative and substantially similar to each other, determining one of the following: that a global transition with a gradual illumination change and zero offset does not occur; or that a global transition with a gradual illumination change and zero offset does not occur in the second color domain.

EEE2. Method according to numbered illustrative embodiment of the invention 1, where the values relating to intensity include one or more of the following: a luma value or a luminance value.

EEE3. Method according to numbered illustrative embodiment of the invention 1, where the values relating to color include one or more of the following: a chroma value or a chrominance value.

EEE4. Method according to numbered illustrative embodiment of the invention 1, wherein the first color domain is a YCbCr domain, and the weighted prediction gain is computed by the following formula:

w = Y'_m / Y'_ref = (Cb_m - d_Cb) / (Cb_ref - d_Cb) = (Cr_m - d_Cr) / (Cr_ref - d_Cr),

where Y'_m and Y'_ref are the values of the luma component for the frame and for the prediction reference frame;

Cb_m and Cr_m are the values of the color components for the frame;

Cb_ref and Cr_ref are the values of the color components for the prediction reference frame; and

d_Cb and d_Cr are the offsets used in the color format conversion.

EEE5. Method according to numbered illustrative embodiment of the invention 1, where the values relating to intensity and the values relating to color represent average intensity-related and color-related values.

EEE6. Method according to numbered illustrative embodiment of the invention 1, where the values relating to the intensity and related to the color are calculated based on the information of the histogram.

EEE7. Method according to numbered illustrative embodiment of the invention 1, which is a few frames of the frame sequence.

EEE8. Method according to numbered illustrative embodiment of the invention 1, wherein the multiple frames is a part of the recruitment to to�which indicated the presence of illumination changes.

EEE9. Method according to numbered illustrative embodiment of the invention 1, where the matrix of the color space defining the first color space to the second color space is linear.

EEE10. Method according to numbered illustrative embodiment of the invention 1, where the gain of the weighted predictions are largely similar, if the gains are in the range of 5 to 10% relative to one another.

EEE11. Method according to numbered illustrative embodiment of the invention 1 also includes a stage on which:

perform a logarithmic scaling of the gain weighted prediction so that they have values between 0 and 1, and determine whether the gain is largely similar, by calculating whether the difference between the coefficients of the weighted prediction gain is less than 0.1.
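One plausible way to implement this scaling is shown below. The base-2 clipping range of ±5 and the exact mapping are assumptions; the text specifies only that the scaled gains lie in [0, 1] and that a 0.1 difference threshold is used.

```python
import math

def log_scale(gain, max_log2=5.0):
    """Map a positive WP gain into [0, 1] on a logarithmic scale.

    Gains are clipped to [2**-max_log2, 2**max_log2]; a gain of 1.0
    maps to 0.5. The exact scaling is not given in the text, so this
    is one plausible choice.
    """
    g = min(max(gain, 2.0 ** -max_log2), 2.0 ** max_log2)
    return 0.5 + math.log2(g) / (2.0 * max_log2)

def largely_similar(gains, threshold=0.1):
    """Gains are 'largely similar' if all scaled values differ by < threshold."""
    scaled = [log_scale(g) for g in gains]
    return max(scaled) - min(scaled) < threshold

print(largely_similar([0.9, 1.0, 1.1]))  # close gains
print(largely_similar([0.5, 2.0]))       # a factor of 4 apart
```

The log domain makes reciprocal gains (e.g. 0.5 and 2.0) symmetric around 0.5, so one fixed threshold covers both fades-in and fades-out.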

EEE12. A method of detecting a gradual illumination change and determining the global or local nature of the gradual illumination change during a transition from one picture to the next picture of a video signal, where the method includes stages on which:

create a plurality of picture frames and associated prediction reference frames;

for each frame and an associated prediction reference frame, compute quantities related to intensity and to color in a first color region;

for each frame and an associated prediction reference frame, calculate intensity-related weighted prediction parameters;

for each frame and an associated prediction reference frame, calculate weighted prediction gains from the computed intensity-related and color-related quantities and from the calculated intensity-related weighted prediction parameters;

if all the weighted prediction gains are non-negative and largely similar to one another, determine that a predominantly global transition with zero offset occurs in the second color region; and

if not all the weighted prediction gains are non-negative and largely similar to one another, check whether a local transition occurs.

EEE13. Method according to numbered illustrative embodiment of the invention 12, where the quantities related to intensity include one or more of the following values: a brightness value or a luminosity value.

EEE14. Method according to numbered illustrative embodiment of the invention 13, where the quantities related to color include one or more of the following values: a chroma value or a chromaticity value.

EEE15. Method according to numbered illustrative embodiment of the invention 12, where the first color region is a YCbCr region, and the weighted prediction gains are calculated by the following formula:

w = Y / Y_ref = (Cb - d_Cb) / (Cb_ref - d_Cb) = (Cr - d_Cr) / (Cr_ref - d_Cr),

where

Y and Y_ref are the luminance components for the frame and the prediction reference frame;

Cb and Cr are the chroma component values for the frame;

Cb_ref and Cr_ref are the chroma component values for the prediction reference frame; and

d_Cb and d_Cr are the offsets used in the color format conversion.

EEE16. Method according to numbered illustrative embodiment of the invention 12, where the quantities related to intensity and to color are average intensity-related and color-related values.

EEE17. Method according to numbered illustrative embodiment of the invention 12, where the quantities related to intensity and to color are calculated based on histogram information.

EEE18. Method according to numbered illustrative embodiment of the invention 12, where the plurality of frames is part of a frame sequence.

EEE19. Method according to numbered illustrative embodiment of the invention 12, where the plurality of frames is part of a set of frames for which the presence of illumination changes has been indicated.

EEE20. Method according to numbered illustrative embodiment of the invention 12, where the color space matrix that converts the first color space to the second color space is linear.

EEE21. Method according to numbered illustrative embodiment of the invention 12, where the weighted prediction gains are largely similar if the gains lie within 5 to 10% of one another.

EEE22. Method according to numbered illustrative embodiment of the invention 12, also including a stage on which:

perform logarithmic scaling of the weighted prediction gains so that they take values between 0 and 1, and determine whether the gains are largely similar by checking whether the differences between the scaled weighted prediction gains are less than 0.1.

EEE23. A method of detecting a gradual illumination change and determining the global or local nature of the gradual illumination change during a transition from one picture to the next picture of a video signal, where the method includes stages on which:

create a plurality of picture frames and associated prediction reference frames;

for each frame and an associated prediction reference frame, compute quantities related to intensity and to color in a first color region;

for each frame and an associated prediction reference frame, calculate a weighted prediction gain for each of the intensity-related and color-related quantities in the first color region;

for each frame and an associated prediction reference frame, compare the weighted prediction gains with one another;

if all the weighted prediction gains are non-negative and largely similar to one another, determine that the gradual illumination change is global; and

if not all the weighted prediction gains are non-negative and largely similar to one another, determine that the gradual illumination change is local.

EEE24. Method according to numbered illustrative embodiment of the invention 23, where the quantities related to intensity include one or more of the following values: a brightness value or a luminosity value.

EEE25. Method according to numbered illustrative embodiment of the invention 23, where the quantities related to color include one or more of the following values: a chroma value or a chromaticity value.

EEE26. A method of calculating weighted prediction parameters for second and third color components based on color space conversion information and the weighted prediction parameters of a first color component during an image transition, where the method includes stages on which:

compute the weighted prediction gain and offset for the first color component; and

based on the calculated weighted prediction gain and offset of the first color component, calculate the weighted prediction gains and offsets of the second and third color components as functions of the color conversion offsets, the color conversion matrix coefficients, and the weighted prediction gain and offset of the first color component.

EEE27. Method according to numbered illustrative embodiment of the invention 26, where the weighted prediction parameters for the second and third color components include color-related gains and offsets, the weighted prediction parameters of the first color component include an intensity-related gain and offset, and the image transition comprises a gradual illumination change, where the method includes stages on which:

compute the intensity-related gain and offset;

if the intensity-related gain is not equal to one and the intensity-related offset is non-zero, equate the color-related gains to the intensity-related gain, and calculate the color-related offsets as a function of the color format conversion offsets, the intensity-related gain, the intensity-related offset, and the color conversion matrix coefficients;

if the intensity-related gain is not equal to one and the intensity-related offset is zero, equate the color-related gains to the intensity-related gain, and calculate the color-related offsets as a function of the color format conversion offsets, the intensity-related gain, and the color conversion matrix coefficients;

if the intensity-related gain is equal to one and the intensity-related offset is non-zero, set the color-related gains equal to 1, and calculate the color-related offsets as a function of the intensity-related offset and the color conversion matrix coefficients.
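A sketch of the three cases above. The function name, the default 128 conversion offsets (8-bit Y'CbCr), and the (1 - w_Y)·d offset form are assumptions based on the common zero-offset RGB fade model, since the exact formulas are not reproduced in this part of the text.

```python
def chroma_wp_params(w_y, f_y, d_cb=128.0, d_cr=128.0):
    """Derive chroma WP (gain, offset) pairs from the luma pair
    (w_y, f_y) for a gradual illumination change.

    Follows the three cases above: the (1 - w_y) * d offset form is an
    assumption based on a fade modeled in RGB with zero offset; in
    case 1 the contribution of a non-zero f_y is omitted for brevity.
    """
    if w_y != 1.0:
        # cases 1 and 2: chroma gains equal the luma gain
        w_cb = w_cr = w_y
        f_cb = (1.0 - w_y) * d_cb
        f_cr = (1.0 - w_y) * d_cr
    else:
        # case 3: unity luma gain -- keep unity chroma gains; a full
        # treatment would derive the offsets from f_y and the matrix
        w_cb = w_cr = 1.0
        f_cb = f_cr = 0.0
    return (w_cb, f_cb), (w_cr, f_cr)

print(chroma_wp_params(0.5, 0.0))  # mid-fade: chroma pulled toward 128
```

The intuition: halving the RGB values halves Y but pulls Cb/Cr halfway toward the neutral value 128, which is exactly a gain of 0.5 plus an offset of 64.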

EEE28. Method according to numbered illustrative embodiment of the invention 27, where the intensity-related gain and offset comprise the brightness gain and offset.

EEE29. Method according to numbered illustrative embodiment of the invention 27, where the color-related gains and offsets comprise the chroma gains and offsets.

EEE30. Method according to numbered illustrative embodiment of the invention 27, where the color space conversion is a conversion from RGB to Y'CbCr, and where, if the intensity-related gain is not equal to one and the intensity-related offset is zero, the color-related offsets are computed by the following formula:

,

where the quantities in the formula are the chroma offsets, the color conversion matrix coefficients, the brightness gain, and the color format conversion offsets.

EEE31. Method according to numbered illustrative embodiment of the invention 27, where the color space conversion is a conversion from RGB to Y'CbCr, and where, if the intensity-related gain is equal to one and the intensity-related offset is non-zero, the color-related offsets are computed by the following formula:

,

where the quantities in the formula are the chroma offsets, the brightness offset, and the color conversion matrix coefficients.

EEE32. Method according to numbered illustrative embodiment of the invention 27, where the color space conversion is a conversion from RGB to Y'CbCr, and where, if the intensity-related gain is not equal to one and the intensity-related offset is non-zero, the color-related offsets are computed by the following formula:

,

where the quantities in the formula are the chroma offsets, the color conversion matrix coefficients, the brightness gain, the brightness offset, and the color format conversion offsets.

EEE33. Method according to numbered illustrative embodiment of the invention 27, where the color space conversion is a conversion from RGB to Y'CbCr, where the intensity-related gain and offset are the brightness gain and offset, and where the color-related gains and offsets comprise the chroma gains and offsets, with the color format conversion offsets, the brightness offset, and the brightness gain as above; and where, if the brightness offset is non-zero and the brightness gain is not equal to one, the chroma gains are set approximately equal to the brightness gain and the chroma offsets are calculated according to the following formula:

,

where the quantities in the formula are the chroma offsets.

EEE34. Method according to numbered illustrative embodiment of the invention 33, where the computed brightness offset and brightness gain are not consistent with each other, and where the method also includes selecting one of the possibilities A, B, C or D:

possibility A: select the brightness gain as credible, and calculate the brightness offset according to the following formula:

,

validate the result;

equate the chroma gains to the brightness gain;

compute the chroma offsets according to the following formula:

;

possibility B: select the brightness offset as credible, and calculate the brightness gain according to the following formula:

;

validate the result;

equate the chroma gains to the brightness gain;

compute the chroma offsets according to the following formula:

;

possibility C: select the brightness gain as credible, and set the brightness offset equal to zero;

equate the chroma gains to the brightness gain;

compute the chroma offsets according to the following formula:

,

where the quantities in the formula are the color conversion matrix coefficients;

possibility D: select the brightness offset as credible, and set the brightness gain equal to one;

equate the chroma gains to the brightness gain;

compute the chroma offsets according to the following formula:

,

where the quantities in the formula are the color conversion matrix coefficients.

EEE35. Method according to numbered illustrative embodiment of the invention 34, where one of the possibilities A, B, C or D is chosen so as to satisfy a selected criterion.

EEE36. Method according to numbered illustrative embodiment of the invention 35, where the selected criterion involves minimizing or maximizing a metric or determining whether the calculated gains and offsets lie within a selected range.

EEE37. Method according to numbered illustrative embodiment of the invention 35, where the selected criterion is based on a single chroma component or on both chroma components.

EEE38. Method according to numbered illustrative embodiment of the invention 27, where the intensity-related and color-related gains and offsets are computed from data for an image frame.

EEE39. Method according to numbered illustrative embodiment of the invention 27, where the intensity-related and color-related gains and offsets are calculated based on data for a selected portion of an image frame.

EEE40. Method according to numbered illustrative embodiment of the invention 27, where the color space conversion is a conversion from RGB to Y'CbCr in accordance with the ITU/SMPTE or JFIF technical specifications, where the intensity-related gain and offset are the brightness gain and offset, and where the color-related gains and offsets comprise the chroma gains and offsets, where the method includes stages on which:

if the brightness gain is not equal to one, equate the chroma gains to the brightness gain and calculate the chroma offsets according to the following formula:

,

where the quantities in the formula are the chroma offsets and the brightness gain; and

if the brightness gain is equal to one, equate the chroma offsets to zero and the chroma gains to one.

EEE41. Method according to numbered illustrative embodiment of the invention 27, where a transition occurs in the image, and where the method includes stages on which:

calculate the intensity-related brightness gain and offset; and

equate the color-related gains to the brightness gain.

EEE42. Method according to numbered illustrative embodiment of the invention 27, where the weighted prediction parameters of the first color component include first values of a pair of chroma gains and offsets, and the weighted prediction parameters for the second and third color components include second values of the pair of chroma gains and offsets and a brightness gain and a brightness offset.

EEE43. A method of calculating color-related parameters based on intensity-related parameters and color space conversion information in the event of a flash in the image, where the method includes stages on which:

calculate the intensity-related gain and offset;

if the intensity-related gain is not equal to one and the intensity-related offset is non-zero, equate the color-related gains to the intensity-related gain, and calculate the color-related offsets as a function of the color format conversion offsets, the intensity-related gain, the intensity-related offset, and the color conversion matrix coefficients;

if the intensity-related gain is not equal to one and the intensity-related offset is zero, equate the color-related gains to the intensity-related gain, and calculate the color-related offsets as a function of the color format conversion offsets, the intensity-related gain, and the color conversion matrix coefficients; and

if the intensity-related gain is equal to one or close to one, set the color-related gains equal to 1, and calculate the color-related offsets as a function of the intensity-related offset and the color conversion matrix coefficients.

EEE44. Method according to numbered illustrative embodiment of the invention 43, where the intensity-related gain and offset comprise the brightness gain and offset.

EEE45. Method according to numbered illustrative embodiment of the invention 43, where the color-related gains and offsets comprise the chroma gains and offsets.

EEE46. Method according to numbered illustrative embodiment of the invention 43, where, if the intensity-related gain is close to one, calculating the color-related offsets involves calculating them based on the color of the dominant light source of the flash in the image.

EEE47. Method according to numbered illustrative embodiment of the invention 43, where the color space conversion is a conversion from RGB to Y'CbCr, where the intensity-related gain and offset are the brightness gain and offset, and the color-related gains and offsets comprise the chroma gains and offsets, and where, if the intensity-related gain is equal to one or close to one, computing the chroma offsets includes selecting at least one of the following calculations:

and;

and;

and; and

and,

where the quantities in the formulas are the color conversion matrix coefficients and the chroma offsets.

EEE48. Method according to numbered illustrative embodiment of the invention 43, where the color space conversion is a conversion from RGB to Y'CbCr, where the intensity-related gain and offset are the brightness gain and offset, and the color-related gains and offsets comprise the chroma gains and offsets, and where, if the intensity-related gain is equal to one or close to one, computing the chroma offsets includes:

assuming that the image flash consists of white light; and

computing the chroma offsets f_Cb and f_Cr as follows:

f_Cb = 0 and f_Cr = 0.

EEE49. Method according to numbered illustrative embodiment of the invention 43, where the color space conversion is a conversion from RGB to Y'CbCr, where the intensity-related gain and offset are the brightness gain and offset, and the color-related gains and offsets comprise the chroma gains and offsets, and where, if the intensity-related gain is equal to one or close to one, computing the chroma offsets includes:

assuming that the image flash consists of blue light; and

computing the chroma offsets as follows:

and.

and,

where the quantities in the formulas are the color conversion matrix coefficients.

EEE50. Method according to numbered illustrative embodiment of the invention 43, where the color space conversion is a conversion from RGB to Y'CbCr, where the intensity-related gain and offset are the brightness gain and offset, and the color-related gains and offsets comprise the chroma gains and offsets, and where, if the intensity-related gain is equal to one or close to one, computing the chroma offsets includes:

assuming that the image flash consists of green light; and

computing the chroma offsets as follows:

and,

where the quantities in the formula are the color conversion matrix coefficients.

EEE51. Method according to numbered illustrative embodiment of the invention 43, where the color space conversion is a conversion from RGB to Y'CbCr, where the intensity-related gain and offset are the brightness gain and offset, and the color-related gains and offsets comprise the chroma gains and offsets, and where, if the intensity-related gain is equal to one or close to one, computing the chroma offsets includes:

assuming that the image flash consists of red light; and

computing the chroma offsets as follows:

and,

where the quantities in the formula are the color conversion matrix coefficients.

EEE52. Method according to numbered illustrative embodiment of the invention 47, where more than one calculation is selected, and the result of one of the calculations is chosen based on a selected criterion.

EEE53. Method according to numbered illustrative embodiment of the invention 47, where the calculation is selected on the basis of information relating to a previous frame.

EEE54. Method of detecting gradual changes in luminance in the transition from one scene to the next scene of the video signal, where the method includes the following steps:

step A: create multiple frames of video signal;

step B: select the current frame from multiple frames;

step C: compute a set of properties for one or more color space components of the current frame, based on frame values for one or more color components in frames preceding the current frame and frames following the current frame;

step D: compute the set of properties for one or more color space components of the frame preceding the current frame, based on frame values for one or more color components in frames preceding the previous frame and frames following the previous frame; and

step E: compare the set of properties for one or more color space components of the current frame with the set of properties for one or more color space components of the previous frame in order to determine whether the current frame is an end frame of a gradual illumination change with increasing or decreasing frame value, or whether the frame preceding the current frame is a start frame of a gradual illumination change with increasing or decreasing frame value.

EEE55. Method according to numbered illustrative embodiment of the invention 54, where frame m denotes a selected frame from the plurality of frames, frame m-1 denotes a frame from the plurality of frames preceding frame m, frame m-2 denotes a frame preceding frame m-1, frame m+1 denotes a frame following frame m, and frame m+2 denotes a frame following frame m+1, and where calculating the set of properties for one or more components involves calculating the following properties:

property A, where the frame value for frame m under property A is equal to the average of the frame values for frames m+1 and m-1;

property B, where the frame value for frame m under property B is equal to twice the frame value for frame m+1 minus the frame value for frame m+2;

property C, where the frame value for frame m under property C is equal to a dividend under property C divided by a divisor under property C, where the dividend under property C is equal to twice the frame value for frame m-1 plus the frame value for frame m+2, and the divisor under property C is equal to 3;

property D, where the frame value for frame m under property D is equal to twice the frame value for frame m-1 minus the frame value for frame m-2; and

property E, where the frame value for frame m under property E is equal to a dividend under property E divided by a divisor under property E, where the dividend under property E is equal to twice the frame value for frame m+1 plus the frame value for frame m-2, and the divisor under property E is equal to 3.
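Properties A-E can be written directly from the definitions above; the helper name is an assumption. The demonstration uses a linear ramp of frame values, for which every property reproduces the actual frame value, which is what makes these properties useful for detecting linear fades.

```python
def frame_properties(v, m):
    """Compute properties A-E for frame m from per-frame values v
    (e.g. the DC luma of each frame); indices m-2..m+2 must be valid."""
    return {
        "A": (v[m + 1] + v[m - 1]) / 2.0,   # average of the two neighbours
        "B": 2.0 * v[m + 1] - v[m + 2],     # extrapolation from later frames
        "C": (2.0 * v[m - 1] + v[m + 2]) / 3.0,
        "D": 2.0 * v[m - 1] - v[m - 2],     # extrapolation from earlier frames
        "E": (2.0 * v[m + 1] + v[m - 2]) / 3.0,
    }

# Linear fade: every property agrees with the actual frame value (30).
print(frame_properties([10, 20, 30, 40, 50], 2))
```

Deviations between these predicted values and each other (as tested in the conditions of the following embodiment) indicate where a linear ramp begins or ends.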

EEE56. Method according to numbered illustrative embodiment of the invention 54, where comparing the set of properties for one or more color space components of the current frame with the set of properties for one or more color space components of the previous frame includes testing at least one of the following conditions:

condition 1, where condition 1 is satisfied if the frame value under property C for the current frame is greater than the frame value under property A for the current frame, the frame value under property A for the current frame is greater than the frame value under property B for the current frame, the frame value under property C for the previous frame is greater than the frame value under property A for the previous frame, and the frame value under property A for the previous frame is greater than the frame value under property B for the previous frame;

condition 2, where condition 2 is satisfied if the frame value under property C for the current frame is less than the frame value under property A for the current frame, the frame value under property A for the current frame is less than the frame value under property B for the current frame, the frame value under property C for the previous frame is less than the frame value under property A for the previous frame, and the frame value under property A for the previous frame is less than the frame value under property B for the previous frame;

condition 3, where condition 3 is satisfied if the frame value under property E for the previous frame is greater than the frame value under property A for the previous frame, the frame value under property A for the previous frame is greater than the frame value under property D for the previous frame, the frame value under property E for the current frame is greater than the frame value under property A for the current frame, and the frame value under property A for the current frame is greater than the frame value under property D for the current frame; and

condition 4, where condition 4 is satisfied if the frame value under property E for the previous frame is less than the frame value under property A for the previous frame, the frame value under property A for the previous frame is less than the frame value under property D for the previous frame, the frame value under property E for the current frame is less than the frame value under property A for the current frame, and the frame value under property A for the current frame is less than the frame value under property D for the current frame;

and where:

if condition 1 is satisfied, then the current frame is denoted as the end frame of the gradual changes in luminosity with decreasing value of the frame;

if condition 2 is satisfied, then the current frame is denoted as the end frame of the gradual changes in luminosity with increasing value of the frame;

if condition 3 is satisfied, then the immediately preceding frame is denoted as the start frame of the gradual changes in luminosity with increasing value of the frame; and

if condition 4 is satisfied, then the immediately preceding frame is denoted as the start frame of the gradual changes in luminosity with decreasing value of the frame.
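The four conditions can be sketched as follows. The property computation repeats the definitions of EEE55; the pairing of properties C, A, B with conditions 1-2 and E, A, D with conditions 3-4 follows one reading of the claim text and is an assumption.

```python
def frame_properties(v, m):
    """Properties A-E of EEE55 for frame m (indices m-2..m+2 valid)."""
    return {
        "A": (v[m + 1] + v[m - 1]) / 2.0,
        "B": 2.0 * v[m + 1] - v[m + 2],
        "C": (2.0 * v[m - 1] + v[m + 2]) / 3.0,
        "D": 2.0 * v[m - 1] - v[m - 2],
        "E": (2.0 * v[m + 1] + v[m - 2]) / 3.0,
    }

def fade_boundary(cur, prev):
    """Apply conditions 1-4 to the property dicts of the current and
    previous frames; returns a label or None."""
    if cur["C"] > cur["A"] > cur["B"] and prev["C"] > prev["A"] > prev["B"]:
        return "current frame ends a fade with decreasing frame value"
    if cur["C"] < cur["A"] < cur["B"] and prev["C"] < prev["A"] < prev["B"]:
        return "current frame ends a fade with increasing frame value"
    if prev["E"] > prev["A"] > prev["D"] and cur["E"] > cur["A"] > cur["D"]:
        return "previous frame starts a fade with increasing frame value"
    if prev["E"] < prev["A"] < prev["D"] and cur["E"] < cur["A"] < cur["D"]:
        return "previous frame starts a fade with decreasing frame value"
    return None

# DC luma of a fade-out whose last decreasing frame is index 3
v = [80, 60, 40, 20, 20, 20, 20]
print(fade_boundary(frame_properties(v, 3), frame_properties(v, 2)))
```

At frame 3 the backward extrapolation B undershoots and C overshoots the neighbour average A, and the same ordering holds at frame 2, so condition 1 fires and the frame is labeled as the end of a decreasing fade.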

EEE57. Method according to numbered illustrative embodiment of the invention 54, which also includes the stages on which:

choose another frame from multiple frames;

denote the selected frame as a current frame;

repeat steps C - E;

compare the results obtained from stage E for the previous current frame with those for this current frame to determine whether the results are compatible; and

determine, on the basis of one or several frame values, whether the gradual illumination change is a gradual appearance of the image, a gradual disappearance of the image, or a cross-fade transition.

EEE58. Method according to numbered illustrative embodiment of the invention 55, where the frame value for a frame includes a DC value of the entire frame for one or more color space components.

EEE59. Method according to numbered illustrative embodiment of the invention 55, where the frame value for a frame includes a DC value of a particular part, or parts, of the frame for one or more color space components.

EEE60. Method according to numbered illustrative embodiment of the invention 55, where the frame value for a frame includes a histogram of the frame for one or more color space components.

EEE61. Method according to numbered illustrative embodiment of the invention 55, where the frame value for a frame includes a histogram of a particular part, or parts, of the frame for one or more color space components.

EEE62. Method according to numbered illustrative embodiment of the invention 55, where frame m-1 immediately precedes frame m, frame m-2 immediately precedes frame m-1, frame m+1 immediately follows frame m, and frame m+2 immediately follows frame m+1.

EEE63. Method according to numbered illustrative embodiment of the invention 55, where frame m-1 denotes a frame prior in time to frame m, frame m-2 denotes a frame prior in time to frame m-1, frame m+1 denotes a frame following frame m in time, and frame m+2 denotes a frame following frame m+1 in time.

EEE64. Method according to numbered illustrative embodiment of the invention 63, where the time interval for any of the frames m-1, m-2, m+1, m+2 is variable.

EEE65. Method according to numbered illustrative embodiment of the invention 64, where the time interval changes depending on the frame content.

EEE66. Method according to numbered illustrative embodiment of the invention 54, where the color space components include intensity-related and color-related components, and the set of properties for each frame is calculated for a brightness component.

EEE67. Method according to numbered illustrative embodiment of the invention 66, where the intensity-related components include one or more of the following values: a brightness value or a lightness value.

EEE68. Method according to numbered illustrative embodiment of the invention 66, where the color-related components include one or more of the following values: a chroma value or a chromaticity value.

EEE69. Method according to numbered illustrative embodiment of the invention 54, which also involves performing steps C, D and E for all color space components.

EEE70. Method according to numbered illustrative embodiment of the invention 54, also including: performing steps C, D and E for a first selected set of color space components and, after completion of step E, determining the conditions for a gradual illumination change based only on testing the first selected set of color space components.

EEE71. A method of detecting a transition, including stages on which:

create a plurality of frames of a video sequence and associated bidirectional prediction reference frames; and

determine whether a transition occurs based on average gain values calculated for the color space components in a first color region for the current frame and its associated bidirectional prediction reference frames, and on average gain values calculated for the color space components in a second color region for the current frame and its associated bidirectional prediction reference frames.

EEE72. Method according to numbered illustrative embodiment of the invention 71, where determining whether a transition occurs includes the stages at which:

calculate average values of the color space components in the first color region for the current frame and average values of the color space components in the first color region for the associated bidirectional prediction reference frames;

calculate average values of the color space components in the second color region for the current frame and average values of the color space components in the second color region for the associated bidirectional prediction reference frames;

determine whether each of conditions 1 to 6 is satisfied:

condition 1: the sum of the average gains over the bidirectional prediction reference frames in the first color region is equal to unity or approximately unity;

condition 2: the sum of the average gains over the bidirectional prediction reference frames in the second color region is equal to unity or approximately unity;

condition 3: the absolute value of the average gain for each bidirectional prediction reference frame in the first color region is inversely proportional to the distance of that reference frame from the current frame;

condition 4: the absolute value of the average gain for each bidirectional prediction reference frame in the second color region is inversely proportional to the distance of that reference frame from the current frame;

condition 5: the average gains in the first color region correctly model the transition in the first color region;

condition 6: the average gains in the first color region correctly model the transition in the second color region;

where, if all of conditions 1 to 6 are satisfied, the presence of a transition is determined.
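As a rough illustration only, the numeric checks in conditions 1 to 4 above might be sketched as follows; the function name, the tolerance `tol`, and the input lists are hypothetical and not part of the claimed method, and conditions 5 and 6 (which test the transition model itself) are left out:

```python
def check_cross_fade_conditions(gains_rgb, gains_ycbcr, distances, tol=0.05):
    """gains_*: average weighted prediction gains, one per bidirectional
    prediction reference frame; distances: temporal distance of each
    reference frame from the current frame. tol is an assumed tolerance."""
    # Conditions 1 and 2: gains in each color region sum to about one.
    if abs(sum(gains_rgb) - 1.0) > tol or abs(sum(gains_ycbcr) - 1.0) > tol:
        return False
    # Conditions 3 and 4: |gain| inversely proportional to the reference
    # frame's distance, i.e. |gain| * distance roughly constant.
    for gains in (gains_rgb, gains_ycbcr):
        products = [abs(g) * d for g, d in zip(gains, distances)]
        if max(products) - min(products) > tol * max(products):
            return False
    return True   # conditions 5 and 6 (model checks) are tested separately
```

For a mid-point cross-fade with equidistant references, gains of 0.5 and 0.5 pass all four checks; for unequal distances of 1 and 2, gains of 2/3 and 1/3 do as well.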

EEE73. Method according to numbered illustrative embodiment of the invention 72, where the first color region is an RGB region and the second color region is a YCbCr region, and where determining whether conditions 5 and 6 are satisfied includes determining whether the following equality holds:

,

where

a vector denotes the current frame in the RGB region,

two vectors denote the bidirectional prediction reference frames in the RGB region,

a vector denotes the current frame in the YCbCr region,

two vectors denote the bidirectional prediction reference frames in the YCbCr region, and

two values denote the average gains.

EEE74. Method according to numbered illustrative embodiment of the invention 71, where the method also includes setting the chroma gain factors equal to the brightness gain factor if a transition is present.

EEE75. A method of determining weighted prediction parameters in the presence of gradual illumination changes, where the method includes the following steps:

step A: create multiple picture frames and associated weighted prediction frames;

step B: select a color component;

step C: for each frame and associated prediction reference frame, determine the saturated regions for the selected color component within the frame and the associated prediction reference frame;

step D: for each frame and associated prediction reference frame, determine whether both frames share large regions with saturated values for the selected color component, and if there are no shared regions with saturated values, proceed to step H;

step E: for each frame and associated prediction reference frame, determine the unsaturated regions for the selected color component within the frame and its associated prediction reference frame;

step F: for each frame and associated prediction reference frame, determine whether both frames share large regions with unsaturated values for the selected color component, and if there are no shared regions with unsaturated values, proceed to step H;

step G: for each frame and associated prediction reference frame, compute the weighted prediction gains and factors based on the shared large regions with unsaturated values, optionally normalized to the same number of pixels; and

step H: for each frame and associated prediction reference frame, compute the weighted prediction gains and factors based on the entire frame.
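A minimal sketch of steps C to H for one color component might look as follows, assuming 8-bit data with illustrative saturation thresholds of 16 and 235 and a zero-offset gain model; the function name, thresholds, and the `min_frac` cutoff for "large shared regions" are assumptions, not the claimed procedure:

```python
def wp_gain_avoiding_saturation(cur, ref, low=16, high=235, min_frac=0.05):
    """cur, ref: flat lists of one color component's sample values for the
    current frame and its prediction reference frame. Returns a zero-offset
    gain for the model cur ≈ gain * ref (hypothetical names throughout)."""
    # Steps C-F: keep only positions unsaturated in BOTH frames.
    pairs = [(c, r) for c, r in zip(cur, ref)
             if low < c < high and low < r < high]
    # Steps D/F: if the shared unsaturated region is too small, fall
    # back to step H and use the entire frame.
    if len(pairs) < min_frac * len(cur):
        pairs = list(zip(cur, ref))
    # Step G: gain from the means of the shared region.
    mean_cur = sum(c for c, _ in pairs) / len(pairs)
    mean_ref = sum(r for _, r in pairs) / len(pairs)
    return mean_cur / max(mean_ref, 1e-6)
```

Fully saturated frames take the step H fallback and are estimated over the whole frame.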

EEE76. Method according to numbered illustrative embodiment of the invention 75, which also includes selecting another color component and repeating steps C to H for the selected color component.

EEE77. Method according to numbered illustrative embodiment of the invention 75, where the computation of the gains and weighted prediction factors is based on the gains and weighted prediction factors for a previous frame.

EEE78. Method according to numbered illustrative embodiment of the invention 75, where each frame and the associated prediction frame are segmented, steps A to H are performed on one or more segments of each frame or of the associated prediction reference frame, and the gains and weighted prediction factors are calculated for each segment.

EEE79. Method according to numbered illustrative embodiment of the invention 78, where motion compensation tracks the segments from one frame to the next.

EEE80. A method of determining weighted prediction parameters in the presence of gradual illumination changes, where the method includes the following steps:

step A: create multiple picture frames and associated weighted prediction frames containing color sample data;

step B: select a color component;

step C: for each frame, set a current lowest saturation value and a current highest saturation value for the selected color component based on the color region selected for the color sample data;

step D: for each associated prediction frame, set a current lowest reference saturation value and a current highest reference saturation value for the selected color component based on the color region selected for the color sample data;

step E: for each frame and associated prediction frame, estimate the weighted prediction parameters based on the current lowest saturation value, the current highest saturation value, the current lowest reference saturation value, and the current highest reference saturation value;

step F: for each associated prediction frame, calculate an updated current lowest reference saturation value and an updated current highest reference saturation value based on the estimated weighted prediction parameters;

step G: set the current lowest reference saturation value equal to the updated current lowest reference saturation value, and set the current highest reference saturation value equal to the updated current highest reference saturation value; and

step H: repeat steps D to G in successive iterations while the weighted prediction parameters for the current iteration differ from the weighted prediction parameters for the immediately preceding iteration by more than a selected value, unless the number of iterations exceeds an iteration counter value.
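The iteration in steps D to H might look roughly as follows, assuming a simple two-point parameter fit in step E, an 8-bit range of [0, 255], and the clipping-avoidance idea of EEE86; the fitting rule and all names are illustrative assumptions, not the claimed estimator:

```python
def iterative_wp_bounds(cur, ref, max_iters=20, eps=1e-3):
    """Iteratively estimate (gain w, offset f) for cur ≈ w*ref + f while
    tightening the reference saturation bounds so that no predicted value
    clips outside [0, 255]. cur, ref: flat sample lists."""
    lo_c, hi_c = min(cur), max(cur)        # step C: current-frame bounds
    lo_r, hi_r = min(ref), max(ref)        # step D: reference bounds
    w_prev, f_prev = 1.0, 0.0
    for _ in range(max_iters):
        # Step E: fit the parameters to the two bound pairs.
        w = (hi_c - lo_c) / max(hi_r - lo_r, 1e-6)
        f = lo_c - w * lo_r
        # Step F: updated reference bounds keeping w*x + f inside [0, 255].
        new_lo = max(lo_r, (0.0 - f) / w) if w > 0 else lo_r
        new_hi = min(hi_r, (255.0 - f) / w) if w > 0 else hi_r
        # Step H: stop once the parameters have converged.
        if abs(w - w_prev) < eps and abs(f - f_prev) < eps:
            break
        lo_r, hi_r, w_prev, f_prev = new_lo, new_hi, w, f   # step G
    return w, f
```

For a reference frame related to the current frame by an exact gain and offset, the loop converges to those parameters in two iterations.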

EEE81. Method according to numbered illustrative embodiment of the invention 80, where the weighted prediction parameters for iteration t include a weighted prediction gain factor and an offset factor, and the updated current lowest reference saturation value and the updated current highest reference saturation value are calculated by the following formulas:

,

where the symbols denote, respectively, the current lowest saturation value, the current highest saturation value, the updated current lowest reference saturation value, and the updated current highest reference saturation value.

EEE82. Method according to numbered illustrative embodiment of the invention 81, which additionally includes computing histograms of the frame and the associated prediction reference frame before estimating the weighted prediction parameters, where the estimation of the weighted prediction parameters is based on the computed histograms.

EEE83. Method according to numbered illustrative embodiment of the invention 81, which also includes selecting another color component and repeating steps C to H for the selected color component.

EEE84. Method according to numbered illustrative embodiment of the invention 81, where the estimation of the weighted prediction parameters is based on the weighted prediction parameters estimated in the immediately preceding iteration.

EEE85. Method according to numbered illustrative embodiment of the invention 80, where each frame and associated prediction reference picture are segmented, and steps C to H are performed on one or more segments of each frame and associated prediction reference frame.

EEE86. Method according to numbered illustrative embodiment of the invention 84, where the computation of the updated current lowest reference saturation value and the updated current highest reference saturation value based on the estimated weighted prediction parameters includes calculating these values so as to ensure that no pixel values are saturated or clipped as a result of weighted prediction.

EEE87. Method according to numbered illustrative embodiment, the 84, where the selected color region includes the original color space.

EEE88. Method according to numbered illustrative embodiment of the invention 84, where the selected color region includes a second color region obtained by conversion from a first color region, and where the estimated weighted prediction parameters include the weighted prediction parameters for the first color region.

EEE89. Method according to numbered illustrative embodiment of the invention 84, where several color regions are obtained by conversion from the first color region, the selected color region includes a choice of any of the several color regions, and where the estimated weighted prediction parameters are based on each of the selected color regions.

EEE90. A method of estimating weighted prediction gains and offsets during a transition with gradual illumination changes from one picture to the next picture of a video signal, where the method includes the stages at which:

create a current frame and an associated prediction reference frame of a picture in the video signal;

for each current frame and associated prediction reference frame, calculate the values of the color components in one of the color regions, where three values denote the first, second, and third color components for the current frame, and three corresponding values denote the first, second, and third color components for the associated prediction reference frame;

set the weighted prediction gain factors equal for all color components, where w denotes the weighted prediction gain factor value that is equal for all color components;

set the weighted prediction offsets for two of the color components equal to each other, where one value denotes the offset for the first color component and another value denotes the weighted prediction offset value that is equal for the other two color components;

solve the following formula for the weighted prediction gain factor w and the weighted prediction offsets:

.

EEE91. Method according to numbered illustrative embodiment of the invention 90, where the color region is a YCbCr region, and the weighted prediction offsets for the two color components that are equal to one another include the weighted prediction offsets for the chrominance color components.
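Under the YCbCr reading of EEE91, the EEE90 system reduces to three equations in three unknowns: Y_cur = w·Y_ref + f_Y, Cb_cur = w·Cb_ref + f_C, Cr_cur = w·Cr_ref + f_C. Subtracting the Cr equation from the Cb equation eliminates the shared chroma offset and yields w directly. A hypothetical sketch (variable names assumed, inputs taken to be per-frame component means):

```python
def solve_wp_common_gain(y_c, cb_c, cr_c, y_r, cb_r, cr_r):
    """Solve y_c = w*y_r + f_y, cb_c = w*cb_r + f_c, cr_c = w*cr_r + f_c
    for the common gain w and the offsets f_y, f_c."""
    denom = cb_r - cr_r
    if abs(denom) < 1e-6:
        w = 1.0                      # degenerate chroma: assume unity gain
    else:
        # Subtracting the Cr equation from the Cb one eliminates f_c.
        w = (cb_c - cr_c) / denom
    f_y = y_c - w * y_r
    f_c = cb_c - w * cb_r
    return w, f_y, f_c
```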

EEE92. Method according to numbered illustrative embodiment of the invention 90, where the values of the color components are calculated for one or more segments of the current frame and the associated prediction reference frame.

EEE93. A method of estimating weighted prediction gains and offsets during a transition with gradual illumination changes from one picture to the next picture of a video signal, where the method includes the stages at which:

create a current frame and an associated prediction reference frame of a picture in the video signal;

for each current frame and associated prediction reference frame, calculate the values of the color components in one of the color regions, where three values denote the first, second, and third color components for the current frame, and three corresponding values denote the first, second, and third color components for the associated prediction reference frame;

set the weighted prediction offsets equal for all color components, where f denotes the weighted prediction offset value that is equal for all color components;

set the weighted prediction gain factors for two of the color components equal to each other, where one value denotes the gain factor for the first color component and another value denotes the weighted prediction gain factor value that is equal for the other two color components;

solve the following formula for the weighted prediction gain factors and the weighted prediction offset f:

.

EEE94. Method according to numbered illustrative embodiment of the invention 93, where the color region is an RGB color region, and the weighted prediction gain factors for the two color components that are equal to each other include the weighted prediction gain factors for the green and blue components.
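Dually, under the RGB reading of EEE94 the EEE93 system is R_cur = w1·R_ref + f, G_cur = w2·G_ref + f, B_cur = w2·B_ref + f; here subtracting the B equation from the G equation eliminates the shared offset. A hypothetical sketch (names assumed, inputs taken to be per-frame component means):

```python
def solve_wp_common_offset(r_c, g_c, b_c, r_r, g_r, b_r):
    """Solve r_c = w1*r_r + f, g_c = w2*g_r + f, b_c = w2*b_r + f for the
    gains w1, w2 and the common offset f."""
    denom = g_r - b_r
    if abs(denom) < 1e-6:
        w2 = 1.0                     # degenerate case: assume unity gain
    else:
        # Subtracting the B equation from the G one eliminates f.
        w2 = (g_c - b_c) / denom
    f = g_c - w2 * g_r
    w1 = (r_c - f) / r_r if abs(r_r) > 1e-6 else 1.0
    return w1, w2, f
```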

EEE95. Method according to numbered illustrative embodiment of the invention 93, where the color region is an RGB color region, and the weighted prediction gain factors for the two color components that are equal to each other include the weighted prediction gain factors for the red and blue components.

EEE96. Method according to numbered illustrative embodiment of the invention 93, where the values of the color components are calculated for one or more segments of the current frame and the associated prediction reference frame.

EEE97. A method of converting weighted prediction parameters from a first color region to a second color region, where the conversion from the first color region to the second color region is nonlinear, where the method includes a stage at which:

calculate the weighted prediction parameters for the second color region for one or more frames in the second region based on the conversion expression from the first color region to the second color region.

EEE98. Method according to numbered illustrative embodiment of the invention 97, where the weighted prediction parameters are converted in the presence of an image transition.

EEE99. Method according to numbered illustrative embodiment of the invention 97, where the conversion expression from the first color region to the second color region includes an exponential transformation, and where the method also includes a stage at which:

calculate the weighted prediction parameters for the first color region for one or more frames in the first color region to obtain a gain value for the first color region, and

where the calculation of the weighted prediction parameters for the second color region includes calculating a gain value for the second color region equal to the gain value for the first color region raised to the power of the exponent of the conversion.

EEE100. Method according to numbered illustrative embodiment of the invention 99, where the weighted prediction parameters are converted in the presence of a gradual illumination change.

EEE101. Method according to numbered illustrative embodiment of the invention 99, where the exponential transformation includes gamma correction, where γ denotes the exponent, the gain value in the first color region is denoted as w, and the gain value in the second color region is given as w^γ.

EEE102. Method according to numbered illustrative embodiment of the invention 97, where the conversion expression from the first color region to the second color region includes a transformation into logarithmic space, and the image transition involves a gradual illumination change, where the method also includes a stage at which:

calculate the weighted prediction parameters for the first color region for one or more frames in the first color region in order to obtain a gain value for the first color region, and

where the calculation of the weighted prediction parameters for the second color region includes calculating an offset value for the second color region equal to the logarithm of the gain value for the first color region.
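EEE99 to EEE104 can be summarized for a zero-offset gain w estimated in a first (linear) color region: an exponential conversion with exponent γ turns it into a gain of w^γ with zero offset, while a logarithmic conversion turns it into a unity gain with offset log(w). A sketch under those assumptions (the value γ = 2.0 used in the check below is illustrative only):

```python
import math

def convert_wp_gain(w, gamma):
    """Convert a zero-offset weighted prediction gain w from a first color
    region into an exponential (gamma) region and a logarithmic region."""
    gain_gamma = w ** gamma      # exponential region: gain w**gamma, offset 0
    offset_log = math.log(w)     # logarithmic region: gain 1, offset log(w)
    return gain_gamma, offset_log
```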

EEE103. Method according to numbered illustrative embodiment of the invention 98, where the conversion expression from the first color region to the second color region includes an exponential transformation, the image transition involves a gradual illumination change, and where the computation of the weighted prediction parameters includes calculating a gain value for the second color region that is not equal to one and an offset value for the second color region that is equal to zero.

EEE104. Method according to numbered illustrative embodiment of the invention 98, where the conversion expression from the first color region to the second color region includes a transformation into logarithmic space, the image transition involves a gradual illumination change, and where the calculation of the weighted prediction parameters for the second color region includes calculating a gain value for the second color region equal to one and an offset for the second color region that is not equal to zero.

EEE105. Method according to numbered illustrative embodiment of the invention 97, where the image transition includes a cross-fade, the conversion expression from the first color region to the second color region includes an exponential transformation, and where the method also includes a stage at which:

calculate the weighted prediction parameters for the first color region, which include a first reference gain value for a first reference frame and a second reference gain value for a second reference frame, and

where the calculation of the weighted prediction parameters for the second color region includes calculating gain values for the second color region based on the first reference gain value and the second reference gain value.

EEE106. Method according to numbered illustrative embodiment of the invention 105, where the calculation of the weighted prediction parameters for the second color region includes two binomial expansions: one based on the first reference gain value and one or more values of the first reference frame, and one based on the second reference gain value and one or more values of the second reference frame.

EEE107. Method according to numbered illustrative embodiment of the invention 97, where the image transition includes a cross-fade, the conversion expression from the first color region to the second color region includes a transformation into logarithmic space, and the method also includes a stage at which:

calculate the weighted prediction parameters for the first color region, which include a first reference gain value for a first reference frame and a second reference gain value for a second reference frame; and

where the calculation of the weighted prediction parameters for the second color region includes calculating gain values for the second color region based on the first reference gain value and the second reference gain value.

EEE108. Method according to numbered illustrative embodiment of the invention 107, where the offsets in the weighted prediction parameters for the second color region are equal to the logarithms of the first reference gain value and the second reference gain value.

EEE109. Method according to numbered illustrative embodiment of the invention 97, which also includes stages at which:

compress content in a first layer using the first color region, and

compress content in a second layer using the second color region,

whereby the weighted prediction parameters calculated for the first layer are converted into weighted prediction parameters for the second layer.

EEE110. Method according to numbered illustrative embodiment of the invention 109, where the first layer includes a base layer, and the second layer includes an enhancement layer.

EEE111. Method according to numbered illustrative embodiment of the invention 110, where the weighted prediction parameters for the first layer are passed to the decoder or the encoder for the second layer.

EEE112. An encoder designed to encode a video signal according to the method described in one or more of numbered illustrative embodiments 1 to 111, inclusive.

EEE113. A device for video coding according to the method described in one or more numbered illustrative embodiments 1 to 111, inclusive.

EEE114. A system for video coding according to the method described in one or more numbered illustrative embodiments 1 to 111, inclusive.

EEE115. A decoder designed to decode the video signal according to the method described in one or more numbered illustrative embodiments 1 to 111, inclusive.

EEE116. Decoder according to numbered illustrative embodiment of the invention 115, where the weighted prediction parameters are not passed to the decoder, and where the weighted prediction parameters are obtained from the image information sent to the decoder.

EEE117. A device for decoding video signal according to the method listed in one or more numbered illustrative embodiments 1 to 111, inclusive.

EEE118. A device according to numbered illustrative embodiment of the invention 117, where the weighted prediction parameters are not passed to the device, and where the weighted prediction parameters are obtained from the image information sent to the device.

EEE119. A system for decoding video signals according to the method listed in one or more numbered illustrative embodiments 1 to 111, inclusive.

EEE120. System according to numbered illustrative embodiment of the invention 119, where the weighted prediction parameters are not passed to the system, and where the weighted prediction parameters are obtained from the image information sent to the system.

EEE121. A machine-readable storage medium containing a set of commands that cause a computer to execute the method described in one or more of numbered illustrative embodiments 1 to 111, inclusive.

EEE122. Using the method listed in one or more numbered illustrative embodiments 1 to 111, inclusive, for encoding the video signal.

EEE123. Use of the method described in one or more of numbered illustrative embodiments 1 to 111, inclusive, to decode a video signal.

EEE124. A machine-readable data storage medium comprising a set of commands that cause, control, program, or configure one or more of the following devices: a computer or an integrated circuit (IC), to perform the method described in one or more of numbered illustrative embodiments 1 to 111, inclusive.

EEE125. An IC device configured, programmed, controlled, or constructed to perform the process described in one or more of numbered illustrative embodiments 1 to 111, inclusive.

[0161] It should be understood that the disclosure is not limited to particular methods or systems, which can, of course, vary. It should also be understood that the terminology used in this disclosure serves only to describe particular embodiments and is not intended to be limiting. As used in this description and the appended claims, the singular forms include plural referents unless the content clearly dictates otherwise. The term "plurality" includes two or more referents unless the content clearly dictates otherwise. Unless defined otherwise, all technical and scientific terms used herein have the same meanings as commonly understood by one of ordinary skill in the art to which this disclosure belongs.

[0162] The examples set forth above are provided to give those of ordinary skill in the art a complete disclosure and description of how to make and use embodiments of the methods for improving the quality of sampled and multiplexed image and video data of the disclosure, and are not intended to limit the scope of what the inventors regard as their disclosure. Modifications of the above-described embodiments may be used by those skilled in the art, and such modifications are intended to be within the scope of the following claims.

[0163] Several embodiments of the disclosure have been described. However, it should be understood that various modifications may be made without departing from the spirit and scope of the present disclosure. Accordingly, other embodiments are within the scope of the claims that follow the reference materials below.

REFERENCE MATERIALS

[1] Ford and A. Roberts, "Color Space Conversions", http://www.poynton.com/PDFs/coloureq.pdf.

[2] K. Kamikura, H. Watanabe, H. Jozawa, H. Kotera, and S. Ichinose, "Global Brightness-Variation Compensation for Video Coding," IEEE Transactions on Circuits and Systems for Video Technology, vol. 8, no. 8, Dec. 1998, pp. 988-1000.

[3] Y. Kikuchi and T. Chujoh, "Interpolation coefficient adaptation in multi-frame interpolative prediction", Joint Video Team of ISO/IEC MPEG & ITU-T VCEG, JVT-C103, Mar. 2002.

[4] H. Kato and Y. Nakajima, "Weighting factor determination algorithm for H.264/MPEG-4 AVC weighted prediction," Proc. IEEE 6th Workshop on Multimedia Signal Proc., Siena, Italy, Oct. 2004.

[5] J. M. Boyce, "Weighted prediction in the H.264/MPEG-4 AVC video coding standard," Proc. IEEE International Symposium on Circuits and Systems, Vancouver, Canada, May 2004, vol. 3, pp. 789-792.

[6] P. Yin, A. M. Tourapis, and J. M. Boyce, "Localized weighted prediction for video coding," Proc. IEEE International Symposium on Circuits and Systems, May 2005, vol. 5, pp. 4365-4368.

In addition, all patents and publications mentioned in the description may be indicative of the level of skill of those in the art to which this disclosure pertains. All references cited in this disclosure are incorporated by reference to the same extent as if each reference had been incorporated by reference in its entirety individually.

1. A method of detecting a gradual illumination change and determining the global or local nature of the gradual illumination change during a transition from one picture to the next picture of a video signal, where the method includes the stages at which:
create multiple picture frames and associated prediction reference frames;
for each frame and associated prediction reference frame, calculate or obtain one or more values related to intensity and one or more values related to color in a first color region, where the calculation of the one or more values related to intensity and the one or more values related to color includes the color components;
for each frame and associated prediction reference frame, calculate a weighted prediction gain for each calculated color component value in the first color region;
if all of the weighted prediction gains are non-negative and largely similar to each other, determine that a predominantly global transition with zero offset occurs in a second color region; and
if not all of the weighted prediction gains are non-negative and largely similar to each other, determine that at least one of the following does not occur: a global transition with a gradual illumination change; a global transition with a gradual illumination change with zero offset; or a global transition with a gradual illumination change with zero offset in the second color region.
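As a hypothetical sketch of the test in claim 1 (claim 10 below suggests a 5-10% similarity window, so a tolerance of 7.5% is assumed here; the function and variable names are illustrative, and the gains are computed as simple ratios of per-component means under a zero-offset model):

```python
def classify_transition(cur_means, ref_means, rel_tol=0.075):
    """cur_means, ref_means: per-component mean values (e.g. Y, Cb, Cr) of
    the current frame and its prediction reference frame. rel_tol is an
    assumed similarity window inside the 5-10% range of claim 10."""
    gains = [c / max(r, 1e-6) for c, r in zip(cur_means, ref_means)]
    nonneg = all(g >= 0.0 for g in gains)
    similar = (max(gains) - min(gains)) <= rel_tol * max(gains)
    if nonneg and similar:
        return "global zero-offset transition"
    return "no global zero-offset gradual illumination change"
```

Component means that all shrink by the same factor are classified as a global zero-offset transition; a component whose gain deviates beyond the window is not.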

2. A method according to claim 1, characterized in that the values related to intensity include one or more of the following values: a brightness value or a luminosity value.

3. A method according to claim 1, characterized in that the values related to color include one or more of the following values: a color value or a chromatic value.

4. A method according to claim 1, characterized in that the first color region is a YCbCr region, and the weighted prediction gains are calculated from values that include:
luminance component values of the frame and of the prediction reference frame;
chrominance component values of the frame;
chrominance component values of the prediction reference frame; and offset values used in color format conversion.

5. A method according to claim 1, characterized in that the values related to intensity and related to color include average values related to intensity and related to color.

6. A method according to claim 1, characterized in that the values related to intensity and related to color are calculated on the basis of histogram information.

7. A method according to claim 1, characterized in that the multiple frames are a sequence of frames.

8. A method according to claim 1, characterized in that the multiple frames include a portion of a set of frames that indicate the presence of an illumination change.

9. A method according to claim 1, characterized in that the color space matrix defining the conversion from the first color region to the second color region is linear.

10. A method according to claim 1, characterized in that the weighted prediction gains are largely similar if the gains are within 5-10% of one another.

11. A method according to claim 1, characterized in that it includes the stages at which:
perform logarithmic scaling of the weighted prediction gains so that they have values between 0 and 1, and determine whether the gains are largely similar by calculating whether the difference between the weighted prediction gain values is less than 0.1.

12. A method of detecting a gradual illumination change and determining the global or local nature of the gradual illumination change during a transition from one picture to the next picture of a video signal, where the method includes the stages at which:
create multiple picture frames and associated prediction reference frames;
for each frame and associated prediction reference frame, compute values related to intensity and related to color in a first color region;
for each frame and associated prediction reference frame, compute weighted prediction parameters related to intensity;
for each frame and associated prediction reference frame, calculate weighted prediction gains based on the computed values related to intensity and related to color and on the computed weighted prediction parameters related to intensity;
if all of the weighted prediction gains are non-negative and largely similar to each other, determine that a predominantly global transition with zero offset occurs in a second color region; and
if not all of the weighted prediction gains are non-negative and largely similar to each other, check whether a local transition occurs.

13. A method of determining the global or local nature of a gradual illumination change in a transition from one picture to the next picture of a video signal, the method comprising the steps of:
creating a plurality of frames of a picture and associated prediction reference frames;
for each frame and associated prediction reference frame, calculating intensity-related and color-related values in a first color domain;
for each frame and associated prediction reference frame, calculating a weighted prediction gain for each of the intensity-related and color-related values in the first color domain;
for each frame and associated prediction reference frame, comparing the weighted prediction gain values with one another;
if all the weighted prediction gains are non-negative and largely similar to one another, determining that the gradual illumination change is global; and
if not all the weighted prediction gains are non-negative and largely similar to one another, determining that the gradual illumination change is local.
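The gain test of claims 12 and 13 can be sketched as follows. This is a minimal illustration, not the claimed method itself: the per-component DC averages as the intensity- and color-related values, the zero-offset gain model (frame DC divided by reference DC), and the relative similarity tolerance are all assumptions made for the example.

```python
def classify_transition(frame_dc, ref_dc, tol=0.05):
    """Classify a transition as a predominantly global zero-offset
    transition or as a candidate local transition, from per-component
    weighted prediction gains.

    frame_dc, ref_dc: dicts of per-component DC averages, e.g. keys
    'Y', 'Cb', 'Cr' (an illustrative layout; the claims only require
    intensity-related and color-related values in a color domain).
    """
    gains = {}
    for comp in frame_dc:
        # Zero-offset weighted prediction model: frame ~ gain * reference.
        gains[comp] = frame_dc[comp] / ref_dc[comp] if ref_dc[comp] else 0.0

    values = list(gains.values())
    all_non_negative = all(g >= 0.0 for g in values)
    # "Largely similar": every gain within a relative tolerance of the mean.
    mean_gain = sum(values) / len(values)
    similar = all(abs(g - mean_gain) <= tol * abs(mean_gain) for g in values)

    if all_non_negative and similar:
        return "global zero-offset transition"
    return "check for local transition"
```

For instance, a fade that scales every component by the same factor (all gains 0.9) is classified as a global zero-offset transition, while dissimilar gains across components trigger the local-transition check.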

14. A method of detecting gradual changes in luminance in a transition from one scene to the next scene of a video signal, the method comprising the following steps:
step A: creating a plurality of frames of the video signal;
step B: selecting a current frame from the plurality of frames;
step C: calculating a set of properties for one or more color space components of the current frame based on frame values of one or more color components of the frame preceding the current frame and of the frame following the current frame, the frame values of the one or more color components comprising DC values of the one or more color components;
step D: calculating a set of properties for one or more color space components of the frame preceding the current frame based on frame values of one or more color components of the frame preceding the preceding frame and of the frame following the preceding frame, the frame values of the one or more color components comprising DC values of the one or more color components; and
step E: comparing the set of properties for the one or more color space components of the current frame with the set of properties for the one or more color space components of the preceding frame to determine whether the current frame is an end frame of a gradual illumination change with increasing or decreasing frame value, or whether the frame preceding the current frame is a start frame of a gradual illumination change with increasing or decreasing frame value;
where frame m denotes the selected frame of the plurality of frames, frame m-1 denotes the frame of the plurality of frames preceding frame m, frame m-2 denotes the frame of the plurality of frames preceding frame m-1, frame m+1 denotes the frame of the plurality of frames following frame m, and frame m+2 denotes the frame of the plurality of frames following frame m+1, and where calculating the set of properties for the one or more components comprises calculating the following properties:
property A, where the frame value for frame m under property A equals the average of the frame values for frame m+1 and frame m-1;
property B, where the frame value for frame m under property B equals twice the frame value for frame m+1 minus the frame value for frame m+2;
property D, where the frame value for frame m under property D equals twice the frame value for frame m-1 minus the frame value for frame m-2; and
property E, where the frame value for frame m under property E equals a property E dividend divided by a property E divisor, the property E dividend being equal to twice the frame value for frame m+1 plus the frame value for frame m-2, and the property E divisor being equal to 3; and
where comparing the set of properties for the one or more color space components of the current frame with the set of properties for the one or more color space components of the preceding frame includes testing at least one of the following conditions:
condition 1, where condition 1 is satisfied if the frame value under property E for the current frame is greater than the frame value under property A for the current frame, and the frame value under property A for the current frame is greater than the frame value under property B for the current frame, and the frame value under property E for the preceding frame is greater than the frame value under property A for the preceding frame, and the frame value under property A for the preceding frame is greater than the frame value under property B for the preceding frame;
condition 2, where condition 2 is satisfied if the frame value under property E for the current frame is less than the frame value under property A for the current frame, and the frame value under property A for the current frame is less than the frame value under property B for the current frame, and the frame value under property E for the preceding frame is less than the frame value under property A for the preceding frame, and the frame value under property A for the preceding frame is less than the frame value under property B for the preceding frame;
condition 3, where condition 3 is satisfied if the frame value under property E for the preceding frame is greater than the frame value under property A for the preceding frame, and the frame value under property A for the preceding frame is greater than the frame value under property D for the preceding frame, and the frame value under property E for the current frame is greater than the frame value under property A for the current frame, and the frame value under property A for the current frame is greater than the frame value under property D for the current frame; and
condition 4, where condition 4 is satisfied if the frame value under property E for the preceding frame is less than the frame value under property A for the preceding frame, and the frame value under property A for the preceding frame is less than the frame value under property D for the preceding frame, and the frame value under property E for the current frame is less than the frame value under property A for the current frame, and the frame value under property A for the current frame is less than the frame value under property D for the current frame;
where:
if condition 1 is satisfied, the current frame is designated as an end frame of a gradual luminance change with decreasing frame value;
if condition 2 is satisfied, the current frame is designated as an end frame of a gradual luminance change with increasing frame value;
if condition 3 is satisfied, the immediately preceding frame is designated as a start frame of a gradual luminance change with increasing frame value; and if condition 4 is satisfied, the immediately preceding frame is designated as a start frame of a gradual luminance change with decreasing frame value.
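The property calculations and condition tests of claim 14 can be sketched for a single color component as follows. This is an illustration only: the per-frame DC list and the reading of property E as the leading comparand in each condition are assumptions made for the example, not definitive claim constructions.

```python
def props(dc, m):
    """Properties A, B, D and E of claim 14 for frame m, given a list dc
    of per-frame DC values for one color component."""
    a = (dc[m + 1] + dc[m - 1]) / 2.0        # property A: neighbor average
    b = 2.0 * dc[m + 1] - dc[m + 2]          # property B: forward extrapolation
    d = 2.0 * dc[m - 1] - dc[m - 2]          # property D: backward extrapolation
    e = (2.0 * dc[m + 1] + dc[m - 2]) / 3.0  # property E: dividend / divisor (3)
    return a, b, d, e

def detect_fade_frame(dc, m):
    """Apply conditions 1-4 to current frame m and preceding frame m-1.

    Returns a label naming frame m as an end frame, or frame m-1 as a
    start frame, of a gradual luminance change; None if no condition holds.
    """
    a_c, b_c, d_c, e_c = props(dc, m)        # properties of the current frame
    a_p, b_p, d_p, e_p = props(dc, m - 1)    # properties of the preceding frame

    if e_c > a_c > b_c and e_p > a_p > b_p:  # condition 1
        return "end of fade, decreasing frame value"
    if e_c < a_c < b_c and e_p < a_p < b_p:  # condition 2
        return "end of fade, increasing frame value"
    if e_p > a_p > d_p and e_c > a_c > d_c:  # condition 3
        return "start of fade at frame m-1, increasing frame value"
    if e_p < a_p < d_p and e_c < a_c < d_c:  # condition 4
        return "start of fade at frame m-1, decreasing frame value"
    return None
```

With a flat-then-rising DC sequence such as [50, 50, 50, 60, 70, 80, 90], condition 3 fires at m=3 and designates frame 2 (the last flat frame) as the start of an increasing fade; a falling-then-flat sequence such as [100, 85, 70, 55, 50, 50, 50] satisfies condition 1 at m=4, marking the end of a decreasing fade.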

15. A method of determining weighted prediction parameters in the presence of gradual illumination change conditions, the method comprising the following steps:
step A: creating a plurality of frames of a picture and associated prediction reference frames containing color sample data;
step B: selecting a color component;
step C: for each frame, setting a current lowest saturation value and a current highest saturation value for the selected color component based on a selected color domain for the color sample data;
step D: for each associated prediction reference frame, setting a current lowest reference saturation value and a current highest reference saturation value for the selected color component based on the selected color domain for the color sample data;
step E: for each frame and associated prediction reference frame, estimating weighted prediction parameters based on the current lowest saturation value, the current highest saturation value, the current lowest reference saturation value and the current highest reference saturation value;
step F: for each associated prediction reference frame, calculating an updated current lowest reference saturation value and an updated current highest reference saturation value based on the estimated weighted prediction parameters;
step G: setting the current lowest reference saturation value equal to the updated current lowest reference saturation value, and setting the current highest reference saturation value equal to the updated current highest reference saturation value; and
step H: repeating steps D through G in successive iterations while the weighted prediction parameters for the current iteration differ from the weighted prediction parameters for the immediately preceding iteration by more than a selected value, or until the number of iterations exceeds an iteration counter value.
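Steps D through H of claim 15 can be sketched as follows. This is a simplified illustration, not the claimed estimation method: the zero-offset gain model and the mean-ratio estimator over unclipped samples are assumptions of this sketch, as is the remapping of the reference saturation bounds through the model inverse.

```python
def estimate_wp_with_saturation(cur, ref, lo=0.0, hi=255.0,
                                eps=1e-3, max_iter=10):
    """Iteratively estimate weighted prediction (gain, offset) for one
    color component in the presence of saturated (clipped) samples.

    cur, ref: co-located sample values from the current frame and its
    prediction reference frame. lo/hi: the saturation bounds of the
    selected color domain (step C/D). Each iteration estimates the
    parameters from samples strictly inside the current bounds (step E),
    remaps the reference bounds through the model inverse (step F),
    adopts the updated bounds (step G), and repeats until the parameters
    stop changing or the iteration counter is exhausted (step H).
    """
    ref_lo, ref_hi = lo, hi          # current reference saturation values
    gain, offset = 1.0, 0.0
    for _ in range(max_iter):
        # Step E: estimate gain/offset from samples inside the bounds.
        pairs = [(c, r) for c, r in zip(cur, ref)
                 if lo < c < hi and ref_lo < r < ref_hi]
        if not pairs:
            break
        mc = sum(c for c, _ in pairs) / len(pairs)
        mr = sum(r for _, r in pairs) / len(pairs)
        new_gain = mc / mr if mr else 1.0
        new_offset = 0.0             # zero-offset model, as in claim 12
        # Step F: remap reference saturation values through the model
        # inverse so clipped reference samples stay excluded next pass.
        new_ref_lo = (lo - new_offset) / new_gain if new_gain else lo
        new_ref_hi = (hi - new_offset) / new_gain if new_gain else hi
        converged = (abs(new_gain - gain) <= eps
                     and abs(new_offset - offset) <= eps)
        gain, offset = new_gain, new_offset
        # Step G: adopt the updated reference saturation values.
        ref_lo, ref_hi = max(lo, new_ref_lo), min(hi, new_ref_hi)
        if converged:                # step H: stop when parameters settle
            break
    return gain, offset
```

For a reference frame whose samples are uniformly twice as bright as the current frame, the estimate converges to a gain of 0.5 with zero offset within two iterations.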

16. An encoder configured to encode a video signal in accordance with the method according to one or more of claims 1-15.

17. An apparatus for encoding a video signal in accordance with the method according to one or more of claims 1-15.

18. A system for encoding a video signal in accordance with the method according to one or more of claims 1-15.

19. A decoder configured to decode a video signal in accordance with the method according to one or more of claims 1-15.

20. The decoder according to claim 19, wherein the weighted prediction parameters are obtained from information signaled to the decoder.

21. An apparatus for decoding a video signal in accordance with the method according to one or more of claims 1-15.

22. The apparatus according to claim 21, wherein the weighted prediction parameters are obtained from information signaled to the apparatus.

23. A system for decoding a video signal in accordance with the method according to one or more of claims 1-15.

24. The system according to claim 23, wherein the weighted prediction parameters are obtained from information signaled to the system.

25. A machine-readable storage medium comprising a set of instructions that cause, control, program or configure one or more of the following devices to perform the method according to one or more of claims 1-15: a computer or an integrated circuit (IC) device.

26. An IC device configured, programmed, controlled or designed to perform the method according to one or more of claims 1-15.

27. Use of the method according to one or more of claims 1-15 for decoding a video signal.

28. Use of the method according to one or more of claims 1-15 for encoding a video signal.

Same patents:

FIELD: physics, video.

SUBSTANCE: the invention relates to the field of digital signal processing and, in particular, to video signal compression using motion compensation. The coding method includes obtaining a target number of motion information predictors to be used for the image section being coded and generating a set of motion information predictors using the obtained target number. The set is generated by: obtaining a first set of motion information predictors, each associated with an image section having a preset spatial and/or temporal relationship with the image section being coded; modifying the first set of motion information predictors by removing duplicated motion information predictors to obtain a reduced set of motion information predictors containing a first number of motion information predictors, each motion information predictor in the reduced set differing from every other motion information predictor in the reduced set; and comparing the first number of motion information predictors with the obtained target number and, if the first number is less than the target number, obtaining an additional motion information predictor and adding it to the reduced set of motion information predictors.

EFFECT: reduced spatial and temporal redundancy in video streams.

26 cl, 8 dwg

FIELD: physics, video.

SUBSTANCE: invention relates to a broadcasting system for transmitting a digital television program, particularly a transmission device and a transmission method, in which content which meets needs can be acquired. A server generates a script PDI-S for obtaining a user side PDI-A representative of an answer of a user to a question about user preferences; generates launch information for executing the PDI-A; and transmits the launch information and PDI-S in response to the delivery of broadcast content, and transmits to the client in response to the delivery of reference content a provider side PDI-A representative of an answer set by a provider to the question. The client executes the PDI-S based on detection of launch information and carries out matching between the user side PDI-A and the provider side PDI-A to determine acquisition of reference content delivered by the server.

EFFECT: facilitating delivery of content to a client which satisfies the needs thereof at that time.

10 cl, 48 dwg

FIELD: physics, computer engineering.

SUBSTANCE: invention relates to computer engineering. An apparatus for encoding an image using intraframe prediction comprises a unit for determining an intraframe prediction mode, which determines the intraframe prediction of the current unit to be encoded, wherein the intraframe prediction mode indicates a defined direction from a plurality of directions, wherein the defined direction is indicated by one number dx in the horizontal direction and a constant number in the vertical direction and a number dy in the vertical direction and a constant number in the horizontal direction; and a unit for performing intraframe prediction, which performs intraframe prediction applied to the current unit in accordance with the intraframe prediction mode, wherein the intraframe prediction includes a step of determining the position of adjacent pixels through a shift procedure based on the position of the current pixel and one of the parameters dx and dy, indicating the defined direction, wherein adjacent pixels are located on the left side of the current unit or on the upper side of the current unit.

EFFECT: high efficiency of compressing images through the use of intraframe prediction modes having different directions.

9 cl, 21 dwg, 4 tbl

FIELD: physics, computer engineering.

SUBSTANCE: invention relates to a method for bit-plane coding of signals, for example, an image or video signal in the DCT transform domain. The bit planes of the DCT blocks are transmitted plane by plane in order of significance. As each plane contains more signal energy than the less significant layers together, the resulting bitstream is scalable in the sense that it may be truncated at any position. The later the bitstream is truncated, the smaller the residual error when the image is reconstructed. For each bit plane, a zone or partition of bit plane is created that encompasses all the non-zero bits of the DCT coefficients in that bit plane. The partition is created in accordance with a strategy which is selected from a number of options depending on the content of the overall signal and/or the actual bit plane. A different zoning strategy may be used for natural images than for graphic content, and the strategy may vary from bit plane to bit plane. The form as well as other properties such as the size of each partition can thus be optimally adapted to the content. Two-dimensional rectangular zones and one-dimensional zigzag scan zones may be mixed within an image or even within a DCT block. The selected zone creating strategy is embedded in the bitstream, along with the DCT coefficient bits in the actual partition.

EFFECT: high efficiency of a scalable method of compressing signal content.

13 cl, 5 dwg

FIELD: radio engineering, communication.

SUBSTANCE: invention relates to means of detecting illegal use of a processing device of a security system, used to descramble various media data distributed over multiple corresponding channels. The method includes counting new messages ECMj,c, received by the processing device of the security systems for channels, other than channel i, after the last received message ECMi,p; verifying that the message ECMi,c was received during said time interval by verifying that the number of new messages ECMj,c, received for channels other than i, reaches or exceeds a given threshold greater than two; increasing the counter Kchi by the given value each time when after verification a message ECMi,c is received during a given time interval, immediately after a message ECMi,p, otherwise the counter Kchi is reset to the initial value; detecting illegal use once the counter Kchi reaches said threshold.

EFFECT: reducing the probability of illegal use of a processing device.

10 cl, 3 dwg

FIELD: physics, computer engineering.

SUBSTANCE: invention relates to computer engineering. The method of decoding video comprises obtaining from a bit stream information on pixel value compensation in accordance with a pixel value band or a limiting value level, if information on pixel value compensation indicates a band, applying a compensation value of the predefined band obtained from the bit stream to the pixel included in the predefined band among pixels of the current block; and if information on pixel value compensation indicates a limiting value level, applying a compensation value of the predefined boundary direction obtained from the bit stream to the pixel in the predefined boundary direction among pixels of the current block, wherein the predefined band is one of bands formed by breaking down the full range of pixel values.

EFFECT: high quality of the reconstructed image.

3 cl, 22 dwg, 2 tbl

FIELD: physics, video.

SUBSTANCE: invention relates to means of encoding and decoding video. The method includes determining a first most probable intra-prediction mode and a second most probable intra-prediction mode for a current block of video data based on a context for the current block; performing a context-based adaptive binary arithmetic coding (CABAC) process to determine a received codeword, corresponding to a modified intra-prediction mode index; determining the intra-prediction mode index; selecting the intra-prediction mode.

EFFECT: high efficiency of signalling an intra-prediction mode used to encode a data block by providing relative saving of bits for an encoded bit stream.

50 cl, 13 dwg, 7 tbl

FIELD: physics, computer engineering.

SUBSTANCE: invention relates to a media device and a system for controlling access of a user to media content. Disclosed is a device (100, 200) for controlling access of a user to media content, the device comprising: an identification code output (102, 103, 202) for providing an identification code to the user, the identification code identifying the media device; a control code generator (104, 204) for generating a control code depending on the identification code and an access right; an access code input (106, 107, 206) for receiving an access code from the user. The access code is generated depending on the identification code and the access right by a certain access code device, and an access controller (108, 208) enables to compare the access code to the control code, and when the access code matches the control code, grants the user access to the media content in accordance with the access right.

EFFECT: managing user access to media content, wherein access is granted specifically on the selected media device.

14 cl, 6 dwg

FIELD: physics, computer engineering.

SUBSTANCE: invention relates to a method and an apparatus for controlling settings of a device for playback of a content item. Disclosed is a method of controlling settings of a rendering device for playback of a content item, said rendering device being configured to connect with at least one source device, said at least one source device providing at least one content item, wherein the method comprises steps of: generating a plurality of entries for said at least one source device, each of the plurality of entries corresponding to a different profile, each profile comprising settings for playback of a content item received from the corresponding source device. A user can request generation of a plurality of entries for the same source device and select one of said entries, wherein the rendering device is connected with the source device which corresponds to said selected entry; and settings of the rendering device for playback of the received content item are controlled according to the profile corresponding to said selected entry.

EFFECT: providing corresponding settings for playback of different types of content items.

9 cl, 2 dwg

FIELD: physics, video.

SUBSTANCE: invention relates to video encoding/decoding techniques which employ a loop filter which reduces blocking noise. The technical result is achieved due to that a video encoding/decoding device, which encodes or decodes video using a loop filter, includes a deviation calculating unit which calculates deviation between a target noise cancellation pixel and a neighbouring pixel of the target pixel using a decoded image. A pattern form establishing unit limits the pattern form such that the less the deviation from the maximum deviation in the decoded image, the smaller the pattern form. When removing target pixel noise, using a weight coefficient in accordance with the degree of similarity between the pattern of the target pixel and the pattern of each search point in the form of a search and a weighted sum of pixel values at search points, the loop filter compares patterns using the limited pattern form and removes the target pixel noise.

EFFECT: reduced computational complexity of the noise cancellation filter, thereby preventing deterioration of encoding efficiency.

5 cl, 19 dwg

FIELD: information technology.

SUBSTANCE: a like or dislike of a content element played on a personalised content channel is determined based on feedback from the user; the profile is updated based on the determined like or dislike, wherein that profile is associated with the personalised content channel and contains a plurality of attributes and attribute values associated with said content element, where during the update, if a like has been determined, a classification flag associated with each of said attributes and attribute values is set; the degree of liking is determined for at least one next content element based on said profile; and that at least one next content element is selected for playing on the personalised content channel based on the calculated degree of liking.

EFFECT: a method for personalised filtering of content elements which does not require login or user identification procedures.

5 cl, 1 dwg

FIELD: information technologies.

SUBSTANCE: a method of operating a digital rights management conversion system to grant a license to a client's device corresponding to encoded content consists in the following. First content of a first digital rights content type and a first license corresponding to the first content are converted under digital rights management in order to generate second content of a second digital rights content type and a second license corresponding to the second content. A license request is received corresponding to the second content distributed by means of superdistribution to a third party. The second license corresponding to the second content distributed by means of superdistribution is requested from a server corresponding to the second digital rights management. The second license corresponding to the second content distributed by means of superdistribution is received and sent to the third party.

EFFECT: expansion of functional resources due to development of a license granting mechanism for appropriate content distributed by means of superdistribution.

17 cl, 6 dwg

FIELD: information technology.

SUBSTANCE: an IPTV (Internet Protocol television) network server randomly sets the time of the request for receiving the main license within a time period starting from the broadcast transmission time and ending at a preset time, in accordance with a request for receiving a license for playback of encrypted content, the request coming from an IPTV client terminal, and transmits to the IPTV client terminal information about the time of the request for receiving the main license and a temporary license comprising a temporary content key corresponding to playback of the broadcast content from the broadcast start time until the preset time. The license server transmits the main license, including the main content key corresponding to full playback of the content, in response to the request for receiving the main license made by the IPTV client terminal based on the information about the request time.

EFFECT: stabilisation of license server operation by eliminating concentration of license receive requests from large number of clients during time just after starting broadcast transmission of content.

6 cl, 11 dwg

FIELD: information technology.

SUBSTANCE: multimedia content purchasing system comprising: a memory area associated with a multimedia service; a multimedia server connected to the multimedia service via a data communication network; a portable computing device associated with a user; and a processor associated with the portable computing device, said processor being configured to execute computer-executable instructions for: establishing a connection to the multimedia server when the multimedia server and the portable computing device are within a predefined proximity; authenticating the multimedia server and the user with respect to the authenticated multimedia server; transmitting digital content distribution criteria; receiving, in response, promotional copies of one or more of the multimedia content items and associated metadata; and purchasing, when the multimedia server and the portable computing device are outside the predefined proximity, at least one of said one or more multimedia content items.

EFFECT: enabling flexible sharing of multimedia content between subjects.

17 cl, 9 dwg

FIELD: information technologies.

SUBSTANCE: a device (600) processes data packets (110; 112) stored in a media data container (104) and related meta information stored in a meta data container (106), the related meta information including transport timing information and location information indicating where the stored data packets are located in the media data container (104). The device comprises a processor (602) for deriving, based on the stored data packets (110; 112) and the stored related meta information (124; 128), decoding information (604; 704) for the media payload of the stored data packets (110; 112), where the decoding information (604; 704) indicates at which moment of time which payload of the stored data packets is to be played back.

EFFECT: immediate accurate timing of synchronisation between different recorded media streams without complicated processing during each reproduction of recorded media streams.

21 cl, 12 dwg

FIELD: information technology.

SUBSTANCE: provided is an integrated interface device for performing a hierarchical operation for specifying a desired content list. The interface device has a function to display a content list, content specified by the content list, or the like by efficiently using a vacant area in the lower part of the display by displaying icons which display a hierarchical relationship, for example, "display in a row", in the upper part of the screen, thereby freeing a large space in the lower part of the display.

EFFECT: efficient use of the entire screen even after displaying an interface for performing an operation.

17 cl, 42 dwg

FIELD: radio engineering, communication.

SUBSTANCE: a personalised content channel makes it possible to play multiple content elements (programs) meeting multiple selection criteria. At least one additional content element is recommended by a recommendation mechanism (107), where the at least one additional content element meets fewer of the criteria. In one embodiment, at least one recommended additional content element is selected, and the multiple selection criteria are adjusted by a planner (109) based on at least one characteristic of the selected recommended additional content element.

EFFECT: provision of a method of generating a recommendation for an additional content element, specially adapted for use with personalised content channels.

13 cl, 1 dwg

FIELD: information technology.

SUBSTANCE: wireless transmission system includes: a device (1) which wirelessly transmits AV content and a plurality of wireless recipient devices (5, 6) for reproducing the transmitted AV content. The device (1) for transmitting content has a group identification table which stores a group identifier for identification of a group formed by the wireless recipient device (5, 6). The device (1) adds the group identifier extracted from the group identification table to a control command for controlling recipient devices (5, 6) and wirelessly transmits the control command having the group identifier. The recipient devices (5, 6) receive the wirelessly transmitted control command from the device (1) if the corresponding group identifier has been added to the control command. The device (1) for transmitting content consists of a wired source device and a relay device which is connected by wire to the wired source device, and the relay device is wirelessly connected to the wireless recipient device and mutually converts the wired control command transmitted to the wired source device, and the wireless control command transmitted to the wireless recipient device, wherein the wired source device and the relay device are connected via HDMI (High-Definition Multimedia Interface).

EFFECT: providing the minimum required volume of transmitting control commands during wireless audio/video transmission.

21 cl, 13 dwg
