# Method of encoding/decoding multi-view video sequence based on local adjustment of brightness and contrast of reference frames without transmitting additional service data

FIELD: information technology.

SUBSTANCE: a method for local adjustment of brightness and contrast of a reference frame for encoding a multi-view video sequence, including: obtaining pixel values of the current encoded block belonging to the encoded frame and pixel values of a reference block belonging to a reference frame; obtaining restored pixel values neighbouring the current block of the encoded frame and pixel values neighbouring the reference block of the reference frame; determining numerical relationships between the pixel values of the reference block and the pixel values neighbouring the reference block, as well as relationships between the restored pixel values neighbouring the current encoded block and the pixel values neighbouring the reference block; using the numerical relationships found at the previous step to determine brightness and contrast adjustment parameters for the reference block relative to the current encoded block; and adjusting the differences in brightness and contrast of the reference block using the found adjustment parameters.

EFFECT: high encoding efficiency.

13 cl, 10 dwg

The present invention relates to a method for correcting differences in brightness and contrast that can occur between frames of multi-view video sequences. In particular, the present invention can be used for encoding and decoding multi-view video sequences.

One of the methods used for coding multi-view video sequences is to use, as reference frames for predictive coding, frames that belong to neighboring views (angles), as well as frames synthesized from frames of neighboring views and depth maps [1]. Prediction is performed by compensating the displacement of an object in the current frame relative to one of the reference frames. Displacement here means the motion of an object, or the difference in the object's position between the current encoded frame and a frame belonging to a neighboring view or a synthesized frame. The goal of compensating this displacement is to obtain the minimum interframe difference. The obtained interframe difference is then encoded (for example, by applying a decorrelating transform, quantization and entropy coding) and placed in the output bit stream.

Possible differences in the parameters of the cameras used to capture multi-view video sequences, as well as differences in the light flux coming from the subject to each camera, lead to differences in brightness and contrast between frames belonging to different views. These differences in brightness and contrast also affect the characteristics of the synthesized frame. This can increase the absolute values of the interframe difference, which adversely affects the efficiency of encoding.

To address these problems, the H.264 standard [2] uses weighted prediction, originally designed for efficient encoding of single-view video sequences containing fade-in and fade-out effects, image flicker or scene changes. Weighted prediction eliminates the difference in brightness between the encoded frame and the reference frames at the macroblock level, using the same weighting coefficients for all macroblocks belonging to the same slice. The weighting coefficients can be determined during encoding and stored in the output bit stream ("explicit" weighted prediction) or calculated during encoding/decoding ("implicit" weighted prediction). However, for multi-view sequences, in which local changes in brightness and/or contrast can be observed, such a method may be ineffective.
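The per-slice weighted prediction described above can be sketched as follows. The parameter names `weight`, `offset` and `log_wd` mirror the roles of the H.264 parameters, but the helper itself is a simplified illustration, not the normative H.264 arithmetic:

```python
# Minimal sketch of H.264-style explicit weighted prediction for a single
# macroblock. Rounding and clipping details are illustrative only.
def weighted_prediction(ref_block, weight, offset, log_wd=5):
    """Scale each reference pixel by weight / 2**log_wd, then add offset."""
    scale = 2 ** log_wd
    return [
        [max(0, min(255, (weight * p + scale // 2) // scale + offset))
         for p in row]
        for row in ref_block
    ]

ref = [[100, 110], [120, 130]]
# Model a fade-out: halve the brightness (weight = 16, log_wd = 5 -> 0.5).
pred = weighted_prediction(ref, weight=16, offset=0)
```

Because the same `weight`/`offset` pair applies to every macroblock of a slice, a brightness change that affects only part of the frame cannot be compensated, which is the limitation noted above.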

Another solution to this problem is adaptive block-based correction of brightness differences [3]. One implementation of this approach is the method of one-step affine brightness correction for multi-view video sequences (Multiview One-Step Affine Illumination Compensation, MOSAIC) [4, 5]. This method combines block-based correction of brightness differences with the interframe prediction modes described in the H.264 standard. During coding of each macroblock, the average pixel values of the current encoded block and of a candidate reference block are calculated. For these blocks, modified blocks are formed by subtracting the corresponding average value from each pixel. For the resulting blocks, the Mean-Removed Sum of Absolute Differences (MRSAD) is calculated. The result of interframe prediction is the relative coordinates of the reference block (the displacement vector) that give the minimum encoding cost, together with the difference between the modified encoded block and the modified reference block. The encoding cost is estimated from the MRSAD value and from the estimated bit cost of transmitting the additional information necessary for subsequent decoding. In addition to the displacement vector, this additional information includes the difference between the average values of the current and reference blocks. This difference, called DVIC (Difference Value of Illumination Compensation), is the brightness correction parameter. The DVIC value is differentially encoded and placed in the output bit stream. Note that in the "P Skip" mode the DVIC value is determined from the DVIC values of neighboring macroblocks that have already been encoded at the time of encoding the current macroblock. Thus, this method does not completely eliminate the need for explicit transmission of additional information needed for subsequent decoding.
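The MRSAD matching cost and the DVIC parameter can be sketched as follows; function names are illustrative:

```python
# Sketch of the MRSAD cost used by MOSAIC: subtract each block's mean
# before comparing, so a constant brightness offset between views does
# not inflate the matching cost.
def block_mean(block):
    return sum(sum(row) for row in block) / (len(block) * len(block[0]))

def mrsad(cur, ref):
    """Mean-Removed Sum of Absolute Differences of two equal-size blocks."""
    mc, mr = block_mean(cur), block_mean(ref)
    return sum(
        abs((c - mc) - (r - mr))
        for cur_row, ref_row in zip(cur, ref)
        for c, r in zip(cur_row, ref_row)
    )

cur = [[10, 12], [14, 16]]
ref = [[30, 32], [34, 36]]                 # same texture, +20 brightness offset
dvic = block_mean(cur) - block_mean(ref)   # the DVIC-style parameter
```

Here the blocks differ only by a constant offset, so the MRSAD cost is zero while the offset itself becomes the DVIC value that MOSAIC must still transmit in the bit stream.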

The parameters required for the correction of differences in brightness and contrast can be obtained by analyzing restored (encoded and then decoded) frames. This helps to reduce the amount of additional information that must be encoded and explicitly placed in the output bit stream. This approach was implemented in the method of weighted prediction using neighboring pixels (WPNP, Weighted Prediction using Neighboring Pixels) [6]. This method uses the pixel values of the encoded frame adjacent to the current encoded block, and the pixel values of the reference frame adjacent to the reference block, to form pixel-wise estimates of the brightness change. For each estimate, two neighboring pixels are selected, multiplied by weighting coefficients and summed, forming an estimate of the change in brightness and contrast between individual pixels of the current and reference blocks. Note that the weighting coefficients are calculated separately for each pixel position of the encoded block; their values are determined by the mutual distances between the pixel of the encoded block and the selected neighboring pixels. The main disadvantage of this method is that the reduction in the amount of additional information is achieved at the cost of a possible reduction in the quality of the correction. The reason for this decline in quality is that the change in brightness of pixels adjacent to the encoded and reference blocks may differ from the change in brightness of the pixels belonging directly to the encoded and reference blocks.
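The distance-based weighting in WPNP can be illustrated roughly as follows; the exact formula of [6] is not reproduced here, so the inverse-distance blend below is only an assumed stand-in:

```python
# Rough illustration of WPNP-style weighting: a per-pixel brightness
# change is estimated from two neighbouring pixels, weighted by their
# distance to the pixel position. The blend rule here is an assumption.
def wpnp_blend(i, j, top_estimate, left_estimate):
    """Blend two neighbour-based estimates; the nearer border dominates."""
    d_top, d_left = i + 1, j + 1          # distances to the top/left borders
    w_top = d_left / (d_top + d_left)     # small d_top -> large top weight
    w_left = d_top / (d_top + d_left)
    return w_top * top_estimate + w_left * left_estimate

# Midway between the borders the two estimates contribute equally.
blend = wpnp_blend(0, 0, top_estimate=1.2, left_estimate=0.8)
```

The sketch shows the stated disadvantage: the estimate depends only on border pixels, so whatever happens inside the block itself never influences the weights.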

Another implementation of the approach based on estimating brightness and contrast change parameters by analyzing restored (encoded and then decoded) frames is described in patent application US 2011/0286678 [7]. The method of encoding multi-view video sequences described in that application includes correction of brightness differences during predictive coding. The brightness correction parameters are based on estimates of brightness changes in the areas adjacent to the encoded and reference blocks. Since these adjacent areas are available both during encoding and during decoding, there is no need to place the correction parameters explicitly in the output bit stream. The obtained parameters are used for the correction of the reference block. The reliability of the estimated brightness change parameters is determined by applying the brightness correction to the area of the reference frame adjacent to the reference block and comparing the corrected area with the restored (encoded and decoded) area of the encoded frame adjacent to the current encoded block. The disadvantage of this method is that the reliability of the brightness correction is determined only by analyzing the adjacent areas. The data contained in the reference block itself are not used in the reliability analysis, which can lead to erroneous correction and thereby reduce its effectiveness.

Closest to the claimed invention is the method described in patent application US 2008/0304760 [8]. This method of brightness and contrast correction for the reference block includes the following steps: obtaining, as input, the restored values of pixels neighboring the current encoded block and the restored values of pixels neighboring the reference block; predicting average values for the current encoded block and the reference block on the basis of these restored neighboring pixel values; determining the brightness correction parameters for the reference block based on the predicted average pixel value of the current encoded block, the predicted average value of the reference block, and the pixel values of the current encoded block and the reference block; and performing brightness correction of the reference block using the previously determined brightness correction parameter.

The disadvantage of the prototype is the following. The restored values of pixels neighboring the current encoded block and the reference block are used exclusively for predicting average values. This limitation prevents full use of the information contained in the neighboring pixels. In addition, there is no analysis of the relationship between the pixel values of the reference block and the values of pixels neighboring the reference block. Therefore, possible differences between the brightness and contrast change parameters of the considered blocks and those of the areas neighboring the blocks are not taken into account. This can degrade the reliability of the procedure for correcting differences in brightness and contrast, which negatively affects the efficiency of encoding.

In accordance with its description, the prototype [8] proposes a method of coding digital images (frames) based on correction of brightness changes. This method includes the following steps: determining the reference block intended for forming the block prediction for the current encoded block; determining the brightness correction parameters for correcting the found reference block; performing brightness correction of the found reference block based on the determined brightness correction parameter; forming the block prediction for the current encoded block using the corrected reference block; encoding the difference between the formed block prediction and the current encoded block; and generating the output bit stream and storing information about the brightness correction parameter in a predetermined position inside the generated bit stream. The drawback of this method is the need to store the correction parameters in the output bit stream.

The claimed invention is directed to improving the efficiency of encoding multi-view video sequences when hybrid coding is used. The invention provides a more reliable adaptive procedure for estimating the brightness and contrast change parameters of the reference block, and a procedure for correcting the brightness and contrast of the reference block.

The technical result is achieved by using more data for estimating the brightness and contrast change parameters. In particular, the present method analyzes the relationships between the pixel values of the reference block and the values of pixels neighboring the reference block, as well as the relationships between the restored values of pixels neighboring the current encoded block and the values of pixels neighboring the reference block. The method also enables improved methods of encoding and decoding multi-view video sequences based on brightness and contrast correction, which improve compression efficiency because the estimation of brightness and contrast changes uses only pixel values that are available both during encoding and during decoding. In this case, the brightness and contrast correction parameters can be exactly restored at the decoder without transmitting additional data in the output bit stream.

According to the basic aspect of the claimed invention, a method is proposed for correcting the difference in brightness and contrast between the reference block and the current encoded block when performing interframe prediction for coding multi-view video sequences, such method including:

- obtaining the pixel values of the current encoded block belonging to the encoded frame, and the pixel values of the reference block belonging to the reference frame;

- obtaining the restored (encoded and decoded) values of pixels adjacent to the current block of the encoded frame, and the values of pixels adjacent to the reference block of the reference frame;

- determining the ratios between the pixel values of the reference block and the values of pixels neighboring the reference block, as well as the ratios between the restored values of pixels neighboring the current encoded block and the values of pixels neighboring the reference block;

- determining the brightness and contrast correction parameters for correcting differences in brightness and contrast between the reference block and the current encoded block, based on the ratios found at the previous step, the pixel values of the reference block, the restored values of pixels neighboring the current encoded block, and the values of pixels neighboring the reference block;

- correcting the differences in brightness and contrast between the reference block and the current encoded block using the correction parameters found at the previous step.

In one embodiment of the claimed invention, a modification of the above method is proposed in which the process of determining the ratios between pixels of the current encoded frame and the reference frame, and the process of determining the brightness and contrast correction parameters, include:

- calculating statistical characteristics for the restored values of pixels neighboring the current encoded block, statistical characteristics for the pixels of the reference block, and statistical characteristics for the pixels adjacent to the reference block;

- determining the relations between the statistical characteristics for the pixels of the reference block and the statistical characteristics for the restored values of pixels neighboring the reference block;

- calculating estimates of the statistical characteristics for the current encoded block based on the computed statistical characteristics and the relations between them;

- determining the brightness and contrast correction parameter for correcting differences in brightness and contrast between the reference block and the current encoded block, based on the obtained estimates of the statistical characteristics for the current block and the statistical characteristics of the reference block.

In another implementation of the claimed invention, a modification of the above method is proposed in which the process of computing the statistical characteristics, determining the relations between the statistical characteristics, and determining the brightness and contrast correction parameter includes:

- calculating the average value of the restored pixels neighboring the current encoded block and located to the left of it, if available; calculating the average value of the restored pixels neighboring the current encoded block and located above it, if available; calculating the average value of the pixels of the reference block; calculating the average value of the pixels neighboring the reference block and located to the left of it, if available; and calculating the average value of the pixels neighboring the reference block and located above it, if available;

- if the restored pixels neighboring the current encoded block and located to the left of it and the pixels neighboring the reference block and located to the left of it are available: calculating the ratio between the average value of the pixels of the reference block and the average value of the neighboring pixels located to the left of the reference block; calculating the product of the obtained ratio and the average value of the restored pixels neighboring the current encoded block and located to the left of it; and determining the brightness and contrast correction parameter as the ratio between the calculated product and the average value of the pixels of the reference block;

- otherwise, if the restored pixels neighboring the current encoded block and located above it and the pixels neighboring the reference block and located above it are available: calculating the ratio between the average value of the pixels of the reference block and the average value of the pixels neighboring the reference block and located above it; calculating the product of the obtained ratio and the average value of the restored pixels neighboring the current encoded block and located above it; and determining the brightness and contrast correction parameter as the ratio between the calculated product and the average value of the pixels of the reference block;

- otherwise, calculating an estimate of the average value of the current encoded block using Median Prediction;

- determining the brightness and contrast correction parameter as the ratio between the estimated average value of the pixels of the current encoded block and the average value of the pixels of the reference block.
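The steps above can be sketched as follows. Helper names are hypothetical, and for brevity the Median Prediction fall-back is replaced by a neutral parameter of 1:

```python
# Sketch of the mean-ratio estimation of the brightness/contrast
# correction parameter: the fall-back order (left neighbours, then top
# neighbours, then no correction) mirrors the steps described above.
def mean(values):
    return sum(values) / len(values)

def correction_parameter(ref_block, cur_left=None, ref_left=None,
                         cur_top=None, ref_top=None):
    m_ref = mean([p for row in ref_block for p in row])
    if cur_left and ref_left:              # left neighbours available
        product = (m_ref / mean(ref_left)) * mean(cur_left)
    elif cur_top and ref_top:              # otherwise use top neighbours
        product = (m_ref / mean(ref_top)) * mean(cur_top)
    else:                                  # no neighbours: no correction
        return 1.0
    return product / m_ref if m_ref else 1.0

ref = [[40, 40], [40, 40]]
# Reference-side neighbours are twice as bright as current-side ones,
# so the reference block should be dimmed by a factor of 0.5.
alpha = correction_parameter(ref, cur_left=[20, 20], ref_left=[40, 40])
```

Note that both inputs of every branch (restored neighbours and the reference block) are available to encoder and decoder alike, so the parameter never needs to be transmitted.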

Another modification of the invention is a method for correcting the brightness and contrast of the reference block in the process of encoding multi-view video sequences, which includes:

- obtaining the pixel values of the current block of the encoded frame and the pixel values of the reference block of the reference frame;

- obtaining the restored (encoded and decoded) values of pixels adjacent to the current encoded block, and the values of pixels adjacent to the reference block;

- calculating a first estimate estD_{i,j} for each pixel position (i,j) in the reference block, where the first estimate estD_{i,j} is a function of a linear combination of the restored values of pixels neighboring the current encoded block;

- calculating a second estimate estR_{i,j} for each pixel position (i,j) in the reference block, where the second estimate estR_{i,j} is a function of a linear combination of the values of pixels neighboring the reference block;

- determining the brightness and contrast correction parameters for correcting each pixel in the reference block, these parameters being determined from the value of the first estimate estD_{i,j}, the value of the second estimate estR_{i,j}, the values R_{i,j} of the pixels of the reference block, and the restored values of the neighboring pixels;

- correcting the brightness and contrast of each pixel in the reference block using the brightness and contrast correction parameters found at the previous step.

According to another modification of the invention, the calculation of the first and second estimates for each pixel position in the reference block and the determination of the brightness and contrast correction parameters for each pixel position in the reference block include:

- calculating the first estimate estD_{i,j} as

estD_{i,j} = W_{0}(i,j)·D_{0} + W_{1}(i,j)·D_{1} + ... + W_{N-1}(i,j)·D_{N-1},

where W_{k}(i,j), k=0, ..., N-1 are weighting coefficients and D_{k} are the restored values of pixels neighboring the current encoded block;

- calculating the second estimate estR_{i,j} as

estR_{i,j} = W_{0}(i,j)·R_{0} + W_{1}(i,j)·R_{1} + ... + W_{N-1}(i,j)·R_{N-1},

where W_{k}(i,j), k=0, ..., N-1 are weighting coefficients and R_{k} are the values of pixels neighboring the reference block;

- determining the brightness and contrast correction parameter for each pixel position (i,j) in the reference block; this parameter α_{i,j} represents the ratio estD_{i,j}/estR_{i,j} when estR_{i,j} is not equal to zero; otherwise α_{i,j} is assumed to equal 1;

- correcting the brightness and contrast of the reference block by multiplying the value of each pixel R_{i,j} of the reference block by the corresponding correction parameter α_{i,j}.
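A pixel-wise sketch of the estD/estR scheme above; the weight generator `weights(i, j)` is left abstract, and uniform weights are used in the example:

```python
# Pixel-wise correction sketch: for each reference-block pixel, form
# weighted sums over neighbouring pixels of the current frame (estD) and
# of the reference frame (estR), then scale the pixel by their ratio.
def correct_block(ref_block, cur_neighbors, ref_neighbors, weights):
    corrected = []
    for i, row in enumerate(ref_block):
        out_row = []
        for j, r in enumerate(row):
            w = weights(i, j)                        # W_k(i,j), k = 0..N-1
            est_d = sum(wk * d for wk, d in zip(w, cur_neighbors))
            est_r = sum(wk * p for wk, p in zip(w, ref_neighbors))
            alpha = est_d / est_r if est_r != 0 else 1.0
            out_row.append(r * alpha)
        corrected.append(out_row)
    return corrected

uniform = lambda i, j: [0.5, 0.5]
out = correct_block([[40, 42]], cur_neighbors=[10, 10],
                    ref_neighbors=[20, 20], weights=uniform)
```

With reference-side neighbours twice as bright as current-side ones, each reference pixel is halved, which is the expected per-pixel correction.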

Another modification of the invention provides that the calculation of the first and second estimates for each pixel position in the reference block includes:

- calculating the weighting coefficients W_{k}(i,j), k=0, ..., N-1 for the first estimate estD_{i,j} and the second estimate estR_{i,j}; for each pixel position (i,j) in the reference block the weighting coefficient W_{k}(i,j) is a non-increasing function of an absolute difference A_{k}(i,j), so that the value of W_{k}(i,j) decreases as the absolute difference increases. Here R_{i,j} is the pixel value of the reference block.

In another variant of implementation of the invention, a modification of the above method is proposed, which provides that the calculation of the first and second estimates for each pixel position in the reference block includes:

- calculating the weighting coefficients W_{k}(i,j), k=0, ..., N-1 for the first estimate estD_{i,j} and the second estimate estR_{i,j}; for each pixel position (i,j) in the reference block the weighting coefficient W_{k}(i,j) is a non-increasing function of an absolute difference A_{k}(i,j), so that the value of W_{k}(i,j) decreases as the absolute difference increases. Here R_{i,j} is the pixel value of the reference block.

When implementing the claimed invention, it also makes sense to use another modification of the above method, which provides that the calculation of the first and second estimates for each pixel position in the reference block includes:

- calculating the weighting coefficients W_{k}(i,j), k=0, ..., N-1 for the first estimate estD_{i,j} and the second estimate estR_{i,j}; for each pixel position (i,j) in the reference block the weighting coefficient W_{k}(i,j) is a non-increasing function of an absolute difference A_{k}(i,j), so that the value of W_{k}(i,j) decreases as the absolute difference increases; if the absolute difference exceeds a predefined threshold, W_{k}(i,j)=0. Here R_{i,j} is the pixel value of the reference block.

According to another implementation variant of the invention, a modification of the above method is proposed, which provides that the calculation of the first and second estimates for each pixel position in the reference block includes:

- calculating the weighting coefficients W_{k}(i,j), k=0, ..., N-1 for the first estimate estD_{i,j} and the second estimate estR_{i,j}; for each pixel position (i,j) in the reference block the weighting coefficient is W_{k}(i,j)=exp(-C·A_{k}(i,j)), where C is a predefined constant greater than 0 and A_{k}(i,j) is an absolute difference involving the pixel value R_{i,j} of the reference block; if the absolute difference exceeds a predefined threshold, W_{k}(i,j)=0.
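The exponential weighting rule can be sketched as follows, taking A_{k}(i,j) to be the absolute difference between the reference-block pixel and its k-th neighbouring pixel; this particular choice of A_{k} is an assumption:

```python
import math

# Exponential weighting sketch: W_k = exp(-C * A_k(i, j)). Here A_k is
# assumed to be |R_ij - R_k|, the absolute difference between the
# reference-block pixel and its k-th neighbouring pixel.
def exp_weights(r_ij, ref_neighbors, C=0.1):
    return [math.exp(-C * abs(r_ij - r_k)) for r_k in ref_neighbors]

w = exp_weights(100, [100, 110, 150])
# The neighbour equal to the block pixel receives the largest weight.
```

Neighbours similar to the pixel being corrected dominate the estimates, while dissimilar neighbours (for example, ones lying across an object edge) are suppressed, which is the intent of the non-increasing weight function.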

Alternatively, when implementing the invention, a modification of the above method is proposed, which provides that the calculation of the first and second estimates for each pixel position in the reference block includes:

- calculating the weighting coefficients W_{k}(i,j), k=0, ..., N-1 for the first estimate estD_{i,j} and the second estimate estR_{i,j}; for each pixel position (i,j) in the reference block the weighting coefficient is W_{k}(i,j)=exp(-C·A_{k}(i,j)), where C is a predefined constant greater than 0 and A_{k}(i,j) is an absolute difference involving the pixel value R_{i,j} of the reference block; if the absolute difference exceeds a predefined threshold, W_{k}(i,j)=0.

According to another implementation variant of the invention, a modification of the above method is proposed in which the positions of the restored values of pixels neighboring the current encoded block, and the positions of the values of pixels adjacent to the reference block, are determined adaptively instead of using pixels at predetermined positions.

The group of inventions united by a single concept also includes an original method of encoding multi-view video sequences based on the correction of brightness and contrast changes. This method includes:

- determining the reference block used to form the block prediction for the current encoded block;

- determining the brightness and contrast correction parameters for correcting differences in brightness and contrast between the reference block and the current encoded block, during the search for the reference block or upon its completion;

- correcting the brightness and contrast changes of the found reference block using the found brightness and contrast correction parameters;

- forming the block prediction for the current encoded block using the reference block with corrected brightness and/or contrast;

- encoding the current block using the formed block prediction, without encoding the found brightness and contrast correction parameters; encoding the information about the reference block if it is necessary for decoding;

wherein the determination of the brightness and contrast correction parameters includes:

- obtaining the restored (encoded and decoded) values of pixels adjacent to the current block of the encoded frame, and the values of pixels adjacent to the reference block of the reference frame;

- determining the ratios between the pixel values of the reference block and the values of pixels neighboring the reference block, as well as the ratios between the restored values of pixels neighboring the current encoded block and the values of pixels neighboring the reference block;

- determining the brightness and contrast correction parameters for correcting differences in brightness and contrast between the reference block and the current encoded block, based on the ratios found at the previous step between the pixel values of the reference block, the restored values of pixels neighboring the current encoded block, and the values of pixels neighboring the reference block.

Within the framework of the unified concept, an original method of decoding multi-view video sequences based on the correction of brightness and contrast changes is also provided. This method includes:

- decoding the information about the reference block, if necessary, in order to determine the reference block for the current decoded block; determining the reference block;

- determining the brightness and contrast correction parameters for the found reference block;

- correcting the brightness and contrast changes of the found reference block using the brightness and contrast correction parameters;

- forming the block prediction for the current decoded block using the reference block with corrected brightness and contrast;

- decoding the current block using the obtained block prediction and the brightness and contrast correction parameters,

wherein the procedure of determining the brightness and contrast correction parameters includes:

- obtaining the restored (encoded and decoded) values of pixels adjacent to the current block of the encoded frame, and the values of pixels adjacent to the reference block of the reference frame;

- determining the ratios between the pixel values of the reference block and the values of pixels neighboring the reference block, as well as the ratios between the restored values of pixels neighboring the current encoded block and the values of pixels neighboring the reference block;

- determining the brightness and contrast correction parameters for correcting differences in brightness and contrast between the reference block and the current encoded block, based on the ratios found at the previous step between the pixel values of the reference block, the restored values of pixels neighboring the current encoded block, and the values of pixels neighboring the reference block.
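Because the derivation uses only restored data available on both sides, the encoder and decoder obtain identical correction parameters without any transmission. A minimal sketch of this symmetry, using a simplified mean-ratio rule with a hypothetical helper name:

```python
# Both sides compute the correction factor from the same restored
# neighbouring pixels, so nothing has to be written to the bit stream.
def derive_alpha(cur_neighbors, ref_neighbors):
    m_cur = sum(cur_neighbors) / len(cur_neighbors)
    m_ref = sum(ref_neighbors) / len(ref_neighbors)
    return m_cur / m_ref if m_ref else 1.0

encoder_alpha = derive_alpha([30, 34], [60, 68])
decoder_alpha = derive_alpha([30, 34], [60, 68])  # same inputs at decoding
```

As long as the two sides agree on which restored pixels feed the derivation, the factors match bit-exactly, which is what removes the correction parameters from the bit stream.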

Hereinafter the invention is explained with reference to the accompanying drawings.

Figure 1 is a structural diagram of a hybrid encoder of multi-view video sequences in which the claimed invention is used.

Figure 2 is a block diagram of the part of a hybrid video encoder that implements the inventive method as part of predictive coding.

Figure 3 is a chart explaining the method of correcting the brightness and contrast of the reference block in accordance with one example implementation of the invention.

Figure 4 is a block diagram illustrating the method of correcting the brightness and contrast of the reference block according to one example implementation of the invention.

Figure 5 is a chart illustrating the procedure of selecting input blocks in the current frame when calculating the brightness and contrast correction according to one example implementation of the invention.

Figure 6 is a chart illustrating the method of correcting brightness and contrast changes of the reference block in accordance with another variant of realization of the invention.

Figure 7 is a flowchart illustrating a method of pixel-by-pixel correction of brightness and contrast for the reference block according to one example implementation of the invention.

Figure 8 is a chart explaining the method of correcting brightness and contrast changes of the reference block in accordance with another variant of realization of the invention.

Figure 9 is a flowchart describing a method of encoding multi-view video sequences based on the correction of brightness and contrast changes according to one example implementation of the invention.

Figure 10 is a flowchart describing a method of decoding multi-view video sequences based on the correction of brightness and contrast changes according to one example implementation of the invention.

Figure 1 shows the structural scheme of a hybrid encoder of multi-view video sequences. The inputs of the hybrid encoder 105 of multi-view video sequences are the source view (the coded view) 101 and the already encoded and then decoded views 102, which are part of the encoded multi-view video data. The already encoded/decoded views 102 and the already encoded/decoded sequences 103 of depth maps are used to generate a synthesized view for the source (coded) view by means of the synthesis procedure 104. The generated synthesized view is also fed to the input of the hybrid encoder 105.

The hybrid encoder 105 contains the following tools used to encode the source view: reference frame management 106, interframe prediction 107, intra-frame prediction 108, interframe and intraframe compensation 109, spatial transform 110, rate-distortion optimization 111, and entropy encoding 112. Detailed information about these tools can be found in [9]. The inventive method can be implemented within the interframe prediction 107.

Figure 2 contains a diagram of the part of a hybrid video encoder, implementing the inventive method, that performs coding with prediction. The hybrid encoder includes a subtraction block 201, a transform and quantization block 202, an entropy coding block 203, an inverse transform and inverse quantization block 204, a block 205 for offset compensation and brightness/contrast correction, a view synthesis block 206, an addition block 207, a block 208 for buffering reference frames and depth maps, a block 209 for predicting the compensation and correction parameters, a block 210 for offset estimation and brightness/contrast parameter estimation, and a block 211 for deciding the macroblock encoding mode. Blocks 201-204, 207-209 and 211 are the building blocks used in the basic hybrid coding method [9]. The view synthesis block 206 is specific to multi-view coding: it synthesizes additional reference frames from already encoded/decoded frames and depth maps.

The inventive method can be implemented in blocks 205 and 210. These blocks carry out block-based coding with prediction, which includes the following steps:

- For the current block of the current encoded frame, a reference block is sought that minimizes the following expression:

where I(m,n) is the brightness value of the pixel with coordinates (m,n) within the current block, the size of the current encoded block is M×N, and (i,j) denotes the displacement vector (DV) that indicates the reference block R within a predefined search area. ψ(x) is the function correcting the differences in brightness and contrast between the current block and the reference block. This step is implemented in block 210. The brightness and contrast correction parameters, along with the obtained DV, are transmitted to blocks 205 and 209.

- The found reference block is transformed in accordance with the found brightness and contrast correction parameters (block 205). After this, block 201 generates a difference block. The difference block is then transformed using the Discrete Cosine Transform (DCT), quantized (block 202) and encoded by the entropy encoder (block 203). Side information (SI) required for subsequent decoding is also entropy coded (block 203).
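The search step above can be sketched as follows (Python; the exhaustive search window, the block layout, and the per-pixel correction callback are illustrative assumptions for the example, not the patented procedure itself):

```python
def get_block(frame, top, left, h, w):
    """Extract an h x w block whose upper-left corner is (top, left)."""
    return [row[left:left + w] for row in frame[top:top + h]]

def search_reference_block(cur_block, ref_frame, search_range, correct):
    """Exhaustively test displacement vectors (i, j) and return the one
    minimizing the sum of absolute differences between the current block
    and the corrected reference block psi(R)."""
    h, w = len(cur_block), len(cur_block[0])
    best_cost, best_dv = None, None
    for i in range(search_range):
        for j in range(search_range):
            ref_block = get_block(ref_frame, i, j, h, w)
            cost = sum(abs(cur_block[m][n] - correct(ref_block[m][n]))
                       for m in range(h) for n in range(w))
            if best_cost is None or cost < best_cost:
                best_cost, best_dv = cost, (i, j)
    return best_dv
```

In the claimed method the callback passed as `correct` would be ψ(x)=α·x, with α re-estimated for every candidate reference block; passing the identity function degenerates to plain SAD block matching.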

Figure 3 contains a diagram explaining the method of correcting brightness and contrast for the reference block in accordance with one embodiment of the claimed invention. According to Figure 3, at each iteration of the search for a reference block for the current block 311 of the current encoded frame 310, a displacement vector (DV) 320 is determined. Vector 320 indicates the reference block 301 of the reference frame 300. According to the claimed method, the brightness and contrast correction function ψ(x) has the form:

ψ(x)=α·x.

The brightness and contrast correction parameter α is described by the following equation:

where refMX is the average value of the reference block 301, (i,j) are the coordinates of the upper left corner of the reference block 301, and S denotes a pixel of the reference frame 300. The value estMX is an estimate of the average value of the current encoded block 311.

Figure 4 contains a flowchart illustrating the method of adjusting brightness and contrast for the reference block, according to one embodiment of the claimed invention. This method includes the following steps.

1. Receive the input pixel values of blocks 301, 302, 303, 311, 312, 313 and 314 (Figure 4, 401).

2. Calculate the following averages (Figure 4, 402). The average value encMX_L of block 312:

where DI(p,q) is the restored (encoded and then decoded) luminance value of the pixel with coordinates (p,q) inside block 312. The dimensions of block 312 are P×Q.

The average value encMX_A of block 313:

where DI(u,v) is the restored (encoded and then decoded) luminance value of the pixel with coordinates (u,v) within block 313. The dimensions of block 313 are U×V.

The average value refMX of the reference block 301.

The average value refMX_L of block 302:

The dimensions of block 302 are equal to the dimensions of block 312.

The average value refMX_A of block 303:

The dimensions of block 303 are equal to the dimensions of block 313.

3. Test condition 1 (Figure 4, 403): if blocks 302 and 312 are available (that is, blocks 302 and 312 lie within the boundaries of the frame and, if the reference frame is a synthesized frame, the pixels of block 302 do not belong to an occlusion area and the value of at least one pixel of block 302 differs from 0), then proceed to the evaluation of estMX (Figure 4, 405) in accordance with the following expression:

Otherwise, proceed to test condition 2 (Figure 4, 404).

4. Test condition 2 (Figure 4, 404): if blocks 303 and 313 are available (that is, blocks 303 and 313 lie within the boundaries of the frame and, if the reference frame is a synthesized frame, the pixels of block 303 do not belong to an occlusion area and the value of at least one pixel of block 303 differs from 0), then proceed to the evaluation of estMX (Figure 4, 407) in accordance with the following expression:

estMX=MAP(encMX_L,encMX_A,encMX_LA),

where MAP(x,y,z) is the well-known median prediction [10], and encMX_LA is the average value of block 314:

The dimensions of block 314 are U×Q, equal to the corresponding dimensions of blocks 312 and 313.

5. Calculate the brightness and contrast correction parameter α (Figure 4, 408) using the obtained values estMX and refMX.

6. Correct the brightness and contrast (Figure 4, 409) of the reference block 301 using the calculated parameter α.

It should be noted that the reference frame 300, blocks 301, 302, 303 and the restored (encoded and then decoded) blocks 312, 313, 314 are available both during encoding and during decoding. Figure 5 illustrates the mutual position of the examined regions and blocks in the current frame 500. Region 501 of the current frame 500 is available at the time of encoding and decoding the current coded block 502. Region 501 includes blocks 312, 313 and 314 and is sometimes called the "template". Region 503 is not available during decoding of the current block 502 and must not contain blocks 312, 313 and 314. Therefore, the above-described method can be implemented both in the encoder and in the decoder, and does not require transmission of additional data in the output bit stream.
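As an illustration of steps 1-6, the left-neighbor case (test condition 1) can be sketched in Python. The exact estimation formula is not preserved in this text; the ratio-transfer form below follows the description in claim 3 and is an illustrative reconstruction:

```python
def mean(pixels):
    """Average of a flat list of pixel values."""
    return sum(pixels) / len(pixels)

def estimate_alpha(ref_block, ref_left, enc_left):
    """Left-neighbor case: the ratio between the reference-block mean refMX
    and the mean refMX_L of its left neighbor is transferred to the decoded
    template mean encMX_L of the current frame, giving estMX; alpha is then
    estMX / refMX."""
    refMX = mean(ref_block)    # average of reference block 301
    refMX_L = mean(ref_left)   # average of block 302 (left of 301)
    encMX_L = mean(enc_left)   # average of restored block 312
    estMX = encMX_L * refMX / refMX_L   # estimate of the current-block mean
    return estMX / refMX

def correct_block(ref_block, alpha):
    """Apply psi(x) = alpha * x to each pixel of the reference block."""
    return [alpha * x for x in ref_block]
```

Because every input to `estimate_alpha` is available to both encoder and decoder, α never has to be transmitted in the bit stream.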

Another embodiment of the invention provides pixel-by-pixel correction of brightness and contrast for the reference block during coding with prediction. The key idea is pixel-by-pixel estimation of the brightness and contrast correction parameter; the correction is based on the restored values of pixels neighboring the current block, the pixel values of the reference frame, and their mutual similarity. Figure 6 illustrates this technique.

According to Figure 6, at each iteration of the search for a reference block for the current block 611 belonging to the current encoded frame 610, a displacement vector (DV) 620 is determined. The DV indicates the reference block 601 of the reference frame 600. The current block 611 contains the pixels labeled A00~A33. The reference block 601 contains the pixels labeled R00~R33. The restored values of the pixels (blocks 612 and 613) adjacent to the current coded block are denoted as

For each pixel position (i,j) in the reference block 601, the brightness and contrast correction is performed in accordance with the following equation:

ψ(x_{i,j})=α_{i,j}·x_{i,j}.

Here the pixel-wise brightness and contrast correction parameter (if estR_{i,j} is not equal to 0) is described as:

where estD_{i,j} is the first estimate for the pixel with coordinates (i,j) in the reference block, and estR_{i,j} is the second estimate for the pixel with coordinates (i,j) in the reference block. Otherwise, α_{i,j} is assumed equal to 1.

The flowchart of the method of pixel-by-pixel correction of brightness and contrast for the reference block is shown in Figure 7. This method includes the following steps:

1. Obtain the pixel values of blocks 601, 602, 603 of the reference frame 600 and of blocks 611, 612, 613 belonging to the template region of the current encoded frame 610 (operation 701).

2. Calculate the weighting coefficients W_{k}(i,j), k=0, ..., N-1 for each pixel position (i,j) in the reference block 601 (operation 702). The weighting coefficients W_{k}(i,j) can be expressed as follows:

W_{k}(i,j)=exp(-C·A_{k}(i,j)),

where C>0 is determined experimentally. Here N is the total number of pixels in blocks 612, 613 (or 602, 603). It should be noted that a weight is larger the closer the value of R_{i,j} is to the corresponding neighboring pixel value of the reference frame.

3. Calculate the values estD_{i,j} for each pixel position (i,j) in the reference block 601 (operation 703) in accordance with the following expression:

where Thr1 and Thr2 are predefined threshold values. The thresholds are used to exclude those pixel values neighboring the reference block that differ significantly from the values R_{i,j} and

4. Calculate the values estR_{i,j} for each pixel position (i,j) in the reference block 601 (operation 704), in accordance with the following expression:

The predefined threshold values Thr1 and Thr2 are the same as in the calculation of estD_{i,j}.

5. Calculate the brightness and contrast correction parameter α_{i,j} (operation 705) for each pixel with coordinates (i,j) in the reference block 601 on the basis of the obtained values estD_{i,j} and estR_{i,j}, if estR_{i,j} is not equal to 0. Otherwise, α_{i,j} is assumed equal to 1.

6. Correct the brightness and contrast (operation 706) of the reference block 601 on the basis of the calculated parameters α_{i,j}.

Another embodiment of the claimed invention is based on the following observation. Usually the pixels neighboring the reference block are selected as a fixed group of pixels adjacent to the reference block. However, the reference block search procedure may select a displacement vector for which the pixel values in this group are not sufficiently similar to the corresponding values of the pixels adjacent to the current coded block. Moreover, the values of pixels adjacent to the reference block may differ greatly from the values of the pixels of the reference block itself. In these cases, the brightness and contrast correction may be performed incorrectly.

To solve this problem, one embodiment of the invention proposes to use a "floating" (relative to the reference block) position of the said group of pixels neighboring the reference block. Figure 8 explains the inventive method in accordance with one embodiment of the claimed invention. According to Figure 8, at each iteration of the search for a reference block for the current block 811 of the current encoded frame 810, a displacement vector (DV) 820 is determined. The DV indicates the reference block 801 of the reference frame 800. The coordinates of the group of pixels of the reference frame (which form blocks 802 and 803) are determined with the help of an additional offset vector 804. The offset vector 804 is the result of an additional offset estimation procedure: the vector 804 is chosen to give the minimum value of a penalty function that measures the degree of similarity between blocks 812, 813 and blocks 802, 803, respectively. Well-known functions can serve as the penalty function: mean squared error, sum of absolute differences, sum of absolute differences for zero-mean signals, and so on. Vector 804 can be determined implicitly during both encoding and decoding, without transmission of additional information in the output bit stream.
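The search for the additional offset vector 804 can be sketched as follows (Python, one-dimensional for brevity; the strip layout and the candidate set are assumptions for the example):

```python
def sad(a, b):
    """Sum of absolute differences - one of the penalty functions named above."""
    return sum(abs(x - y) for x, y in zip(a, b))

def find_template_offset(enc_template, ref_strip, candidates):
    """Pick the extra offset (vector 804) whose shifted reference-frame
    neighborhood best matches the template of the current frame.

    enc_template: restored template pixels of the current frame (812/813);
    ref_strip: a strip of reference-frame pixels around the reference block;
    candidates: the offsets to try."""
    n = len(enc_template)
    best_off, best_cost = None, None
    for off in candidates:
        cost = sad(enc_template, ref_strip[off:off + n])
        if best_cost is None or cost < best_cost:
            best_off, best_cost = off, cost
    return best_off
```

Encoder and decoder evaluate the same penalty over the same candidates on data both possess, which is why vector 804 never enters the bit stream.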

Figure 9 presents a flowchart describing a method of encoding multi-view video sequences based on brightness and contrast correction, according to one embodiment of the claimed invention. At step 901, a reference block is determined, which is used to generate the predicted block. At step 902, the brightness and contrast correction parameters for the found reference block are determined. The determination of the brightness and contrast correction parameters includes:

- obtaining the restored (encoded and then decoded) values of pixels neighboring the current block, and the pixel values neighboring the reference block of the reference frame;

- determining numerical relationships between the values of the pixels of the reference block and the pixel values neighboring the reference block, and relationships between the restored values of pixels neighboring the current coded block and the pixel values neighboring the reference block;

- determining the brightness and contrast correction parameters for correcting the differences in brightness and contrast of the reference block, based on the numerical relationships found at the previous step between the values of the pixels of the reference block, the restored values of pixels neighboring the current coded block, and the values of the pixels neighboring the reference block.

At step 903, the correction of the reference block is performed using the brightness and contrast correction parameters. At step 904, the block prediction for the current block is formed using the brightness- and contrast-adjusted reference block. At step 905, the current block is encoded using the generated block prediction. In particular, information about the reference block is encoded if it is necessary for decoding. It should be noted that the brightness and contrast correction parameters are not encoded and are not placed in the output bit stream.

Figure 10 illustrates the method of decoding multi-view video sequences on the basis of brightness and contrast correction, according to one embodiment of the invention. According to Figure 10, information about the reference block is decoded if it is required for decoding. The decoded information can be used to determine the reference block at step 1001. At step 1002, the brightness and contrast correction parameters for the correction of the reference block are determined. The procedure for determining the brightness and contrast correction parameters includes:

- obtaining the restored (encoded and then decoded) values of pixels neighboring the current block, and the pixel values neighboring the reference block of the reference frame;

- determining numerical relationships between the values of the pixels of the reference block and the pixel values neighboring the reference block, and relationships between the restored values of pixels neighboring the current coded block and the pixel values neighboring the reference block;

- determining the brightness and contrast correction parameters for correcting the differences in brightness and contrast of the reference block, based on the numerical relationships found at the previous step between the values of the pixels of the reference block, the restored values of pixels neighboring the current coded block, and the values of the pixels neighboring the reference block.

At step 1003, the correction of the reference block is performed using the brightness and contrast correction parameters. At step 1004, the block prediction for the current decoded block is formed using the brightness- and contrast-adjusted reference block. At step 1005, the current block is decoded using the generated block prediction.
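The symmetry between the encoding method of Figure 9 and the decoding method of Figure 10 can be illustrated with a small sketch (Python; the simple mean-ratio α used here is an assumption standing in for the full correction-parameter derivation):

```python
def mean(p):
    """Average of a flat list of pixel values."""
    return sum(p) / len(p)

def alpha_from_templates(enc_tmpl, ref_tmpl):
    """Both sides derive alpha from data each of them possesses: the restored
    template of the current frame and the reference-frame template."""
    m = mean(ref_tmpl)
    return mean(enc_tmpl) / m if m != 0 else 1.0

def encode_block(cur_block, ref_block, enc_tmpl, ref_tmpl):
    """Steps 902-905: only the residual (plus the DV, not modeled here)
    enters the bit stream; alpha itself is never transmitted."""
    a = alpha_from_templates(enc_tmpl, ref_tmpl)
    pred = [a * x for x in ref_block]           # corrected prediction
    return [c - p for c, p in zip(cur_block, pred)]

def decode_block(residual, ref_block, enc_tmpl, ref_tmpl):
    """Steps 1002-1005: the decoder repeats the identical derivation of alpha."""
    a = alpha_from_templates(enc_tmpl, ref_tmpl)
    pred = [a * x for x in ref_block]
    return [r + p for r, p in zip(residual, pred)]
```

Since the decoder recomputes α from the same inputs, the reconstruction is exact up to the (here omitted) residual quantization.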

In practice the claimed invention can be used for encoding and decoding multi-view video sequences.

The embodiments of the invention described above are given for illustrative purposes only and are not restrictive. The scope of protection of the invention is defined by the attached claims.

References

[1] Yea, S.; Vetro, A., "View Synthesis Prediction for Multiview Video Coding", Image Communication, ISSN: 0923-5965, Vol.24, Issue 1-2, pp.89-100, January 2009.

[2] ITU-T Rec. H.264. Advanced video coding for generic audiovisual services. 2010.

[3] US Patent 7,924,923. Motion Estimation and Compensation Method and Device Adaptive to Change in Illumination. April, 2011.

[4] Y. Lee, J. Hur, Y. Lee, R. Han, S. Cho, N. Hur, J. Kim, J. Kim, P. Lai, A. Ortega, Y. Su, P. Yin and C. Gomila. CE11: Illumination compensation. Joint Video Team (JVT) of ISO/IEC MPEG and ITU-T VCEG, JVT-U052, Oct. 2006.

[5] J.H. Kim, P. Lai, J. Lopez, A. Ortega, Y. Su, P. Yin, and C. Gomila. New coding tools for illumination and focus mismatch compensation in multiview video coding. IEEE Trans. on Circuits and Systems for Video Technology, vol. 17, no. 11, pp. 1519-1535, Nov. 2007.

[6] Yamamoto et al., "Weighted prediction using neighboring pixels," ITU-T Q.6/SG16 VCEG, Proposal VCEG-AH19, January 2008.

[7] US Patent Application 2011/0286678. Multi-view Image Coding Method, Multi-view Image Decoding Method, Multi-view Image Coding Device, Multi-view Image Decoding device. Multi-view Image Coding Program, and Multi-view Image Decoding Program. November, 2011.

[8] US patent application 2008/0304760. Method and Apparatus for Illumination Compensation and Method and Apparatus for Encoding and Decoding Image Based on Illumination Compensation. December, 2008.

[9] Richardson I.E. The H.264 Advanced Video Compression Standard. Second Edition. 2010.

[10] S.A. Martucci "Reversible compression of HDTV images using median adaptive prediction and arithmetic coding", in IEEE Int. Symp. on Circuits and Systems, 1990.

1. A method for local adjustment of brightness and contrast of a reference frame for encoding multi-view video sequences, comprising the following steps:

- obtain the pixel values of the current encoded block belonging to the encoded frame, and the pixel values of the reference block belonging to the reference frame;

- obtain the restored, that is, encoded and then decoded, values of pixels neighboring the current block of the encoded frame, and the pixel values neighboring the reference block of the reference frame;

- determine numerical relationships between the values of the pixels of the reference block and the pixel values neighboring the reference block, and relationships between the restored values of pixels neighboring the current coded block and the pixel values neighboring the reference block;

- based on the numerical relationships found at the previous step between the values of the pixels of the reference block, the restored values of pixels neighboring the current coded block, and the values of the pixels neighboring the reference block, determine the brightness and contrast correction parameters for correcting the differences in brightness and contrast of the reference block in comparison with the current encoded block;

- perform correction of the differences in brightness and contrast of the reference block using the found correction parameters.

2. The method according to claim 1, characterized in that the procedure for determining the numerical relationships for the pixels of the current encoded frame and the reference frame, and the procedure for determining the brightness and contrast correction parameters, include the following steps:

- calculate statistical characteristics for the restored values of pixels neighboring the current coded block, statistical characteristics of the values of the pixels of the reference block, and statistical characteristics of the values of pixels neighboring the reference block;

- determine numerical relationships between the statistical characteristics for the pixels of the reference block and the statistical characteristics for the restored values of pixels neighboring the reference block;

- based on the computed statistical characteristics and the relationships between them, estimate the value of the statistical characteristic for the current coded block;

- calculate the brightness and contrast correction parameter for correcting the differences in brightness and contrast between the reference block and the current encoded block, based on the obtained estimate of the statistical characteristic for the current block and the statistical characteristic of the reference block.

3. The method according to claim 2, characterized in that the calculation of the statistical characteristics, the determination of the relationships between the statistical characteristics, and the determination of the brightness and contrast correction parameter include the following steps:

- if there are restored pixels neighboring the current coded block and located to the left of the current encoded block, calculate their average value; if there are restored pixels neighboring the current coded block and located above the current encoded block, calculate their average value; calculate the average value of the pixels of the reference block; if there are pixels neighboring the reference block and located to the left of the reference block, calculate their average value; and if there are pixels neighboring the reference block and located above the reference block, calculate their average value;

- if there are restored pixels neighboring the current coded block and located to the left of the current encoded block, and pixels neighboring the reference block and located to the left of the reference block: calculate the ratio between the average value of the pixels of the reference block and the average value of the pixels neighboring the reference block and located to its left; calculate the product of the found ratio and the average value of the restored pixels neighboring the current coded block and located to its left; and determine the brightness and contrast correction parameter as the ratio between the calculated product and the average value of the pixels of the reference block;

- otherwise, if there are restored pixels neighboring the current coded block and located above the current coded block, and pixels neighboring the reference block and located above the reference block: calculate the ratio between the average value of the pixels of the reference block and the average value of the pixels neighboring the reference block and located above it; calculate the product of the found ratio and the average value of the restored pixels neighboring the current coded block and located above it; and determine the brightness and contrast correction parameter as the ratio between the calculated product and the average value of the pixels of the reference block;

- otherwise, use median prediction to calculate an estimate of the average value of the current encoded block, and determine the brightness and contrast correction parameter as the ratio between the estimated average value of the pixels of the current encoded block and the average value of the pixels of the reference block.

4. The method according to claim 1, characterized in that the procedure for determining the relationships for the pixels of the current encoded frame and the reference frame, the determination of the brightness and contrast correction parameters, and the correction of the differences in brightness and contrast of the reference block in comparison with the current encoded block include the following steps:

- calculate the first estimate estD_{i,j} for each pixel position (i,j) in the reference block; the first estimate estD_{i,j} is a linear combination of the restored values

- calculate the second estimate estR_{i,j} for each pixel position (i,j) in the reference block; the second estimate estR_{i,j} is a linear combination of the values of

- determine, based on the first estimate estD_{i,j}, the second estimate estR_{i,j}, the values R_{i,j} of the pixels of the reference block, and the restored values

- perform the brightness and contrast correction for each pixel of the reference block using the previously determined correction parameters.

5. The method according to claim 4, characterized in that the procedure for calculating the first and second estimates for each pixel position in the reference block and for determining the brightness and contrast correction parameters for each pixel position in the reference block includes the following steps:

- calculate the first estimate estD_{i,j}, where W_{k}(i,j), k=0, ..., N-1, are weighting coefficients, and

- calculate the second estimate estR_{i,j}, where W_{k}(i,j), k=0, ..., N-1, are weighting coefficients, and

- if the second estimate estR_{i,j} is not equal to zero, determine the brightness and contrast correction parameter for correcting the differences in brightness and contrast for each pixel in the reference block; this parameter is the ratio:

- otherwise, the correction parameter α_{i,j} is set equal to 1;

- perform the brightness and contrast correction of the reference block by multiplying the value of each pixel R_{i,j} of the reference block by the corresponding correction parameter α_{i,j}.

6. The method according to claim 5, characterized in that the procedure for calculating the first and second estimates for each pixel position in the reference block includes the following steps:

- compute the weights W_{k}(i,j), k=0, ..., N-1 for the first estimate estD_{i,j} and for the second estimate estR_{i,j}; for each pixel position (i,j) in the reference block, the weighting coefficient W_{k}(i,j) is a non-increasing function of the absolute difference:

which provides that W_{k}(i,j) decreases as the absolute difference increases. Here R_{i,j} is the pixel value of the reference block; and

7. The method according to claim 5, characterized in that the procedure for calculating the first and second estimates for each pixel position in the reference block includes the following steps:

- compute the weights W_{k}(i,j), k=0, ..., N-1 for the first estimate estD_{i,j} and the second estimate estR_{i,j}; for each pixel position (i,j) in the reference block, the weighting coefficient W_{k}(i,j) is a non-increasing function of the absolute difference:

which provides that W_{k}(i,j) decreases as the absolute difference increases; otherwise W_{k}(i,j)=0. Here R_{i,j} is the pixel value of the reference block,

8. The method according to claim 5, characterized in that the procedure for calculating the first and second estimates for each pixel position in the reference block includes the following steps:

- compute the weights W_{k}(i,j), k=0, ..., N-1 for the first estimate estD_{i,j} and the second estimate estR_{i,j}; for each pixel position (i,j) in the reference block, the weighting coefficient W_{k}(i,j) is a non-increasing function of the absolute difference:

which provides that W_{k}(i,j) decreases as the absolute difference increases; otherwise W_{k}(i,j)=0; here R_{i,j} is the pixel value of the reference block,

9. The method according to claim 5, characterized in that the procedure for calculating the first and second estimates for each pixel position in the reference block includes the following steps:

- compute the weights W_{k}(i,j), k=0, ..., N-1 for the first estimate estD_{i,j} and the second estimate estR_{i,j}; for each pixel position (i,j) in the reference block, the weighting coefficient is W_{k}(i,j)=exp(-C·A_{k}(i,j)), where C is a predefined constant greater than 0, and A_{k}(i,j) equals the absolute difference involving the pixel value R_{i,j} of the reference block; otherwise W_{k}(i,j)=0.

10. The method according to claim 5, characterized in that the procedure for calculating the first and second estimates for each pixel position in the reference block includes the following steps:

- compute the weights W_{k}(i,j), k=0, ..., N-1 for the first estimate estD_{i,j} and the second estimate estR_{i,j}; for each pixel position (i,j) in the reference block, the weighting coefficient is W_{k}(i,j)=exp(-C·A_{k}(i,j)), where C is a predefined constant greater than 0, and A_{k}(i,j) equals the absolute difference involving the pixel value R_{i,j} of the reference block; otherwise W_{k}(i,j)=0.

11. The method according to claim 1, characterized in that the positions of the restored values of pixels neighboring the current coded block, and the positions of the values of pixels neighboring the reference block, are determined adaptively instead of using pixels at predetermined positions.

12. A method of encoding multi-view video sequences based on local adjustment of brightness and contrast of the reference block, which includes the following steps:

- determine the reference block, which is used to form the block prediction for the current coded block;

- determine the brightness and contrast correction parameters for correcting the differences in brightness and contrast between the reference block and the current encoded block, during the search for the reference block or upon its completion;

- perform the brightness and contrast correction of the found reference block using the found brightness and contrast correction parameters;

- form the block prediction for the current encoded block using the brightness- and contrast-adjusted reference block;

- encode the current block using the generated block prediction without encoding the found brightness and contrast correction parameters; encode the information about the reference block if it is necessary for decoding;

characterized in that the procedure for determining the brightness and contrast correction parameters includes the following steps:

- obtain the restored, that is, encoded and then decoded, values of pixels neighboring the current block of the encoded frame, and the pixel values neighboring the reference block of the reference frame;

- determine numerical relationships between the values of the pixels of the reference block and the pixel values neighboring the reference block, and relationships between the restored values of pixels neighboring the current coded block and the pixel values neighboring the reference block;

- based on the numerical relationships found at the previous step between the values of the pixels of the reference block, the restored values of pixels neighboring the current coded block, and the values of the pixels neighboring the reference block, determine the brightness and contrast correction parameters for correcting the differences in brightness and contrast of the reference block.

13. A method of decoding multi-view video sequences based on brightness and contrast correction, which includes the following steps:

- decode the information about the reference block, if it is necessary in order to determine the reference block of the current block, and determine the reference block;

- determine the brightness and contrast correction parameters for adjusting the brightness and contrast of the found reference block;

- perform correction of the differences in brightness and contrast of the found reference block, using the brightness and contrast correction parameters;

- form the block prediction for the current decoded block using the brightness- and contrast-adjusted reference block;

- decode the current block using the generated block prediction and the brightness and contrast correction parameters;

characterized in that the procedure for determining the correction parameters of brightness and contrast includes the following stages:

- obtain the restored (that is, encoded and then decoded) values of the pixels neighbouring the current block of the encoded frame, and the values of the pixels neighbouring the reference block of the reference frame;

- determine the numerical relationships between the values of the pixels of the reference block and the values of the pixels neighbouring the reference block, and the relationships between the restored values of the pixels neighbouring the current encoded block and the values of the pixels neighbouring the reference block;

- based on the numerical relationships found at the previous stage between the values of the pixels of the reference block, the restored values of the pixels neighbouring the current encoded block, and the values of the pixels neighbouring the reference block, determine the brightness and contrast correction parameters for correcting the differences in brightness and contrast of the reference block compared to the current encoded block.
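Claims 12 and 13 leave the exact form of the numerical relationships open. A common realisation, sketched below purely as an illustration (not the patent's normative method), is a least-squares fit of a contrast scale and a brightness offset over the neighbouring (template) pixels; because both encoder and decoder compute the fit from restored data alone, no correction parameters need to be transmitted. All function names are illustrative:

```python
def correction_parameters(cur_neighbours, ref_neighbours):
    """Estimate a contrast scale `a` and brightness offset `b` from the
    restored pixels neighbouring the current block and the pixels
    neighbouring the reference block, by least squares:
    minimise sum((a * ref + b - cur)^2) over the neighbouring pixels."""
    n = len(ref_neighbours)
    mean_c = sum(cur_neighbours) / n
    mean_r = sum(ref_neighbours) / n
    var_r = sum((r - mean_r) ** 2 for r in ref_neighbours)
    if var_r == 0:           # flat neighbourhood: contrast is undefined,
        a = 1.0              # fall back to a pure brightness offset
    else:
        cov = sum((r - mean_r) * (c - mean_c)
                  for r, c in zip(ref_neighbours, cur_neighbours))
        a = cov / var_r
    b = mean_c - a * mean_r
    return a, b

def correct_reference_block(ref_block, cur_neighbours, ref_neighbours):
    """Apply the estimated correction to every pixel of the reference
    block, clipping to the 8-bit range, to form the prediction block."""
    a, b = correction_parameters(cur_neighbours, ref_neighbours)
    return [[min(255, max(0, round(a * p + b))) for p in row]
            for row in ref_block]
```

Since the decoder repeats the same derivation over the same restored template pixels, the corrected reference block it produces is identical to the encoder's, which is what allows the claims to omit the parameters from the bit stream.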

**Same patents:**

FIELD: information technology.

SUBSTANCE: the method automatically, by computer, forms a prediction procedure appropriate for an input image. The technical result is achieved with an image encoding device which encodes images using a predicted pixel value generated by a predetermined predicted-value generation procedure that predicts the value of a target encoding pixel from pre-decoded pixels. The predicted-value generation procedure with the best estimated cost is selected from among the procedures generated as parents and descendants, where the total information content for representing the tree structure, together with the volume of code estimated from the predicted pixel value obtained through the tree structure, is used as the estimated cost. The final predicted-value generation procedure is formed by repeating this operation.

EFFECT: high efficiency of encoding and decoding, and further reduction of the relevant volume of code.

12 cl, 14 dwg

FIELD: information technology.

SUBSTANCE: a parent population is generated by randomly forming predicted-value generation procedures, each represented by a tree structure, and a set of predicted-value generation procedures is selected as parents from this population. A predicted-value generation procedure is generated as a descendant by a certain method of evolving the tree structure which develops the selected procedures, where an existing predicted-value generation function can be a terminal node of the tree. The procedure with the best estimated cost is selected from among the parent and descendant procedures, where the total information content for representing the tree structure, together with the volume of code estimated from the predicted pixel value, is used as the cost estimate, and the final predicted-value generation procedure is formed by repeating this operation.

EFFECT: high encoding efficiency.

28 cl, 14 dwg

FIELD: information technologies.

SUBSTANCE: the motion vector coding method includes the following stages: selecting either a first mode, in which information indicating the motion vector predictor among at least one motion vector predictor is coded, or a second mode, in which information is coded indicating that the motion vector predictor is generated on the basis of units or pixels included in a pre-coded area adjacent to the current unit; determining the motion vector predictor of the current unit in accordance with the selected mode and coding the information on the motion vector predictor of the current unit; and coding the difference vector between the motion vector of the current unit and the motion vector predictor of the current unit.

EFFECT: increased efficiency of coding and decoding of a motion vector.

15 cl, 19 dwg
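In either claimed mode, what is ultimately transmitted is the difference between the motion vector and its predictor. A minimal sketch under that reading (all names are illustrative; the component-wise median predictor stands in for one possible derivation from the pre-coded adjacent area):

```python
def median_predictor(neighbour_mvs):
    """Derive a motion vector predictor from units in the pre-coded area
    adjacent to the current unit (component-wise median, H.264-style)."""
    xs = sorted(mv[0] for mv in neighbour_mvs)
    ys = sorted(mv[1] for mv in neighbour_mvs)
    mid = len(neighbour_mvs) // 2
    return (xs[mid], ys[mid])

def encode_mv_difference(mv, predictor):
    """Code only the difference vector between the current unit's motion
    vector and its predictor."""
    return (mv[0] - predictor[0], mv[1] - predictor[1])

def decode_mv(mv_difference, predictor):
    """Reconstruct the motion vector from the decoded difference vector."""
    return (mv_difference[0] + predictor[0], mv_difference[1] + predictor[1])
```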

FIELD: information technologies.

SUBSTANCE: the frequency with which combinations of optimal forecasting modes are selected for spatially corresponding units of the upper and lower layers is identified on the basis of the optimal forecasting modes selected during conventional coding, and a compliance table describing the interconnections between them is built. The combinations of selected optimal forecasting modes in the compliance table are narrowed on the basis of this frequency value in order to create forecasting-mode compliance information which describes the narrowed combinations of optimal forecasting modes. When coding an upper-layer unit, the set of forecasting-mode search candidates to be examined during coding is identified by referring to the forecasting-mode compliance information, using as the key the optimal forecasting mode selected when coding the spatially corresponding unit of the lower layer.

EFFECT: fewer forecasting-mode search candidates for the upper layer owing to correlations of optimal forecasting modes between layers.

7 cl, 14 dwg

FIELD: information technology.

SUBSTANCE: displacement vectors are searched for by searching for global displacement, breaking up the image into multiple layers of blocks, successive processing of the layers using various search schemes, using displacement vector prediction, as well as selecting displacement vectors based on efficiency of their further entropy coding.

EFFECT: improved efficiency of a video compression system, especially at low bit rates, and high throughput thereof.

2 cl, 8 dwg

FIELD: information technologies.

SUBSTANCE: the video coding device subjects a video image to predictive coding with motion compensation and comprises: a detection module for detecting, among coded blocks adjacent to the block to be coded, the available blocks having motion vectors, and the number of available blocks; a selection module for selecting one selected block from the coded available blocks; a selection-information coder for coding selection information indicating the selected block, using a coding table corresponding to the number of available blocks; and an image coder for subjecting the block to be coded to predictive coding with motion compensation using the motion vector of the selected block.

EFFECT: reduced additional information owing to motion vector selection information, with increased degrees of freedom for calculating the motion vector by selecting one of the coded blocks.

10 cl, 14 dwg

FIELD: information technology.

SUBSTANCE: each encoded frame of a multiview video sequence, determined according to a predetermined encoding order, is presented as a set of non-overlapping units; at least one already encoded frame corresponding to said view is determined and denoted as the reference frame; synthesised frames are generated for the encoded and reference frames, wherein for each non-overlapping unit of pixels of the encoded frame, denoted as the encoded unit, a spatially superimposed unit inside the synthesised frame corresponding to the encoded frame is determined and denoted as the virtual unit; for it, the spatial position of the unit of pixels in the synthesised frame corresponding to the reference frame is determined, so that the reference virtual unit thus determined is the most accurate numerical approximation of the virtual unit; for the determined reference virtual unit, the spatially superimposed unit belonging to the reference frame, denoted as the reference unit, is determined, and the error between the virtual unit and the reference virtual unit is calculated, as well as the error between the reference virtual unit and the reference unit; the least among them is selected and, based thereon, at least one differential encoding mode is determined, which indicates which of the units found at the previous steps should be used for prediction during the subsequent differential encoding of the encoded unit, and differential encoding of the encoded unit is carried out in accordance with the selected differential encoding mode.

EFFECT: providing differential encoding of a frame using a small volume of service information by taking into account known spatial connections between neighbouring views at each moment in time, as well as information available during both encoding and decoding.

5 cl, 15 dwg

FIELD: information technology.

SUBSTANCE: method of encoding an image using intraframe prediction involves selecting a pixel value gradient which is indicated by the image signal to be predicted from among a plurality of selected gradients; generating a predicted signal by applying the gradient in accordance with the distance from the reference prediction pixel, based on the gradient; intraframe encoding of the image signal to be predicted, based on the predicted signal; and encoding information which indicates the value of the selected gradient. As an alternative, the method involves estimating the pixel value gradient which is indicated by the image signal to be predicted, based on the image signal already encoded; generating a predicted signal by applying the gradient in accordance with distance from the reference prediction pixel, based on the gradient; and intraframe encoding of the image signal to be predicted, based on the predicted signal.

EFFECT: improved image compression efficiency.

20 cl, 55 dwg

FIELD: information technology.

SUBSTANCE: method of encoding a video signal comprises steps of: forming a predicted image for the current block; generating a weighted prediction coefficient for scaling the predicted image; forming a weighted prediction image by multiplying the predicted image with the weighted prediction coefficient; generating a difference signal by subtracting the weighted prediction image from the current block; and encoding the difference signal, wherein generation of the weighted prediction coefficient involves calculating the weighted prediction coefficient for which the difference between the base layer image, which corresponds to the current block, and the predicted image is minimal.

EFFECT: high efficiency of encoding a video signal by reducing the error of the current block, which must be compressed, and the predicted image.

31 cl, 16 dwg
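The weight derivation in the entry above (a coefficient for which the difference between the base-layer image and the predicted image is minimal) reduces to a one-parameter least-squares fit. A sketch under that assumption, with illustrative names:

```python
def weighted_prediction_coefficient(base_layer, predicted):
    """Least-squares weight w minimising sum((base - w * pred)^2) over
    the pixels, i.e. w = <base, pred> / <pred, pred>."""
    num = sum(b * p for b, p in zip(base_layer, predicted))
    den = sum(p * p for p in predicted)
    return num / den if den else 1.0

def difference_signal(current_block, predicted, base_layer):
    """Residual to be encoded: the current block minus the weighted
    prediction image (the prediction scaled by the derived weight)."""
    w = weighted_prediction_coefficient(base_layer, predicted)
    return [c - w * p for c, p in zip(current_block, predicted)]
```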

FIELD: information technology.

SUBSTANCE: a deblocking filter 113 adjusts the value of disable_deblocking_filter_idc, slice_alpha_c0_offset_div2 or slice_beta_offset_div2 based on the activity of an image calculated by an activity calculation unit 141, the total sum of orthogonal transformation coefficients of the image calculated by an orthogonal transformation unit 142, the complexity of the image calculated by the rate control unit 119, or the total sum of prediction errors of the image calculated by a prediction error addition unit 120.

EFFECT: improved image quality through correct deblocking.

8 cl, 7 dwg

FIELD: information technology.

SUBSTANCE: the invention discloses a method and a system for the specific task of converting video from monocular to stereoscopic and from black-and-white to colour in a semi-automatic mode. The method of selecting key frames and supplementing a video sequence with depth or colour information includes the following operations: obtaining initialisation data for each key object in each frame; detecting scene changes in the input video sequence and breaking the video sequence into scenes; for each scene, detecting activity data for each object through a video data analysis module, along with global movement (GM) data over all frames of the scene, and storing said data in a video analysis result storage; wherein, after processing a video scene, the stored activity data for each object are analysed first, key frames are selected, and the GM data and the object's key frames are then analysed; key frames are extracted and output through the video data analysis unit; after which the video analysis result storage is cleared and processing switches to the next scene of the input video sequence until the end of the video sequence is reached. The system consists of three basic parts: a video data analysis unit; a video analysis result storage; and a video analysis result processing unit.

EFFECT: converting video from monocular to stereoscopic and from black and white to colour, in a semi-automatic mode.

21 cl, 7 dwg

FIELD: information technology.

SUBSTANCE: device can capture one or more 2D images, where the 2D image is representative of a tangible object from a perspective defined by orientation of the device. Furthermore, the device may include a content aggregator that can construct a 3D image from two or more 2D images collected by the device, in which the construction is based at least in part on aligning each corresponding perspective associated with each 2D image.

EFFECT: easier capturing of a portion of 2D data for implementation within a 3D virtual environment.

14 cl, 10 dwg

FIELD: information technology.

SUBSTANCE: in the method of drawing advanced maps based on a three-dimensional digital model of an area, involving central projection of points of the three-dimensional digital model onto a plane by a beam, the mapping object is selected in the form of a three-dimensional digital model of the area and its boundaries in the horizontal projection are determined; the settings of the advanced map to be drawn are given; optimality criteria for the advanced display of the mapping object are selected; the values of the horizontal and vertical viewing angles are given; a preliminary path of observation points is drawn around the mapping object in the horizontal projection such that the mapping object fits into the sector of horizontal viewing angles.

EFFECT: broader functional capabilities by finding the optimum position of the projection centre when drawing an advanced map based on a three-dimensional digital model.

4 cl, 4 dwg

FIELD: physics.

SUBSTANCE: the method and apparatus employ computer graphics techniques to project (display, render) a primary three-dimensional model onto virtual viewing positions which coincide with the positions from which the original images were captured. A decoding texture is calculated which makes it possible to associate projection pixel coordinates with the parameters of geometric beams emitted towards the corresponding points of the three-dimensional model. Stereoscopic juxtaposition of the real images with the corresponding displayed projections is carried out. The three-dimensional coordinates of the digital model are improved by translating the found disparities, which have the physical meaning of reverse-engineering errors, into adjustments to the three-dimensional model using a decoding image. The process can run iteratively until convergence, during which more accurate and sparse disparity maps and a three-dimensional model are obtained.

EFFECT: improved quality of disparity maps and accuracy of the reconstructed three-dimensional model owing to simplification of the task of juxtaposing elements of stereoscopic pairs.

9 cl, 15 dwg

FIELD: information technology.

SUBSTANCE: the method of recognising geometrically arranged objects, based on a graphical technique of constructing a spherical perspective on a plane, dispenses with lists of measurements and plotting operations, and is based on plane-parallel displacements under conditions of changing projection planes.

EFFECT: broader functional capabilities owing to obtaining orthographic drawings from each separate photograph of the architecture.

1 dwg

FIELD: information technology.

SUBSTANCE: the system includes apparatus for filtering a stream of image elements, installed at the input of a three-dimensional image computing apparatus and containing: apparatus for selecting, in said stream of image elements, elementary images, each forming at least a portion of the image output to a screen; and apparatus for encoding each successive elementary image based on an index value which characterises the content of said elementary image, where said index values are transmitted to said three-dimensional image computing apparatus for reproducing the content of each elementary image with said three-dimensional image computing apparatus.

EFFECT: reduced power consumption and improved quality of displaying a digital mock-up of an object on a screen in form of a synthesised image.

15 cl, 9 dwg

FIELD: information technology.

SUBSTANCE: a mapping application displays detailed information as a function of a plurality of sets of layered data. When parts of at least two sets of layered data overlap, a set operation is applied to the overlapping parts in order to create a new set of layered data. The set operation makes it possible to change the sets of layered data with a simple drag-and-drop of one set of layered data onto another region of the map. When the parts no longer overlap, the set operation is removed and the sets of layered data are displayed in their initial form.

EFFECT: broader functional capabilities owing to visual filtration of data in layers of mapping applications.

20 cl, 9 dwg

FIELD: measurement equipment.

SUBSTANCE: the invention relates to devices for forming ultrasonic medical images. In the method, first ultrasonic image data of an organ volume is collected at a first resolution during one cardiac cycle of a patient; second ultrasonic image data of a three-dimensional sector of said volume is collected at a second, higher resolution during another cardiac cycle of the patient; the first and second ultrasonic image data are compared; depending on the comparison result, the second three-dimensional ultrasonic image data is confirmed if the image data are similar, or the sector is subjected to additional processing.

EFFECT: providing continuous display of three-dimensional image.

9 cl, 5 dwg

FIELD: information technology.

SUBSTANCE: a device for providing a video frame sequence based on a scene model and on content provided to a user has a video frame generator configured to generate a sequence of a plurality of video frames based on the scene model, analyse the scene model, insert into the scene model a reference directing that the content provided to the user be accepted as the texture for an identified surface, or assign a texture property to the identified object or surface, and display the video frame sequence based on the scene model. The scene model comprises a scene model object having an object name or object property, defines the scene in terms of a list of geometric objects, the characteristics of objects present in the scene, and characteristics specifying the part of the scene model visible to a viewer from a viewing point, and defines the scene based on material characteristics or texture characteristics of the scene model object.

EFFECT: enabling flexible generation of an adjustable video frame sequence and providing an easy-to-use concept of creating the menu structure of a video carrier.

23 cl, 24 dwg

FIELD: information technologies.

SUBSTANCE: the shape visualisation system comprises a visualisation mechanism that operates on a processor in a computer system. The system also comprises an application configured to provide a user interface for selecting shape parameters, the user interface being a facility for selecting one or more corresponding 2D parameters for the specified shape. The system further comprises a 2D visualisation mechanism comprising a facility for applying 2D effects and 2D surface effects to the specified shape, a facility for developing a texture map from 2D text, a facility for generating the first initial plane for 2D effects, and a facility for generating and visualising 2D text effects on the second initial plane. In addition, the system comprises a 3D modelling factory, which includes a facility for producing 2D geometry from the specified shape and a facility for generating a 3D model. The system also comprises a facility for mapping the texture map onto the 3D model and a facility for visualising the 3D model.

EFFECT: reduced scope of calculations in development of 3D graphics compared to traditional 3D modelling.

20 cl, 8 dwg

FIELD: information technology.

SUBSTANCE: parallel secant lines, turned at angles ranging from 0 to 180° from the horizontal, are drawn on the image. For each direction, the average length of all elements along all secant lines is determined. The direction of orientation is determined from the maximum of the average lengths found over all directions.

EFFECT: high accuracy of determining the orientation of elements of grey-scale and binary (two-level) images.

7 dwg
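The orientation search in the last entry can be sketched for four secant directions (0°, 45°, 90°, 135°); restricting the continuous 0–180° sweep to these four directions is a simplifying assumption, and all names are illustrative:

```python
def run_lengths(seq):
    """Lengths of consecutive runs of foreground (non-zero) elements."""
    runs, n = [], 0
    for v in seq:
        if v:
            n += 1
        elif n:
            runs.append(n)
            n = 0
    if n:
        runs.append(n)
    return runs

def mean_run(lines):
    """Average foreground run length over a family of secant lines."""
    runs = [r for line in lines for r in run_lengths(line)]
    return sum(runs) / len(runs) if runs else 0.0

def dominant_orientation(img):
    """img: 2-D list of 0/1 pixels. Returns the direction (degrees from
    the horizontal) whose secant lines give the largest mean run length."""
    h, w = len(img), len(img[0])
    rows = [img[y] for y in range(h)]                              # 0 deg
    cols = [[img[y][x] for y in range(h)] for x in range(w)]       # 90 deg
    anti = [[img[y][d - y] for y in range(h) if 0 <= d - y < w]
            for d in range(h + w - 1)]                             # 45 deg
    main = [[img[y][y + c] for y in range(h) if 0 <= y + c < w]
            for c in range(-(h - 1), w)]                           # 135 deg
    directions = {0: rows, 45: anti, 90: cols, 135: main}
    return max(directions, key=lambda a: mean_run(directions[a]))
```

For an image of horizontal stripes the 0° family yields full-width runs while the other families yield unit runs, so 0° wins the maximum, matching the abstract's selection rule.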