RussianPatents.com

Method of encoding/decoding multi-view video sequence based on local adjustment of brightness and contrast of reference frames without transmitting additional service data. RU patent 2493668.

IPC classes for Russian patent Method of encoding/decoding multi-view video sequence based on local adjustment of brightness and contrast of reference frames without transmitting additional service data. RU patent 2493668. (RU 2493668):

H04N7/32 - involving predictive coding (H04N0007480000, H04N0007500000 take precedence);
G06T15 - 3D [Three Dimensional] image rendering
G06K9/46 - Extraction of features or characteristics of the image
Other patents in the same IPC classes:
Method for automatic formation of procedure of generating predicted pixel value, image encoding method, image decoding method, corresponding device, corresponding programmes and data media storing programmes / 2493601
Method is carried out by realising automatic computer formation of a prediction procedure which is appropriately applied to an input image. The technical result is achieved by making an image encoding device for encoding images using a predicted pixel value generated by a predetermined procedure for generating a predicted value which predicts the value of a target encoding pixel using a pre-decoded pixel. The procedure for generating a predicted value, having the best estimate cost, is selected from procedures for generating a predicted value as parents and descendants, where the overall information content for displaying a tree structure and volume of code estimated by the predicted pixel value, obtained through the tree structure, is used as an estimate cost. The final procedure for generating a predicted value is formed by repeating the relevant operation.
Method for automatic formation of procedure of generating predicted pixel value, image encoding method, image decoding method, corresponding device, corresponding programmes and data media storing programmes / 2492586
Disclosed is use of a parent population which is generated via random formation of a procedure for generating a predicted value, each indicated by a tree structure, and a set of procedures for generating a predicted value is selected as a parent from such a population. The procedure for generating a predicted value is generated as a descendant based on a certain method of development of the tree structure which develops selected procedures for generating a predicted value, where the existing function for generating a predicted value can be a tree end node. The procedure for generating a predicted value, having the best estimate cost, is selected from procedures for generating a predicted value as a parent and a descendant, and overall information content for representing the tree structure and volume of the code, estimated by the predicted pixel value, is used as a cost estimate, and the final procedure for generating a predicted value is formed by repeating the relevant operation.
Method and device for coding/decoding of motion vector / 2488972
Method for motion vector coding includes the following stages: selection of the first mode as the mode of information coding about a predictor of the motion vector in the current unit, and in this mode information is coded, which indicates the motion vector predictor at least from one motion vector predictor, or selection of the second mode, in which information is coded, which indicates generation of a motion vector predictor on the basis of units or pixels included into a pre-coded area adjacent to the current unit; determination of the motion vector predictor of the current unit in accordance with the selected mode, and coding of information on the motion vector predictor of the current unit; and coding of the vector of difference between the motion vector of the current unit and predictor of the motion vector of the current unit.
Method for scalable video coding, device of scalable video coding, software of scalable video coding and machine-readable record medium that saves software / 2488235
The hit ratio of combinations of optimal prediction modes to be selected for spatially corresponding units of the upper and lower layers is identified on the basis of the optimal prediction mode selected during conventional coding, and a correspondence table describing the interrelations between them is built. The combinations of selected optimal prediction modes in the correspondence table are narrowed on the basis of the hit-ratio value, in order to create prediction-mode correspondence information describing the narrowed combinations of optimal prediction modes. When coding an upper-layer unit, the set of prediction modes to be searched during coding is identified by referring to the prediction-mode correspondence information, using as the key the optimal prediction mode selected when coding the spatially corresponding unit of the lower layer.
Method of searching for displacement vectors in dynamic images / 2487489
Displacement vectors are searched for by searching for global displacement, breaking up the image into multiple layers of blocks, successive processing of the layers using various search schemes, using displacement vector prediction, as well as selecting displacement vectors based on efficiency of their further entropy coding.
Device of video coding and device of video decoding / 2486692
The video coding device is a device for subjecting a video image to predictive coding with motion compensation, comprising: a detection module to detect, from coded blocks adjacent to a block to be coded, the accessible blocks having motion vectors and the number of accessible blocks; a selection module to select one selected block from the coded accessible blocks; a selection-information coder to code the selection information indicating the selected block, using a coding table corresponding to the number of accessible blocks; and an image coder to subject the block to be coded to predictive coding with motion compensation using the motion vector of the selected block.
Method of adaptive frame prediction for multiview video sequence coding / 2480941
Each re-encoded frame of a multiview video sequence, defined according to a predetermined encoding sequence, is presented as a set of non-overlapping units; at least one already encoded frame corresponding to said view is determined and denoted as reference; synthesised frames are generated for the encoded and reference frames, wherein for each non-overlapping unit of pixels of the encoded frame, denoted as the encoded unit, a spatially superimposed unit inside the synthesised frame is determined, which corresponds to the encoded frame, denoted as a virtual unit, for which the spatial position of the unit of pixels in the synthesised frame which corresponds to the reference frame is determined, so that the reference virtual unit thus determined is the most accurate numerical approximation of the virtual unit; for the determined reference virtual unit, the spatially superimposed unit which belongs to the reference frame, denoted as the reference unit, is determined, and the error between the virtual unit and the reference virtual unit is calculated, as well as the error between the reference virtual unit and the reference unit; the least among them is selected and, based thereon, at least one differential encoding mode is determined, which indicates which of the units found at the previous step should be used to perform prediction during the subsequent differential encoding of the encoded unit, and differential encoding of the encoded unit is carried out in accordance with the selected differential encoding mode.
Image coding and decoding apparatus, image coding and decoding methods, programs therefor and recording medium on which program is recorded / 2479940
Method of encoding an image using intraframe prediction involves selecting a pixel value gradient which is indicated by the image signal to be predicted from among a plurality of selected gradients; generating a predicted signal by applying the gradient in accordance with the distance from the reference prediction pixel, based on the gradient; intraframe encoding of the image signal to be predicted, based on the predicted signal; and encoding information which indicates the value of the selected gradient. As an alternative, the method involves estimating the pixel value gradient which is indicated by the image signal to be predicted, based on the image signal already encoded; generating a predicted signal by applying the gradient in accordance with distance from the reference prediction pixel, based on the gradient; and intraframe encoding of the image signal to be predicted, based on the predicted signal.
Method of encoding and decoding video signal using weighted prediction and apparatus therefor / 2479939
Method of encoding a video signal comprises steps of: forming a predicted image for the current block; generating a weighted prediction coefficient for scaling the predicted image; forming a weighted prediction image by multiplying the predicted image with the weighted prediction coefficient; generating a difference signal by subtracting the weighted prediction image from the current block; and encoding the difference signal, wherein generation of the weighted prediction coefficient involves calculating the weighted prediction coefficient for which the difference between the base layer image, which corresponds to the current block, and the predicted image is minimal.
Image processing apparatus, method and program / 2479938
Deblocking filter 113 adjusts the value of disable_deblocking_filter_idc, slice_alpha_c0_offset_div2 or slice_beta_offset_div2 based on the activity of an image calculated by an activity calculation unit 141, the total sum of orthogonal transformation coefficients of the image calculated by an orthogonal transformation unit 142, the complexity of the image calculated by the rate control unit 119, or the total sum of prediction errors of the image calculated by a prediction error addition unit 120.
Method and system for selecting key frames from video sequences / 2493602
Invention discloses a method and a system for solving a specific task of converting video from monocular to stereoscopic and from black and white to colour, in semi-automatic mode. The method of selecting key frames and supplementing a video sequence with depth or colour information includes the following operations: obtaining data for initialising objects of each key object in each frame; detecting change of scene in an input video sequence and breaking the video sequence into scenes; for each scene, detecting data on activity of each object through a module for analysing video data and global movement (GM) data on all frames of the scene and storing said data in a video analysis result storage; wherein after processing the video scene, stored data on activity of each object are first analysed, key frames are selected, data on GM and key frames of the object are then analysed; key frames are extracted and output through the video data analysis unit; after which the video analysis result storage is cleared and then switched to the next scene of the input video sequence until reaching the end of the video sequence. The system consists of three basic parts: a video data analysis unit; a video analysis result storage; a video analysis result processing unit.
3D content aggregation built into devices / 2491638
Device can capture one or more 2D images, where the 2D image is representative of a tangible object from a perspective defined by orientation of the device. Furthermore, the device may include a content aggregator that can construct a 3D image from two or more 2D images collected by the device, in which the construction is based at least in part on aligning each corresponding perspective associated with each 2D image.
Method of drawing advanced maps (versions) / 2485593
In the method of drawing advanced maps based on a three-dimensional digital model of an area, involving central projection of points of the three-dimensional digital model of the area by a beam onto a plane, the mapping object is selected in form of a three-dimensional digital model of the area and its boundaries in the horizontal projection are determined; settings of the advanced map to be drawn are given; optimality criteria for advanced display of the mapping object are selected; the value of the horizontal and vertical viewing angle is given; a certain preliminary path of observation points is drawn around the mapped object in the horizontal projection such that the mapped object fits into the section of horizontal viewing angles.
Method of enhancing dense and sparse disparity maps, accuracy of reconstructed three-dimensional model and apparatus for realising said method / 2479039
Method and apparatus employ computer graphics techniques to project (display, render) a primary three-dimensional model onto virtual viewing positions which coincide with positions of capturing the original images. The decoding texture is calculated, which enables to associate projection pixel coordinates with parameters of geometric beams emitted to corresponding points of the three-dimensional model. Stereoscopic juxtaposition of real images with corresponding displayed projections is carried out. Three-dimensional coordinates of the digital model are improved by translation of the found disparities, having the physical meaning of reverse engineering errors, in adjustments to the three-dimensional model using a decoding image. The process can take place iteratively until convergence, during which more accurate and sparse disparity maps and a three-dimensional model will be obtained.
Method of recognising geometrically arranged objects / 2460138
Method of recognising geometrically arranged objects based on a graphical technique of constructing a spherical perspective on a plane does not include lists of measurements and postponements, and is based on plane-parallel displacements in conditions of changing projection planes.
Encoding method and system for displaying digital mock-up of object on screen in form of synthesised image / 2446472
System includes: apparatus for filtering a stream of image elements installed at the input of three-dimensional image computing apparatus and containing: apparatus for selecting in said stream of image elements, elementary images, each forming at least a portion of the image output on a screen; apparatus for encoding each successive elementary image based on an index value which characterises content of said elementary image, where said index values are transmitted to said three-dimensional image computing apparatus, for reproducing content of each elementary image with said three-dimensional image computing apparatus.
Filtering multi-layer data on mapping applications / 2440616
A mapping application which displays detailed information on data as a function of a plurality of sets of layered data. When parts of at least two sets of layered data overlap, a set operation is applied to the overlapping parts in order to create a new set of layered data. The set operation makes it possible to change the sets of layered data using a simple drag-and-drop of a set of layered data onto another region of the map. When the parts no longer overlap, the set operation is deleted, while the sets of layered data are displayed in their initial format.
Method and device for formation of three-dimensional ultrasonic image / 2436514
The invention refers to devices for forming ultrasonic medical images. The method consists in the following: first volume data of an organ ultrasonic image with a first resolution is collected during a cardiac cycle of a patient; second data of a three-dimensional sector of the above ultrasonic image volume with a second, higher resolution is collected during another cardiac cycle of the patient; the first and second ultrasonic image data are compared; depending on the comparison result, the second three-dimensional ultrasonic image data is confirmed if the ultrasonic image data is similar, or the sector is subjected to additional processing.
Device and method of providing video frame sequence, device and method of providing scene model, scene model, device and method of creating menu structure and computer programme / 2433480
Device for providing a video frame sequence based on a scene model and based on content provided to a user has a video frame generator which is configured to generate a sequence from a plurality of video frames based on the scene model, analyse the scene model, insert a link into the scene model which directs to accept the content provided to the user as the texture for the identified surface, or give a texture property for the identified object or surface, and display the video frame sequence based on the scene model. The scene model comprises a scene model object having the name of the object or the property of the object, sets the scene in terms of the list of geometric objects, characteristics of objects present in the scene, and characteristics which give part of the scene model which is visible to a viewer at a viewing point, and sets the scene based on characteristics of the material or characteristics of the texture of the object of the scene model.
Metaphor of 2D editing for 3D graphics / 2427918
The shape visualisation system comprises a visualisation mechanism that operates on a processor in a computer system. The system also comprises an application configured to provide a user interface for selecting shape parameters, the user interface being a facility for selecting one or more corresponding 2D parameters for the specified shape. The system further comprises a 2D visualisation mechanism, comprising a facility to apply 2D effects and 2D surface effects to the specified shape, a facility to build a texture map from 2D text, a facility to generate the first initial plane for 2D effects, and a facility to generate and visualise 2D text effects on the second initial plane. In addition, the system comprises a 3D modelling factory, which includes a facility to produce 2D geometry from the specified shape and a facility to generate a 3D model, as well as a facility to apply the texture map to the 3D model and a facility to visualise the 3D model.
Method of determining orientation of image elements / 2491630
Parallel secant lines that are turned at an angle ranging from 0 to 180° from the horizontal are made on the image. The average value of the length of all elements on all secant lines is determined for each direction. The direction of orientation is determined from the maximum value of the average length determined in all directions.

FIELD: information technology.

SUBSTANCE: the method for local adjustment of brightness and contrast of a reference frame for encoding a multi-view video sequence includes: obtaining pixel values of the current encoded block belonging to the encoded frame, and pixel values of a reference block belonging to a reference frame; obtaining reconstructed pixel values neighbouring the current block of the encoded frame, and pixel values neighbouring the reference block of the reference frame; determining numerical relationships between the pixel values of the reference block and the pixel values neighbouring the reference block, and relationships between the reconstructed pixel values neighbouring the current encoded block and the pixel values neighbouring the reference block; using the numerical relationships found at the previous step, determining brightness and contrast adjustment parameters that compensate the differences in brightness and contrast of the reference block relative to the current encoded block; and adjusting the brightness and contrast of the reference block using the found adjustment parameters.
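The adjustment step described in the abstract can be sketched minimally as follows. A linear model p' = a*p + b (a for contrast, b for brightness) and a moment-matching estimator over the decoded template pixels are illustrative assumptions; the patent text does not fix a particular estimator, and all names are hypothetical.

```python
import numpy as np

def adjust_reference_block(ref_block, cur_neighbors, ref_neighbors, eps=1e-6):
    """Estimate a contrast scale and brightness offset from already-decoded
    template pixels (available at both encoder and decoder) and apply them
    to the reference block.  Moment matching is an illustrative choice,
    not necessarily the estimator claimed in the patent."""
    cur_n = cur_neighbors.astype(np.float64)
    ref_n = ref_neighbors.astype(np.float64)
    a = cur_n.std() / (ref_n.std() + eps)   # contrast ratio seen on the templates
    b = cur_n.mean() - a * ref_n.mean()     # brightness offset seen on the templates
    return a * ref_block.astype(np.float64) + b
```

Because only reconstructed neighbouring pixels feed the estimator, the decoder can repeat the computation bit-exactly, which is what removes the need for side information.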

EFFECT: high encoding efficiency.

13 cl, 10 dwg

 

The present invention relates to a method for correcting differences in brightness and contrast that may arise between frames of multi-view video sequences. In particular, the present invention may be used for encoding and decoding multi-view video sequences.

One of the methods used for encoding multi-view video sequences is to use frames belonging to neighbouring views, as well as frames synthesized from frames of neighbouring views and depth maps. Such frames act as reference frames in predictive coding [1]. Prediction is performed by compensating the displacement of objects in the current frame relative to one of the reference frames. Displacement here means either the motion of an object, or the difference in its position between the current encoded frame and frames belonging to neighbouring views or a synthesized frame. The purpose of eliminating this displacement is to minimize the inter-frame difference. The resulting inter-frame difference is then encoded (for example, by applying a transform, quantization and entropy coding) and placed in the output bitstream.
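The displacement-compensated prediction and residual formation just described can be sketched as follows; the function name, frame layout and the absence of boundary handling are all illustrative assumptions, not details of the patent.

```python
import numpy as np

def prediction_residual(cur_frame, ref_frame, block_xy, block_size, mv):
    """Form the inter-frame residual for one block: the reference block,
    displaced by the motion/disparity vector `mv`, is subtracted from the
    current block.  This residual is what would be transformed, quantized
    and entropy-coded.  No frame-boundary clipping is done here."""
    y, x = block_xy
    dy, dx = mv
    cur = cur_frame[y:y + block_size, x:x + block_size].astype(np.int32)
    ref = ref_frame[y + dy:y + dy + block_size,
                    x + dx:x + dx + block_size].astype(np.int32)
    return cur - ref
```

A well-compensated displacement makes this residual small in magnitude, which is exactly the "minimum inter-frame difference" goal stated above.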

Possible differences in the parameters of the cameras used for multi-view shooting, as well as differences in the light flux arriving at the cameras from the scene, lead to differences in brightness and contrast between frames belonging to different views. These differences in brightness and contrast also affect the characteristics of the synthesized frames. This may increase the absolute values of the inter-frame difference, which negatively affects coding efficiency.

To resolve the above issue, the H.264 standard [2] uses weighted prediction, originally designed for efficient encoding of single-view video sequences containing fade-in and fade-out effects, image flicker or scene changes. Weighted prediction eliminates the difference in brightness between the encoded frame and the reference frames at the macroblock level. The same weighting coefficients are used for all macroblocks belonging to the same slice. The weights can be determined during encoding and stored in the output bitstream ("explicit" weighted prediction) or computed during encoding/decoding ("implicit" weighted prediction). However, for multi-view sequences, where brightness and/or contrast may vary locally, this method may prove ineffective.
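A minimal sketch of explicit weighted prediction for one block follows. The fixed-point form with denominator 2**log_wd mirrors the weighted sample prediction of H.264, but the helper name and the 8-bit clipping range are assumptions of this sketch.

```python
import numpy as np

def weighted_prediction(ref_block, w, o, log_wd=5):
    """H.264-style explicit weighted prediction: each reference sample is
    scaled by weight w with fixed-point denominator 2**log_wd, rounded,
    shifted by offset o, and clipped to the 8-bit sample range.  In the
    standard, one (w, o) pair applies per reference index per slice."""
    p = ref_block.astype(np.int32)
    pred = ((p * w + (1 << (log_wd - 1))) >> log_wd) + o
    return np.clip(pred, 0, 255)
```

Note the limitation the text points out: w and o are shared across a whole slice, so a local brightness mismatch inside the frame cannot be compensated.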

Another solution to this problem is adaptive block-based correction of brightness differences [3]. One implementation of this approach is the method of one-step affine brightness correction for multi-view video (Multiview One-Step Affine Illumination Compensation, MOSAIC) [4, 5]. This method combines block-based correction of brightness differences with the prediction modes described in the H.264 standard. During such encoding, the mean pixel values of the current encoded block and of a candidate reference block are calculated for each macroblock. Modified blocks are formed by subtracting the corresponding mean value from each pixel of the block. The Mean-Removed Sum of Absolute Differences (MRSAD) is then calculated for the resulting blocks. The result of prediction is the relative coordinates of the reference block (the displacement vector) that provide the minimum encoding cost, as well as the difference between the modified encoded block and the modified reference block. The encoding cost is calculated from the computed MRSAD value and an estimate of the bit cost of transmitting the additional information needed for subsequent decoding. In addition to the displacement vector, this additional information includes the difference between the mean values of the current and reference blocks. This difference is referred to as DVIC (Difference Value of Illumination Compensation) and serves as the brightness correction parameter. The DVIC value is differentially encoded and placed in the output bitstream. It should be noted that in the "P Skip" mode the DVIC value is determined from the DVIC values of adjacent macroblocks that have already been encoded at the time the current macroblock is encoded. Thus, this method does not eliminate the need to explicitly transmit additional information required for subsequent decoding.
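The MRSAD matching criterion and the DVIC parameter described above can be sketched as follows (function names are illustrative):

```python
import numpy as np

def mrsad(cur_block, ref_block):
    """Mean-Removed Sum of Absolute Differences used in MOSAIC-style
    matching: each block's own mean is subtracted before the SAD, so a
    constant brightness shift between views does not penalize the match.
    The mean difference itself is the DVIC brightness-correction
    parameter and is returned separately."""
    c = cur_block.astype(np.float64)
    r = ref_block.astype(np.float64)
    dvic = c.mean() - r.mean()                        # transmitted side info
    cost = np.abs((c - c.mean()) - (r - r.mean())).sum()
    return cost, dvic
```

A pure brightness offset between the blocks yields MRSAD = 0, but the offset (DVIC) still has to be coded into the bitstream, which is the drawback the text notes.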

The parameters necessary for brightness and contrast correction can be obtained by analysing reconstructed (encoded and then decoded) frame regions. This reduces the amount of information that must be explicitly encoded and placed in the output bitstream. This approach was implemented in the method of Weighted Prediction using Neighboring Pixels (WPNP) [6]. This method uses the pixel values of the encoded frame neighbouring the current encoded block, and the pixel values of the reference frame neighbouring the reference block, for per-pixel estimation of brightness changes. The brightness changes at two selected adjacent pixels are multiplied by weighting coefficients and combined to form an estimate of the brightness and contrast change for individual pixels of the current and reference blocks. It should be noted that the weights are calculated separately for each pixel position within the encoded block. The weighting coefficients are determined from the mutual distances between the pixel of the encoded block and the selected adjacent pixels. The main disadvantage of this method is that the reduction in the volume of additional information is achieved at the cost of a possible reduction in correction quality. The reason for the reduced quality is that the brightness change of pixels neighbouring the current and reference blocks may differ from the brightness change of the pixels belonging directly to the current and reference blocks.
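A distance-weighted per-pixel estimate in the spirit of WPNP can be sketched as follows. This is a loose reconstruction from the description above: the exact neighbour selection and weighting in [6] may differ, and all names are hypothetical.

```python
import numpy as np

def wpnp_scale_map(cur_top, ref_top, cur_left, ref_left, block_size, eps=1e-6):
    """Per-pixel brightness-change estimate: for the pixel at (i, j) the
    change observed at the top neighbour of its column and at the left
    neighbour of its row are blended with weights inversely proportional
    to the pixel's distance to each neighbour, so the closer neighbour
    dominates.  Illustrative reconstruction, not the method of [6]."""
    top_change = (cur_top + eps) / (ref_top + eps)     # change above each column
    left_change = (cur_left + eps) / (ref_left + eps)  # change left of each row
    scale = np.empty((block_size, block_size))
    for i in range(block_size):
        for j in range(block_size):
            wt, wl = 1.0 / (i + 1), 1.0 / (j + 1)
            scale[i, j] = (wt * top_change[j] + wl * left_change[i]) / (wt + wl)
    return scale
```

The weakness the text describes is visible here: the map is built entirely from border pixels, so a change confined to the block interior is never observed.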

Another option implementing this approach, estimating the brightness and contrast parameters by analysing reconstructed (encoded and then decoded) frame regions, is described in patent application US 2011/0286678 [7]. The multi-view video encoding method described in that application includes correction of brightness differences during predictive encoding. The brightness-change correction parameters are estimated from the brightness changes of the regions adjacent to the current and reference blocks. Since these adjacent regions are available during both encoding and decoding, there is no need to transmit the correction parameters explicitly in the output bitstream. The obtained parameters are used to correct the reference block. The reliability of the estimated brightness-change parameters is determined by adjusting the brightness of the reference-frame region adjacent to the reference block and comparing the adjusted region with the reconstructed (encoded and then decoded) region of the encoded frame adjacent to the current encoded block. The disadvantage of this method is that the reliability of the brightness correction is determined only by analysing the adjacent regions. The data contained in the reference block itself are not used in the reliability analysis, which can lead to erroneous correction and thus reduce its effectiveness.

The closest analogue to the claimed invention is the method described in patent application US 2008/0304760 [8]. This method of brightness and contrast correction for the reference block includes the following stages: obtaining, as input, the reconstructed values of pixels adjacent to the current block and the reconstructed values of pixels adjacent to the reference block; predicting the mean values of the current encoded block and of the reference block from these reconstructed neighbouring pixel values; determining the brightness correction parameter for the reference block from the predicted mean pixel value of the current encoded block, the predicted mean pixel value of the reference block, and the pixel values of the current encoded block and the reference block; and performing brightness correction of the reference block using the previously determined brightness correction parameter.
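As far as the description of the prototype allows reconstruction, its core idea — predicting the block means from decoded neighbours only and shifting the reference block by the predicted mean difference — can be sketched as follows (names and the pure-offset form are assumptions):

```python
import numpy as np

def prototype_offset_correction(ref_block, cur_neighbors, ref_neighbors):
    """DC-offset correction in the spirit of the prototype [8]: the means
    of the current and reference blocks are *predicted* from their decoded
    neighbouring pixels, and the reference block is shifted by the
    predicted mean difference.  Decoder-repeatable, so no side info."""
    predicted_cur_mean = np.mean(cur_neighbors)  # stands in for the current block's mean
    predicted_ref_mean = np.mean(ref_neighbors)  # stands in for the reference block's mean
    return ref_block.astype(np.float64) + (predicted_cur_mean - predicted_ref_mean)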

The drawback of the prototype is as follows. The reconstructed values of pixels adjacent to the current block and to the reference block are used exclusively for predicting the mean values. This restriction prevents fuller use of the information contained in the neighbouring pixels. Furthermore, there is no analysis of the relationships between the pixel values of the reference block and the values of pixels neighbouring the reference block. Possible differences between the brightness and contrast correction parameters of the blocks themselves and of the regions neighbouring them are thus not taken into account. This can reduce the reliability of the procedure for correcting differences in brightness and contrast, which negatively affects coding efficiency.

According to its description, the prototype [8] also proposes a method of encoding digital images (frames) based on the correction of brightness changes. The method involves the following steps: identifying the reference block used to form the prediction block for the current encoded block; determining the brightness correction parameter for the found reference block; performing brightness correction of the found reference block using the parameter determined at the previous step; forming the prediction block for the current encoded block using the adjusted reference block; encoding the difference between the generated prediction block and the current encoded block; and forming the output bitstream, saving the brightness correction information in a predefined location inside the formed bitstream. The disadvantage of this method is the need to transmit the correction parameters in the output bitstream.

The claimed invention is aimed at improving the efficiency of encoding multi-view video sequences within the hybrid video coding model. Its essence lies in the use of a more reliable adaptive procedure for estimating the brightness and contrast correction parameters of the reference block, as well as a procedure for correcting the brightness and contrast of the reference block.

The technical result is achieved by using a larger amount of data to estimate the brightness and contrast parameters. In particular, the present method analyzes the correlations between the pixel values of the reference block and the values of the pixels adjacent to the reference block, as well as the correlations between the reconstructed values of pixels adjacent to the current block and the values of the pixels adjacent to the reference block. The method also provides improved procedures for encoding and decoding multi-view video sequences based on brightness and contrast correction, which increases compression efficiency because the estimation of brightness and contrast changes uses only pixel values that are available both during encoding and during decoding. The brightness and contrast correction parameters can therefore be exactly restored without transmitting additional data in the output bitstream.

According to the basic aspect of the claimed invention, a method is proposed for correcting the difference in brightness and contrast between the reference block and the current encoded block when forming predictions for encoding multi-view video sequences, the method comprising:

- obtaining the pixel values of the current encoded block belonging to the current frame and the pixel values of the reference block belonging to the reference frame;

- obtaining the reconstructed (encoded and then decoded) values of pixels adjacent to the current block of the encoded frame and the values of pixels adjacent to the reference block of the reference frame;

- determining the correlations between the pixel values of the reference block and the values of pixels adjacent to the reference block, and the correlations between the reconstructed values of pixels adjacent to the current block and the values of pixels adjacent to the reference block;

- determining the brightness and contrast correction parameters for correcting the differences in brightness and contrast between the reference block and the current encoded block, on the basis of the correlations found at the previous step, the pixel values of the reference block, the reconstructed values of pixels adjacent to the current block, and the values of pixels adjacent to the reference block;

- correcting the differences in brightness and contrast between the reference block and the current encoded block on the basis of the correction parameters found at the previous step.

In one embodiment of the claimed invention, a modification of the above method is proposed in which the process of determining the relationships between the pixels of the currently encoded frame and the reference frame, and the process of determining the brightness and contrast correction parameters, comprise:

- calculating statistical characteristics of the reconstructed values of pixels adjacent to the current block, statistical characteristics of the pixels of the reference block, and statistical characteristics of the pixels adjacent to the reference block;

- determining the ratios between the statistical characteristics of the pixels of the reference block and the statistical characteristics of the pixels adjacent to the reference block;

- calculating estimates of the statistical characteristics of the current encoded block on the basis of the calculated statistical characteristics and the ratios between them;

- calculating the mean value of the reconstructed pixels adjacent to the current block and located to the left of the current encoded block, if any; the mean value of the reconstructed pixels adjacent to the current block and located above the current encoded block, if any; the mean value of the pixels of the reference block; the mean value of the pixels adjacent to the reference block and located to its left, if any; and the mean value of the pixels adjacent to the reference block and located above it, if any;

- if reconstructed pixels adjacent to the current block and located to its left are available, and pixels adjacent to the reference block and located to its left are also available: calculating the ratio between the mean value of the pixels of the reference block and the mean value of the pixels adjacent to the reference block and located to its left; calculating the product of the obtained ratio and the mean value of the reconstructed pixels adjacent to the current block and located to its left; and determining the brightness and contrast correction parameter as the ratio between the computed product and the mean value of the pixels of the reference block;

- otherwise, if reconstructed pixels adjacent to the current block and located above it are available, and pixels adjacent to the reference block and located above it are also available: calculating the ratio between the mean value of the pixels of the reference block and the mean value of the pixels adjacent to the reference block and located above it; calculating the product of the obtained ratio and the mean value of the reconstructed pixels adjacent to the current block and located above it; and determining the brightness and contrast correction parameter as the ratio between the computed product and the mean value of the pixels of the reference block;

- otherwise, using median prediction to calculate an estimate of the mean value of the current encoded block;

- determining the brightness and contrast correction parameter as the ratio between the estimated mean value of the pixels of the current encoded block and the mean value of the pixels of the reference block.
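The mean-ratio scheme described above can be sketched in Python. This is a hypothetical illustration, not the patented implementation: the function name, the array-based templates, and the simplified fallback to α = 1 when no templates are available (in place of the median-prediction branch) are all assumptions.

```python
import numpy as np

def correction_alpha(ref_block, enc_left=None, enc_above=None,
                     ref_left=None, ref_above=None):
    """Estimate the brightness/contrast scale factor alpha = estMX / refMX.

    enc_left / enc_above: reconstructed pixels adjacent to the current block
    (left / above templates); ref_left / ref_above: pixels adjacent to the
    reference block. Any template may be None when unavailable. The
    median-prediction fallback of the full method is simplified here to
    alpha = 1 (an assumption of this sketch).
    """
    ref_mean = float(np.mean(ref_block))
    if ref_mean == 0:
        return 1.0
    if enc_left is not None and ref_left is not None and np.mean(ref_left) != 0:
        # estimate of the current-block mean via the left templates
        est = ref_mean / float(np.mean(ref_left)) * float(np.mean(enc_left))
    elif enc_above is not None and ref_above is not None and np.mean(ref_above) != 0:
        # otherwise via the above templates
        est = ref_mean / float(np.mean(ref_above)) * float(np.mean(enc_above))
    else:
        return 1.0
    return est / ref_mean
```

Because only reconstructed template pixels enter the estimate, the same value of α can be recomputed by the decoder without any side information.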

Another modification of the claimed invention is a method of correcting the brightness and contrast of the reference block in the process of encoding multi-view video sequences, comprising:

- obtaining the pixel values of the current block of the encoded frame and the pixel values of the reference block of the reference frame;

- obtaining the reconstructed (encoded and then decoded) values of pixels adjacent to the current block, and the values of pixels adjacent to the reference block;

- calculating a first estimate estD_{i,j} for each pixel position (i,j) in the reference block; the first estimate estD_{i,j} is a function of a linear combination of the reconstructed values T_k^D of the pixels adjacent to the current block, k = 0, ..., N-1, where N is the number of pixels adjacent to the current and reference blocks;

- calculating a second estimate estR_{i,j} for each pixel position (i,j) in the reference block; the second estimate estR_{i,j} is a function of a linear combination of the values T_k^R of the pixels adjacent to the reference block, k = 0, ..., N-1;

- determining the brightness and contrast correction parameters for each pixel of the reference block; these parameters are determined on the basis of the first estimate estD_{i,j}, the second estimate estR_{i,j}, the values R_{i,j} of the pixels of the reference block, the reconstructed values T_k^D of the pixels adjacent to the current block, and the values T_k^R of the pixels adjacent to the reference block;

- correcting the brightness and contrast of each pixel of the reference block using the brightness and contrast correction parameters obtained in the previous step.

According to another modification of the claimed invention, the calculation of the first and second estimates for each pixel position in the reference block and the determination of the brightness and contrast correction parameters comprise:

- calculating the first estimate estD_{i,j} as

estD_{i,j} = Σ_{k=0}^{N-1} W_k(i,j) · T_k^D,

where W_k(i,j), k = 0, ..., N-1 are weighting factors, T_k^D, k = 0, ..., N-1 are the reconstructed values of pixels adjacent to the current block, and N is the number of pixels adjacent to the current and reference blocks;

- calculating the second estimate estR_{i,j} analogously as

estR_{i,j} = Σ_{k=0}^{N-1} W_k(i,j) · T_k^R;

- determining the brightness and contrast correction parameter for each pixel position (i,j) in the reference block; this parameter is the ratio

α_{i,j} = estD_{i,j} / estR_{i,j}

if the second estimate estR_{i,j} is not equal to zero; otherwise α_{i,j} is set to 1;

- correcting the brightness and contrast of the reference block by multiplying the value R_{i,j} of each pixel of the reference block by the corresponding correction parameter α_{i,j}.
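A minimal sketch of this per-pixel multiplication follows; the names are illustrative, and estD and estR are assumed to be precomputed arrays of the first and second estimates.

```python
import numpy as np

def apply_pixel_correction(ref_block, estD, estR):
    """Multiply each reference pixel R[i,j] by alpha[i,j] = estD[i,j]/estR[i,j];
    where estR[i,j] == 0 the parameter falls back to 1, as stated above."""
    safe = np.where(estR == 0, 1.0, estR)          # avoid division by zero
    alpha = np.where(estR != 0, estD / safe, 1.0)  # alpha = 1 where estR == 0
    return ref_block * alpha
```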

Another modification of the claimed invention provides that the calculation of the first and second estimates for each pixel position in the reference block comprises:

- calculating the weighting factors W_k(i,j), k = 0, ..., N-1 for the first estimate estD_{i,j} and the second estimate estR_{i,j}; for each pixel position (i,j) in the reference block, the weighting factor W_k(i,j) is a function of the absolute difference

|R_{i,j} - T_k^R|

that increases as the absolute difference decreases and decreases as it increases. Here R_{i,j} is the pixel value of the reference block, T_k^R (k = 0, ..., N-1) is the value of a pixel adjacent to the reference block, and N is the number of pixels adjacent to the current and reference blocks.

In another embodiment of the claimed invention, a modification of the above method is proposed in which the calculation of the first and second estimates for each pixel position in the reference block comprises:

- calculating the weighting factors W_k(i,j), k = 0, ..., N-1 for the first estimate estD_{i,j} and the second estimate estR_{i,j}; for each pixel position (i,j) in the reference block, the weighting factor W_k(i,j) is a function of the absolute difference

|R_{i,j} - T_k^R|

that increases as the absolute difference decreases and decreases as it increases, provided that

|T_k^R - R_{i,j}| ≤ Thr,

where Thr is a predefined threshold; otherwise W_k(i,j) = 0. Here R_{i,j} is the pixel value of the reference block, T_k^R (k = 0, ..., N-1) is the value of a pixel adjacent to the reference block, and N is the number of pixels adjacent to the current and reference blocks.

In realizing the claimed invention it is also useful to apply another modification of the above method, in which the calculation of the first and second estimates for each pixel position in the reference block comprises:

- calculating the weighting factors W_k(i,j), k = 0, ..., N-1 for the first estimate estD_{i,j} and the second estimate estR_{i,j}; for each pixel position (i,j) in the reference block, the weighting factor W_k(i,j) is a function of the absolute difference

|R_{i,j} - T_k^R|

that increases as the absolute difference decreases and decreases as it increases, provided that

|T_k^R - T_k^D| ≤ Thr1 and |T_k^R - R_{i,j}| ≤ Thr2,

where T_k^D (k = 0, ..., N-1) is the value of a pixel adjacent to the current block, Thr1 is a first predefined threshold and Thr2 is a second predefined threshold; otherwise W_k(i,j) = 0. Here R_{i,j} is the pixel value of the reference block, T_k^R (k = 0, ..., N-1) is the value of a pixel adjacent to the reference block, and N is the number of pixels adjacent to the current and reference blocks.

According to another embodiment of the claimed invention, a modification of the above method is proposed in which the calculation of the first and second estimates for each pixel position in the reference block comprises:

- calculating the weighting factors W_k(i,j), k = 0, ..., N-1 for the first estimate estD_{i,j} and the second estimate estR_{i,j}; for each pixel position (i,j) in the reference block, the weighting factor is W_k(i,j) = exp(-C·A_k(i,j)), where C is a predefined constant greater than 0 and

A_k(i,j) = |R_{i,j} - T_k^R|,

where R_{i,j} is the pixel value of the reference block and T_k^R (k = 0, ..., N-1) is the value of a pixel adjacent to the reference block, provided that

|T_k^R - R_{i,j}| ≤ Thr,

where Thr is a predefined threshold; otherwise W_k(i,j) = 0.
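The exponential weighting with a threshold can be sketched as follows; the values of C and Thr are illustrative, since the patent leaves them as predefined constants.

```python
import numpy as np

def exp_weights(R_ij, T_ref, C=0.1, thr=32.0):
    """W_k = exp(-C * |R_ij - T_k^R|), zeroed where |T_k^R - R_ij| > Thr.

    R_ij: one pixel value of the reference block; T_ref: array of the values
    T_k^R of pixels adjacent to the reference block.
    """
    A = np.abs(R_ij - T_ref)      # A_k(i,j) = |R_ij - T_k^R|
    W = np.exp(-C * A)
    W[A > thr] = 0.0              # threshold gating
    return W
```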

As a further alternative, a modification of the above method is proposed in which the calculation of the first and second estimates for each pixel position in the reference block includes:

According to another embodiment of the claimed invention, a modification of the above method is proposed in which the positions of the reconstructed pixel values adjacent to the current block and the positions of the pixel values adjacent to the reference block are determined adaptively, instead of using corresponding pixels at predetermined positions.

The group of inventions related by a common concept also includes an original method of encoding multi-view video sequences based on brightness and contrast correction. This method comprises:

- determining the reference block used to generate the block prediction for the current encoded block;

- determining the brightness and contrast correction parameters for correcting the differences in brightness and contrast between the reference block and the current encoded block, during the search for the reference block or upon its completion;

- correcting the brightness and contrast of the found reference block using the derived correction parameters;

- forming the block prediction for the current encoded block using the reference block with corrected brightness and contrast;

- encoding the current block via the generated block prediction, without encoding the found brightness and contrast correction parameters; encoding information about the reference block if it is necessary for decoding;

wherein the determination of the brightness and contrast correction parameters comprises:

- obtaining the reconstructed (encoded and then decoded) values of pixels adjacent to the current block of the encoded frame and the values of pixels adjacent to the reference block of the reference frame;

- determining the ratios between the pixel values of the reference block and the values of pixels adjacent to the reference block, and the correlations between the reconstructed values of pixels adjacent to the current block and the values of pixels adjacent to the reference block;

- determining the brightness and contrast correction parameters for correcting the differences in brightness and contrast between the reference block and the current encoded block on the basis of the correlations found at the previous step, the pixel values of the reference block, the reconstructed values of pixels adjacent to the current block, and the values of pixels adjacent to the reference block.

Within the framework of the unified concept, an original method of decoding multi-view video sequences based on brightness and contrast correction is also envisaged. This method comprises:

- decoding information about the reference block, if necessary, in order to determine the reference block for the current decoded block; determining the reference block;

- determining the brightness and contrast correction parameters for the found reference block;

- correcting the brightness and contrast of the found reference block using the obtained correction parameters;

- forming the block prediction for the current decoded block using the reference block with corrected brightness and contrast;

- decoding the current block using the block prediction and the brightness and contrast correction parameters,

wherein the procedure of determining the brightness and contrast correction parameters comprises:

- obtaining the reconstructed (encoded and then decoded) values of pixels adjacent to the current block of the encoded frame and the values of pixels adjacent to the reference block of the reference frame;

- determining the ratios between the pixel values of the reference block and the values of pixels adjacent to the reference block, and the correlations between the reconstructed values of pixels adjacent to the current block and the values of pixels adjacent to the reference block;

- determining the brightness and contrast correction parameters for correcting the differences in brightness and contrast between the reference block and the current decoded block on the basis of the correlations found at the previous step, the pixel values of the reference block, the reconstructed values of pixels adjacent to the current block, and the values of pixels adjacent to the reference block.

The essence of the invention is explained below with reference to the graphic materials.

Figure 1 - block diagram of a hybrid multi-view video sequence encoder in which the claimed invention is applied.

Figure 2 - block diagram of the part of a hybrid video encoder that implements the proposed method as part of the predictive encoding process.

Figure 3 - diagram explaining the method of brightness and contrast correction of the reference block in accordance with one embodiment of the claimed invention.

Figure 4 - flowchart illustrating the method of brightness and contrast correction of the reference block according to one embodiment of the claimed invention.

Figure 5 - diagram illustrating the selection of input blocks in the current frame when calculating the brightness and contrast correction parameters according to one embodiment of the claimed invention.

Figure 6 - diagram illustrating the method of brightness and contrast correction of the reference block in accordance with another embodiment of the claimed invention.

Figure 7 - flowchart showing the method of per-pixel brightness and contrast correction of the reference block according to one embodiment of the claimed invention.

Figure 8 - diagram explaining the method of brightness and contrast correction of the reference block in accordance with another embodiment of the claimed invention.

Figure 9 - flowchart describing the method of encoding multi-view video sequences based on brightness and contrast correction according to one embodiment of the claimed invention.

Figure 10 - flowchart describing the method of decoding multi-view video sequences based on brightness and contrast correction according to one embodiment of the claimed invention.

Figure 1 shows a block diagram of a hybrid multi-view video sequence encoder. The inputs of the hybrid multi-view encoder 105 include the original (encoded) view 101 and the already encoded and then decoded views 102 that form part of the encoded multi-view video. The already encoded/decoded views 102 and the already encoded/decoded depth-map sequences 103 are used to form a synthesized view for the source (encoded) view by means of the synthesis procedure 104. The formed synthesized view is also fed to the input of the hybrid encoder 105.

The hybrid encoder 105 contains the following tools used to encode the source view: reference frame management 106, inter-frame prediction 107, intra-frame prediction 108, inter-frame and intra-frame compensation 109, spatial transformation 110, rate/distortion optimization 111, and entropy encoding 112. Detailed information about these tools can be found in [9]. The proposed method can be implemented within the prediction tool 107.

Figure 2 contains the schematic of the part of the hybrid video encoder that implements the proposed method as part of predictive coding. The hybrid encoder includes a subtraction unit 201, a transformation and quantization unit 202, an entropy coding unit 203, an inverse transform and inverse quantization unit 204, a unit 205 for displacement compensation and brightness/contrast correction, a view synthesis unit 206, an addition unit 207, a buffer 208 for reference frames and depth maps, a unit 209 for prediction of compensation and correction parameters, a unit 210 for estimating displacement and brightness/contrast change, and a unit 211 for deciding the macroblock coding mode. Units 201-204, 207-209 and 211 are standard building blocks used in the base hybrid encoding method [9]. The view synthesis unit 206 is specific to multi-view encoding: it synthesizes additional reference frames from the already encoded/decoded frames and depth maps.

The proposed method can be implemented in units 205 and 210. These units carry out block-based predictive encoding, which includes the following stages:

- For the current block of the current encoded frame, search for the reference block that minimizes the following expression:

Σ_{m=1}^{M} Σ_{n=1}^{N} |I(m,n) - Ψ(R(m+i, n+j))|,

where I(m,n) is the brightness value of the pixel with coordinates (m,n) within the current block, the size of the current encoded block is M × N, (i,j) specifies the displacement vector (DV) that indicates the reference block R within a predefined search area, and Ψ(x) is the function correcting the differences in brightness and contrast between the current block and the reference block. This search is implemented in unit 210. The obtained brightness and contrast correction parameters, together with the DV, are passed to units 205 and 209.
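A brute-force version of this displacement search might look as follows. This is a sketch under assumptions: the correction Ψ(x) defaults to the identity and the search range is a small square window, whereas a real encoder search is considerably more elaborate.

```python
import numpy as np

def search_reference_block(cur, ref_frame, pos, search=4, psi=lambda x: x):
    """Exhaustive DV search minimising sum(|I(m,n) - psi(R(m+i, n+j))|).

    cur: current M x N block; pos = (y, x): its top-left corner in the frame;
    psi: the brightness/contrast correction function (identity for brevity).
    Returns the best displacement vector and its cost.
    """
    y, x = pos
    M, N = cur.shape
    best_dv, best_cost = (0, 0), float("inf")
    for i in range(-search, search + 1):
        for j in range(-search, search + 1):
            yy, xx = y + i, x + j
            if yy < 0 or xx < 0 or yy + M > ref_frame.shape[0] or xx + N > ref_frame.shape[1]:
                continue  # candidate falls outside the predefined search area
            cand = ref_frame[yy:yy + M, xx:xx + N]
            cost = float(np.abs(cur - psi(cand)).sum())
            if cost < best_cost:
                best_dv, best_cost = (i, j), cost
    return best_dv, best_cost
```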

- The found reference block is transformed in accordance with the found brightness and contrast correction parameters (unit 205). Unit 201 then forms the difference block. The difference block is transformed using the Discrete Cosine Transform (DCT), quantized (unit 202) and coded by the entropy encoder (unit 203). The side information (SI) necessary for subsequent decoding is also coded by the entropy encoder (unit 203).

Figure 3 contains a diagram explaining the essence of the method of brightness and contrast correction of the reference block in accordance with one embodiment of the claimed invention. According to Figure 3, at each iteration of the reference block search procedure for the current block 311 of the currently encoded frame 310, a displacement vector (DV) 320 is determined. The vector 320 points to the reference block 301 of the reference frame 300. According to the claimed method, the brightness and contrast correction function Ψ(x) has the following form:

Ψ(x) = α·x.

The brightness and contrast correction parameter α is described by the following equation:

α = estMX / refMX,   refMX = (1/(N·M)) · Σ_{m=1}^{M} Σ_{n=1}^{N} S(m+i, n+j),

where refMX is the mean value of the reference block 301, (i,j) are the coordinates of the upper left corner of the reference block 301, and S denotes a pixel of the reference frame 300. The value estMX represents an estimate of the mean value of the current encoded block 311.

Figure 4 provides a flowchart illustrating the method of brightness and contrast correction of the reference block according to one embodiment of the claimed invention. This method involves the following steps.

1. Receiving as input the pixel values of blocks 301, 302, 303, 311, 312, 313 and 314 (Figure 4, 401).

2. Calculating the following mean values (Figure 4, 402): the mean value encMX_L of block 312,

encMX_L = (1/(P·Q)) · Σ_{p=1}^{P} Σ_{q=1}^{Q} DI(p,q),

where DI(p,q) is the reconstructed (encoded and then decoded) brightness value of the pixel with coordinates (p,q) inside block 312; the size of block 312 is P × Q.

The mean value encMX_A of block 313,

encMX_A = (1/(U·V)) · Σ_{u=1}^{U} Σ_{v=1}^{V} DI(u,v),

where DI(u,v) is the reconstructed (encoded and then decoded) brightness value of the pixel with coordinates (u,v) inside block 313; the size of block 313 is U × V.

The mean value refMX of the reference block 301.

The mean value refMX_L of block 302:

refMX_L = (1/(P·Q)) · Σ_{p=1}^{P} Σ_{q=1}^{Q} S(p+i, q+j-Q).

The dimensions of block 302 are equal to those of block 312.

The mean value refMX_A of block 303:

refMX_A = (1/(U·V)) · Σ_{u=1}^{U} Σ_{v=1}^{V} S(u+i-U, v+j).

The dimensions of block 303 are equal to those of block 313.

3. Checking condition 1 (Figure 4, 403): if block 302 and block 312 are available (that is, blocks 302 and 312 lie within the borders of the frame and, if the reference frame is a synthesized frame, the pixels of block 302 do not belong to an occlusion area and the value of at least one pixel of block 302 differs from 0), then proceed to the estimation of estMX (Figure 4, 405) in accordance with the following expression:

estMX = (refMX / refMX_L) · encMX_L.

Otherwise, proceed to checking condition 2 (Figure 4, 404).

4. Checking condition 2 (Figure 4, 404): if block 303 and block 313 are available (i.e. blocks 303 and 313 are located within the borders of the frame and, if the reference frame is a synthesized frame, the pixels of block 303 do not belong to an occlusion area and the value of at least one pixel of block 303 differs from 0), then proceed to the estimation of estMX (Figure 4, 407) in accordance with the following expression:

estMX = (refMX / refMX_A) · encMX_A.

Otherwise, proceed to the estimation of estMX (Figure 4, 406) in accordance with the following expression:

estMX = MAP(encMX_L, encMX_A, encMX_LA),

where MAP(x,y,z) is the well-known median prediction method [10] and encMX_LA is the mean value of block 314:

encMX_LA = (1/(U·Q)) · Σ_{u=1}^{U} Σ_{q=1}^{Q} DI(u,q).

The dimensions of block 314 are U × Q, equal to the sizes of blocks 312 and 313.

5. Calculating the brightness and contrast correction parameter α (Figure 4, 408) using the obtained values estMX and refMX.

6. Performing the brightness and contrast correction (Figure 4, 409) of the reference block 301 using the calculated parameter α.
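Steps 3 to 5 above can be condensed into the following sketch; the availability flags and the mean values are assumed to be computed beforehand, as in steps 1 and 2, and the names are illustrative.

```python
def estimate_alpha(refMX, refMX_L, refMX_A, encMX_L, encMX_A, encMX_LA,
                   left_ok, above_ok):
    """Conditions 1 and 2 plus the median-prediction fallback (MAP)."""
    if left_ok:                      # condition 1: blocks 302 and 312 available
        estMX = refMX / refMX_L * encMX_L
    elif above_ok:                   # condition 2: blocks 303 and 313 available
        estMX = refMX / refMX_A * encMX_A
    else:                            # fallback: MAP median prediction
        estMX = sorted([encMX_L, encMX_A, encMX_LA])[1]
    return estMX / refMX             # correction parameter alpha
```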

It should be noted that the reference frame 300 with blocks 301, 302, 303 and the reconstructed (encoded and then decoded) blocks 312, 313, 314 are available both at the time of encoding and at the time of decoding. Figure 5 illustrates the mutual arrangement of the areas under consideration and the current block in the current frame 500. Region 501 of the current frame 500 is available at the time of encoding and decoding of the currently encoded block 502. Region 501 includes blocks 312, 313 and 314 and is sometimes called a «template». Region 503 is not available during decoding of the current block 502 and must not contain blocks 312, 313 and 314. The above-described method can therefore be implemented identically in both the encoder and the decoder, and does not require additional data in the output bitstream.

According to Figure 6, at each iteration of the reference block search procedure for the current block 611 belonging to the current frame 610, a displacement vector (DV) 620 is determined. The DV indicates the reference block 601 of the reference frame 600. The current block 611 contains pixels identified as A00~A33. The reference block 601 contains pixels identified as R00~R33. The reconstructed values of the pixels adjacent to the current block (blocks 612 and 613) are designated as T_0^D ∼ T_15^D; T_0^R ∼ T_15^R are the pixels belonging to blocks 602 and 603. Blocks 602 and 603 are adjacent to the reference block 601 and correspond to blocks 612 and 613. It should also be noted that the total number of pixels in blocks 612, 613 and in blocks 602, 603 is the same.

For each pixel position (i,j) in the reference block 601, the brightness and contrast correction is carried out in accordance with the following equation:

Ψ(x_{i,j}) = α_{i,j} · x_{i,j}.

Here the per-pixel brightness and contrast correction parameter (if estR_{i,j} is not equal to 0) is described as:

α_{i,j} = estD_{i,j} / estR_{i,j},

where estD_{i,j} is the first estimate for the pixel with coordinates (i,j) in the reference block and estR_{i,j} is the second estimate for the pixel with coordinates (i,j) in the reference block. Otherwise α_{i,j} is set to 1.

The flowchart of the method of per-pixel brightness and contrast correction of the reference block is shown in Figure 7. The method includes the following steps:

1. Obtaining the pixel values of blocks 601, 602, 603 of the reference frame 600, and of block 611 and blocks 612, 613 belonging to the template area of the currently encoded frame 610 (operation 701).

2. Calculating the weights W_k(i,j), k = 0, ..., N-1 for each pixel position (i,j) in the reference block 601 (operation 702). The weights W_k(i,j) can be expressed as follows:

W_k(i,j) = exp(-C·A_k(i,j)),

C = σ², A_k(i,j) = |R_{i,j} - T_k^R|,

where σ > 0 is determined experimentally. Here N is the total number of pixels in blocks 612, 613 (or 602, 603). It should be noted that these weights reflect the fact that the closer the value R_{i,j} is to T_k^R, the greater the contribution of that neighboring pixel in determining the brightness and contrast correction parameter for the reference block.

3. Calculating the values estD_{i,j} for each pixel position (i,j) in the reference block 601 (operation 703) in accordance with the following expression:

estD_{i,j} = Σ_{k ∈ {0, ..., N-1}: |T_k^R - T_k^D| ≤ Thr1 and |T_k^R - R_{i,j}| ≤ Thr2} W_k(i,j) · T_k^D,

where Thr1 and Thr2 are predefined thresholds. The thresholds serve to exclude values of pixels adjacent to the reference block that differ significantly from the value R_{i,j} or from the corresponding values T_k^D adjacent to the current block.

4. Calculating the values estR_{i,j} for each pixel position (i,j) in the reference block 601 (operation 704) in accordance with the following expression:

estR_{i,j} = Σ_{k ∈ {0, ..., N-1}: |T_k^R - T_k^D| ≤ Thr1 and |T_k^R - R_{i,j}| ≤ Thr2} W_k(i,j) · T_k^R.

The predefined thresholds Thr1 and Thr2 are the same as in the calculation of estD_{i,j}.

5. Calculating the brightness and contrast correction parameter α_{i,j} (operation 705) for each pixel with coordinates (i,j) in the reference block 601 on the basis of the obtained values estD_{i,j} and estR_{i,j}, if estR_{i,j} is not equal to 0. Otherwise α_{i,j} is set to 1.

6. Performing the brightness and contrast correction (operation 706) of the reference block 601 by applying the calculated parameters α_{i,j}.
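The per-pixel procedure of steps 2 to 5 can be sketched for a single reference-block pixel as follows; the constants C, Thr1 and Thr2 are illustrative, since the patent leaves them predefined.

```python
import numpy as np

def pixel_alpha(R_ij, T_ref, T_cur, C=0.1, thr1=30.0, thr2=30.0):
    """alpha_{i,j} for one pixel R_ij of the reference block.

    T_ref: values T_k^R adjacent to the reference block; T_cur: reconstructed
    values T_k^D adjacent to the current block (same length N).
    """
    W = np.exp(-C * np.abs(R_ij - T_ref))                   # operation 702
    keep = (np.abs(T_ref - T_cur) <= thr1) & (np.abs(T_ref - R_ij) <= thr2)
    estD = float(np.sum(W[keep] * T_cur[keep]))             # operation 703
    estR = float(np.sum(W[keep] * T_ref[keep]))             # operation 704
    return estD / estR if estR != 0 else 1.0                # operation 705
```

When every neighbor is rejected by the thresholds, estR is zero and the parameter falls back to 1, leaving the reference pixel unchanged.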

Another embodiment of the claimed invention is based on the following. Usually, the pixels adjacent to the reference block are chosen as the group of pixels directly bordering the reference block. However, the reference block search procedure may choose a displacement vector such that the pixel values in this group are not sufficiently similar to the corresponding values of pixels adjacent to the current block. Moreover, the values of the pixels directly bordering the reference block may differ significantly from the pixel values of the reference block itself. In these cases, the brightness and contrast correction can be performed incorrectly.

To solve this problem, in this embodiment of the claimed invention it is proposed to use a "floating" (relative to the reference block) position of the group of pixels neighboring the reference block. Fig.8 explains the proposed method according to one of the embodiments of the claimed invention. According to Fig.8, at each iteration of the reference block search procedure for the current block 811 of the currently encoded frame 810, a displacement vector (DV) 820 is determined. The DV points to the reference block 801 of the reference frame 800. The coordinates of the group of reference-frame pixels (formed by pixel blocks 802 and 803) are defined using an additional refining displacement vector 804. The refining displacement vector 804 is the result of an additional displacement estimation procedure: the vector 804 is chosen to minimize a cost function that measures the degree of similarity between blocks 812, 813 and blocks 802, 803, respectively. Known functions such as the mean squared error, the sum of absolute differences, or the sum of absolute differences for zero-mean signals can serve as the cost function. The vector 804 can be determined implicitly during both encoding and decoding, without transmitting additional information in the output bitstream.
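The refining offset search can be sketched as a small full search that minimizes the sum of absolute differences (SAD), one of the cost functions the text names. All names, the search range, and the interface are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def refine_offset(ref_frame, neighbor_tmpl, base_pos, search_range=2):
    """Search a small window around base_pos for the refining offset
    (vector 804) minimizing the SAD between the template of pixels
    adjacent to the current block (blocks 812/813, `neighbor_tmpl`)
    and the candidate neighbor group in the reference frame.

    ref_frame     : 2-D array, the reference frame
    neighbor_tmpl : 2-D array, reconstructed neighbor template
    base_pos      : (row, col) of the neighbor group pointed to by the DV
    """
    h, w = neighbor_tmpl.shape
    best_sad, best_off = None, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = base_pos[0] + dy, base_pos[1] + dx
            if y < 0 or x < 0 or y + h > ref_frame.shape[0] or x + w > ref_frame.shape[1]:
                continue  # candidate falls outside the frame
            cand = ref_frame[y:y + h, x:x + w].astype(np.int64)
            sad = np.abs(cand - neighbor_tmpl.astype(np.int64)).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_off = sad, (dy, dx)
    # the same deterministic search runs at the decoder, so the offset
    # need not be signalled in the bitstream
    return best_off
```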

Figure 9 presents a block diagram describing the method of encoding a multiview video sequence based on brightness and contrast correction according to one of the embodiments of the claimed invention. At stage 901, the reference block used to generate the prediction block is determined. At stage 902, the brightness and contrast correction parameters are determined for the found reference block. The determination of the correction parameters includes:

- obtaining the reconstructed (encoded and then decoded) values of the pixels adjacent to the current block, and the values of the pixels neighboring the reference block of the reference frame;

- determining numerical ratios between the pixel values of the reference block and the values of the pixels neighboring the reference block, and ratios between the reconstructed values of the pixels adjacent to the current block and the values of the pixels neighboring the reference block;

- determining the brightness and contrast correction parameters for correcting the differences in brightness and contrast of the reference block, on the basis of the numerical ratios found at the previous step, the pixel values of the reference block, the reconstructed values of the pixels adjacent to the current block, and the values of the pixels neighboring the reference block.

At stage 903, the reference block is corrected using the obtained brightness and contrast correction parameters. At stage 904, the prediction block for the current block is formed using the brightness- and contrast-corrected reference block. At stage 905, the current block is encoded using the generated prediction block. In particular, information about the reference block is encoded if it is necessary for decoding. It should be noted that the obtained brightness and contrast correction parameters are neither encoded nor placed in the output bitstream.

Figure 10 illustrates the method of decoding a multiview video sequence based on brightness and contrast correction according to one of the embodiments of the claimed invention. According to Figure 10, information about the reference block is decoded if it is required for decoding. The decoded information can be used to determine the reference block at stage 1001. At stage 1002, the brightness and contrast correction parameters for correcting the reference block are determined. The procedure for determining the correction parameters includes:

- obtaining the reconstructed (encoded and then decoded) values of the pixels adjacent to the current block, and the values of the pixels neighboring the reference block of the reference frame;

- determining numerical ratios between the pixel values of the reference block and the values of the pixels neighboring the reference block, and ratios between the reconstructed values of the pixels adjacent to the current block and the values of the pixels neighboring the reference block;

- determining the brightness and contrast correction parameters for correcting the differences in brightness and contrast of the reference block, on the basis of the numerical ratios found at the previous step, the pixel values of the reference block, the reconstructed values of the pixels adjacent to the current block, and the values of the pixels neighboring the reference block.

At stage 1003, the reference block is corrected using the obtained brightness and contrast correction parameters. At stage 1004, the prediction block for the current decoded block is formed using the brightness- and contrast-corrected reference block. At stage 1005, the current block is decoded using the generated prediction block.
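The key property of the scheme is that the correction stages (902/1002) use only data available identically at the encoder and decoder, so the parameters never enter the bitstream. A simplified sketch of the shared prediction step, using a single global parameter derived from the ratio of neighbor means (the per-pixel variant of the patent works analogously); function names are illustrative:

```python
import numpy as np

def predict_block(ref_block, t_ref, t_dec):
    """Form the prediction block (stages 903-904 / 1003-1004) using a
    correction parameter alpha derived only from data the decoder also
    has, so alpha need not be transmitted.
    """
    mean_ref = float(np.mean(t_ref))
    alpha = float(np.mean(t_dec)) / mean_ref if mean_ref != 0 else 1.0
    return alpha * ref_block.astype(np.float64)

# Encoder: residual = current_block - predict_block(...)
# Decoder: current_block = residual + predict_block(...)  (same alpha)
```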


1. A method of local brightness and contrast correction of a reference frame for encoding a multiview video sequence, comprising the following stages:
- receiving the pixel values of the current encoded block belonging to the encoded frame and the pixel values of the reference block belonging to the reference frame; receiving the reconstructed (that is, encoded and then decoded) values of the pixels adjacent to the current block of the encoded frame and the values of the pixels neighboring the reference block of the reference frame;
- determining numerical ratios between the pixel values of the reference block and the values of the pixels neighboring the reference block, and ratios between the reconstructed values of the pixels adjacent to the current block and the values of the pixels neighboring the reference block;
- on the basis of the numerical ratios found at the previous step, the pixel values of the reference block, the reconstructed values of the pixels adjacent to the current block, and the values of the pixels neighboring the reference block, determining the brightness and contrast correction parameters for correcting the differences in brightness and contrast of the reference block relative to the current encoded block;
- correcting the differences in brightness and contrast of the reference block using the obtained correction parameters.

2. The method according to claim 1, wherein the procedure for determining the numerical ratios for the pixels of the currently encoded frame and the reference frame and the procedure for determining the brightness and contrast correction parameters include the following stages:
- calculating statistical characteristics of the reconstructed values of the pixels adjacent to the current block, statistical characteristics of the pixel values of the reference block, and statistical characteristics of the values of the pixels neighboring the reference block;
- determining numerical relations between the statistical characteristics for the pixels of the reference block and the statistical characteristics for the values of the pixels neighboring the reference block;
- on the basis of the calculated statistical characteristics and the relations between them, estimating the statistical characteristics of the current encoded block;
- calculating the brightness and contrast correction parameter for correcting the differences in brightness and contrast between the reference block and the current encoded block, based on the found estimate of the statistical characteristics of the current block and the statistical characteristics of the reference block.

3. The method of claim 2, characterized in that the calculation of the statistical characteristics, the determination of the ratios between the statistical characteristics, and the determination of the brightness and contrast correction parameter involve the following steps:
- if reconstructed pixels adjacent to the current block and located to the left of the currently encoded block are available, calculating their average value; if reconstructed pixels adjacent to the current block and located above the currently encoded block are available, calculating their average value; calculating the average value of the pixels of the reference block; if pixels neighboring the reference block and located to the left of the reference block are available, calculating their average value; and if pixels neighboring the reference block and located above the reference block are available, calculating their average value as well;
- if reconstructed pixels adjacent to the current block and located to the left of the currently encoded block are available, and pixels neighboring the reference block and located to the left of the reference block are available: calculating the ratio between the average value of the pixels of the reference block and the average value of the pixels neighboring the reference block and located to the left of the reference block; calculating the product of the found ratio and the average value of the reconstructed pixels adjacent to the current block and located to the left of the currently encoded block; determining the brightness and contrast correction parameter as the ratio between the computed product and the average value of the pixels of the reference block;
- otherwise, if reconstructed pixels adjacent to the current block and located above the currently encoded block are available, and pixels neighboring the reference block and located above the reference block are available: calculating the ratio between the average value of the pixels of the reference block and the average value of the pixels neighboring the reference block and located above the reference block; calculating the product of the found ratio and the average value of the reconstructed pixels adjacent to the current block and located above the currently encoded block; determining the brightness and contrast correction parameter as the ratio between the computed product and the average value of the pixels of the reference block;
- otherwise, using median prediction to estimate the average value of the current encoded block, and determining the brightness and contrast correction parameter as the ratio between the estimated average value of the currently encoded block and the average value of the pixels of the reference block.
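The mean-based procedure of claim 3 can be sketched as follows. This is an illustrative reading of the claim text, not the patented implementation: the interface and the median-prediction fallback (passed in here as a precomputed estimate, since the claim refers to median prediction without fixing its exact form) are assumptions.

```python
import numpy as np

def correction_parameter(ref_mean, cur_left=None, cur_top=None,
                         ref_left=None, ref_top=None, est_cur_mean=None):
    """Brightness/contrast correction parameter per claim 3 (sketch).

    ref_mean     : mean of the reference-block pixels
    cur_left/top : arrays of reconstructed pixels left of / above the
                   current block (None if unavailable)
    ref_left/top : arrays of pixels left of / above the reference block
    est_cur_mean : fallback estimate of the current-block mean obtained
                   by median prediction
    """
    if cur_left is not None and ref_left is not None:
        ratio = ref_mean / np.mean(ref_left)   # reference mean vs. its left neighbors
        product = ratio * np.mean(cur_left)    # estimate of the current-block mean
        return product / ref_mean
    if cur_top is not None and ref_top is not None:
        ratio = ref_mean / np.mean(ref_top)
        product = ratio * np.mean(cur_top)
        return product / ref_mean
    # fallback: median-predicted mean of the current block
    return est_cur_mean / ref_mean
```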

4. The method according to claim 1, wherein the procedure for determining the ratios for the pixels of the currently encoded frame and the reference frame, the determination of the brightness and contrast correction parameters, and the correction of the differences in brightness and contrast of the reference block relative to the current encoded block include the following stages:
- calculating the first estimate estD_i,j for each pixel position (i,j) in the reference block; the first estimate estD_i,j is a linear combination of the reconstructed values T_k^D of the pixels adjacent to the current block, k=0, ..., N-1, where N is the number of pixels neighboring the current and the reference block;
- calculating the second estimate estR_i,j for each pixel position (i,j) in the reference block; the second estimate estR_i,j is a linear combination of the values T_k^R of the pixels neighboring the reference block, k=0, ..., N-1;
- determining, on the basis of the first estimate estD_i,j, the second estimate estR_i,j, the values R_i,j of the pixels of the reference block, the reconstructed values T_k^D of the pixels adjacent to the current block, and the values T_k^R of the pixels neighboring the reference block, the brightness and contrast correction parameter for each pixel position in the reference block;
- correcting the brightness and contrast of each pixel of the reference block using the previously determined correction parameters.

8. The method according to claim 5, wherein the procedure for calculating the first and second estimates for each pixel position in the reference block includes the following stages:
- calculating the weights W_k(i,j), k=0, ..., N-1, for the first estimate estD_i,j and the second estimate estR_i,j; for each pixel position (i,j) in the reference block the weighting factor W_k(i,j) is a function of the absolute difference |R_i,j - T_k^R|, such that W_k(i,j) increases as this absolute difference decreases and decreases as it increases, in the case where |T_k^R - T_k^D| ≤ Thr1 and |T_k^R - R_i,j| ≤ Thr2, where T_k^D (k=0, ..., N-1) is the value of a pixel adjacent to the current block, Thr1 is the first predefined threshold and Thr2 is the second predefined threshold; otherwise W_k(i,j)=0; here R_i,j is the pixel value of the reference block and T_k^R (k=0, ..., N-1) is the value of a pixel neighboring the reference block.

9. The method according to claim 5, wherein the procedure for calculating the first and second estimates for each pixel position in the reference block includes the following stages:
- calculating the weights W_k(i,j), k=0, ..., N-1, for the first estimate estD_i,j and the second estimate estR_i,j; for each pixel position (i,j) in the reference block the weighting factor is W_k(i,j)=exp(-C·A_k(i,j)), where C is a predefined constant greater than 0 and A_k(i,j)=|R_i,j - T_k^R|, where R_i,j is the pixel value of the reference block and T_k^R (k=0, ..., N-1) is the value of a pixel neighboring the reference block, in the case where |T_k^R - R_i,j| ≤ Thr, where Thr is a predefined threshold; otherwise W_k(i,j)=0.

10. The method according to claim 5, wherein the procedure for calculating the first and second estimates for each pixel position in the reference block includes the following stages:
- calculating the weights W_k(i,j), k=0, ..., N-1, for the first estimate estD_i,j and the second estimate estR_i,j; for each pixel position (i,j) in the reference block the weighting factor is W_k(i,j)=exp(-C·A_k(i,j)), where C is a predefined constant greater than 0 and A_k(i,j)=|R_i,j - T_k^R|, where R_i,j is the pixel value of the reference block and T_k^R (k=0, ..., N-1) is the value of a pixel neighboring the reference block, in the case where |T_k^R - T_k^D| ≤ Thr1 and |T_k^R - R_i,j| ≤ Thr2, where T_k^D (k=0, ..., N-1) is the value of a pixel adjacent to the current block, Thr1 is the first predefined threshold and Thr2 is the second predefined threshold; otherwise W_k(i,j)=0.
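The exponential weighting of claims 9 and 10 can be sketched as a single scalar function. The constant C and the threshold values below are illustrative; the patent only requires C > 0 and predefined thresholds.

```python
import math

def weight(r_ij, t_ref_k, t_dec_k, c=0.1, thr1=30, thr2=30):
    """Weighting factor W_k(i,j) per claim 10 (sketch): an exponential
    of the absolute difference between the reference-block pixel R_i,j
    and the neighbor value T_k^R, gated by two thresholds.
    Dropping the first condition (thr1 test) yields the claim 9 variant.
    """
    if abs(t_ref_k - t_dec_k) <= thr1 and abs(t_ref_k - r_ij) <= thr2:
        return math.exp(-c * abs(r_ij - t_ref_k))
    return 0.0
```

A pixel contributes with full weight 1 when its neighbor value matches R_i,j exactly, decays exponentially as the mismatch grows, and is excluded entirely once either threshold is violated.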

11. The method according to claim 1, characterized in that the positions of the reconstructed pixel values adjacent to the current block and the positions of the pixel values neighboring the reference block are determined adaptively, instead of using the corresponding pixels at predefined positions.

12. A method of encoding a multiview video sequence based on local brightness and contrast correction of the reference block, comprising the following stages:
- determining the reference block used to generate the prediction block for the current encoded block;
- determining the brightness and contrast correction parameters for correcting the differences in brightness and contrast between the reference block and the current encoded block, during the search for the reference block or upon its completion;
- correcting the brightness and contrast of the found reference block using the obtained correction parameters;
- forming the prediction block for the current encoded block using the brightness- and contrast-corrected reference block;
- encoding the current block using the generated prediction block, without encoding the found brightness and contrast correction parameters; encoding information about the reference block if it is necessary for decoding;
wherein the procedure for determining the brightness and contrast correction parameters includes the following stages:
- obtaining the reconstructed (that is, encoded and then decoded) values of the pixels adjacent to the current block of the encoded frame and the values of the pixels neighboring the reference block of the reference frame;
- determining numerical ratios between the pixel values of the reference block and the values of the pixels neighboring the reference block, and ratios between the reconstructed values of the pixels adjacent to the current block and the values of the pixels neighboring the reference block;
- on the basis of the numerical ratios found at the previous step, the pixel values of the reference block, the reconstructed values of the pixels adjacent to the current block, and the values of the pixels neighboring the reference block, determining the brightness and contrast correction parameters for correcting the differences in brightness and contrast of the reference block.

13. A method of decoding a multiview video sequence based on brightness and contrast correction, comprising the following stages:
- decoding information about the reference block, if necessary, in order to determine the reference block of the current block, and determining the reference block;
- determining the brightness and contrast correction parameters for correcting the brightness and contrast of the found reference block;
- correcting the differences in brightness and contrast of the found reference block using the obtained correction parameters;
- forming the prediction block for the current decoded block using the brightness- and contrast-corrected reference block;
- decoding the current block using the generated prediction block and the found brightness and contrast correction parameters;
wherein the procedure for determining the brightness and contrast correction parameters includes the following stages:
- obtaining the reconstructed (that is, encoded and then decoded) values of the pixels adjacent to the current block of the encoded frame and the values of the pixels neighboring the reference block of the reference frame;
- determining numerical ratios between the pixel values of the reference block and the values of the pixels neighboring the reference block, and ratios between the reconstructed values of the pixels adjacent to the current block and the values of the pixels neighboring the reference block;
- on the basis of the numerical ratios found at the previous step, the pixel values of the reference block, the reconstructed values of the pixels adjacent to the current block, and the values of the pixels neighboring the reference block, determining the brightness and contrast correction parameters for correcting the differences in brightness and contrast of the reference block.

 
