Method and device for encoding video and method and device for decoding video through pixel value compensation in accordance with pixel groups

FIELD: physics, computer engineering.

SUBSTANCE: the invention relates to computer engineering. The method of decoding video comprises: obtaining from a bitstream information on pixel value compensation according to a pixel value band or an edge value level; if the information on pixel value compensation indicates a band, applying a compensation value of a predetermined band, obtained from the bitstream, to a pixel included in the predetermined band among the pixels of the current block; and if the information on pixel value compensation indicates an edge value level, applying a compensation value of a predetermined edge direction, obtained from the bitstream, to a pixel in the predetermined edge direction among the pixels of the current block, wherein the predetermined band is one of the bands formed by dividing the full range of pixel values.

EFFECT: high quality of the reconstructed image.

3 cl, 22 dwg, 2 tbl

 

The technical field to which the invention relates

Apparatuses and methods consistent with exemplary embodiments relate to encoding and decoding video.

Background Art

As hardware for reproducing and storing high-resolution or high-quality video content is developed and supplied, the need for a video codec capable of effectively encoding or decoding such content increases. In a conventional video codec, video is encoded according to a limited encoding method based on macroblocks having a predetermined size.

Image quality may be distorted by video encoding and decoding, and a post-processing module may be added to the decoder to improve the quality of the restored image.

Disclosure of the invention

Technical problem

One or more exemplary embodiments provide a method and apparatus for encoding video, and a method and apparatus for decoding video, for compensating pixel values of a predetermined group of pixels.

Solution

In accordance with an aspect of an exemplary embodiment, there is provided a method of encoding video for compensating pixel values, the method comprising: encoding image data; decoding the encoded image data and forming a restored image by performing loop filtering on the decoded image data; determining a compensation value corresponding to errors between a predetermined group of restored pixels in the restored image and corresponding original pixels, and a group of pixels containing a restored pixel to be compensated by using the compensation value; and encoding the compensation value and transmitting the encoded compensation value and a bitstream of the encoded image data.

The beneficial effects of the invention

When video is encoded and decoded in accordance with exemplary embodiments, average error values of local minimum values and local maximum values of a predetermined group of pixels between the restored image and the original image may be determined, and the pixel values of the restored pixels in the predetermined group of pixels may be compensated.

Brief description of the drawings

Fig. 1 is a block diagram of an apparatus for encoding video for compensating pixel values, in accordance with an exemplary embodiment;

Fig. 2 is a block diagram of an apparatus for decoding video for compensating pixel values, in accordance with an exemplary embodiment;

Fig. 3 illustrates neighboring restored pixels to be compared with a restored pixel in order to determine an extreme value level of the restored pixel, in accordance with an exemplary embodiment;

Fig. 4 is a flowchart for describing adaptive loop filtering, in accordance with an exemplary embodiment;

Fig. 5 is a flowchart for describing adaptive loop filtering, in accordance with another exemplary embodiment;

Fig. 6 is a flowchart illustrating a method of encoding video for compensating pixel values, in accordance with an exemplary embodiment;

Fig. 7 is a flowchart illustrating a method of decoding video by compensating pixel values, in accordance with an exemplary embodiment;

Fig. 8 is a block diagram of an apparatus for encoding video by compensating a pixel value after performing loop filtering based on coding units having a tree structure, in accordance with an exemplary embodiment;

Fig. 9 is a block diagram of an apparatus for decoding video by compensating a pixel value after performing loop filtering based on coding units having a tree structure, in accordance with an exemplary embodiment;

Fig. 10 is a diagram for describing the concept of coding units, in accordance with an exemplary embodiment;

Fig. 11 is a block diagram of an image encoder based on coding units, in accordance with an exemplary embodiment;

Fig. 12 is a block diagram of an image decoder based on coding units, in accordance with an exemplary embodiment;

Fig. 13 is a diagram illustrating deeper coding units according to depths, and partitions, in accordance with an exemplary embodiment;

Fig. 14 is a diagram for describing a relationship between a coding unit and transformation units, in accordance with an exemplary embodiment;

Fig. 15 is a diagram for describing encoding information of coding units corresponding to a coded depth, in accordance with an exemplary embodiment;

Fig. 16 is a diagram of deeper coding units according to depths, in accordance with an exemplary embodiment;

Figs. 17-19 are diagrams for describing a relationship between coding units, prediction units, and transformation units, in accordance with an exemplary embodiment;

Fig. 20 is a diagram for describing a relationship between a coding unit, a prediction unit or partition, and a transformation unit, according to the encoding mode information of Table 2;

Fig. 21 is a flowchart illustrating a method of encoding video by compensating a pixel value after performing loop filtering based on coding units having a tree structure, in accordance with an exemplary embodiment; and

Fig. 22 is a flowchart illustrating a method of decoding video by compensating a pixel value after performing loop filtering based on coding units having a tree structure, in accordance with an exemplary embodiment.

A preferred embodiment of the invention

In accordance with an aspect of an exemplary embodiment, there is provided a method of encoding video for compensating pixel values, the method comprising: encoding image data; decoding the encoded image data and forming a restored image by performing loop filtering on the decoded image data; determining a compensation value corresponding to errors between a predetermined group of restored pixels in the restored image and corresponding original pixels, and a group of pixels containing a restored pixel to be compensated by using the compensation value; and encoding the compensation value and transmitting the encoded compensation value and a bitstream of the encoded image data.

The determining of the compensation value and the group of pixels may comprise: determining an extreme value level, indicating a degree of closeness to a maximum value or a minimum value, for each restored pixel by comparing the pixel values of neighboring restored pixels in the restored image; and determining the group of pixels containing the restored pixel to be compensated, from among the neighboring restored pixels, based on the determined extreme value levels of the restored pixels.

The determining of the group of pixels based on the extreme value levels may comprise classifying the neighboring restored pixels into groups of pixels including restored pixels having the same extreme value level, based on the determined extreme value levels of the restored pixels, and determining a group of pixels of at least one extreme value level as the group of pixels containing the restored pixel to be compensated; and the determining of the compensation value and the group of pixels may further comprise determining a compensation value for the determined group of pixels of the at least one extreme value level.

The determining of the compensation value and the group of pixels may comprise: classifying the restored pixels of the restored image into groups of pixels, each including restored pixels in the same band, according to bands obtained by dividing the entire range of pixel values; and determining a compensation value for each group of pixels according to the bands.

The classifying of the restored pixels according to the bands may comprise classifying the restored pixels into the groups of pixels according to the bands based on dividing the entire range of pixel values into a number of bands equal to a positive power of 2.

The exponent of 2 may be determined based on the number of most significant bits of the bit depth of the restored pixels.

The entire range of pixel values may be within a range of an extended bit depth.

The determining of the compensation value and the group of pixels may comprise: classifying the restored pixels of the restored image into groups of pixels, each including restored pixels located on the same line, according to lines; and determining a compensation value for each group of pixels according to the lines.

The classifying of the restored pixels according to the lines may comprise detecting restored pixels forming a line in at least one of a horizontal direction, a vertical direction, a diagonal direction, a direction along a curve, and a direction along a boundary of a predetermined object, from among the restored pixels of the restored image.

The determining of the compensation value and the group of pixels may comprise determining the compensation value by using an average value of the errors between the restored pixels of the group of pixels and the corresponding original pixels.

The determining of the compensation value and the restored pixel may comprise determining one compensation value for all restored pixels to be compensated, or individually determining compensation values according to predetermined groups of restored pixels to be compensated.

The forming of the restored image may be performed by using adaptive loop filtering employing a plurality of continuous one-dimensional filters.

The determining of the compensation value and the group of pixels may comprise determining the compensation value and the restored pixel to be compensated according to at least one data unit from among an image sequence, a frame, a slice, and a coding unit of the input video.

The transmitting of the bitstream may comprise inserting the encoded compensation value into a slice header and transmitting it.

The encoding of the input image sequence may comprise: splitting an image into maximum coding units; performing encoding on at least one deeper coding unit according to depths, for each region obtained by hierarchically splitting the maximum coding unit as the depth increases, to determine a coded depth and an encoding mode, the encoding mode including information about the at least one coded depth that generates the smallest encoding error; and generating encoded image data according to the determined coded depth and encoding mode.

In accordance with another aspect of an exemplary embodiment, there is provided a method of decoding video for compensating pixel values, the method comprising: extracting encoded image data and a compensation value from a bitstream by parsing the bitstream of an encoded image; decoding the extracted image data and forming a restored image by performing loop filtering on the decoded image data; determining a group of pixels containing a restored pixel to be compensated, from among the restored pixels of the restored image, by using the extracted compensation value; and compensating an error between the restored pixel of the determined group of pixels and a corresponding original pixel by using the extracted compensation value.

The determining of the group of pixels may comprise: determining an extreme value level, indicating a degree of closeness to a maximum value or a minimum value, for each restored pixel by comparing the pixel values of neighboring restored pixels in the restored image; and determining the group of pixels containing the restored pixel to be compensated, from among the neighboring restored pixels, based on the determined extreme value levels.

The determining of the extreme value levels may comprise classifying the neighboring restored pixels into groups of pixels containing restored pixels having the same extreme value level, based on the determined extreme value levels, and determining a group of pixels of at least one extreme value level as the group of pixels containing the restored pixel to be compensated.

The determining of the group of pixels may comprise classifying the restored pixels of the restored image into groups of pixels according to bands.

The compensating of the error may comprise compensating the pixel values of the restored pixels of the groups of pixels according to the extreme value levels, by using compensation values according to the extreme value levels, to compensate the pixel values of the groups of pixels according to the extreme value levels.

The determining of the group of pixels may comprise classifying the restored pixels of the restored image into groups of pixels, each including restored pixels located on the same line, according to lines, and determining a group of pixels from among the groups of pixels according to the lines as the group of pixels containing the restored pixel to be compensated.

The compensating of the error may comprise compensating the pixel values of the restored pixels of the groups of pixels according to the lines, by using the compensation values of the groups of pixels according to the lines, to compensate the pixel values of the groups of pixels according to the lines.

The classifying of the restored pixels according to the lines may comprise detecting restored pixels forming a line in at least one of a horizontal direction, a vertical direction, a diagonal direction, a direction along a curve, and a direction along a boundary of a predetermined object, from among the restored pixels of the restored image.

The compensation value may be determined by using an average value of the errors between the restored pixels of the group of pixels and the corresponding original pixels during the encoding of the encoded image data.

The compensating of the error may comprise compensating all restored pixels to be compensated by using one compensation value.

The compensating of the error may comprise compensating the pixel values of the restored pixels by using compensation values that are individually determined according to predetermined groups of restored pixels to be compensated.

The forming of the restored image may be performed by using adaptive loop filtering employing a plurality of continuous one-dimensional filters.

The encoded image data may be encoded by splitting an image into maximum coding units, and performing encoding on at least one deeper coding unit according to depths, for each region obtained by hierarchically splitting the maximum coding unit as the depth increases, to determine encoding mode information including information about at least one coded depth that generates the smallest encoding error, and inserted into the bitstream; and the forming of the restored image may comprise decoding the encoded image data based on the coded depth and the encoding mode, according to the encoding mode information, and performing loop filtering.

In accordance with another aspect of an exemplary embodiment, there is provided an apparatus for encoding video for compensating pixel values, the apparatus comprising: an encoder that encodes image data; a restored image generator that decodes the encoded image data and generates a restored image by performing loop filtering on the decoded image data; a compensation value and pixel group determiner that determines a compensation value corresponding to errors between a predetermined group of restored pixels in the restored image and corresponding original pixels, and a group of pixels containing a restored pixel to be compensated by using the compensation value; and a transmitter that encodes the compensation value and transmits the encoded compensation value and a bitstream of the encoded image data.

In accordance with another aspect of an exemplary embodiment, there is provided an apparatus for decoding video for compensating pixel values, the apparatus comprising: an extractor that extracts encoded image data and a compensation value from a bitstream by parsing the bitstream of an encoded image; a restored image generator that decodes the extracted image data and generates a restored image by performing loop filtering on the decoded image data; a pixel group determiner that determines a group of pixels containing a restored pixel to be compensated, from among the restored pixels of the restored image, by using the extracted compensation value; and a restored pixel compensator that compensates an error between the restored pixel of the determined group of pixels and a corresponding original pixel by using the extracted compensation value.

In accordance with another aspect of an exemplary embodiment, there is provided a computer-readable recording medium having recorded thereon a program for executing any of the above methods.

The implementation of the invention

Hereinafter, exemplary embodiments will be described more fully with reference to the accompanying drawings. Expressions such as "at least one of," when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list.

A method and apparatus for encoding video and a method and apparatus for decoding video by compensating errors of pixel values of a predetermined group of pixels, in accordance with exemplary embodiments, will now be described with reference to Figs. 1-22. In particular, encoding and decoding video by compensating a pixel value after performing loop filtering, in accordance with exemplary embodiments, will be described with reference to Figs. 1-7, and encoding and decoding video by compensating a pixel value after performing loop filtering based on coding units having a tree structure, in accordance with exemplary embodiments, will be described with reference to Figs. 8-22.

Encoding and decoding video by compensating a pixel value after performing loop filtering, in accordance with exemplary embodiments, will now be described with reference to Figs. 1-7.

Fig. 1 is a block diagram of an apparatus 10 for encoding video for compensating pixel values, in accordance with an exemplary embodiment.

The video encoding apparatus 10 includes an encoder 12, a restored image generator 14, a compensation value and pixel group determiner 16, and a transmitter 18. The operations of the encoder 12, the restored image generator 14, and the compensation value and pixel group determiner 16 of the video encoding apparatus 10 may be cooperatively controlled by a video encoding processor, a central processor, a graphics processor, or the like.

The encoder 12 encodes an image from among a sequence of input images. The encoder 12 may generate encoded image data by performing motion estimation, inter prediction, intra prediction, transformation, and quantization on the input image.

The encoder 12 may use any video encoding method, such as MPEG-1, MPEG-2, MPEG-4, or H.26x. For example, the encoder 12 may use a video encoding method based on coding units having a tree structure, in accordance with an exemplary embodiment, which will be described later with reference to Figs. 8-22.

The restored image generator 14 may receive the image data encoded by the encoder 12 and generate a restored image by decoding the encoded image data and performing loop filtering on the decoded image data.

The restored image generator 14 may generate the decoded image data by performing inverse quantization, inverse transformation, inter prediction, motion compensation, and intra prediction on the encoded image data.

The decoding performed on the encoded image data by the restored image generator 14 may be performed as inverse processes of the video encoding method performed by the encoder 12. For example, a video encoding apparatus 10 in which the encoder 12 and the restored image generator 14 perform a video encoding method according to an exemplary embodiment will be described later with reference to Figs. 8-22.

The restored image generator 14 may perform loop filtering on the decoded image data. The loop filtering may selectively include deblocking filtering and adaptive loop filtering. The adaptive loop filtering may be performed by using a plurality of continuous one-dimensional filters. Adaptive loop filtering in accordance with exemplary embodiments will be described in detail later with reference to Figs. 4 and 5.
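The cascaded one-dimensional filtering mentioned above can be sketched as follows. This is a minimal illustration only, not the claimed implementation: the kernel coefficients, the border handling, and the sample frame are invented for the example; in practice the filter coefficients are determined adaptively and signaled in the bitstream.

```python
def filter_1d(samples, coeffs):
    """Convolve a 1-D list of samples with an odd-length kernel,
    replicating the edge samples at the borders."""
    r = len(coeffs) // 2
    out = []
    for i in range(len(samples)):
        acc = 0.0
        for k, c in enumerate(coeffs, -r):
            j = min(max(i + k, 0), len(samples) - 1)  # clamp at the border
            acc += c * samples[j]
        out.append(acc)
    return out

def adaptive_loop_filter(frame, h_coeffs, v_coeffs):
    """Cascade of continuous 1-D filters: filter every row horizontally,
    then every column of the intermediate result vertically."""
    rows = [filter_1d(row, h_coeffs) for row in frame]
    filtered_cols = [filter_1d(list(col), v_coeffs) for col in zip(*rows)]
    return [[int(round(v)) for v in row] for row in zip(*filtered_cols)]

frame = [[90, 92, 200, 91],
         [89, 91, 198, 90],
         [88, 90, 199, 92]]
# Hypothetical 3-tap smoothing kernels for both passes.
smoothed = adaptive_loop_filter(frame, [0.25, 0.5, 0.25], [0.25, 0.5, 0.25])
```

Note that the vertical pass operates on the output of the horizontal pass, so the two one-dimensional filters together act like one separable two-dimensional filter.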

The compensation value and pixel group determiner 16 receives the input image and the restored image output by the restored image generator 14, determines a compensation value for the errors between each restored pixel of a predetermined group in the restored image and the corresponding original pixel in the input image, and determines a group of pixels containing the restored pixels to be compensated by using the compensation value.

The compensation value and pixel group determiner 16 compares the pixel values of neighboring restored pixels among the restored pixels in the restored image, and determines an extreme and/or edge value level indicating closeness to a maximum value or a minimum value. Hereinafter, for convenience of explanation, "extreme and/or edge value level" may represent at least one of an extreme value level and an edge value level. The compensation value and pixel group determiner 16 may classify the neighboring restored pixels into groups of pixels including restored pixels having the same extreme and/or edge value level, based on the extreme and/or edge value level of each of the neighboring restored pixels.

The compensation value and pixel group determiner 16 may determine at least one group of pixels of an extreme and/or edge value level, from among the classified groups of pixels, as a group of pixels whose pixel values are to be compensated. The compensation value and pixel group determiner 16 may determine to compensate the pixel values of the groups of pixels of the minimum and maximum extreme and/or edge value levels, or the pixel values of the groups of pixels whose extreme and/or edge value levels lie in a predetermined range. A method of determining a compensation target based on the extreme and/or edge value levels of neighboring restored pixels will be described later with reference to Fig. 3.
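One way to realize such a classification can be sketched as follows. This is a hypothetical illustration, not the classification of Fig. 3: the choice of four horizontal/vertical neighbors and the level range [-4, 4] are assumptions made for the example.

```python
def extreme_value_level(restored, x, y):
    """Level in [-4, 4] indicating how close pixel (x, y) is to a local
    maximum (positive) or local minimum (negative), by comparing it with
    its four horizontal/vertical neighbors (a hypothetical neighborhood)."""
    center = restored[y][x]
    level = 0
    for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nx, ny = x + dx, y + dy
        if 0 <= ny < len(restored) and 0 <= nx < len(restored[0]):
            if center > restored[ny][nx]:
                level += 1   # one step closer to a local maximum
            elif center < restored[ny][nx]:
                level -= 1   # one step closer to a local minimum
    return level

def group_by_level(restored, target_levels):
    """Collect the coordinates of pixels whose extreme value level is in
    target_levels into one pixel group per level."""
    groups = {lvl: [] for lvl in target_levels}
    for y in range(len(restored)):
        for x in range(len(restored[0])):
            lvl = extreme_value_level(restored, x, y)
            if lvl in groups:
                groups[lvl].append((x, y))
    return groups

restored = [[10, 12, 10, 12, 10],
            [12, 40, 12,  3, 12],
            [10, 12, 10, 12, 10]]
# Compensate only strict local maxima (level 4) and minima (level -4).
groups = group_by_level(restored, (4, -4))
```

Only the groups at the selected levels are then assigned compensation values, matching the idea that compensation targets only pixels near local extrema.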

The compensation value and pixel group determiner 16 may determine to compensate the pixel values of groups of pixels according to bands. The compensation value and pixel group determiner 16 may divide the entire range of pixel values of the restored pixels into a plurality of bands to assign groups of pixels of the restored pixels. The compensation value and pixel group determiner 16 may classify the restored pixels in the same band into a group of pixels according to the bands, based on the pixel values of the restored pixels. In this case, all pixel values of the restored pixels in a group of pixels according to the bands may be determined to require compensation, and the compensation value and pixel group determiner 16 may determine a compensation value separately for each group of pixels according to the bands.

For high-speed processing, the entire range of pixel values may be divided into a number of bands equal to a positive power of 2. When the number of most significant bits of the bit depth of the bit string of the restored pixels is p, the entire range of pixel values may be divided into 2^p bands. Alternatively, the entire range of pixel values may be identical to the range of the extended bit depth of the restored pixels.
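With this power-of-2 partition, the band of a pixel is simply its p most significant bits, which is why the scheme is fast. A minimal sketch, assuming an 8-bit depth and p = 4 chosen purely for the example:

```python
def band_index(pixel_value, bit_depth, p):
    """Band of a restored pixel when the full range [0, 2**bit_depth) is
    split into 2**p equal bands: the index is the p most significant bits
    of the pixel value, obtained with a single shift."""
    return pixel_value >> (bit_depth - p)

def group_by_band(restored_pixels, bit_depth, p):
    """Classify restored pixel values into 2**p pixel groups by band."""
    groups = [[] for _ in range(1 << p)]
    for value in restored_pixels:
        groups[band_index(value, bit_depth, p)].append(value)
    return groups

# 8-bit pixels split into 2**4 = 16 bands, each 16 values wide.
groups = group_by_band([0, 15, 16, 130, 255], bit_depth=8, p=4)
```

Each non-empty group would then receive its own compensation value, signaled in the bitstream.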

The compensation value and pixel group determiner 16 may analyze the restored image to detect lines in a predetermined direction and classify the restored pixels into groups of pixels according to lines, each including the restored pixels on the same line. When lines are detected in various directions, such as a horizontal direction, a vertical direction, a diagonal direction, a direction along a curve, and a direction along a boundary of a predetermined object, the pixels forming each line may be classified into one group of pixels. The compensation value and pixel group determiner 16 may determine a compensation value individually for each group of pixels according to the lines.

The compensation value and pixel group determiner 16 may determine the average value of the errors between the restored pixels to be compensated and the corresponding original pixels as the compensation value. The error between a restored pixel and an original pixel may include the difference between the restored pixel and the original pixel, the absolute value of the difference, or the square of the difference. The compensation value and pixel group determiner 16 may determine a single compensation value to be equally applied to all restored pixels to be compensated, or individually determine compensation values according to the groups of pixels classified according to their characteristics.
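The averaging step can be sketched as follows, using the signed difference as the error measure (the text also allows the absolute or squared difference); the pixel group and image values here are hypothetical:

```python
def compensation_value(original, restored, coords):
    """Compensation value for one pixel group: the average signed error
    between the original and restored pixels at the group's (x, y)
    coordinates."""
    errors = [original[y][x] - restored[y][x] for x, y in coords]
    return sum(errors) / len(errors)

original = [[100, 102], [104, 98]]
restored = [[ 97, 100], [101, 99]]
group = [(0, 0), (1, 0), (0, 1)]   # a hypothetical pixel group
offset = compensation_value(original, restored, group)  # (3 + 2 + 3) / 3
```

Because one value summarizes a whole group, only the offset, not per-pixel locations, has to be encoded, which is the bit-rate advantage noted below for the transmitter.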

The compensation value and pixel group determiner 16 may determine the restored pixels to be compensated, and determine the corresponding compensation value, according to at least one data unit from among an image sequence, a frame, a slice, and a coding unit of the input video.

The transmitter 18 receives and encodes the compensation value determined by the compensation value and pixel group determiner 16. The transmitter 18 receives the image data encoded by the encoder 12, and generates and outputs a bitstream including the encoded compensation value and the encoded image data. The encoded image data may be converted to a bitstream format through entropy encoding and inserted into the bitstream for transmission.

The transmitter 18 may receive additional information about the method of determining the group of pixels from the compensation value and pixel group determiner 16, and encode and insert the additional information into the bitstream. Since the method may be based on extreme and/or edge value levels, bands, or lines, as described above, information indicating how the compensation value was determined and specifying the group of pixels to which the compensation value is applied may be transmitted.

When the restored image generator 14 performs adaptive loop filtering, the transmitter 18 may receive information about loop filter coefficients for the adaptive loop filtering, and encode and insert the information into the bitstream. The video encoding apparatus 10 may split the image into parts of square, rectangular, or even irregular shape and perform selective correction only on specified groups of pixels in a particular region. Based on the parts of the split image, pixel values can be compensated adaptively to the content of the image. In addition, the video encoding apparatus 10 may convey information about the groups of pixels to be corrected by means of explicit signaling and implicit signaling.

The video encoding apparatus 10 may provide the information about the compensation value obtained during the encoding process to a decoder, so that the decoder can support post-processing performed to reduce the error between the restored image and the original image. In addition, since a compensation value is determined per group of pixels, the number of transmission bits can be reduced by encoding and transmitting only the information about the compensation value, without encoding and transmitting information about the locations of individual pixels.

Fig. 2 is a block diagram of an apparatus 20 for decoding video for compensating pixel values, in accordance with an exemplary embodiment.

The video decoding apparatus 20 includes an extractor 22, a restored image generator 24, a pixel group determiner 26, and a restored pixel compensator 28. The operations of the extractor 22, the restored image generator 24, the pixel group determiner 26, and the restored pixel compensator 28 of the video decoding apparatus 20 may be cooperatively controlled by a video decoding processor, a central processor, a graphics processor, or the like.

The extractor 22 receives and parses a bitstream of an encoded image, and extracts encoded image data and information related to a compensation value from the bitstream. The information related to the compensation value may include the compensation value itself. When the information related to the compensation value additionally includes information about the method of determining the group of pixels to be compensated by using the compensation value, the extractor 22 may extract the compensation value and the method information from the bitstream. The extractor 22 may extract at least one of the compensation value and the information related to the compensation value according to at least one data unit from among an image sequence, a frame, a slice, and a coding unit of the input video.

The extractor 22 may extract encoding information, such as the encoding method and the encoding mode, used for decoding the encoded image data. When information about a loop-filter coefficient for adaptive loop filtering is inserted into the bitstream, the extractor 22 may extract the loop-filter coefficient information from the bitstream.

The generator 24 of the restored image receives encoded image data, information encoding and information on the loop filter coefficient, which is extracted by the extractor 22, and generates a restored image by decoding encoded image data and performing loop filtering on the decoded image data.

Decoding of the encoded image data may be performed as the inverse of the processes of the video encoding method applied to the image data. For example, when the encoded image data has been encoded and transmitted in accordance with a video encoding method based on coding units having a tree structure, according to an illustrative embodiment the generator 24 of the restored image may decode the encoded image data in accordance with a video decoding method based on coding units having a tree structure.

The generator 24 of the restored image may selectively perform loop filtering, such as deblocking filtering and adaptive loop filtering, on the decoded image data. Adaptive loop filtering may be performed by using a plurality of continuous one-dimensional filters.

The block 26 for determining the group of pixels may receive the restored image generated by the generator 24 of the restored image and the information relating to the compensation value extracted by the extractor 22, and may determine, among the restored pixels of a predefined group in the restored image, the group of pixels comprising the restored pixels that are to be compensated by using the compensation value. The block 28 for compensating the restored pixels receives the compensation value extracted by the extractor 22 and the information about the group of pixels determined by the block 26, compensates the pixel values of the restored pixels by using the compensation value, and outputs a restored image having the compensated pixel values.

When information about the method of determining the group of pixels is extracted by the extractor 22, the block 26 for determining the group of pixels may determine, by using that method, the group of pixels whose pixel values are to be compensated. For example, the block 26 may determine whether the restored pixels are to be classified in accordance with levels of extreme and/or boundary values, bands of pixel values, or lines, and may determine the group of pixels on the basis of that method.

The block 26 for determining the group of pixels may determine a level of extreme and/or boundary values for each restored pixel by comparing the pixel values of neighboring restored pixels in the restored image. The block 26 may classify the neighboring restored pixels on the basis of their levels of extreme and/or boundary values, and may determine the group of pixels comprising the restored pixels of at least one predefined level of extreme and/or boundary values as the group of pixels whose pixel values are to be compensated by using the compensation value. The block 28 for compensating the restored pixels may compensate the pixel values of the restored pixels in the determined group of pixels by using the compensation value.

Alternatively, the block 26 for determining the group of pixels may classify the restored pixels of the restored image into groups of pixels according to bands, on the basis of bands obtained by splitting the entire range of pixel values. The block 28 for compensating the restored pixels may compensate the pixel values of the restored pixels in each band-wise group of pixels by using the compensation value for that band.

The entire range of pixel values may be divided into a number of bands equal to a positive power of two. The exponent of that power of two may be determined on the basis of the number of most significant bits of the restored pixels. In addition, the entire range of pixel values may be the range of the extended bit depth of the restored pixels.

The block 26 for determining the group of pixels may classify the restored pixels of the restored image into groups of pixels according to lines. The block 28 for compensating the restored pixels may compensate the pixel values of the restored pixels in each line-wise group of pixels by using the compensation value for that group. The block 26 may detect restored pixels forming a line in at least one direction among the horizontal direction, the vertical direction, a diagonal direction, a direction along a curve, and a direction along the boundary of a predefined object, among the restored pixels of the restored image.

The compensation value may be determined and transmitted by using the average value of the errors between the restored pixels and the corresponding original pixels during encoding. The block 28 for compensating the restored pixels may compensate all the pixel values of the restored pixels to be compensated by using the same compensation value. Alternatively, when the compensation values extracted by the extractor 22 are set in accordance with the groups of pixels, the block 28 may compensate the pixel values by using a compensation value individually determined for each group of pixels.

The device 10 of the video encoding and the device 20 of the video decoding can compensate for a systematic error that is formed between the restored image and the original image when the encoded image is decoded and restored. The device 10 of the video encoding can transmit information on the groups of pixels to be adjusted by means of explicit signaling or implicit signaling. The device 10 of the video encoding and the device 20 of the video decoding can split the image into parts of square, rectangular or even irregular shape, and perform selective adjustment only for specified groups of pixels in a particular area. On the basis of the parts of the split image, pixel values can be compensated adaptively to the image content.

As an example of a systematic error between the restored image and the original image, the average error of the pixel values between the restored pixels of a predefined group and the corresponding original pixels may not be equal to 0. Accordingly, the device 10 of the video encoding and the device 20 of the video decoding compensate for the error between the restored pixels and the original pixels.

The block 16 for determining the compensation value and the group of pixels may determine the compensation value in accordance with Equation 1 below.

[Equation 1]

corr = (1/M) * Σ (Org(xm, ym) - Rec(xm, ym)), summed over m = 1 to M

Here, m denotes an integer from 1 to M, and the average value corr of the errors between the pixel values Org(xm, ym) of the original pixels and the pixel values Rec(xm, ym) of the restored pixels may be used as the compensation value of the group of pixels {(xm, ym)} that includes the pixels at the locations (xm, ym).

The block 28 for compensating the restored pixels may compensate the pixel values of the restored pixels in the group of pixels in accordance with Equation 2 below.

[Equation 2]

Rec_corrected(xm, ym) = Rec(xm, ym) + corr

The block 28 for compensating the restored pixels may compensate the pixel value Rec(xm, ym) of a restored pixel by using the compensation value corr, and may output the per-pixel values Rec_corrected(xm, ym) as the result of compensating the pixel values Rec(xm, ym) of the group of pixels {(xm, ym)}.
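A minimal runnable sketch of Equations 1 and 2 (the row-major 2-D lists, the (x, y) coordinate convention and the sample data are assumptions of this sketch, not fixed by the text):

```python
# Sketch of per-group compensation: Equation 1 averages the errors between
# original and restored pixels of one pixel group, and Equation 2 adds that
# average back to each restored pixel of the group as the value "corr".

def compensation_value(org, rec, group):
    """Equation 1: corr = (1/M) * sum(Org(x, y) - Rec(x, y)) over the group."""
    return sum(org[y][x] - rec[y][x] for (x, y) in group) / len(group)

def compensate(rec, group, corr):
    """Equation 2: Rec_corrected(x, y) = Rec(x, y) + corr for each group pixel."""
    for (x, y) in group:
        rec[y][x] += corr
    return rec

# Hypothetical 2x2 example: every restored pixel in the group is 2 below the
# original, so the derived compensation value is +2.
org = [[10, 12], [14, 16]]
rec = [[8, 10], [12, 14]]
group = [(0, 0), (1, 0), (0, 1), (1, 1)]
corr = compensation_value(org, rec, group)   # 2.0
rec = compensate(rec, group, corr)           # restored back to the originals
```

Since only `corr` is transmitted per group, the decoder can apply `compensate` without knowing the locations of individual pixels, matching the overhead argument above.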

The device 10 of the video encoding and the device 20 of the video decoding can classify the restored pixels according to a predefined criterion in order to determine the groups of pixels whose pixel values are to be compensated.

Groups of pixels can be classified in accordance with the levels of extreme and/or boundary values, in accordance with an illustrative embodiment. A local extreme and/or boundary value includes a local minimum value and a local maximum value. The local minimum value f(xmin, ymin) and the local maximum value f(xmax, ymax) at a coordinate (x, y) within a predefined neighborhood ε are respectively defined in Equations 3 and 4 below, with respect to a two-dimensional function f(x, y).

[Equation 3]

f(x,y)>f(xmin, ymin), if |xmin-x|+|ymin-y|<ε and ε>0.

[Equation 4]

f(x,y)<f(xmax, ymax), if |xmax-x|+|ymax-y| < ε (where ε>0).

In addition, local minimum value f(xmin, ymin) and a local maximum value of f(xmax, ymax) can be respectively defined in equations 5 and 6 below, relative to the pixel (x, y) of a discrete signal.

[Equation 5]

f(xmin, ymin)<f(xmin+1, ymin)

f(xmin, ymin)<f(xmin-1, ymin)

f(xmin, ymin)<f(xmin, ymin+1)

f(xmin, ymin)<f(xmin, ymin-1).

[Equation 6]

f(xmax, ymax)>f(xmax+1, ymax)

f(xmax, ymax)>f(xmax-1, ymax)

f(xmax, ymax)>f(xmax, ymax+1)

f(xmax, ymax)>f(xmax, ymax-1).

The device 10 of the video encoding and the device 20 of the video decoding can determine the pixels corresponding to extreme and/or boundary values among predefined neighboring restored pixels on the horizontal and vertical lines in accordance with Equations 5 and 6. In addition, a larger number of neighboring pixels, including, for example, the pixels (xmax+1, ymax+1), (xmax-1, ymax+1), (xmax+1, ymax-1) and (xmax-1, ymax-1) on the diagonal lines, may be included in the process of classifying pixels into groups. Predefined pixels may be excluded from the groups of pixels. For example, if only pixels on the same line may be classified into a group of pixels, the other pixels outside the relevant line may be excluded from that group.
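The discrete local-minimum and local-maximum tests of Equations 5 and 6 can be sketched as runnable code (a sketch assuming row-major 2-D lists and interior pixels only; a local maximum is strictly above, and a local minimum strictly below, its four horizontal and vertical neighbors, consistent with Equations 3 and 4):

```python
# Sketch of Equations 5 and 6 over the four horizontal/vertical neighbors.

NEIGHBORS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def is_local_min(f, x, y):
    """Equation 5: f(x, y) is strictly below all four neighbors."""
    return all(f[y][x] < f[y + dy][x + dx] for dx, dy in NEIGHBORS)

def is_local_max(f, x, y):
    """Equation 6: f(x, y) is strictly above all four neighbors."""
    return all(f[y][x] > f[y + dy][x + dx] for dx, dy in NEIGHBORS)

# Hypothetical 3x3 patch whose center pixel is a local minimum.
patch = [[5, 4, 5],
         [4, 1, 4],
         [5, 4, 5]]
print(is_local_min(patch, 1, 1))  # True
print(is_local_max(patch, 1, 1))  # False
```

Diagonal neighbors, as mentioned above, could be added to `NEIGHBORS` to tighten the classification.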

A smoothing phenomenon may be produced by a general video encoding and decoding system. Accordingly, a local minimum value in the restored image is higher than the corresponding pixel value of the original image, and the error between the local minimum values of the restored image and the original image is a positive value. In addition, a local maximum value in the restored image is lower than the corresponding pixel value of the original image, and the error between the local maximum values of the restored image and the original image is a negative value.

Accordingly, the device 10 of the video encoding and the device 20 of the video decoding may determine the average errors of the local minimum values and of the local maximum values of a predefined group of pixels between the restored image and the original image, and may compensate the pixel values of the restored pixels in the predefined group of pixels. Next, with reference to Fig. 3, a method of determining the level of extreme and/or boundary values of the restored pixels of a predefined group of pixels, performed by the block 16 for determining the compensation value and the group of pixels of the device 10 of the video encoding and by the block 26 for determining the group of pixels of the device 20 of the video decoding, will be described.

Fig. 3 illustrates neighboring restored pixels 32, 34, 35 and 37 that are to be compared with a current restored pixel 30 in order to determine the level of extreme and/or boundary values of the current restored pixel 30, in accordance with an illustrative embodiment. For ease of explanation, Fig. 3 shows only the neighboring restored pixels 32, 34, 35 and 37. However, the pixels compared with the current restored pixel 30 according to an illustrative embodiment are not limited to the neighboring restored pixels 32, 34, 35 and 37 on the horizontal and vertical lines.

The block 16 for determining the compensation value and the group of pixels and the block 26 for determining the group of pixels may determine the level of extreme and/or boundary values of the current restored pixel 30 by comparing the neighboring restored pixels 32, 34, 35 and 37, which are above, to the left of, to the right of and below the current restored pixel 30, respectively, with the current restored pixel 30. When the parameter Pixel_Type denotes the level of extreme and/or boundary values of the current restored pixel 30, Rec[x][y] denotes the pixel value of the current restored pixel 30, and Rec[x][y-1], Rec[x-1][y], Rec[x+1][y] and Rec[x][y+1] respectively denote the pixel values of the neighboring restored pixels, the level of extreme and/or boundary values can be determined by the following classification formulas:

Pixel_Type=0;

if(Rec[x][y]>Rec[x-1][y]) Pixel_Type ++;

if(Rec[x][y]<Rec[x-1][y]) Pixel_Type --;

if(Rec[x][y]>Rec[x+1][y]) Pixel_Type ++;

if(Rec[x][y]<Rec[x+1][y]) Pixel_Type --;

if(Rec[x][y]>Rec[x][y-1]) Pixel_Type ++;

if(Rec[x][y]<Rec[x][y-1]) Pixel_Type --;

if(Rec[x][y]>Rec[x][y+1]) Pixel_Type ++;

if(Rec[x][y]<Rec[x][y+1]) Pixel_Type --.

The maximum and minimum values Pixel_Type can be taken as +4 and -4.
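The classification formulas above can be sketched as runnable code (a sketch assuming row-major 2-D lists and interior pixels; the offsets and the sample patch are illustrative assumptions):

```python
# Sketch of the Pixel_Type classification: the level of the current pixel
# rises by 1 for each of the four neighbors it exceeds and falls by 1 for
# each it is below, giving a level in [-4, +4].

def pixel_type(rec, x, y):
    level = 0
    for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        if rec[y][x] > rec[y + dy][x + dx]:
            level += 1
        elif rec[y][x] < rec[y + dy][x + dx]:
            level -= 1
    return level  # +4 = local maximum pixel, -4 = local minimum pixel

patch = [[3, 7, 3],
         [7, 9, 7],
         [3, 7, 3]]
print(pixel_type(patch, 1, 1))  # 4, since 9 exceeds all four neighbors
```

Restricting the loop to the two diagonal offsets would give the 45-degree variant described below, with levels in [-2, +2].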

If the pixel classification pattern has a shape differing from the illustrative pattern shown in Fig. 3, the classification formulas should be changed accordingly. For example, during detection of a boundary in the diagonal direction at an angle of 45 degrees, the pixels 31 and 38 are compared with the current pixel 30. The level of extreme and/or boundary values can be determined by the following modified classification formulas:

Pixel_Type=0;

if(Rec[x][y]>Rec[x-1][y-1]) Pixel_Type ++;

if(Rec[x][y]<Rec[x-1][y-1]) Pixel_Type --;

if(Rec[x][y]>Rec[x+1][y+1]) Pixel_Type ++;

if(Rec[x][y]<Rec[x+1][y+1]) Pixel_Type --.

Accordingly, the maximum and minimum values Pixel_Type may be taken as +2 and -2.

In other words, the level of extreme and/or boundary values is determined by comparing the pixel values of the neighboring restored pixels 32, 34, 35 and 37 within a predefined range of the current restored pixel 30 with the pixel value of the current restored pixel 30. When the level of extreme and/or boundary values of the current restored pixel 30 is the maximum level, that is, M, the current restored pixel 30 can be determined to be a local maximum pixel, and when the level of extreme and/or boundary values of the current restored pixel 30 is the minimum level, that is, -M, the current restored pixel 30 can be determined to be a local minimum pixel. The value M may be determined on the basis of a predefined number of analyzed neighboring restored pixels. The block 16 for determining the compensation value and the group of pixels and the block 26 for determining the group of pixels may determine the restored pixels determined to be local maximum pixels and local minimum pixels as the pixels to be compensated.

Thus, the block 16 for determining the compensation value and the group of pixels and the block 26 for determining the group of pixels determine the levels of extreme and/or boundary values of the restored pixels in the current data unit, and determine the group of pixels comprising the restored pixels having the level of extreme and/or boundary values M and the group of pixels comprising the restored pixels having the level of extreme and/or boundary values -M. The block 16 may determine the average error of the pixel values between the restored pixels and the corresponding original pixels for each group of pixels, and may determine the compensation value on the basis of the average values. The block 26 and the block 28 for compensating the restored pixels may compensate the pixel values of the restored pixels for each group of pixels by using the compensation value extracted from the received information about the compensation value.

The block 16 for determining the compensation value and the group of pixels and the block 26 for determining the group of pixels may determine a group of pixels comprising restored pixels adjacent to the local maximum pixel and the local minimum pixel as a target of compensation. Accordingly, the block 16 and the block 26 may determine compensation values for the levels of extreme and/or boundary values within a predefined range that includes the maximum level and the minimum level of extreme and/or boundary values. For example, since the maximum level of extreme and/or boundary values is M, as described above, the restored pixels having the level of extreme and/or boundary values M-1 are adjacent to the local maximum pixel.

Accordingly, the block 16 for determining the compensation value and the group of pixels and the block 26 for determining the group of pixels may determine a group of pixels comprising restored pixels whose level of extreme and/or boundary values is higher than a predefined positive value as a group of pixels adjacent to the maximum level of extreme and/or boundary values, and a group of pixels comprising restored pixels whose level of extreme and/or boundary values is lower than a predefined negative value as a group of pixels adjacent to the minimum level of extreme and/or boundary values. For example, when the level of extreme and/or boundary values is higher than m or lower than -m, that is, when the level of extreme and/or boundary values is -M, -(M-1), -(M-2), ..., -(m+1), (m+1), ..., (M-1) or M, the compensation value may be determined in accordance with the levels of extreme and/or boundary values.

Alternatively, the block 16 for determining the compensation value and the group of pixels may calculate the average errors between the restored pixels and the corresponding original pixels for the groups of pixels adjacent to the maximum level of extreme and/or boundary values, and may determine the compensation value for each group of pixels. In addition, the block 26 for determining the group of pixels and the block 28 for compensating the restored pixels may compensate the pixel values of the restored pixels for each group of pixels by using the compensation values extracted from the information about the compensation value.

The four neighboring restored pixels 32, 34, 35 and 37 located above, to the left of, to the right of and below the current restored pixel 30, respectively, are used above to determine the level of extreme and/or boundary values; however, in order to classify the levels of extreme and/or boundary values more thoroughly, the eight restored pixels 31-38 around the current restored pixel 30 may be used as the neighboring restored pixels for determining the level of extreme and/or boundary values of the current restored pixel 30.

Alternatively, the device 10 of the video encoding and the device 20 of the video decoding can classify the pixel values into bands, the number of which is equal to or greater than a predefined number.

For example, when the bit depth of the restored pixels is N, the entire range of the pixel value Rec[x][y] of a restored pixel is 0 ≤ Rec[x][y] ≤ (2^N)-1. In other words, the maximum value Max of the pixel value Rec[x][y] is (2^N)-1, and the range of the restored pixels is [0, Max]. The block 16 for determining the compensation value and the group of pixels and the block 26 for determining the group of pixels may split the range of the restored pixels into L bands. In other words, the band of the restored pixels may be divided into the ranges [0, (Max+1)/L-1], [(Max+1)/L, 2*(Max+1)/L-1], [2*(Max+1)/L, 3*(Max+1)/L-1], ..., [(L-1)*(Max+1)/L, Max].

The actual original data may lie within a range [Min, Max]. The minimum value Min and the maximum value Max are not necessarily equal to 0 and (2^N)-1, respectively. The number of different values corresponds to the actual range of the original data, that is, Range = Max-Min+1. If the bands of the restored pixels are divided evenly, the uniform bands are the ranges [Min, Min+Range/L-1], [Min+Range/L, Min+2*Range/L-1], [Min+2*Range/L, Min+3*Range/L-1], ..., [Min+(L-1)*Range/L, Max]. In another illustrative embodiment, the bands of the restored pixels may be divided unevenly.

The number L of bands into which the range [0, Max] of the restored pixels is split may be a multiple of 2, and may be 16 or more for fast computation. In addition, for fast computation, the number L may be determined as a power of 2 whose exponent is the number p of most significant bits of the restored pixels. For example, when the number of most significant bits of the restored pixels is 4 bits (p=4) and the extended bit depth of the restored pixels is 12 bits, the number L may be 2^p=16. Accordingly, the bands of the restored pixels of the extended bit depth may be split as shown in Table 1 below.

Table 1

Band number:                            0                 1                 2                 ...   15
Pixel value band of the restored pixel: [0, 255]          [256, 511]        [512, 767]        ...   [3840, 4095]
Hexadecimal expression of pixel values: [0x0000, 0x00FF]  [0x0100, 0x01FF]  [0x0200, 0x02FF]  ...   [0x0F00, 0x0FFF]

Since bit computation is easily performed when the band of pixel values is divided on the basis of the number of most significant bits of the restored pixels, the block 26 for determining the group of pixels can efficiently perform the computation for determining the band.
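The bit computation described above can be sketched as follows (a sketch using the Table 1 parameters: 12-bit extended bit depth and p = 4 most significant bits are assumptions taken from the example above):

```python
# Sketch of band classification by most significant bits: with L = 2**p
# bands, the band index of a pixel value is simply its p most significant
# bits, i.e. a right shift, so no division is needed.

BIT_DEPTH = 12
P = 4                               # number of most significant bits used
L = 1 << P                          # 16 bands
BAND_WIDTH = 1 << (BIT_DEPTH - P)   # 256 pixel values per band

def band_index(pixel_value):
    return pixel_value >> (BIT_DEPTH - P)

print(band_index(0x0000))   # 0
print(band_index(0x01FF))   # 1, the upper end of band 1
print(band_index(0x0FFF))   # 15, the band [3840, 4095]
```

The per-band compensation values can then be looked up with this index when compensating the restored pixels.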

The block 16 for determining the compensation value and the group of pixels and the block 26 for determining the group of pixels may classify the restored pixels belonging to the same band into a band-wise group of pixels. The bands may be divided on the basis of the actual minimum and maximum values of the original or restored signal.

The average error between the restored pixels included in a band-wise group of pixels and the corresponding original pixels is not equal to 0. Accordingly, the block 16 for determining the compensation value and the group of pixels may determine the compensation value by using that average value for each band. In addition, the block 26 for determining the group of pixels and the block 28 for compensating the restored pixels may compensate the pixel values of the restored pixels in the band-wise groups of pixels by using the compensation values for the bands.

Alternatively, the device 10 of the video encoding and the device 20 of the video decoding can classify the restored pixels into groups of pixels comprising restored pixels that form a predefined line.

The block 16 for determining the compensation value and the group of pixels and the block 26 for determining the group of pixels may analyze the characteristics of the restored image and detect lines in the vertical direction, the horizontal direction, a diagonal direction, a direction along a curve, and a direction along the boundary of a predefined object. The block 16 and the block 26 may determine the restored pixels forming the same line as a line-wise group of pixels.

The average error of the pixel values between the restored pixels included in a line-wise group of pixels and the original pixels is also not equal to 0. The block 16 for determining the compensation value and the group of pixels may determine the compensation value by using that average value for each line. The block 26 for determining the group of pixels and the block 28 for compensating the restored pixels may compensate the pixel values of the restored pixels in the line-wise groups of pixels by using the compensation values for the lines.

The block 16 for determining the compensation value and the group of pixels and the block 26 for determining the group of pixels may determine the compensation value in accordance with the levels of extreme and/or boundary values per data unit, such as image sequences, frames or blocks of the video. The transmitter 18 may encode and transmit the information relating to the compensation value as overhead information. The accuracy of the compensation value increases as the data unit for which the compensation value is determined in accordance with the levels of extreme and/or boundary values becomes smaller, but the overhead may grow because additional information for encoding and transmitting the information relating to the compensation value may be required.

In addition, the extractor 22 may extract the information relating to the compensation value from the overhead information or from the header information of a slice, and may compensate the pixel values of the restored pixels by using the compensation value.

The generators 14 and 24 of the restored image may selectively perform adaptive loop filtering on the image data decoded in the spatial domain. The generators 14 and 24 may restore the current image by continuously performing one-dimensional filtering in the horizontal direction and in the vertical direction, in accordance with adaptive loop filtering.

The transmitter 18 of the device 10 of the video encoding may encode and output the filter coefficients used in adaptive loop filtering. In addition, since the type, number, size, quantization bit, coefficient and filtering direction of each one-dimensional filter, as well as information indicating whether filtering and whether running filtering are to be performed, may be set for adaptive loop filtering, information about the set of one-dimensional filters of the loop filter may be encoded and transmitted.

The generator 24 of the restored image may derive the filter coefficient of each one-dimensional filter by using the residual information of the filter coefficient extracted by the extractor 22.

For example, the current filter coefficient of each one-dimensional filter may be derived by adding the difference between the current filter coefficient and the previous filter coefficient to the previous filter coefficient. Continuous one-dimensional filtering may be performed on the deblocked data by using the derived filter coefficient of each one-dimensional filter. Deblocking is performed to reduce the blocking effect of the decoded data, and loop filtering minimizes the error between the restored image and the original image.

For a deeper understanding, loop filtering using continuous one-dimensional filtering in the horizontal direction and the vertical direction will be described with reference to the equations below.

The current filter coefficient may be derived in accordance with Equation 7 below.

[Equation 7]

c[i][j]=adaptive_loop_filter_prev[i][j]+ adaptive_loop_filter[i][j].

Here, i denotes an index of a one-dimensional filter, and j denotes an index of a filter coefficient of the one-dimensional filter. c[i][j] denotes the current filter coefficient, adaptive_loop_filter_prev[i][j] denotes the previous filter coefficient, and adaptive_loop_filter[i][j] denotes the residual component of the filter coefficient transmitted as the filter coefficient information.

In other words, the current filter coefficient may be derived as the sum of the previous filter coefficient and the residual component. In order to derive the next filter coefficient after the current filter coefficient has been derived, the current filter coefficient c[i][j] is assigned to adaptive_loop_filter_prev[i][j].
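The running coefficient update of Equation 7 can be sketched as follows (a sketch: the filter count, tap count and the example values are illustrative assumptions):

```python
# Sketch of Equation 7: each current filter coefficient is the previous
# coefficient plus the transmitted residual; afterwards, the current
# coefficients become the "previous" ones for the next update.

def update_coefficients(prev, residual):
    """c[i][j] = adaptive_loop_filter_prev[i][j] + adaptive_loop_filter[i][j]."""
    return [[p + r for p, r in zip(prow, rrow)]
            for prow, rrow in zip(prev, residual)]

# Hypothetical coefficients for two one-dimensional filters of 3 taps each.
prev = [[4, 2, 1], [3, 1, 0]]
residual = [[1, 0, -1], [0, 1, 1]]
current = update_coefficients(prev, residual)
print(current)   # [[5, 2, 0], [3, 2, 1]]
prev = current   # c[i][j] is assigned to adaptive_loop_filter_prev[i][j]
```

Transmitting only the residuals keeps the coefficient overhead low when consecutive filters are similar.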

Loop filtering using a continuous one-dimensional filtering can be performed in accordance with equations 8 and 9 below. In equations 8 and 9, i denotes the index in the width direction of the current image, and j denotes the index in the direction of the height of the current image.

[Equation 8]

q(i,j) = p(i,j-4)*c[0][4] + p(i,j-3)*c[0][3] + p(i,j-2)*c[0][2] + p(i,j-1)*c[0][1] + p(i,j)*c[0][0] + p(i,j+1)*c[0][1] + p(i,j+2)*c[0][2] + p(i,j+3)*c[0][3] + p(i,j+4)*c[0][4].

Here, p(i,j) denotes deblocked data of the current image, and q(i,j) denotes the data one-dimensionally filtered in the horizontal direction with respect to the deblocked data. Nine samples of the deblocked data are symmetrically filtered by using five filter coefficients of the symmetric filter coefficient set c.

[Equation 9]

f(i,j) = q(i-4,j)*c[1][4] + q(i-3,j)*c[1][3] + q(i-2,j)*c[1][2] + q(i-1,j)*c[1][1] + q(i,j)*c[1][0] + q(i+1,j)*c[1][1] + q(i+2,j)*c[1][2] + q(i+3,j)*c[1][3] + q(i+4,j)*c[1][4].

Here, f(i,j) denotes the data one-dimensionally filtered in the vertical direction with respect to the one-dimensionally filtered data q(i,j). Since the filter coefficients c are derived by the running filtering method, one-dimensional filtering is continuously performed in the vertical direction on the data already filtered in the horizontal direction.
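The two filtering passes of Equations 8 and 9 can be sketched as one generic routine applied twice (a sketch only: the 3-tap example coefficients and border handling by edge replication are assumptions not specified above):

```python
# Sketch of continuous symmetric one-dimensional filtering: taps c[0..K],
# where c[k] weights the samples k positions away on both sides. Borders
# are handled by replicating the edge sample (an assumption of this sketch).

def filter_symmetric_1d(data, c, along_rows):
    K = len(c) - 1
    h, w = len(data), len(data[0])
    out = [row[:] for row in data]
    for y in range(h):
        for x in range(w):
            acc = c[0] * data[y][x]
            for k in range(1, K + 1):
                if along_rows:   # first pass, as in Equation 8
                    lo = data[y][max(x - k, 0)]
                    hi = data[y][min(x + k, w - 1)]
                else:            # second pass, as in Equation 9
                    lo = data[max(y - k, 0)][x]
                    hi = data[min(y + k, h - 1)][x]
                acc += c[k] * (lo + hi)
            out[y][x] = acc
    return out

# Hypothetical normalized 3-tap filters for each direction.
c_h = [0.5, 0.25]    # c[0][0], c[0][1]
c_v = [0.5, 0.25]    # c[1][0], c[1][1]
deblocked = [[4.0, 8.0, 4.0],
             [8.0, 16.0, 8.0],
             [4.0, 8.0, 4.0]]
q = filter_symmetric_1d(deblocked, c_h, along_rows=True)    # first pass
f = filter_symmetric_1d(q, c_v, along_rows=False)           # second pass
print(f[1][1])  # 9.0
```

The second pass reads the output of the first, matching the "continuous" filtering described above, and only K+1 coefficients per filter are needed thanks to the symmetry.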

Since a one-dimensional filter is symmetric, the coefficients of the entire filter may be set by using only a small number of coefficients, compared to a two-dimensional filter. Accordingly, the number of bits relating to the characteristics of the set of one-dimensional filters that are inserted into the transmission bitstream may be relatively low, compared to a two-dimensional filter.

In addition, the memory capacity for storing temporary data during filtering is smaller for a one-dimensional filter than for a two-dimensional filter, and the computational load of two-dimensional filtering is much greater than that of one-dimensional filtering. With running filtering, it is impossible to perform a parallel process over multiple filtering operations when a two-dimensional filter is used, but a parallel process is possible when a one-dimensional filter is used.

However, loop filtering is not limited to continuous one-dimensional filtering in the horizontal and vertical directions. Loop filtering may be performed by a predefined number of one-dimensional filters performing continuous one-dimensional filtering, each one-dimensional filtering being performed in a predefined direction.

The device 20 of the video decoding may receive, in addition to the filter coefficients, information about the set of one-dimensional filters in order to check the type, number, size, quantization bit, coefficient and filtering direction of each one-dimensional filter, as well as the information indicating whether filtering and whether running filtering are performed. Accordingly, the generator 24 of the restored image can perform loop filtering by combining various one-dimensional filters.

Adaptive loop filtering performed by the generators 14 and 24 of the restored image will now be described with reference to Fig. 4 and 5.

Fig. 4 is a flowchart for describing adaptive loop filtering in accordance with an illustrative embodiment.

Loop filtering may be performed as a plurality of one-dimensional filters continuously perform filtering. In step 41, decoded image data is received. Alternatively, image data on which deblocking filtering has been performed after decoding may be received. In step 42, it is determined whether all of the first through N-th filters are to be used. If it is determined that they are not to be used, the process proceeds to step 46. If it is determined in step 42 that the first through N-th filters are to be used, one-dimensional filtering may be performed in filtering order; for example, the first filter performs one-dimensional filtering in a first filtering direction in step 43, and the second filter performs one-dimensional filtering in a second filtering direction in step 44, until the N-th filter performs one-dimensional filtering in an N-th filtering direction in step 45.

In step 46, the decoded image data, the deblocked image data, or the data subjected to the continuous one-dimensional filtering is stored in a buffer or reproduced by a reproducing device.

The filtering direction of a one-dimensional filter may be adaptively determined in accordance with local characteristics of the image, through analysis of those characteristics. For example, the filtering direction may be adaptively set to the direction along a local image boundary in order to preserve that boundary.

Fig. 5 is a flowchart for describing adaptive loop filtering in accordance with another illustrative embodiment.

When decoded image data or deblocked image data are received in step 51, a boundary is detected in step 52 for each pixel of the decoded or deblocked image data. In step 53, one-dimensional filtering is performed in accordance with the detected boundary, and in step 54 the filtered data are stored or reproduced by a reproducing device.
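The per-pixel boundary detection of step 52 can be sketched with a simple gradient comparison. The embodiment does not fix a particular detector, so the function below is an assumed, illustrative choice:

```python
import numpy as np

def detect_boundary_directions(image):
    """Label each pixel with a dominant local boundary direction from
    its gradients: 1 where the horizontal gradient dominates (a
    vertical boundary), 0 otherwise.  Step 53 can then steer the
    one-dimensional filtering along the detected boundary."""
    gy, gx = np.gradient(image.astype(np.float64))
    return (np.abs(gx) > np.abs(gy)).astype(np.int64)

# A vertical step edge yields direction 1 at the step columns.
step = np.tile(np.array([0.0, 0.0, 10.0, 10.0]), (4, 1))
directions = detect_boundary_directions(step)
```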

Information about the plurality of one-dimensional filters, including the filtering directions determined in accordance with boundaries, is encoded and provided to the decoder during video encoding. During video decoding, the loop-filter information is read from the received data, and one-dimensional filtering in the determined filtering direction, for example along a boundary, may be performed by using a predefined one-dimensional filter.

The post-processing constituting the loop filtering can reduce the distortion between the original image and the restored image that arises from complex lossy compression. In addition, the loop-filtered image may be used as a reference image to improve the quality of an image obtained by performing prediction or motion compensation.

Accordingly, the generators 14 and 24 of the restored image can selectively perform adaptive loop filtering reflecting the characteristics of the image, the system environment or user requirements, through a combination of one-dimensional filters having different characteristics. Since continuous one-dimensional filters are used instead of a two-dimensional filter, adaptive loop filtering may be advantageous in terms of memory, throughput, number of transmitted bits, etc., compared to two-dimensional filtering. When the generators 14 and 24 of the restored image perform adaptive loop filtering, the transmitter 18 and the extractor 22 transmit and receive information obtained by encoding the difference components of the filter coefficients, and thus the amount of information used for adaptive loop filtering may be reduced.
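The difference-component coding of filter coefficients mentioned above can be illustrated as follows. This is a sketch of the differencing step only; the subsequent entropy coding of the differences is omitted, and the function names are hypothetical:

```python
def encode_filter_coefficient_differences(coeffs):
    """Send the first coefficient as-is and every later one as its
    difference from the previous coefficient; smooth coefficient sets
    then yield small values for entropy coding."""
    return [coeffs[0]] + [b - a for a, b in zip(coeffs, coeffs[1:])]

def decode_filter_coefficient_differences(diffs):
    """Invert the difference coding by prefix summation."""
    out = [diffs[0]]
    for d in diffs[1:]:
        out.append(out[-1] + d)
    return out

coeffs = [3, 5, 4, 4]
diffs = encode_filter_coefficient_differences(coeffs)
```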

Fig. 6 is a flowchart illustrating a method of encoding video for compensating pixel values in accordance with an illustrative embodiment.

In step 62, an input sequence of images is encoded. In step 64, the encoded image data are decoded, and a restored image is formed by performing loop filtering on the decoded image data. The restored image may be formed by performing adaptive loop filtering in which at least one one-dimensional filtering operation is continuously performed on the decoded image data or on deblocked image data.

In step 66, a compensation value is determined based on the errors between each restored pixel of a predefined group in the restored image and the corresponding pixel of the original image, and a group of pixels comprising the restored pixels to be compensated is determined. The group of pixels comprising the restored pixels to be compensated may be determined in accordance with levels of extreme and/or boundary values of the pixel values, bands of pixel values, or lines. The compensation value for each group of pixels may be determined based on the mean of the errors.
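Step 66 can be sketched as follows, assuming the pixel-to-group assignment has already been determined; the names are illustrative:

```python
import numpy as np

def determine_compensation_values(original, restored, group_map):
    """For each pixel group, the compensation value is the mean error
    between the original and restored pixels of that group (step 66)."""
    err = original.astype(np.float64) - restored.astype(np.float64)
    return {int(g): float(err[group_map == g].mean())
            for g in np.unique(group_map)}

original = np.array([[10, 10], [20, 20]])
restored = np.array([[8, 9], [21, 22]])
group_map = np.array([[0, 0], [1, 1]])   # which group each pixel belongs to
offsets = determine_compensation_values(original, restored, group_map)
```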

In step 68, the compensation value is encoded, and a bitstream of the encoded compensation value and the encoded input sequence of images is transmitted. When compensation values are determined for finer groups of pixels, the pixel values can be compensated more accurately, but the overhead information may increase.

Fig. 7 is a flowchart illustrating a method of decoding video for compensating pixel values in accordance with an illustrative embodiment.

In step 72, a bitstream of an encoded image is received and parsed, and encoded image data and a compensation value are extracted from the bitstream.

In step 74, the encoded image data are decoded, and a restored image is formed by performing loop filtering on the decoded image data. The restored image may be formed by performing adaptive loop filtering in which at least one one-dimensional filtering operation is continuously performed on the decoded image data or on deblocked image data.

In step 76, a group of pixels comprising restored pixels to be compensated by using the compensation value is determined from among the restored pixels of the restored image. The group of pixels comprising the restored pixels to be compensated by using the compensation value may be determined, in accordance with the method of determining groups of pixels indicated by information relating to the compensation value, on the basis of levels of extreme and/or boundary values of the pixel values of the restored pixels, bands of pixel values, or lines. In step 78, a restored image having a compensated error may be output by compensating the error between the restored pixels of the determined group of pixels and the original pixels by using the compensation value.
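Steps 76 and 78 on the decoder side can be sketched as follows. Only per-group offsets are received; the groups are re-derived at the decoder, so no pixel locations need to be transmitted (names are illustrative):

```python
import numpy as np

def compensate_restored_pixels(restored, group_map, offsets):
    """Add each group's single compensation value to every restored
    pixel of that group (step 78)."""
    out = restored.astype(np.float64)
    for g, offset in offsets.items():
        out[group_map == g] += offset
    return out

restored = np.array([[8, 9], [21, 22]])
group_map = np.array([[0, 0], [1, 1]])   # derived at the decoder
compensated = compensate_restored_pixels(restored, group_map,
                                         {0: 1.5, 1: -1.5})
```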

In accordance with the video encoding method and the video decoding method, the quality of the restored image can be improved by compensating the systematic errors of the restored image, and the number of bits of additional information transmitted to improve that quality can be reduced, since only the compensation values for the groups of pixels are encoded and transmitted, while information about the locations of the restored pixels to be compensated is not transmitted.

Encoding and decoding of video for compensating pixel values after performing loop filtering based on coding units having a tree structure, in accordance with illustrative embodiments, will now be described with reference to Figs. 8 through 22.

Fig. 8 is a block diagram of a device 80 for encoding video for compensating pixel values after performing loop filtering based on coding units having a tree structure, in accordance with an illustrative embodiment.

The device 80 for encoding video in accordance with an illustrative embodiment includes an encoder 81, a generator 84 of the restored image, a block 87 for determining the compensation value and the group of pixels, and a transmitter 88. The encoder 81 includes a block 82 for splitting into maximum coding units and a block 83 for determining the coding depth and encoding mode. The generator 84 of the restored image includes a decoder 85 and a block 86 for performing loop filtering.

The encoder 81 encodes an input sequence of images. The encoder 81 can encode the input sequence of images based on coding units having a tree structure. The block 82 for splitting into maximum coding units may split the current image based on a maximum coding unit for the current image. The maximum coding unit in accordance with an illustrative embodiment may be a data unit having a size of 32x32, 64x64, 128x128, 256x256, etc., the shape of the data unit being a square whose width and height are powers of 2.

If the current image is larger than the maximum coding unit, the data of the current image may be split into at least one maximum coding unit. The image data may be supplied to the block 83 for determining the coding depth and encoding mode in units of the at least one maximum coding unit.

A coding unit in accordance with an illustrative embodiment may be characterized by a maximum size and a depth. The depth denotes the number of times the coding unit is spatially split from the maximum coding unit, and as the depth deepens, coding units according to depths may be split from the maximum coding unit down to a minimum coding unit. The depth of the maximum coding unit is the uppermost depth and the depth of the minimum coding unit is the lowermost depth. Since the size of a coding unit corresponding to each depth decreases as the depth of the maximum coding unit deepens, a coding unit corresponding to an upper depth may include a plurality of coding units corresponding to lower depths.

As described above, the data of the current image are split into maximum coding units in accordance with the maximum coding unit size, and each maximum coding unit may include deeper coding units that are split according to depths. Since the maximum coding unit in accordance with an illustrative embodiment is split according to depths, the image data of the spatial domain included in the maximum coding unit may be hierarchically classified according to depths.

A maximum depth and a maximum coding unit size, which limit the total number of times the height and width of the maximum coding unit can be hierarchically split, may be predefined.

The block 83 for determining the coding depth and encoding mode encodes at least one split region obtained by splitting the maximum coding unit according to depths, and determines, for each of the at least one split region, the depth at which the finally encoded image data are to be output. In other words, the block 83 for determining the coding depth and encoding mode determines the coding depth by encoding the image data in the deeper coding units according to depths within the maximum coding unit of the current image and selecting the depth having the smallest encoding error. Thus, the encoded image data of the coding unit corresponding to the determined coding depth are output. In addition, the coding units corresponding to the coding depth may be regarded as encoded coding units. The determined coding depth and the image data encoded in accordance with it are supplied to the transmitter 88.

The image data in the maximum coding unit are encoded based on the deeper coding units corresponding to at least one depth equal to or less than the maximum depth, and the results of encoding the image data are compared between the deeper coding units. The depth having the smallest encoding error may be selected after comparing the encoding errors of the deeper coding units. At least one coding depth may be selected for each maximum coding unit.

The size of the maximum coding unit is split as coding units are hierarchically split according to depths, and the number of coding units increases. In addition, even for coding units corresponding to the same depth within one maximum coding unit, it is determined separately for each of them, by measuring the encoding error of its image data, whether it should be split to a lower depth. Accordingly, even when image data are included in one maximum coding unit, the image data are split into regions according to depths, and the encoding errors may differ according to regions, so that the coding depth may differ according to regions of the image data. Thus, one or more coding depths may be determined within one maximum coding unit, and the image data of the maximum coding unit may be divided in accordance with coding units of at least one coding depth.

Accordingly, the block 83 for determining the coding depth and encoding mode may determine coding units having a tree structure included in the maximum coding unit. The "coding units having a tree structure" in accordance with an illustrative embodiment include, from among all deeper coding units included in the maximum coding unit, the coding units corresponding to the depth determined to be the coding depth. A coding unit of a coding depth may be hierarchically determined according to depths within the same region of the maximum coding unit and may be independently determined in different regions. Similarly, the coding depth in a current region may be determined independently of the coding depth in another region.

The maximum depth in accordance with an illustrative embodiment is an index related to the number of splits from the maximum coding unit down to the minimum coding unit; it may denote the total number of such splits. For example, when the depth of the maximum coding unit is 0, the depth of a coding unit obtained by splitting the maximum coding unit once may be set to 1, and the depth of a coding unit obtained by splitting it twice may be set to 2. In this case, if the minimum coding unit is obtained by splitting the maximum coding unit four times, there are 5 depth levels 0, 1, 2, 3 and 4, and thus the maximum depth may be set to 4.
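The relation between depth and coding unit size described above can be expressed directly, since each split halves both the height and the width:

```python
def coding_unit_size(max_size, depth):
    """Side length of a square coding unit at the given depth: each
    split from the maximum coding unit halves height and width."""
    return max_size >> depth

# A 64x64 maximum coding unit split four times gives five depth
# levels 0..4 and a 4x4 minimum coding unit, so the maximum depth is 4.
sizes = [coding_unit_size(64, d) for d in range(5)]
```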

Predictive coding and transformation may be performed in units of the maximum coding unit. Predictive coding and transformation are also performed based on the deeper coding units according to depths equal to or less than the maximum depth, within the maximum coding unit. The transformation may be performed by a method of orthogonal transformation or integer transformation.

Since the number of deeper coding units increases whenever the maximum coding unit is split according to depths, encoding including predictive coding and transformation is performed over all deeper coding units generated as the depth deepens. For convenience of description, predictive coding and transformation will now be described on the basis of a coding unit of the current depth within the maximum coding unit.

The device 80 for encoding video may variously select the size or shape of the data unit used to encode the image data. To encode the image data, operations such as predictive coding, transformation and statistical (entropy) coding are performed, and the same data unit may be used for all operations, or different data units may be used for each operation.

For example, the device 80 for encoding video may select not only a coding unit for encoding the image data, but also a data unit different from the coding unit, so as to perform predictive coding on the image data in the coding unit.

To perform predictive coding on the maximum coding unit, predictive coding may be performed based on a coding unit corresponding to a coding depth, that is, based on a coding unit that is no longer split into coding units corresponding to a lower depth. Hereinafter, a coding unit that is no longer split and becomes a basic unit for predictive coding will be called a "prediction unit". A partition obtained by splitting the prediction unit may include a prediction unit or a data unit obtained by splitting at least one of the height and width of the prediction unit.

For example, when a coding unit of size 2N×2N (where N is a positive integer) is no longer split and becomes a prediction unit of size 2N×2N, the size of a partition may be 2N×2N, 2N×N, N×2N or N×N. Examples of a partition type include symmetric partitions obtained by symmetrically splitting the height or width of the prediction unit, partitions obtained by asymmetrically splitting the height or width of the prediction unit, for example in ratios of 1:n or n:1, partitions obtained by geometrically splitting the prediction unit, and partitions having arbitrary shapes.
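The symmetric partition sizes of a 2N×2N prediction unit can be enumerated as follows; asymmetric (1:n, n:1), geometric and arbitrary partitions are omitted from this sketch:

```python
def symmetric_partition_sizes(n):
    """Partitions obtained by symmetrically splitting the height
    and/or width of a 2Nx2N prediction unit: 2Nx2N, 2NxN, Nx2N, NxN."""
    two_n = 2 * n
    return [(two_n, two_n), (two_n, n), (n, two_n), (n, n)]

partitions = symmetric_partition_sizes(8)  # for a 16x16 prediction unit
```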

The prediction mode of the prediction unit may be at least one of an intra mode (internal mode), an inter mode (cross mode) and a skip mode. For example, the intra mode or the inter mode may be performed on partitions of size 2N×2N, 2N×N, N×2N or N×N. In addition, the skip mode may be performed only on a partition of size 2N×2N. Encoding is performed independently on each prediction unit within a coding unit, whereby the prediction mode having the smallest encoding error is selected.

The device 80 for encoding video may also perform the transformation on the image data in a coding unit based not only on the coding unit itself, but also on a data unit different from the coding unit.

To perform the transformation in the coding unit, the transformation may be performed based on a data unit having a size less than or equal to that of the coding unit. For example, the data unit for the transformation may include a data unit for the intra mode and a data unit for the inter mode.

A data unit used as the basis of the transformation will now be called a "transformation unit". A transformation depth, indicating the number of splits of the height and width of the coding unit needed to reach the transformation unit, may also be set for the transformation unit. For example, in a current coding unit of size 2N×2N, the transformation depth may be 0 when the size of the transformation unit is also 2N×2N, may be 1 when the height and width of the current coding unit are each split into two equal parts, producing 4^1 transformation units, so that the size of the transformation unit is N×N, and may be 2 when the height and width of the current coding unit are each split into four equal parts, producing 4^2 transformation units, so that the size of the transformation unit is N/2×N/2. For example, the transformation unit may be set in accordance with a hierarchical tree structure, in which a transformation unit of an upper transformation depth is split into four transformation units of a lower transformation depth in accordance with the hierarchical characteristics of the transformation depth.
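The transformation-depth relation above can be sketched as:

```python
def transformation_units(coding_unit_size, transformation_depth):
    """Return (side length, count) of the transformation units of a
    square coding unit at the given transformation depth: depth 0
    keeps the whole 2Nx2N unit, and every further depth halves height
    and width, yielding 4**depth transformation units."""
    return (coding_unit_size >> transformation_depth,
            4 ** transformation_depth)

# For a 16x16 coding unit (2N = 16): depths 0, 1, 2.
units = [transformation_units(16, d) for d in range(3)]
```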

Similarly to the coding unit, the transformation unit within the coding unit may be recursively split into regions of smaller size, so that the transformation unit may be determined independently in units of regions. Thus, the residual data of the coding unit may be divided in accordance with a transformation having a tree structure according to transformation depths.

Encoding information for coding units corresponding to the coding depth uses not only information about the coding depth, but also information related to predictive coding and transformation. Accordingly, the block 83 for determining the coding depth and encoding mode determines not only the coding depth having the smallest encoding error, but also the partition type of the prediction unit, the prediction mode for each prediction unit, and the size of the transformation unit for the transformation.

The block 83 for determining the coding depth and encoding mode may measure the encoding errors of the deeper coding units according to depths by using rate-distortion optimization based on Lagrange multipliers.

The generator 84 of the restored image decodes the encoded image data and forms a restored image by performing loop filtering on the decoded image data. The decoder 85, included in the generator 84 of the restored image, decodes the image data based on the coding units having a tree structure, which were encoded by the encoder 81. The decoder 85 may decode the encoded image data and output image data of the spatial domain in units of maximum coding units, based on the coding depth and encoding mode determined by the block 83 for determining the coding depth and encoding mode.

The block 86 for performing loop filtering, included in the generator 84 of the restored image, may perform loop filtering on the decoded image data. The same adaptive loop filtering selectively performed by the generator 14 of the restored image may be performed by the block 86 for performing loop filtering. Accordingly, the block 86 for performing loop filtering may continuously perform one-dimensional filtering in the horizontal direction and one-dimensional filtering in the vertical direction to restore the current image. The block 86 for performing loop filtering may output the restored image to the block 87 for determining the compensation value and the group of pixels.

The block 87 for determining the compensation value and the group of pixels determines a compensation value based on the errors between each restored pixel of a predefined group in the restored image and the corresponding original pixel, and determines the group of pixels including the restored pixels whose pixel values should be compensated. The block 87 for determining the compensation value and the group of pixels is a technical element corresponding to the block 16 for determining the compensation value and the group of pixels.

Accordingly, the block 87 for determining the compensation value and the group of pixels may determine levels of extreme and/or boundary values of neighboring restored pixels of the restored image for the restored pixels, and classify the neighboring restored pixels into groups of pixels in accordance with those levels. Alternatively, the block 87 may classify the restored pixels into groups of pixels in accordance with bands based on the pixel values. Alternatively, the block 87 may detect lines in a predefined direction by analyzing the restored image and classify the restored pixels into groups of pixels according to lines, each group including the restored pixels lying on the same line.
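The band-based grouping can be sketched as follows, assuming 32 equal bands over an 8-bit pixel range. The band count is an assumption for illustration; the embodiment only requires a predefined partition of the full pixel-value range:

```python
import numpy as np

def band_index(restored, n_bands=32, max_value=255):
    """Assign each restored pixel to one of the bands formed by
    splitting the full pixel-value range into equal intervals; pixels
    of the same band share one compensation value."""
    width = (max_value + 1) // n_bands   # 8 values per band for 32 bands
    return np.minimum(restored // width, n_bands - 1)

bands = band_index(np.array([0, 7, 8, 130, 255]))
```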

The block 87 for determining the compensation value and the group of pixels may determine the compensation value individually for each group of pixels by using the mean of the errors between the restored pixels and the corresponding original pixels. The block 87 may determine the restored pixels to be compensated in units of at least one data unit among the image sequence, picture, slice and coding unit of the input video, and determine the compensation value corresponding to the determined restored pixels to be compensated. The information about the compensation value and the group of pixels determined by the block 87 may be output to the transmitter 88.

The transmitter 88 outputs, in bitstreams, the image data of the maximum coding unit, which are encoded based on the at least one coding depth determined by the block 83 for determining the coding depth and encoding mode, together with information about the encoding mode according to the coding depth. The image data encoded by the encoder 81 may be converted into a bitstream format by statistical (entropy) coding and then inserted into a bitstream for transmission.

Alternatively, the transmitter 88 may encode the compensation value determined by the block 87 for determining the compensation value and the group of pixels and insert it into the bitstream for transmission. Alternatively, the transmitter 88 may receive additional information about the process of determining the group of pixels from the block 87 and encode and insert this additional information into the bitstream.

The encoded image data may be obtained by encoding the residual data of the image.

The information about the encoding mode according to the coding depth may include information about the coding depth, the partition type of the prediction unit, the prediction mode and the size of the transformation unit.

The information about the coding depth may be defined by using split information according to depths, which indicates whether encoding is performed on coding units of a lower depth instead of the current depth. If the current depth of the current coding unit is the coding depth, the image data in the current coding unit are encoded and output, and thus the split information may be defined so as not to split the current coding unit to a lower depth. Alternatively, if the current depth of the current coding unit is not the coding depth, encoding is performed on coding units of the lower depth, and thus the split information may be defined so as to split the current coding unit to obtain coding units of the lower depth.

If the current depth is not the coding depth, encoding is performed on a coding unit that is split into coding units of the lower depth. Since at least one coding unit of the lower depth exists within one coding unit of the current depth, encoding is repeatedly performed on each coding unit of the lower depth, and thus encoding may be recursively performed for coding units having the same depth.

Since coding units having a tree structure are determined for one maximum coding unit, and information about at least one encoding mode is determined for each coding unit of a coding depth, information about at least one encoding mode may be determined for one maximum coding unit. In addition, the coding depth of the image data of the maximum coding unit may differ according to locations, since the image data are hierarchically split according to depths, and thus information about the coding depth and encoding mode may be set for the image data.

Accordingly, the transmitter 88 may assign encoding information about the corresponding coding depth and encoding mode to at least one of the coding unit, the prediction unit and the minimum unit included in the maximum coding unit.

The minimum unit in accordance with an illustrative embodiment is a rectangular data unit obtained by splitting by 4 the minimum coding unit constituting the lowermost depth. Alternatively, the minimum unit may be a maximum rectangular data unit that can be included in all of the coding units, prediction units, partition units and transformation units included in the maximum coding unit.

For example, the encoding information output by the transmitter 88 may be classified into encoding information according to coding units and encoding information according to prediction units. The encoding information according to coding units may include information about the prediction mode and the size of the partitions. The encoding information according to prediction units may include information about an estimated direction of the inter mode, a reference image index of the inter mode, a motion vector, a chroma component of the intra mode and an interpolation method of the intra mode. In addition, information about the maximum size of the coding unit, defined in units of pictures, slices or groups of pictures (GOP), and information about the maximum depth may be inserted into a sequence parameter set (SPS) or into a header of the bitstream.

The transmitter 88 may encode and output the filter coefficients used for the adaptive loop filtering. In addition, since the type, number, size, quantization bits, coefficients and filtering direction of each one-dimensional filter, as well as whether filtering is performed and whether the filtering is continuous, may be set for the adaptive loop filtering, the information about the plurality of one-dimensional filters of the loop filtering may be encoded and transmitted.

In the device 80 for encoding video, a deeper coding unit may be a coding unit obtained by dividing by two the height or width of a coding unit of the upper depth, which is one layer above. In other words, when the size of a coding unit of the current depth is 2N×2N, the size of a coding unit of the lower depth is N×N. In addition, a coding unit of the current depth having a size of 2N×2N may include at most 4 coding units of the lower depth.

Accordingly, the device 80 for encoding video may form coding units having a tree structure by determining coding units having an optimum shape and an optimum size for each maximum coding unit, based on the size of the maximum coding unit and the maximum depth determined in view of the characteristics of the current image. Also, since encoding may be performed on each maximum coding unit by using any of various prediction modes and transformations, an optimum encoding mode may be determined in view of the characteristics of coding units of various image sizes.

Thus, if an image having a high resolution or a large amount of data is encoded in macroblocks of the prior art, the number of macroblocks per image increases excessively. Accordingly, the number of pieces of compressed information formed for each macroblock increases, so that it is difficult to transmit the compressed information and the data compression efficiency decreases. However, by using the device 80 for encoding video in accordance with an illustrative embodiment, the image compression efficiency may be increased, since the coding unit is adjusted in view of the characteristics of the image and the maximum size of the coding unit is increased in view of the size of the image.

In addition, the number of bits of additional information may be reduced, because the information about the compensation value for compensating pixel values between the restored image and the original image, which the decoder requires to improve the quality of the restored image, is encoded and transmitted without information about pixel locations.

Fig. 9 is a block diagram of a device 90 for decoding video for compensating pixel values after performing loop filtering based on coding units having a tree structure, in accordance with an illustrative embodiment.

The device 90 for decoding video includes an extractor 91, a generator 94 of the restored image, a block 97 for determining the group of pixels and a block 98 for compensating the restored pixels. The extractor 91 includes a receiver 92 and a block 93 for extracting the image data, the information about the encoding mode, the information about the loop filter coefficients and the information about the compensation value (hereinafter called the information extractor 93). The generator 94 of the restored image includes a decoder 95 and a block 96 for performing loop filtering.

Definitions of terms such as coding unit, depth, prediction unit, transformation unit and the various encoding modes for various processes, used to describe the device 90 for decoding video, are identical to those described with reference to the device 80 for encoding video of Fig. 8.

The extractor 91 receives and parses a bitstream of an encoded image, and extracts the encoded image data and the compensation value from the bitstream. The receiver 92 of the extractor 91 receives and parses the bitstream of the encoded image. The information extractor 93 extracts the image data in units of maximum coding units from the parsed bitstream and outputs the extracted image data to the decoder 95. The information extractor 93 may extract information about the maximum size of the coding unit of the current picture from a header of the current picture.

In addition, the information extractor 93 extracts information about a coded depth and an encoding mode for the coding units having a tree structure according to each maximum coding unit from the parsed bitstream. The extracted information about the coded depth and the encoding mode is output to the decoder 95. In other words, the image data in the bitstream is split into the maximum coding units, so that the decoder 95 decodes the image data for each maximum coding unit.

The information about the coded depth and the encoding mode according to each maximum coding unit may be set for information about at least one coding unit corresponding to the coded depth, and the information about the encoding mode may include information about a partition type of a corresponding coding unit corresponding to the coded depth, about a prediction mode, and about a size of a transformation unit.

The information about the coded depth and the encoding mode according to each maximum coding unit extracted by the information extractor 93 is information about a coded depth and an encoding mode determined to generate a minimum encoding error when an encoder, such as the video encoding apparatus 80, repeatedly performs encoding for each deeper coding unit according to depths for each maximum coding unit. Accordingly, the video decoding apparatus 90 may restore an image by decoding the image data according to the coded depth and the encoding mode that generate the minimum encoding error.

Since the encoding information about the coded depth and the encoding mode may be assigned to a predetermined data unit among a corresponding coding unit, a prediction unit and a minimum unit, the information extractor 93 may extract the information about the coded depth and the encoding mode according to the predetermined data units. The predetermined data units to which the same information about the coded depth and the encoding mode is assigned may be inferred to be the data units included in the same maximum coding unit.

The decoder 95 restores the current picture by decoding the image data in each maximum coding unit based on the information about the coded depth and the encoding mode according to the maximum coding units. In other words, the decoder 95 may decode the encoded image data based on the extracted information about the partition type, the prediction mode and the transformation unit for each coding unit among the coding units having a tree structure included in each maximum coding unit. The decoding process may include prediction, including intra prediction and motion compensation, and inverse transformation. The inverse transformation may be performed according to a method of inverse orthogonal transformation or inverse integer transformation.

In addition, the decoder 95 may perform inverse transformation according to each transformation unit in a coding unit by reading the transformation units having a tree structure, based on the information about the size of the transformation unit of the coding units according to coded depths, so as to perform the inverse transformation according to maximum coding units.

The decoder 95 may determine at least one coded depth of a current maximum coding unit by using split information according to depths. If the split information indicates that image data is no longer split at the current depth, the current depth is a coded depth. Accordingly, the decoder 95 may decode the encoded data of at least one coding unit corresponding to each coded depth in the current maximum coding unit by using the information about the partition type of the prediction unit, the prediction mode and the size of the transformation unit for each coding unit corresponding to the coded depth, and output the image data of the current maximum coding unit.

In other words, data units containing encoding information that includes the same split information may be gathered by observing the encoding information assigned to a predetermined data unit among a coding unit, a prediction unit and a minimum unit, and the gathered data units may be considered to be one data unit to be decoded by the decoder 95 in the same encoding mode.

When information about a filter coefficient for adaptive loop filtering is inserted into the bitstream, the information extractor 93 may extract the information about the filter coefficient from the bitstream. The loop-filtering performing unit 96 may receive the information about the filter coefficient extracted by the information extractor 93, and generate a restored image by performing loop filtering on the image data decoded by the decoder 95.

The same technical elements of the restored-image generator 24 may be applied to the loop-filtering performing unit 96. Accordingly, the loop-filtering performing unit 96 may selectively perform deblocking filtering and adaptive loop filtering on the decoded image data. The adaptive loop filtering may be performed by using a plurality of continuous one-dimensional filters.

The restored-image generator 94 may derive the filter coefficient of each one-dimensional filter by using difference information of a filter coefficient extracted by the information extractor 93. For example, a current filter coefficient of each one-dimensional filter may be derived by adding a difference between the current filter coefficient and a previous filter coefficient to the previous filter coefficient. The continuous one-dimensional filtering may be performed on deblocked data by using the derived filter coefficient of each one-dimensional filter. The deblocking is performed to reduce a blocking effect of decoded data, and the loop filtering minimizes an error between the restored image and the original image.
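The differential derivation of a current filter coefficient described above (current = previous + transmitted difference) can be sketched as follows; the function name is illustrative and not part of the apparatus:

```python
def reconstruct_filter_coefficients(previous, diffs):
    """Recover current 1-D loop-filter coefficients from the previous
    coefficients and the transmitted differences (current = previous + diff)."""
    return [p + d for p, d in zip(previous, diffs)]
```

Only the differences need to be transmitted for each filter, which is why this scheme reduces overhead when successive filters have similar coefficients.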

The information extractor 93 extracts encoded image data and information related to a compensation value from the bitstream. The information related to the compensation value may include information about the compensation value. Alternatively, if the information related to the compensation value includes information about a method of determining a group of pixels to be compensated for by using the compensation value, the information extractor 93 may extract from the bitstream the compensation value and the information about the method of determining the group of pixels to be compensated for. The information extractor 93 may extract the compensation value or the information related to the compensation value according to at least one data unit among a sequence, a slice, a frame and a coding unit of the input video.

The pixel-group determining unit 97 determines a group of pixels including a restored pixel to be compensated for by using the compensation value, among restored pixels of a predetermined group in the restored image, by receiving the restored image generated by the restored-image generator 94 and the compensation value extracted by the information extractor 93. The restored-pixel compensating unit 98 compensates for the pixel value of the restored pixel by using the compensation value, and outputs a restored image having the compensated pixel value, by receiving the compensation value extracted by the information extractor 93 and information about the group of pixels determined by the pixel-group determining unit 97.

When the information extractor 93 extracts the information about the method of determining the group of pixels to be compensated for, the pixel-group determining unit 97 may selectively determine, based on that method, a group of pixels having pixel values to be compensated for. For example, the pixel-group determining unit 97 may determine whether to classify the restored pixels according to extreme-value and/or edge-value levels, bands of pixel values, or lines, and determine the group of pixels having pixel values to be compensated for based on the selected method. In this case, the restored-pixel compensating unit 98 may compensate for the pixel values of the restored pixels in the pixel group by using a compensation value for the pixel group according to the extreme-value and/or edge-value levels, the bands of pixel values, or the lines.
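As a rough illustration of band-based compensation, each restored pixel is classified into a band formed by splitting the full pixel-value range, and the band's transmitted compensation value is added. The band count, bit depth and function names below are assumptions for the sketch, not values fixed by the apparatus:

```python
def band_index(pixel, num_bands=32, bit_depth=8):
    # Band of a pixel when the full range [0, 2**bit_depth) is split
    # into num_bands equal bands.
    band_width = (1 << bit_depth) // num_bands
    return pixel // band_width

def compensate_bands(pixels, band_offsets, num_bands=32, bit_depth=8):
    # Add the transmitted compensation value of each pixel's band;
    # bands without a transmitted value are left unchanged (offset 0).
    return [p + band_offsets.get(band_index(p, num_bands, bit_depth), 0)
            for p in pixels]
```

For example, with 32 bands over an 8-bit range each band is 8 values wide, so pixels 0, 9 and 255 fall into bands 0, 1 and 31 respectively and receive the offsets transmitted for those bands.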

The video decoding apparatus 90 may obtain information about at least one coding unit that generates the minimum encoding error when encoding is recursively performed for each maximum coding unit, and may use the information to decode the current picture. In other words, the coding units having a tree structure, determined to be the optimum coding units in each maximum coding unit, can be decoded. Also, the maximum size of a coding unit is determined considering the resolution and the amount of image data.

Accordingly, even if image data has a high resolution and a large amount of data, the image data may be efficiently decoded and restored by using a size of a coding unit and an encoding mode which are adaptively determined according to characteristics of the image data, by using information about an optimum encoding mode received from an encoder.

The video encoding apparatus 80 and the video decoding apparatus 90 may compensate for a systematic error formed between a restored image and the original image when an encoded image is decoded and restored.

Video encoding and decoding based on coding units having a tree structure, according to exemplary embodiments, will now be described.

Fig. 10 is a diagram for describing a concept of coding units according to an exemplary embodiment.

A size of a coding unit may be expressed as width×height, and may be 64×64, 32×32, 16×16 or 8×8. A coding unit of 64×64 may be split into partitions of 64×64, 64×32, 32×64 or 32×32; a coding unit of 32×32 may be split into partitions of 32×32, 32×16, 16×32 or 16×16; a coding unit of 16×16 may be split into partitions of 16×16, 16×8, 8×16 or 8×8; and a coding unit of 8×8 may be split into partitions of 8×8, 8×4, 4×8 or 4×4.
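The partition sizes listed above all follow one pattern: an N×N coding unit may be split into N×N, N×(N/2), (N/2)×N or (N/2)×(N/2) partitions. A minimal sketch of this rule (the helper name is assumed):

```python
def partition_sizes(n):
    # Possible partitions of an n x n coding unit: the unit itself,
    # two horizontal halves, two vertical halves, or four quarters.
    h = n // 2
    return [(n, n), (n, h), (h, n), (h, h)]
```

Applying it to a 64×64 coding unit reproduces the first row of sizes listed above.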

In video data 310, a resolution is 1920×1080, a maximum size of a coding unit is 64, and a maximum depth is 2. In video data 320, a resolution is 1920×1080, a maximum size of a coding unit is 64, and a maximum depth is 3. In video data 330, a resolution is 352×288, a maximum size of a coding unit is 16, and a maximum depth is 1. The maximum depth shown in Fig. 10 denotes a total number of splits from a maximum coding unit to a minimum decoding unit.

If a resolution is high or an amount of data is large, a maximum size of a coding unit may be large so as to not only increase encoding efficiency but also accurately reflect characteristics of an image. Accordingly, the maximum size of the coding unit of the video data 310 and 320 having a higher resolution than the video data 330 may be 64.

Since the maximum depth of the video data 310 is 2, coding units 315 of the video data 310 may include a maximum coding unit having a long-axis size of 64, and coding units having long-axis sizes of 32 and 16, since depths are deepened by two layers by splitting the maximum coding unit twice. Meanwhile, since the maximum depth of the video data 330 is 1, coding units 335 of the video data 330 may include a maximum coding unit having a long-axis size of 16, and coding units having a long-axis size of 8, since depths are deepened by one layer by splitting the maximum coding unit once.

Since the maximum depth of the video data 320 is 3, coding units 325 of the video data 320 may include a maximum coding unit having a long-axis size of 64, and coding units having long-axis sizes of 32, 16 and 8, since depths are deepened by three layers by splitting the maximum coding unit three times. As a depth deepens, detailed information may be precisely expressed.
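The long-axis sizes per depth in the examples above follow from halving the maximum coding unit once per split. A sketch under that assumption (function name illustrative):

```python
def sizes_by_depth(max_size, max_depth):
    # Long-axis coding-unit sizes from depth 0 down to max_depth,
    # halving once per split.
    return [max_size >> d for d in range(max_depth + 1)]
```

This reproduces the three examples: maximum size 64 with depth 2 gives 64, 32, 16; maximum size 16 with depth 1 gives 16, 8; maximum size 64 with depth 3 gives 64, 32, 16, 8.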

Fig. 11 is a block diagram of an image encoder 400 based on coding units, according to an exemplary embodiment.

The image encoder 400 performs operations of the encoder 81 of the video encoding apparatus 80 to encode image data. In other words, an intra predictor 410 performs intra prediction on coding units in an intra mode of a current frame 405, and a motion estimator 420 and a motion compensator 425 perform inter estimation and motion compensation on coding units in an inter mode of the current frame 405 by using the current frame 405 and a reference frame 495.

Data output from the intra predictor 410, the motion estimator 420 and the motion compensator 425 is output as a quantized transformation coefficient through a transformer 430 and a quantizer 440. The quantized transformation coefficient is restored as data in a spatial domain through an inverse quantizer 460 and an inverse transformer 470, and the restored data in the spatial domain is output as the reference frame 495 after being post-processed through a deblocking unit 480 and a loop filtering unit 490. The quantized transformation coefficient may be output as a bitstream 455 through an entropy encoder 450.

In order for the image encoder 400 to be applied in the video encoding apparatus 80, all elements of the image encoder 400, i.e., the intra predictor 410, the motion estimator 420, the motion compensator 425, the transformer 430, the quantizer 440, the entropy encoder 450, the inverse quantizer 460, the inverse transformer 470, the deblocking unit 480 and the loop filtering unit 490, perform operations based on each coding unit among coding units having a tree structure while considering the maximum depth of each maximum coding unit.

Specifically, the intra predictor 410, the motion estimator 420 and the motion compensator 425 determine partitions and a prediction mode of each coding unit among the coding units having a tree structure while considering the maximum size and the maximum depth of a current maximum coding unit, and the transformer 430 determines the size of the transformation unit in each coding unit among the coding units having a tree structure.

Fig. 12 is a block diagram of an image decoder 500 based on coding units, according to an exemplary embodiment.

A parser 510 parses encoded image data to be decoded and information about encoding required for decoding from a bitstream 505. The encoded image data is output as inverse-quantized data through an entropy decoder 520 and an inverse quantizer 530, and the inverse-quantized data is restored to image data in a spatial domain through an inverse transformer 540.

An intra predictor 550 performs intra prediction on coding units in an intra mode with respect to the image data in the spatial domain, and a motion compensator 560 performs motion compensation on coding units in an inter mode by using a reference frame 585.

The image data in the spatial domain, which passed through the intra predictor 550 and the motion compensator 560, may be output as a restored frame 595 after being post-processed through a deblocking unit 570 and a loop filtering unit 580. Also, the image data that is post-processed through the deblocking unit 570 and the loop filtering unit 580 may be output as the reference frame 585.

In order to decode the image data in the decoder 95 of the video decoding apparatus 90, the image decoder 500 may perform operations that are performed after the parser 510.

In order for the image decoder 500 to be applied in the video decoding apparatus 90, all elements of the image decoder 500, i.e., the parser 510, the entropy decoder 520, the inverse quantizer 530, the inverse transformer 540, the intra predictor 550, the motion compensator 560, the deblocking unit 570 and the loop filtering unit 580, perform operations based on coding units having a tree structure for each maximum coding unit.

Specifically, the intra predictor 550 and the motion compensator 560 perform operations based on partitions and a prediction mode for each of the coding units having a tree structure, and the inverse transformer 540 performs operations based on a size of a transformation unit for each coding unit.

Fig. 13 is a diagram illustrating deeper coding units according to depths, and partitions, according to an exemplary embodiment.

The video encoding apparatus 80 and the video decoding apparatus 90 use hierarchical coding units so as to consider characteristics of an image. A maximum height, a maximum width and a maximum depth of coding units may be adaptively determined according to the characteristics of the image, or may be set by a user. Sizes of deeper coding units according to depths may be determined according to a predetermined maximum size of a coding unit.

In a hierarchical structure 600 of coding units according to an exemplary embodiment, the maximum height and the maximum width of the coding units are each 64, and the maximum depth is 4. Since a depth deepens along a vertical axis of the hierarchical structure 600, a height and a width of a deeper coding unit are each split. Also, a prediction unit and partitions, which are bases for prediction encoding of each deeper coding unit, are shown along a horizontal axis of the hierarchical structure 600.

In other words, a coding unit 610 is the maximum coding unit in the hierarchical structure 600, wherein a depth is 0 and a size, i.e., a height by a width, is 64×64. The depth deepens along the vertical axis, and there exist a coding unit 620 having a size of 32×32 and a depth of 1, a coding unit 630 having a size of 16×16 and a depth of 2, a coding unit 640 having a size of 8×8 and a depth of 3, and a coding unit 650 having a size of 4×4 and a depth of 4. The coding unit 650 having the size of 4×4 and the depth of 4 is a minimum coding unit.

The prediction unit and the partitions of a coding unit are arranged along the horizontal axis according to each depth. In other words, if the coding unit 610 having the size of 64×64 and the depth of 0 is a prediction unit, the prediction unit may be split into partitions included in the coding unit 610, i.e., a partition 610 having a size of 64×64, partitions 612 having a size of 64×32, partitions 614 having a size of 32×64, or partitions 616 having a size of 32×32.

Similarly, a prediction unit of the coding unit 620 having the size of 32×32 and the depth of 1 may be split into partitions included in the coding unit 620, i.e., a partition 620 having a size of 32×32, partitions 622 having a size of 32×16, partitions 624 having a size of 16×32, and partitions 626 having a size of 16×16.

Similarly, a prediction unit of the coding unit 630 having the size of 16×16 and the depth of 2 may be split into partitions included in the coding unit 630, i.e., a partition having a size of 16×16 included in the coding unit 630, partitions 632 having a size of 16×8, partitions 634 having a size of 8×16, and partitions 636 having a size of 8×8.

Similarly, a prediction unit of the coding unit 640 having the size of 8×8 and the depth of 3 may be split into partitions included in the coding unit 640, i.e., a partition having a size of 8×8 included in the coding unit 640, partitions 642 having a size of 8×4, partitions 644 having a size of 4×8, and partitions 646 having a size of 4×4.

The coding unit 650 having the size of 4×4 and the depth of 4 is the minimum coding unit and a coding unit of the lowermost depth. A prediction unit of the coding unit 650 may be assigned only to a partition having a size of 4×4. Alternatively, partitions 652 having a size of 4×2, partitions 654 having a size of 2×4, or partitions 656 having a size of 2×2 may be used.

In order to determine at least one coded depth of the coding units constituting the maximum coding unit 610, the coded-depth and encoding-mode determining unit 83 of the video encoding apparatus 80 performs encoding for coding units corresponding to each depth included in the maximum coding unit 610.

The number of deeper coding units according to depths including data in the same range and the same size increases as the depth deepens. For example, four coding units corresponding to a depth of 2 are required to cover data that is included in one coding unit corresponding to a depth of 1. Accordingly, in order to compare encoding results of the same data according to depths, the coding unit corresponding to the depth of 1 and the four coding units corresponding to the depth of 2 are each encoded.
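Since each split quarters the area, the number of deeper coding units needed to cover one coding unit grows as a power of four. A small sketch of this count (names assumed for illustration):

```python
def units_needed(depth_from, depth_to):
    # Coding units of depth_to needed to cover one coding unit of
    # depth_from: each split replaces one unit with 4 sub-units.
    return 4 ** (depth_to - depth_from)
```

For the example above, one coding unit of depth 1 is covered by 4 coding units of depth 2, and a maximum coding unit of depth 0 would need 16 of them.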

In order to perform encoding for a current depth among the depths, a least encoding error may be selected for the current depth by performing encoding for each prediction unit in the coding units corresponding to the current depth, along the horizontal axis of the hierarchical structure 600. Alternatively, the minimum encoding error may be searched for by comparing the least encoding errors according to depths, by performing encoding for each depth as the depth deepens along the vertical axis of the hierarchical structure 600. A depth and a partition having the minimum encoding error in the coding unit 610 may be selected as the coded depth and a partition type of the coding unit 610.
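The comparison across depths can be sketched as follows; the error values and function name are illustrative stand-ins for the encoder's actual cost measure:

```python
def select_coded_depth(errors_by_depth):
    # errors_by_depth maps depth -> least encoding error found at that
    # depth; the coded depth is the one with the smallest error.
    return min(errors_by_depth, key=errors_by_depth.get)
```

For instance, with hypothetical errors {0: 120.0, 1: 85.5, 2: 90.2}, depth 1 would be chosen as the coded depth.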

Fig. 14 is a diagram for describing a relationship between a coding unit 710 and transformation units 720, according to an exemplary embodiment.

The video encoding apparatus 80 or the video decoding apparatus 90 encodes or decodes an image according to coding units having sizes smaller than or equal to a maximum coding unit for each maximum coding unit. Sizes of transformation units for transformation during encoding may be selected based on data units that are not larger than the corresponding coding unit.

For example, in the video encoding apparatus 80 or the video decoding apparatus 90, if a size of the coding unit 710 is 64×64, transformation may be performed by using the transformation units 720 having a size of 32×32.

Also, data of the coding unit 710 having the size of 64×64 may be encoded by performing the transformation on each of the transformation units having sizes of 32×32, 16×16, 8×8 and 4×4, which are smaller than 64×64, and then a transformation unit having the least encoding error may be selected.
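Selecting the transformation unit with the least encoding error among the candidate sizes can be sketched as follows; the error function and its values are hypothetical stand-ins for the encoder's actual cost measure:

```python
def best_transform_size(candidate_sizes, error_for_size):
    # Try each candidate transformation-unit size and keep the one
    # whose (externally supplied) encoding error is smallest.
    return min(candidate_sizes, key=error_for_size)
```

For example, with assumed errors of 40, 25, 31 and 52 for sizes 32, 16, 8 and 4, the 16×16 transformation unit would be selected.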

Fig. 15 is a diagram for describing encoding information of coding units corresponding to a coded depth, according to an exemplary embodiment.

The transmitter 88 of the video encoding apparatus 80 may encode and transmit information 800 about a partition type, information 810 about a prediction mode, and information 820 about a size of a transformation unit for each coding unit corresponding to a coded depth, as information about an encoding mode.

The information 800 indicates information about a shape of a partition obtained by splitting a prediction unit of a current coding unit, wherein the partition is a data unit for prediction-encoding the current coding unit. For example, a current coding unit CU_0 having a size of 2N×2N may be split into any one of a partition 802 having a size of 2N×2N, a partition 804 having a size of 2N×N, a partition 806 having a size of N×2N and a partition 808 having a size of N×N. Here, the information 800 about the partition type is set to indicate one of the partition 804 having the size of 2N×N, the partition 806 having the size of N×2N, and the partition 808 having the size of N×N.

The information 810 indicates a prediction mode of each partition. For example, the information 810 may indicate a mode of prediction encoding performed on a partition indicated by the information 800, i.e., an intra mode 812, an inter mode 814, or a skip mode 816.

The information 820 indicates a transformation unit to be based on when transformation is performed on a current coding unit. For example, the transformation unit may be a first intra transformation unit 822, a second intra transformation unit 824, a first inter transformation unit 826, or a second intra transformation unit 828.

The information extractor 93 of the video decoding apparatus 90 may extract and use the information 800, 810 and 820 for decoding, according to each deeper coding unit.

Fig. 16 is a diagram of deeper coding units according to depths, according to an exemplary embodiment.

Split information may be used to indicate a change of a depth. The split information indicates whether a coding unit of a current depth is split into coding units of a lower depth.

A prediction unit 910 for prediction-encoding a coding unit 900 having a depth of 0 and a size of 2N_0×2N_0 may include partitions of a partition type 912 having a size of 2N_0×2N_0, a partition type 914 having a size of 2N_0×N_0, a partition type 916 having a size of N_0×2N_0, and a partition type 918 having a size of N_0×N_0. Fig. 16 only illustrates the partition types 912 through 918 which are obtained by symmetrically splitting the prediction unit 910, but a partition type is not limited thereto, and the partitions of the prediction unit 910 may include asymmetrical partitions, partitions having a predetermined shape, and partitions having a geometrical shape.

Prediction encoding is repeatedly performed on one partition having a size of 2N_0×2N_0, two partitions having a size of 2N_0×N_0, two partitions having a size of N_0×2N_0, and four partitions having a size of N_0×N_0, according to each partition type. Prediction encoding in an intra mode and an inter mode may be performed on the partitions having the sizes of 2N_0×2N_0, N_0×2N_0, 2N_0×N_0 and N_0×N_0. Prediction encoding in a skip mode is performed only on the partition having the size of 2N_0×2N_0.

Encoding errors of the prediction encoding in the partition types 912 through 918 are compared, and the least encoding error is determined among the partition types. If an encoding error is smallest in one of the partition types 912 through 916, the prediction unit 910 may not be split into a lower depth.

If the encoding error is the smallest in the partition type 918, a depth is changed from 0 to 1 to split the partition type 918 in operation 920, and encoding is repeatedly performed on coding units 930 having a depth of 2 and a size of N_0×N_0 to search for a minimum encoding error.

A prediction unit 940 for prediction-encoding the coding unit 930 having a depth of 1 and a size of 2N_1×2N_1 (=N_0×N_0) may include partitions of a partition type 942 having a size of 2N_1×2N_1, a partition type 944 having a size of 2N_1×N_1, a partition type 946 having a size of N_1×2N_1, and a partition type 948 having a size of N_1×N_1.

If an encoding error is the smallest in the partition type 948, a depth is changed from 1 to 2 to split the partition type 948 in operation 950, and encoding is repeatedly performed on coding units 960 having a depth of 2 and a size of N_2×N_2 to search for a minimum encoding error.

When a maximum depth is d, a split operation according to each depth may be performed up to when a depth becomes d-1, and split information may be encoded for depths of 0 through d-2. In other words, when encoding is performed up to when the depth is d-1 after a coding unit corresponding to a depth of d-2 is split in operation 970, a prediction unit 990 for prediction-encoding a coding unit 980 having a depth of d-1 and a size of 2N_(d-1)×2N_(d-1) may include partitions of a partition type 992 having a size of 2N_(d-1)×2N_(d-1), a partition type 994 having a size of 2N_(d-1)×N_(d-1), a partition type 996 having a size of N_(d-1)×2N_(d-1), and a partition type 998 having a size of N_(d-1)×N_(d-1).

Prediction encoding may be repeatedly performed on one partition having a size of 2N_(d-1)×2N_(d-1), two partitions having a size of 2N_(d-1)×N_(d-1), two partitions having a size of N_(d-1)×2N_(d-1), and four partitions having a size of N_(d-1)×N_(d-1) among the partition types 992 through 998 to search for a partition type having a minimum encoding error.

Even when the partition type 998 has the minimum encoding error, since a maximum depth is d, a coding unit CU_(d-1) having a depth of d-1 is no longer split into a lower depth, and a coded depth for the coding units constituting a current maximum coding unit 900 is determined to be d-1, and a partition type of the current maximum coding unit 900 may be determined to be N_(d-1)×N_(d-1). Also, since the maximum depth is d and the minimum coding unit 980 having the lowermost depth of d-1 is no longer split into a lower depth, split information for the minimum coding unit 980 is not set.

A data unit 999 may be a "minimum unit" for the current maximum coding unit. A minimum unit according to an exemplary embodiment may be a rectangular data unit obtained by splitting the minimum coding unit 980 by 4. By performing the encoding repeatedly, the video encoding apparatus 80 may select a depth having the least encoding error by comparing encoding errors according to depths of the coding unit 900 to determine a coded depth, and set a corresponding partition type and a prediction mode as an encoding mode of the coded depth.

As such, the minimum encoding errors according to depths are compared in all of the depths of 1 through d, and a depth having the least encoding error may be determined as a coded depth. The coded depth, the partition type of the prediction unit, and the prediction mode may be encoded and transmitted as information about an encoding mode. Also, since a coding unit is split from a depth of 0 to the coded depth, only split information of the coded depth is set to 0, and split information of depths excluding the coded depth is set to 1.
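The split-information rule above (a flag of 1 at every depth above the coded depth, and 0 at the coded depth itself) can be sketched as follows; the function name is an assumption for illustration:

```python
def split_flags(coded_depth):
    # Split information per depth from 0 down to the coded depth:
    # 1 means "split further", 0 marks the coded depth itself.
    return [1] * coded_depth + [0]
```

A decoder following these flags descends while it reads 1 and stops at the depth where it reads 0, which is exactly how the coded depth is recovered.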

The information extractor 93 of the video decoding apparatus 90 may extract and use the information about the coded depth and the prediction unit of the coding unit 900 to decode the partition 912. The video decoding apparatus 90 may determine a depth, in which split information is 0, as a coded depth by using split information according to depths, and use information about an encoding mode of the corresponding depth for decoding.

Fig. 17 through 19 are diagrams for describing a relationship between coding units 1010, prediction units 1060 and transformation units 1070, according to an exemplary embodiment.

The coding units 1010 are the coding units having a tree structure, corresponding to coded depths determined by the video encoding apparatus 80, in a maximum coding unit. The prediction units 1060 are partitions of prediction units of each of the coding units 1010, and the transformation units 1070 are transformation units of each of the coding units 1010.

When the depth of the maximum coding unit is 0 in the coding units 1010, the depths of the coding units 1012 and 1054 are 1, the depths of the coding units 1014, 1016, 1018, 1028, 1050, and 1052 are 2, the depths of the coding units 1020, 1022, 1024, 1026, 1030, 1032, and 1048 are 3, and the depths of the coding units 1040, 1042, 1044, and 1046 are 4.

In the prediction units 1060, some coding units 1014, 1016, 1022, 1032, 1048, 1050, 1052, and 1054 are obtained by splitting coding units of the coding units 1010. In other words, the partition types in the coding units 1014, 1022, 1050, and 1054 have a size of 2N×N, the partition types in the coding units 1016, 1048, and 1052 have a size of N×2N, and the partition type of the coding unit 1032 has a size of N×N. The prediction units and partitions of the coding units 1010 are smaller than or equal to each coding unit.

Transformation or inverse transformation is performed on the image data of the coding unit 1052 in the transformation units 1070 in a data unit that is smaller than the coding unit 1052. Also, the coding units 1014, 1016, 1022, 1032, 1048, 1050, and 1052 in the transformation units 1070 differ from those in the prediction units 1060 in terms of sizes and shapes. In other words, the video encoding and decoding devices 80 and 90 may perform intra prediction, motion estimation, motion compensation, transformation, and inverse transformation individually on a data unit of one and the same coding unit.

Accordingly, encoding is recursively performed on each of the coding units having a hierarchical structure in each region of a maximum coding unit to determine an optimal coding unit, and thus coding units having a recursive tree structure may be obtained. The encoding information may include split information about a coding unit, information about a partition type, information about a prediction mode, and information about the size of a transformation unit. Table 2 shows the encoding information that may be set by the video encoding and decoding devices 80 and 90.

Table 2

Split information 0 (encoding is performed on a coding unit having a size of 2N×2N and a current depth of d):
- Prediction mode: intra, inter, skip (2N×2N only)
- Partition type:
  - symmetrical partition types: 2N×2N, 2N×N, N×2N, N×N
  - asymmetrical partition types: 2N×nU, 2N×nD, nL×2N, nR×2N
- Size of transformation unit:
  - split information 0 of transformation unit: 2N×2N
  - split information 1 of transformation unit: N×N (symmetrical partition type), N/2×N/2 (asymmetrical partition type)

Split information 1: repeatedly encode the coding units having the lower depth of d+1

The transmitter 88 of the video encoding device 80 may output the encoding information about the coding units having a tree structure, and the extractor 93 of the video decoding device 90 may extract the encoding information about the coding units having a tree structure from a received bitstream.

The split information indicates whether the current coding unit is split into coding units of a lower depth. If the split information of the current depth d is 0, the depth at which the current coding unit is no longer split into a lower depth is the coded depth, and thus the information about the partition type, the prediction mode, and the size of the transformation unit may be defined for the coded depth. If the current coding unit is further split according to the split information, encoding is independently performed on the four split coding units of the lower depth.
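The recursion described above can be sketched as follows. This is a hedged illustration, not the patent's code; the nested-list tree encoding (0 for "no further split", a list of four subtrees otherwise) is an assumption made for the example.

```python
# Hypothetical sketch of the split-information recursion: split info 1
# means the coding unit splits into four lower-depth units; split info 0
# means the current depth is the coded depth.

def coded_depths(split_tree, depth=0):
    """Yield the coded depth of every leaf coding unit.

    split_tree is 0 (no further split) or a list of four subtrees.
    """
    if split_tree == 0:
        yield depth
    else:
        for sub in split_tree:
            yield from coded_depths(sub, depth + 1)

tree = [0, 0, [0, 0, 0, 0], 0]  # one quadrant split one level further
print(list(coded_depths(tree)))  # [1, 1, 2, 2, 2, 2, 1]
```

Each leaf corresponds to a coded depth for which partition type, prediction mode, and transformation unit size are then defined.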

The prediction mode may be one of an intra mode, an inter mode, and a skip mode. The intra mode and the inter mode may be defined for all partition types, and the skip mode may be defined only for the partition type having a size of 2N×2N.

The information about the partition type may indicate symmetrical partition types having sizes of 2N×2N, 2N×N, N×2N, and N×N, which are obtained by symmetrically splitting the height or width of a prediction unit, and asymmetrical partition types having sizes of 2N×nU, 2N×nD, nL×2N, and nR×2N, which are obtained by asymmetrically splitting the height or width of a prediction unit. The asymmetrical partition types having the sizes of 2N×nU and 2N×nD may be respectively obtained by splitting the height of the prediction unit in ratios of 1:3 and 3:1, and the asymmetrical partition types having the sizes of nL×2N and nR×2N may be respectively obtained by splitting the width of the prediction unit in ratios of 1:3 and 3:1.
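The partition geometries listed above can be sketched as follows. This is an illustrative sketch; the function name and the tuple representation of partitions are assumptions, not part of the patent.

```python
# Hypothetical sketch: for a 2N×2N prediction unit, symmetrical types
# halve the height or width, and asymmetrical types split the height or
# width in a 1:3 or 3:1 ratio, as described above.

def partition_sizes(two_n, ptype):
    n = two_n // 2
    q = two_n // 4  # quarter side, for the 1:3 / 3:1 splits
    table = {
        "2Nx2N": [(two_n, two_n)],
        "2NxN":  [(two_n, n)] * 2,
        "Nx2N":  [(n, two_n)] * 2,
        "NxN":   [(n, n)] * 4,
        "2NxnU": [(two_n, q), (two_n, 3 * q)],
        "2NxnD": [(two_n, 3 * q), (two_n, q)],
        "nLx2N": [(q, two_n), (3 * q, two_n)],
        "nRx2N": [(3 * q, two_n), (q, two_n)],
    }
    return table[ptype]  # list of (width, height) partitions

print(partition_sizes(64, "2NxnU"))  # [(64, 16), (64, 48)]
```

For a 64×64 prediction unit, 2N×nU therefore yields an upper 64×16 partition and a lower 64×48 partition, i.e. a 1:3 height split.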

The size of the transformation unit may be set to two types in the intra mode and two types in the inter mode. In other words, if the split information of the transformation unit is 0, the size of the transformation unit may be 2N×2N, which is the size of the current coding unit. If the split information of the transformation unit is 1, the transformation units may be obtained by splitting the current coding unit. Also, if the partition type of the current coding unit having a size of 2N×2N is a symmetrical partition type, the size of the transformation unit may be N×N, and if the partition type of the current coding unit is an asymmetrical partition type, the size of the transformation unit may be N/2×N/2.
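The rule above can be condensed into a small sketch. This is a hedged illustration under the stated rule only; the function name and boolean parameter are assumptions.

```python
# Hypothetical sketch: with TU split information 0 the transformation unit
# equals the current 2N×2N coding unit; with split information 1 it is
# N×N for a symmetrical partition type and N/2×N/2 for an asymmetrical one.

def transform_unit_size(two_n, tu_split, symmetric):
    if tu_split == 0:
        return (two_n, two_n)              # 2N×2N
    if symmetric:
        return (two_n // 2, two_n // 2)    # N×N
    return (two_n // 4, two_n // 4)        # N/2×N/2

print(transform_unit_size(32, 1, symmetric=False))  # (8, 8)
```

So a 32×32 coding unit with an asymmetrical partition type and TU split information 1 would use 8×8 transformation units.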

The encoding information about coding units having a tree structure may be assigned to at least one of a coding unit corresponding to a coded depth, a prediction unit, and a minimum unit. The coding unit corresponding to the coded depth may include at least one of a prediction unit and a minimum unit containing the same encoding information.

Accordingly, it is determined whether adjacent data units are included in the same coding unit corresponding to the coded depth, by comparing the encoding information of the adjacent data units. Also, a corresponding coding unit of a coded depth is determined by using the encoding information of a data unit, and thus a distribution of coded depths in a maximum coding unit may be determined.

Accordingly, if a current coding unit is predicted based on encoding information of adjacent data units, the encoding information of data units in deeper coding units adjacent to the current coding unit may be directly referred to and used.

Alternatively, if a current coding unit is predicted based on encoding information of adjacent data units, data units adjacent to the current coding unit are searched for by using the encoding information of the data units, and the found adjacent coding units may be referred to for predicting the current coding unit.

Fig. 20 is a diagram for describing a relationship between a coding unit, a prediction unit or partition, and a transformation unit, according to the encoding mode information of Table 2.

A maximum coding unit 1300 includes coding units 1302, 1304, 1306, 1312, 1314, 1316, and 1318 of coded depths. Here, since the coding unit 1318 is a coding unit of a coded depth, its split information may be set to 0. Information about the partition type of the coding unit 1318 having a size of 2N×2N may be set to one of a partition type 1322 having a size of 2N×2N, a partition type 1324 having a size of 2N×N, a partition type 1326 having a size of N×2N, a partition type 1328 having a size of N×N, a partition type 1332 having a size of 2N×nU, a partition type 1334 having a size of 2N×nD, a partition type 1336 having a size of nL×2N, and a partition type 1338 having a size of nR×2N.

The split information (TU size flag) of a transformation unit is a kind of transformation index, and the size of the transformation unit corresponding to the transformation index may change according to the prediction unit type or the partition type of the coding unit.

For example, when the partition type is set to be symmetrical, i.e. the partition type 1322, 1324, 1326, or 1328, a transformation unit 1342 having a size of 2N×2N is set if the TU size flag is 0, and a transformation unit 1344 having a size of N×N is set if the TU size flag is 1.

When the partition type is set to be asymmetrical, i.e. the partition type 1332, 1334, 1336, or 1338, a transformation unit 1352 having a size of 2N×2N is set if the TU size flag is 0, and a transformation unit 1354 having a size of N/2×N/2 is set if the TU size flag is 1.

As shown in Fig. 18, the TU size flag is a flag having a value of 0 or 1, but the TU size flag is not limited to one bit, and a transformation unit may be hierarchically split, having a tree structure, while the TU size flag increases from 0.

In this case, the size of the transformation unit that is actually used may be expressed by using the TU size flag of the transformation unit, according to an illustrative embodiment, together with the maximum size and minimum size of the transformation unit. According to an illustrative embodiment, the video encoding device 80 may encode maximum transformation unit size information, minimum transformation unit size information, and a maximum TU size flag. The result of encoding the maximum transformation unit size information, the minimum transformation unit size information, and the maximum TU size flag may be inserted into an SPS. According to an illustrative embodiment, the video decoding device 90 may decode video by using the maximum transformation unit size information, the minimum transformation unit size information, and the maximum TU size flag.

For example, if the size of the current coding unit is 64×64 and the maximum transformation unit size is 32×32, the size of the transformation unit may be 32×32 when the TU size flag is 0, may be 16×16 when the TU size flag is 1, and may be 8×8 when the TU size flag is 2.
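The example above follows a simple halving rule: each increment of the TU size flag halves the side of the transformation unit, starting from the maximum. This sketch assumes square, power-of-two sizes; the function name is an assumption.

```python
# Hypothetical sketch of the halving rule illustrated above: the
# transformation unit side is the maximum size right-shifted by the flag.

def tu_size(max_tu_size, tu_size_flag):
    return max_tu_size >> tu_size_flag

print(tu_size(32, 0), tu_size(32, 1), tu_size(32, 2))  # 32 16 8
```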

As another example, if the size of the current coding unit is 32×32 and the minimum transformation unit size is 32×32, the size of the transformation unit may be 32×32 when the TU size flag is 0. Here, the TU size flag cannot be set to a value other than 0, since the size of the transformation unit cannot be less than 32×32.

As another example, if the size of the current coding unit is 64×64 and the maximum TU size flag is 1, the TU size flag may be 0 or 1. Here, the TU size flag cannot be set to a value other than 0 or 1.

Thus, if it is defined that the maximum TU size flag is MaxTransformSizeIndex, the minimum transformation unit size is MinTransformSize, and the transformation unit size is RootTuSize when the TU size flag is 0, then a current minimum transformation unit size CurrMinTuSize that can be determined in a current coding unit may be defined by Equation 10.

[Equation 10]

CurrMinTuSize=max(MinTransformSize, RootTuSize/(2^MaxTransformSizeIndex)).

Compared to the current minimum transformation unit size CurrMinTuSize that can be determined in the current coding unit, the transformation unit size RootTuSize when the TU size flag is 0 may denote a maximum transformation unit size that can be selected in the system. In Equation 10, RootTuSize/(2^MaxTransformSizeIndex) denotes the transformation unit size obtained when the transformation unit size RootTuSize, corresponding to a TU size flag of 0, is split the number of times corresponding to the maximum TU size flag, and MinTransformSize denotes a minimum transformation size. Thus, the larger value among RootTuSize/(2^MaxTransformSizeIndex) and MinTransformSize may be the current minimum transformation unit size CurrMinTuSize that can be determined in the current coding unit.
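Equation 10 can be checked with a short sketch. The function name and the sample sizes are assumptions; for power-of-two sizes, dividing by 2^MaxTransformSizeIndex is a right shift.

```python
# Hypothetical sketch of Equation 10:
# CurrMinTuSize = max(MinTransformSize, RootTuSize / 2^MaxTransformSizeIndex)

def curr_min_tu_size(min_transform_size, root_tu_size, max_transform_size_index):
    return max(min_transform_size, root_tu_size >> max_transform_size_index)

print(curr_min_tu_size(4, 32, 2))   # max(4, 32/4)  -> 8
print(curr_min_tu_size(16, 32, 2))  # max(16, 32/4) -> 16
```

In the second call the system minimum (16) dominates, so the TU size flag could not actually reach MaxTransformSizeIndex in that configuration.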

According to an illustrative embodiment, the maximum transformation unit size RootTuSize may vary according to the type of prediction mode.

For example, if a current prediction mode is an inter mode, then RootTuSize may be determined by using Equation 11 below. In Equation 11, MaxTransformSize denotes a maximum transformation unit size, and PUSize denotes a current prediction unit size.

[Equation 11]

RootTuSize = min(MaxTransformSize, PUSize).

Thus, if the current prediction mode is the inter mode, the transformation unit size RootTuSize when the TU size flag is 0 may be the smaller value among the maximum transformation unit size and the current prediction unit size.

If a prediction mode of a current partition unit is an intra mode, RootTuSize may be determined by using Equation 12 below. In Equation 12, PartitionSize denotes the size of the current partition unit.

[Equation 12]

RootTuSize = min(MaxTransformSize, PartitionSize).

Thus, if the current prediction mode is the intra mode, the transformation unit size RootTuSize when the TU size flag is 0 may be the smaller value among the maximum transformation unit size and the size of the current partition unit.
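Equations 11 and 12 can be combined into one sketch. The function name and the mode-string dispatch are assumptions made for the example; the min() caps are exactly those of the two equations.

```python
# Hypothetical sketch of Equations 11 and 12: RootTuSize (the TU size when
# the TU size flag is 0) is the maximum transformation unit size capped,
# depending on prediction mode, by the prediction unit or partition size.

def root_tu_size(max_transform_size, mode, pu_size=None, partition_size=None):
    if mode == "inter":                                  # Equation 11
        return min(max_transform_size, pu_size)
    if mode == "intra":                                  # Equation 12
        return min(max_transform_size, partition_size)
    raise ValueError("unknown prediction mode")

print(root_tu_size(32, "inter", pu_size=16))         # 16
print(root_tu_size(32, "intra", partition_size=64))  # 32
```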

However, the current maximum transformation unit size RootTuSize, which varies according to the type of prediction mode in a partition unit, is just an example, and the present invention is not limited thereto.

Fig. 21 is a flowchart illustrating a method of encoding video for compensating for a pixel value after performing loop filtering based on coding units having a tree structure, according to an illustrative embodiment.

In operation 2110, a current picture is split into at least one maximum coding unit, and a coded depth for outputting a final encoding result according to at least one split region, which is obtained by splitting a region of each maximum coding unit according to depths, is determined by encoding the at least one split region. Also, an encoding mode, which includes information about the coded depth or split information, information about the partition type of the coded depth, the prediction mode, and the size of the transformation unit, is determined according to the deeper coding unit according to depths.

A maximum depth indicating the total number of times a maximum coding unit is split may be predetermined. The maximum coding unit may be hierarchically split, and encoding may be repeatedly performed for each deeper coding unit whenever the depth deepens. The encoding errors of all deeper coding units are measured and compared so as to determine the coded depth that generates the least encoding error for the coding unit.

In operation 2120, the encoded image data is decoded based on the coded depth and the encoding mode, and a restored image is generated by performing loop filtering on the decoded image data. The restored image may be generated by performing adaptive loop filtering, which continuously performs at least one one-dimensional filtering operation on the decoded image data or on deblocked image data.

In operation 2130, a compensation value for the error between each restored pixel of a predetermined group in the restored image and the corresponding original pixel, and a pixel group including the restored pixels to be compensated, are determined. The pixel group including restored pixels whose pixel values are to be compensated may be determined according to extreme value levels and/or edge value levels of the pixel values, bands of pixel values, or lines. The compensation value according to pixel groups may be determined based on the average of the errors.
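The band-based variant of operation 2130 can be sketched as follows. This is a hedged illustration, not the patent's implementation: the band count (4 bands over an 8-bit range) and the function name are hypothetical choices; only the idea of per-band mean errors comes from the text above.

```python
# Hypothetical sketch: group restored pixels into bands of the full
# pixel-value range and take each band's compensation value as the mean
# error (original minus restored) over the pixels falling in that band.

def band_offsets(original, restored, bands=4, max_val=256):
    width = max_val // bands
    sums, counts = [0] * bands, [0] * bands
    for o, r in zip(original, restored):
        b = r // width                  # band of the restored pixel
        sums[b] += o - r                # error to be compensated
        counts[b] += 1
    return [s / c if c else 0 for s, c in zip(sums, counts)]

orig = [10, 12, 200, 202]   # hypothetical original pixels
rest = [8, 10, 203, 205]    # hypothetical restored pixels
print(band_offsets(orig, rest))  # [2.0, 0, 0, -3.0]
```

Dark pixels here were restored 2 levels too low and bright pixels 3 levels too high, so the encoder would transmit +2 and -3 for those two bands.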

In operation 2140, the image data constituting the final encoding result according to the at least one split region, information about the coded depth and the encoding mode, information about a loop filtering coefficient, and information related to the compensation value are output. The information about the encoding mode may include information about the coded depth or split information, information about the partition type of the coded depth, the prediction mode, and the size of the transformation unit.

The information related to the compensation value according to pixel groups may be encoded together with the information about the encoding mode, the video data, and the information about the loop filtering coefficient, which are encoded according to the method based on coding units having a tree structure, and may be transmitted to a decoder.

Fig. 22 is a flowchart illustrating a method of decoding video for compensating for a pixel value after performing loop filtering based on coding units having a tree structure, according to an illustrative embodiment.

In operation 2210, a bitstream of video encoded according to the method of Fig. 21, based on coding units having a tree structure, is received and parsed, and image data of a current picture assigned to a maximum coding unit, information about a coded depth and an encoding mode according to maximum coding units, information about a loop filtering coefficient, and information related to a compensation value are extracted from the parsed bitstream.

The coded depth according to maximum coding units is selected as a depth having the least encoding error according to maximum coding units while encoding the current picture. Encoding is performed according to maximum coding units by encoding the image data based on at least one data unit obtained by hierarchically splitting the maximum coding unit according to depths. Accordingly, each piece of image data is decoded after determining the coded depth according to coding units, thereby improving the encoding and decoding efficiency of the image.

In operation 2220, the image data is decoded in each maximum coding unit based on the information about the coded depth and the encoding mode, and a restored image is generated by performing loop filtering on the decoded image data. The restored image may be generated by performing adaptive loop filtering, wherein at least one one-dimensional filtering operation is continuously performed on the decoded image data or on deblocked image data.

In operation 2230, a pixel group including restored pixels to be compensated is determined among the restored pixels of the restored image by using the compensation value. The pixel group including restored pixels whose pixel values are to be compensated may be determined according to extreme value levels and/or edge value levels of the pixel values of the restored pixels, bands of pixel values, or lines, by a method of determining a pixel group based on the extracted information related to the compensation value.

In operation 2240, a restored image with a compensated error may be output by compensating for the errors between the restored pixels of the determined pixel group and the corresponding original pixels by using the compensation value.

According to the video encoding method and the video decoding method, the quality of a restored image may be improved by compensating for systematic errors of the restored image, and the bit rate of the additional information for improving the quality of the restored image may be reduced, since only the information about the compensation value according to pixel groups is encoded and transmitted, while no information about the locations of individual pixels to be compensated is transmitted.

Illustrative embodiments of the present invention can be written as computer programs and can be implemented in general-use digital computers that execute the programs using a computer-readable recording medium. Examples of computer-readable recording media include magnetic storage media (e.g., read-only memory (ROM), floppy disks, hard disks, etc.) and optical recording media (e.g., compact discs (CD-ROMs) or digital versatile discs (DVDs)).

While illustrative embodiments have been particularly shown and described above, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the inventive concept as defined by the appended claims. The illustrative embodiments should be considered in a descriptive sense only and not for purposes of limitation. Therefore, the scope of the inventive concept is defined not by the detailed description of the illustrative embodiments but by the appended claims, and all differences within the scope will be construed as being included in the present inventive concept.

1. A method of decoding video, comprising:
obtaining, from a bitstream, information on pixel value compensation according to a pixel value band or an edge value level;
if the information on pixel value compensation indicates a band, applying a compensation value of a predetermined band, obtained from the bitstream, to a pixel included in the predetermined band among pixels of a current block; and
if the information on pixel value compensation indicates an edge value level, applying a compensation value of a predetermined edge direction, obtained from the bitstream, to a pixel in the predetermined edge direction among pixels of the current block,
wherein the predetermined band is one of bands formed by splitting a total range of pixel values.

2. The method of claim 1, wherein the applying of the compensation value of the predetermined band comprises obtaining compensation values respectively assigned to a plurality of bands, determining a current band that includes a current pixel of the current block among the plurality of bands, and applying the compensation value of the current band, among the obtained compensation values, to the pixel value of the current pixel.

3. The method of claim 1, wherein the applying of the compensation value of the predetermined edge direction comprises obtaining compensation values respectively assigned to a plurality of edge directions, determining a current edge direction that forms the current pixel of the current block among the plurality of edge directions, and applying the compensation value of the current edge direction, among the obtained compensation values, to the pixel value of the current pixel.



 

Same patents:

FIELD: physics, video.

SUBSTANCE: invention relates to means of encoding and decoding video. The method includes determining a first most probable intra-prediction mode and a second most probable intra-prediction mode for a current block of video data based on a context for the current block; performing a context-based adaptive binary arithmetic coding (CABAC) process to determine a received codeword, corresponding to a modified intra-prediction mode index; determining the intra-prediction mode index; selecting the intra-prediction mode.

EFFECT: high efficiency of signalling an intra-prediction mode used to encode a data block by providing relative saving of bits for an encoded bit stream.

50 cl, 13 dwg, 7 tbl

FIELD: physics, computer engineering.

SUBSTANCE: invention relates to a media device and a system for controlling access of a user to media content. Disclosed is a device (100, 200) for controlling access of a user to media content, the device comprising: an identification code output (102, 103, 202) for providing an identification code to the user, the identification code identifying the media device; a control code generator (104, 204) for generating a control code depending on the identification code and an access right; an access code input (106, 107, 206) for receiving an access code from the user. The access code is generated depending on the identification code and the access right by a certain access code device, and an access controller (108, 208) enables to compare the access code to the control code, and when the access code matches the control code, grants the user access to the media content in accordance with the access right.

EFFECT: managing user access to media content, wherein access is granted specifically on the selected media device.

14 cl, 6 dwg

FIELD: physics, computer engineering.

SUBSTANCE: invention relates to a method and an apparatus for controlling settings of a device for playback of a content item. Disclosed is a method of controlling settings of a rendering device for playback of a content item, said rendering device being configured to connect with at least one source device, said at least one source device providing at least one content item, wherein the method comprises steps of: generating a plurality of entries for said at least one source device, each of the plurality of entries corresponding to a different profile, each profile comprising settings for playback of a content item received from the corresponding source device. A user can request generation of a plurality of entries for the same source device and select one of said entries, wherein the rendering device is connected with the source device which corresponds to said selected entry; and settings of the rendering device for playback of the received content item are controlled according to the profile corresponding to said selected entry.

EFFECT: providing corresponding settings for playback of different types of content items.

9 cl, 2 dwg

FIELD: physics, video.

SUBSTANCE: invention relates to video encoding/decoding techniques which employ a loop filter which reduces blocking noise. The technical result is achieved due to that a video encoding/decoding device, which encodes or decodes video using a loop filter, includes a deviation calculating unit which calculates deviation between a target noise cancellation pixel and a neighbouring pixel of the target pixel using a decoded image. A pattern form establishing unit limits the pattern form such that the less the deviation from the maximum deviation in the decoded image, the smaller the pattern form. When removing target pixel noise, using a weight coefficient in accordance with the degree of similarity between the pattern of the target pixel and the pattern of each search point in the form of a search and a weighted sum of pixel values at search points, the loop filter compares patterns using the limited pattern form and removes the target pixel noise.

EFFECT: reduced computational complexity of the noise cancellation filter, thereby preventing deterioration of encoding efficiency.

5 cl, 19 dwg

FIELD: physics, computer engineering.

SUBSTANCE: invention relates to technology of automatic selection of extra data, for example, ad, guide data, extra data, data on operating performances. Thus, processing, storage and/or transmission resources can be saved. This device for automatic selection of extra data to be included in content comprises classifier connected with user profile and selection means connected with extra data base. Extra data of definite category is placed in appropriate or contrasting context depending on used interest in thus goods category. Profiles of user are automatically classified as profiles with either pronounces or weak interest in this category.

EFFECT: adapted selection of extra data to be included in the content for twofold decrease in total volume of extra data.

11 cl, 2 dwg

FIELD: physics, computer engineering.

SUBSTANCE: invention relates to means of encoding and decoding images with prediction. The method includes receiving accessibility information of reference units of a current image and determining if the reference units are accessible for intra prediction according to the accessibility information. In the method, the accessibility information includes an indication of whether the reference unit is located within the image boundaries where the current image unit is located; whether the reference unit is located in the same layer as the current image unit; and whether the reference unit has already been encoded or decoded. In the method, reference units of the current image unit include a left side unit, an upper side unit and a upper left unit of the current image unit.

EFFECT: high efficiency of predicting an image unit.

16 cl, 8 dwg

FIELD: physics, video.

SUBSTANCE: invention relates to techniques for encoding and decoding video images. Disclosed is a method of encoding image information containing motion data by selecting a motion vector from a group of at least three possible motion vectors for at least one current unit of a current image to be encoded. The method includes a step of determining an optimum selection subgroup comprising part of the possible motion vectors. Further, the method includes selecting a motion vector from the vectors of the optimum selection subgroup and inputting into said information data on allocation of a motion vector selected from the vectors of the optimum selection subgroup.

EFFECT: high efficiency of encoding and decoding video images by determining an optimum selection subgroup containing part of possible motion vectors.

12 cl, 8 dwg

FIELD: physics, computer engineering.

SUBSTANCE: invention relates to computations, particularly, to display control systems. This device comprises imaging section capture of dynamic image in present range relative to image display direction. Besides, it comprises image analysis section for analysis of dynamic image caught by imaging section and calculation of the position of a particular user from multiple users. Note here that it comprises system optimisation section to computer system control data for system optimisation proceeding from said position of a particular user computed by image analysis section.

EFFECT: optimised state of imaging device for particular user.

7 cl, 23 dwg

FIELD: physics.

SUBSTANCE: proposed process comprises the steps that follow. construction of blocs in space relative to current block of forecasts. Note here that current forecast block is arranged inside current unit of coding. Accessible adjacent blocs are defined relative to current block in compliance with the type of coding current unit separation. Note here that accessible adjacent blocs are located outside the current coding unit. Motion vector predictors are obtained from accessible adjacent blocs in preset sequence in compliance with predictors of accessible adjacent blocs. Said obtained predictors are added to the list of motion vectors.

EFFECT: higher efficiency of compression in coding.

16 cl, 10 dwg

FIELD: physics.

SUBSTANCE: method for motion compensation in digital dynamic video, wherein during motion compensation in frames of a video stream using a video codec, a combination is used of a search algorithm for motion compensation of frame fragments with approximation of movement of frame fragmental projections approximated by physical laws of motion of real captured mobile objects corresponding to said projections. Owing to fragmental approximation of motion in the image using a timer, real-time approximation of fractional values of velocities and positions of predictions of part of the mobile fragments of the frame is performed, and further refinement of the positions of said found preliminary approximation predictions is performed using the motion compensation search algorithm, but with smaller sizes of the prediction search regions and shift of the centres of said regions by the found approximation shift vectors.

EFFECT: higher average frame throughput of video codecs operating in real time, with an insignificant decrease in average code volume and decoded image quality.

2 cl, 17 dwg, 3 tbl
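The two-stage scheme (physics-based approximation first, then a narrowed search around it) can be illustrated as follows. The uniform-motion model, the cost function, and the search radius are assumptions of this sketch, not values from the patent.

```python
def approximate_prediction(prev_pos, velocity, dt):
    """Extrapolate a fragment's position over dt seconds assuming
    (near-)uniform motion, as a real captured object would move
    between timer ticks. Fractional coordinates are kept."""
    x, y = prev_pos
    vx, vy = velocity
    return (x + vx * dt, y + vy * dt)

def refine_by_search(predicted, cost, radius=2):
    """Refine the approximated prediction with a small-window search:
    the region is centred on the physics-based prediction, so its
    radius can be much smaller than a full-frame search would need."""
    px, py = int(round(predicted[0])), int(round(predicted[1]))
    best, best_cost = (px, py), cost((px, py))
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            c = cost((px + dx, py + dy))
            if c < best_cost:
                best, best_cost = (px + dx, py + dy), c
    return best

# Usage with a toy matching cost whose minimum sits at (11, 6).
cost = lambda p: abs(p[0] - 11) + abs(p[1] - 6)
pred = approximate_prediction((8, 4), (1.5, 1.0), 2.0)
match = refine_by_search(pred, cost)
```

Shrinking the search window is where the claimed throughput gain would come from: the physics model pays for a cheap extrapolation, the codec pays only for a local refinement.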

FIELD: information technology.

SUBSTANCE: like or dislike of a content element played on a personalised content channel is determined based on feedback from the user. The profile is updated based on the determined like or dislike, where that profile is associated with the personalised content channel and contains a plurality of attributes and attribute values associated with said content element; during the update, if a like has been determined, a classification flag associated with each of said attributes and attribute values is set. The degree of liking is determined for at least one next content element based on said profile, and that at least one next content element is selected for playing on the personalised content channel based on the calculated degree of liking.

EFFECT: method for personalised filtration of content elements which does not require login or user identification procedures.

5 cl, 1 dwg
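A minimal sketch of the profile update and scoring described above, under assumed data shapes: the profile maps (attribute, value) pairs to a counter plus the classification flag, and a candidate's degree of liking sums the counters of flagged pairs it shares with the profile. The dict layout and scoring rule are illustrative assumptions.

```python
def update_profile(profile, attrs, liked):
    """Update the channel profile from user feedback.

    profile: dict mapping (attribute, value) -> {"count": int, "flag": bool}
    attrs: (attribute, value) pairs of the played content element.
    On a 'like' the classification flag of each pair is set.
    """
    for pair in attrs:
        entry = profile.setdefault(pair, {"count": 0, "flag": False})
        entry["count"] += 1 if liked else -1
        if liked:
            entry["flag"] = True
    return profile

def degree_of_liking(profile, attrs):
    """Score a candidate element by the flagged attribute/value
    pairs it shares with the profile."""
    return sum(profile[p]["count"] for p in attrs
               if p in profile and profile[p]["flag"])

# Usage: one liked jazz track from the 60s, then score a candidate.
profile = {}
update_profile(profile, [("genre", "jazz"), ("era", "60s")], True)
score = degree_of_liking(profile, [("genre", "jazz"), ("genre", "rock")])
```

Because the profile is keyed to the channel rather than to an account, no login step is needed, which matches the stated effect.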

FIELD: information technologies.

SUBSTANCE: the method of operating a conversion system for digital rights management, which grants a license to a client device for the corresponding encoded content, is as follows. First content of a first digital-rights content type, and a first license corresponding to the first content, are converted in order to generate second content of a second digital-rights content type and a second license corresponding to the second content. A license request is received which corresponds to the second content distributed by means of superdistribution to a third party. The second license corresponding to the second content distributed by means of superdistribution is requested from a server corresponding to the second digital rights management. The second license corresponding to the second content distributed by means of superdistribution is received and sent to the third party.

EFFECT: expanded functional capabilities owing to a license-granting mechanism for content distributed by means of superdistribution.

17 cl, 6 dwg

FIELD: information technology.

SUBSTANCE: a network server of an Internet protocol television (IPTV) system, in response to a request from an IPTV client terminal for a license to play back encrypted content, randomly sets the time at which the main license should be requested, within a time period starting from the broadcast transmission time and ending at a preset time. It transmits to the IPTV client terminal information about the main-license request time together with a temporary license comprising a temporary content key, which corresponds to playback of the broadcast content from the start of the broadcast until the preset time. The license server transmits the main license, including the main content key corresponding to full playback of the content, in response to the main-license request made by the IPTV client terminal based on the transmitted request-time information.

EFFECT: stabilisation of license server operation by eliminating the concentration of license requests from a large number of clients in the period just after the start of broadcast transmission of the content.

6 cl, 11 dwg
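The load-spreading idea reduces to drawing each client's main-license request time uniformly at random inside the allowed window; a sketch, with the window bounds expressed as plain seconds offsets (an assumption of this illustration):

```python
import random

def assign_request_time(broadcast_start, preset_end, rng=None):
    """Pick, uniformly at random, the moment at which this client
    should request the main license, within the window
    [broadcast_start, preset_end] (seconds).

    Until that moment the temporary license and its temporary
    content key cover playback, so no client needs to contact the
    license server at broadcast start. rng allows injecting a
    seeded random.Random for reproducible tests.
    """
    rng = rng or random
    return rng.uniform(broadcast_start, preset_end)

# Usage: a one-hour window starting at broadcast time zero.
t = assign_request_time(0.0, 3600.0)
```

With N clients the expected request rate becomes N divided by the window length instead of an N-sized spike at the first instant, which is exactly the stabilisation named in the effect.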

FIELD: information technology.

SUBSTANCE: multimedia content purchasing system comprising: a memory area associated with a multimedia service; a multimedia server connected to the multimedia service via a data communication network; a portable computing device associated with a user; and a processor associated with the portable computing device, said processor being configured to execute computer-executable instructions for: establishing a connection to the multimedia server when the multimedia server and the portable computing device are within a predefined proximity; authenticating the multimedia server and the user with respect to the authenticated multimedia server; transmitting digital content distribution criteria; receiving, in response, promotional copies of one or more of the multimedia content items and associated metadata; and purchasing, when the multimedia server and the portable computing device are outside the predefined proximity, at least one of said one or more multimedia content items.

EFFECT: enabling flexible sharing of multimedia content between subjects.

17 cl, 9 dwg

FIELD: information technologies.

SUBSTANCE: a device (600) processes data packets (110; 112) stored in a media data container (104) and related meta information stored in a meta data container (106), the related meta information including transport timing information and location information indicating where the stored data packets are located in the media data container (104). The device comprises a processor (602) for deriving, based on the stored data packets (110; 112) and the stored related meta information (124; 128), decoding information (604; 704) for the media payload of the stored data packets (110; 112), where the decoding information (604; 704) indicates at which moment of time which payload of the stored data packets is to be reproduced again.

EFFECT: immediate, accurate timing synchronisation between different recorded media streams without complicated processing on each reproduction of the recorded media streams.

21 cl, 12 dwg
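One way to picture the derived decoding information is as a schedule pairing each stored packet's transport timestamp with its storage location inside the media data container, sorted by time. The field names below are illustrative assumptions, not the patent's actual meta-data format.

```python
def build_decoding_info(meta_entries):
    """Derive decoding info from the stored meta information.

    meta_entries: list of dicts with 'packet_id', 'timestamp'
    (transport timing, seconds) and 'offset' (byte location of the
    packet inside the media data container).
    Returns a playback schedule of (timestamp, packet_id, offset)
    tuples, sorted so payloads are reproduced in timing order.
    """
    schedule = [(e["timestamp"], e["packet_id"], e["offset"])
                for e in meta_entries]
    schedule.sort()  # reproduce payloads in transport-timing order
    return schedule

# Usage: packets may be stored out of order; the schedule re-sorts
# them by timestamp so reproduction needs no per-playback parsing.
info = build_decoding_info([
    {"packet_id": 2, "timestamp": 0.04, "offset": 900},
    {"packet_id": 1, "timestamp": 0.00, "offset": 100},
])
```

Precomputing the schedule once is what lets later reproductions synchronise streams by timestamp lookup alone.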

FIELD: information technology.

SUBSTANCE: provided is an integrated interface device for performing a hierarchical operation to specify a desired content list. The interface device can display a content list, content specified by the content list, or the like, while efficiently using the vacant area in the lower part of the display: icons displaying a hierarchical relationship, for example "display in a row", are shown in the upper part of the screen, thereby freeing a large space in the lower part of the display.

EFFECT: efficient use of the entire screen even after displaying an interface for performing an operation.

17 cl, 42 dwg

FIELD: radio engineering, communication.

SUBSTANCE: a personalised content channel makes it possible to play multiple content elements (programs) meeting multiple selection criteria. At least one additional content element is recommended by a recommendation mechanism (107), this additional content element meeting fewer of the criteria. In one embodiment, at least one recommended additional content element is selected, and the multiple selection criteria are adjusted by a scheduler (109) on the basis of at least one characteristic of the selected recommended additional content element.

EFFECT: provision of a method for generating a recommendation for an additional content element, specially adapted for use with personalised content channels.

13 cl, 1 dwg

FIELD: information technology.

SUBSTANCE: a wireless transmission system includes a device (1) which wirelessly transmits AV content and a plurality of wireless recipient devices (5, 6) for reproducing the transmitted AV content. The content transmitting device (1) has a group identification table which stores a group identifier identifying a group formed by the wireless recipient devices (5, 6). The device (1) adds the group identifier extracted from the group identification table to a control command for controlling the recipient devices (5, 6) and wirelessly transmits the control command with the group identifier. The recipient devices (5, 6) accept the wirelessly transmitted control command from the device (1) if the corresponding group identifier has been added to the control command. The content transmitting device (1) consists of a wired source device and a relay device connected by wire to the wired source device; the relay device is wirelessly connected to the wireless recipient devices and mutually converts between the wired control command transmitted to the wired source device and the wireless control command transmitted to the wireless recipient devices, the wired source device and the relay device being connected via HDMI (High-Definition Multimedia Interface).

EFFECT: providing the minimum required volume of control-command transmission during wireless audio/video transmission.

21 cl, 13 dwg
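The group-identifier filtering can be sketched as two tiny classes: the source tags each control command with the identifier looked up in its group identification table, and a recipient acts on a command only when the identifier matches its own group. Class names, field names, and the dict-based command format are assumptions of this illustration.

```python
class ContentSource:
    """Transmitting device: tags control commands with a group id
    taken from its group identification table."""
    def __init__(self, group_table):
        self.group_table = group_table  # group name -> identifier

    def make_command(self, group, action):
        return {"group_id": self.group_table[group], "action": action}

class Recipient:
    """Recipient device: accepts only commands addressed to its
    own group identifier; all others are ignored."""
    def __init__(self, group_id):
        self.group_id = group_id
        self.received = []

    def receive(self, command):
        if command.get("group_id") == self.group_id:
            self.received.append(command["action"])
            return True
        return False

# Usage: one broadcast command reaches only the matching group,
# so a single transmission controls a whole group of devices.
source = ContentSource({"living_room": 7})
cmd = source.make_command("living_room", "mute")
```

One tagged broadcast replacing per-device commands is how the minimum transmission volume named in the effect is achieved.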
