Video encoding method and device, video decoding method and device based on hierarchical structure of coding unit

FIELD: radio engineering, communication.

SUBSTANCE: the device contains a receiver which receives and parses a bitstream of an encoded image; a processor which forms a coding unit included in a maximum coding unit having a hierarchical structure by using information indicating the hierarchical structure parsed from the received bitstream, and which forms at least one prediction sub-unit of the coding unit from the coding unit by using information on prediction units of said coding unit; and an image reconstruction decoder.

EFFECT: higher efficiency of decoding high-resolution images by determining the depth of the coding unit and the operating mode of the coding tool according to image data characteristics.

5 cl, 8 tbl, 23 dwg

 

The technical field to which the invention relates

Apparatuses and methods consistent with exemplary embodiments relate to encoding and decoding video.

Background art

As hardware for reproducing and storing high-resolution or high-quality video content is being developed and supplied, the need for a video codec for effectively encoding or decoding high-resolution or high-quality video content is increasing. In a conventional video codec, video is encoded according to a limited encoding method based on a macroblock having a predetermined size.

Summary of the invention

The technical problem

One or more exemplary embodiments provide a method and apparatus for encoding video, and a method and apparatus for decoding video, in which an operating mode of a coding tool varies according to the size of a coding unit having a hierarchical structure.

Solution to the problem

According to an aspect of an exemplary embodiment, there is provided a method of encoding video data, the method including: splitting a current video frame of the video data into at least one maximum coding unit; determining a coded depth to output a final encoding result by encoding at least one split region of the at least one maximum coding unit according to at least one operating mode of at least one coding tool, respectively, based on a relationship among a depth of at least one coding unit of the at least one maximum coding unit, the coding tool and the operating mode, wherein the at least one split region is generated by hierarchically splitting the at least one maximum coding unit according to depths; and outputting a bitstream including encoded video data of the coded depth, information regarding the coded depth of the at least one maximum coding unit, information regarding an encoding mode, and information regarding the relationship among the depth of the at least one coding unit of the at least one maximum coding unit, the coding tool and the operating mode, in the at least one maximum coding unit, wherein the coding unit may be characterized by a maximum size and a depth, the depth denotes the number of times the coding unit is hierarchically split, and, as the depth deepens, deeper coding units according to depths may be split from the maximum coding unit to obtain minimum coding units, the depth deepening from an upper depth to a lower depth, wherein the number of times the maximum coding unit is split increases as the depth deepens, the total number of possible times the maximum coding unit can be split corresponds to a maximum depth, and the maximum size and the maximum depth of the coding unit may be predetermined. The operating mode of the coding tool for the coding unit is determined according to the depth of the coding unit.

Advantageous effects of invention

A video encoding apparatus according to exemplary embodiments can determine the depth of a coding unit and the operating mode of a coding tool according to characteristics of image data so as to improve coding efficiency, and can encode information regarding the relationship among the depth of the coding unit, the coding tool and the operating mode. Furthermore, a video decoding apparatus according to exemplary embodiments can restore an original image by decoding a received bitstream based on the information regarding the relationship among the depth of the coding unit, the coding tool and the operating mode. Therefore, the video encoding apparatus and the video decoding apparatus according to exemplary embodiments can efficiently encode and decode a large amount of image data, such as a high-resolution or high-quality image, respectively.

Brief description of the drawings

The foregoing and/or other aspects will become more apparent by describing in detail exemplary embodiments with reference to the accompanying drawings, in which:

Fig.1 is a block diagram of a video encoding apparatus according to an exemplary embodiment;

Fig.2 is a block diagram of a video decoding apparatus according to an exemplary embodiment;

Fig.3 is a diagram for describing the concept of coding units according to an exemplary embodiment;

Fig.4 is a block diagram of an image encoder based on coding units according to an exemplary embodiment;

Fig.5 is a block diagram of an image decoder based on coding units according to an exemplary embodiment;

Fig.6 is a diagram illustrating deeper coding units according to depths, and partitions, according to an exemplary embodiment;

Fig.7 is a diagram for describing a relationship between a coding unit and transform units according to an exemplary embodiment;

Fig.8 is a diagram for describing encoding information of coding units corresponding to a coded depth, according to an exemplary embodiment;

Fig.9 is a diagram of deeper coding units according to depths, according to an exemplary embodiment;

Fig.10-12 are diagrams for describing a relationship between coding units, prediction units and transform units according to one or more exemplary embodiments;

Fig.13 is a diagram for describing a relationship between a coding unit, a prediction unit or partition, and a transform unit, according to encoding mode information of exemplary Table 1 below, according to an exemplary embodiment;

Fig.14 is a flowchart illustrating a video encoding method according to an exemplary embodiment;

Fig.15 is a flowchart illustrating a video decoding method according to an exemplary embodiment;

Fig.16 is a block diagram of a video encoding apparatus based on a coding tool considering the size of a coding unit, according to an exemplary embodiment;

Fig.17 is a block diagram of a video decoding apparatus based on a coding tool considering the size of a coding unit, according to an exemplary embodiment;

Fig.18 is a diagram for describing a relationship between the size of a coding unit, a coding tool and an operating mode according to an exemplary embodiment;

Fig.19 is a diagram for describing a relationship between the depth of a coding unit, a coding tool and an operating mode according to an exemplary embodiment;

Fig.20 is a diagram for describing a relationship between the depth of a coding unit, a coding tool and an operating mode according to an exemplary embodiment;

Fig.21 illustrates syntax of a sequence parameter set that contains information regarding a relationship between the depth of a coding unit, a coding tool and an operating mode according to an exemplary embodiment;

Fig.22 is a flowchart illustrating a video encoding method based on a coding tool considering the size of a coding unit, according to an exemplary embodiment; and

Fig.23 is a flowchart illustrating a video decoding method based on a coding tool considering the size of a coding unit, according to an exemplary embodiment.

Best mode of carrying out the invention

According to an aspect of an exemplary embodiment, there is provided a method of encoding video data, the method including: splitting a current video frame of the video data into at least one maximum coding unit; determining a coded depth to output a final encoding result by encoding at least one split region of the at least one maximum coding unit according to at least one operating mode of at least one coding tool, respectively, based on a relationship among a depth of at least one coding unit of the at least one maximum coding unit, the coding tool and the operating mode, wherein the at least one split region is generated by hierarchically splitting the at least one maximum coding unit according to depths; and outputting a bitstream including encoded video data of the coded depth, information regarding the coded depth of the at least one maximum coding unit, information regarding an encoding mode, and information regarding the relationship among the depth of the at least one coding unit of the at least one maximum coding unit, the coding tool and the operating mode, in the at least one maximum coding unit, wherein the coding unit may be characterized by a maximum size and a depth, the depth denotes the number of times the coding unit is hierarchically split, and, as the depth deepens, deeper coding units according to depths may be split from the maximum coding unit to obtain minimum coding units, the depth deepening from an upper depth to a lower depth, wherein the number of times the maximum coding unit is split increases as the depth deepens, the total number of possible times the maximum coding unit can be split corresponds to a maximum depth, and the maximum size and the maximum depth of the coding unit may be predetermined. The operating mode of the coding tool for the coding unit is determined according to the depth of the coding unit.

The information regarding the relationship among the depth of the at least one coding unit of the at least one maximum coding unit, the coding tool and the operating mode may be preset in units of slices, frames or frame sequences of the current video frame.

The at least one coding tool for encoding the at least one maximum coding unit may include at least one of quantization, transformation, intra prediction, inter prediction, motion compensation, entropy coding and loop filtering.

If the coding tool whose operating mode is determined according to the depth of the coding unit is intra prediction, the operating mode may include at least one intra prediction mode classified according to the number of directions of intra prediction, or may include an intra prediction mode for smoothing regions in the coding units corresponding to depths and an intra prediction mode for preserving boundary lines.

If the coding tool whose operating mode is determined according to the depth of the coding unit is inter prediction, the operating mode may include an inter prediction mode according to at least one method of determining a motion vector.

If the coding tool whose operating mode is determined according to the depth of the coding unit is transformation, the operating mode may include at least one transformation mode classified according to an index of a rotational transformation matrix.

If the coding tool whose operating mode is determined according to the depth of the coding unit is quantization, the operating mode may include at least one quantization mode classified according to whether a delta quantization parameter is used.

According to an aspect of another exemplary embodiment, there is provided a method of decoding video data, the method including: receiving and parsing a bitstream including encoded video data; extracting, from the bitstream, the encoded video data, information regarding a coded depth of at least one maximum coding unit, information regarding an encoding mode, and information regarding a relationship among a depth of at least one coding unit of the at least one maximum coding unit, a coding tool and an operating mode; and decoding the encoded video data in the at least one maximum coding unit according to an operating mode of a coding tool matched to a coding unit corresponding to at least one coded depth, based on the information regarding the coded depth of the at least one maximum coding unit, the information regarding the encoding mode, and the information regarding the relationship among the depth of the at least one coding unit of the at least one maximum coding unit, the coding tool and the operating mode, wherein the operating mode of the coding tool for the coding unit is determined according to the coded depth of the coding unit.

The information regarding the relationship among the depth of the at least one coding unit of the at least one maximum coding unit, the coding tool and the operating mode may be extracted in units of slices, frames or frame sequences of the current video frame.

The coding tool for encoding the at least one maximum coding unit may include at least one of quantization, transformation, intra prediction, inter prediction, motion compensation, entropy coding and loop filtering, wherein the decoding of the encoded video data may include performing a decoding tool corresponding to the coding tool used for encoding the at least one maximum coding unit.

According to an aspect of another exemplary embodiment, there is provided an apparatus for encoding video data, the apparatus including: a maximum coding unit splitter which splits a current video frame of the video data into at least one maximum coding unit; a coding unit determiner which determines a coded depth to output a final encoding result by encoding at least one split region of the at least one maximum coding unit according to at least one operating mode of at least one coding tool, respectively, based on a relationship among a depth of at least one coding unit of the at least one maximum coding unit, the coding tool and the operating mode, wherein the at least one split region is generated by hierarchically splitting the at least one maximum coding unit according to depths; and an output unit which outputs a bitstream including encoded video data representing the final encoding result, information regarding the coded depth of the at least one maximum coding unit, information regarding an encoding mode, and information regarding the relationship among the depth of the at least one coding unit of the at least one maximum coding unit, the coding tool and the operating mode, in the at least one maximum coding unit. The operating mode of the coding tool for the coding unit is determined according to the depth of the coding unit.

According to an aspect of another exemplary embodiment, there is provided an apparatus for decoding video data, the apparatus including: a receiver which receives and parses a bitstream including encoded video data; an extractor which extracts, from the bitstream, the encoded video data, information regarding a coded depth of at least one maximum coding unit, information regarding an encoding mode, and information regarding a relationship among a depth of at least one coding unit of the at least one maximum coding unit, a coding tool and an operating mode; and a decoder which decodes the encoded video data in the at least one maximum coding unit according to an operating mode of a coding tool matched to a coding unit corresponding to at least one coded depth, based on the information regarding the coded depth of the at least one maximum coding unit, the information regarding the encoding mode, and the information regarding the relationship among the depth of the at least one coding unit of the at least one maximum coding unit, the coding tool and the operating mode, wherein the operating mode of the coding tool for the coding unit is determined according to the coded depth of the coding unit.

According to an aspect of another exemplary embodiment, there is provided a method of decoding video data, the method including: decoding encoded video data in at least one maximum coding unit according to an operating mode of a coding tool matched to a coding unit corresponding to at least one coded depth, based on information regarding the coded depth of the at least one maximum coding unit, information regarding an encoding mode, and information regarding a relationship among a depth of at least one coding unit of the at least one maximum coding unit, a coding tool and an operating mode, wherein the operating mode of the coding tool for the coding unit is determined according to the coded depth of the coding unit.

According to an aspect of another exemplary embodiment, there is provided a computer-readable recording medium having recorded thereon a program for executing the method of encoding video data.

According to an aspect of another exemplary embodiment, there is provided a computer-readable recording medium having recorded thereon a program for executing the method of decoding video data.

Embodiments of the invention

Hereinafter, exemplary embodiments are described more fully with reference to the accompanying drawings. Expressions such as "at least one of", when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list. In exemplary embodiments, a "unit" may or may not refer to a unit of size, depending on its context. Specifically, video encoding and video decoding performed based on spatially hierarchical data units according to one or more exemplary embodiments are described with reference to Fig.1-15, and video encoding and video decoding performed in an operating mode of a coding tool that varies according to the size of a coding unit according to one or more exemplary embodiments are described with reference to Fig.16-23.

In the following exemplary embodiments, a "coding unit" refers either to a data unit in which image data is encoded at the encoder side, or to a data unit in which the encoded image data is decoded at the decoder side. Also, a "coded depth" refers to the depth at which a coding unit is encoded. Hereinafter, an "image" may denote a still image of a video or a moving picture, that is, the video itself.

A video encoding apparatus and method and a video decoding apparatus and method according to exemplary embodiments are described below with reference to Fig.1-15.

Fig.1 is a block diagram of a video encoding apparatus 100 according to an exemplary embodiment. As shown in Fig.1, the video encoding apparatus 100 includes a maximum coding unit splitter 110, a coding unit determiner 120 and an output unit 130.

The maximum coding unit splitter 110 may split a current video frame of an image based on a maximum coding unit for the current video frame. If the current video frame is larger than the maximum coding unit, image data of the current video frame may be split into at least one maximum coding unit. The maximum coding unit according to an exemplary embodiment may be a data unit having a size of 32×32, 64×64, 128×128, 256×256, etc., wherein the shape of the data unit is a square whose width and height are powers of 2. The image data may be output to the coding unit determiner 120 according to the at least one maximum coding unit.

A coding unit according to an exemplary embodiment may be characterized by a maximum size and a depth. The depth denotes the number of times the coding unit is spatially split from the maximum coding unit, and as the depth deepens or increases, deeper coding units according to depths may be split from the maximum coding unit down to a minimum coding unit. The depth of the maximum coding unit is the uppermost depth, and the depth of the minimum coding unit is the lowermost depth. Since the size of the coding unit corresponding to each depth decreases as the depth of the maximum coding unit deepens, a coding unit corresponding to an upper depth may include a plurality of coding units corresponding to lower depths.
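
As an illustration only (not part of the original disclosure), the following Python sketch shows how the size of a square deeper coding unit can be derived from its depth, assuming the width and height are halved each time the depth increases by one; the function name and parameters are hypothetical.

    def coding_unit_size(max_cu_size, depth):
        # Each increase of the depth by one halves the width and height
        # of the square coding unit, e.g. 64 -> 32 -> 16 -> 8.
        return max_cu_size >> depth

    # Example: with a 64x64 maximum coding unit, depths 0..3 give
    # coding unit sizes 64, 32, 16 and 8.
    print([coding_unit_size(64, d) for d in range(4)])  # [64, 32, 16, 8]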

As described above, the image data of the current video frame may be split into maximum coding units according to the maximum size of the coding unit, and each maximum coding unit may include deeper coding units that are split according to depths. Since the maximum coding unit according to an exemplary embodiment is split according to depths, the image data of a spatial domain included in the maximum coding unit may be hierarchically classified according to depths.

A maximum depth and a maximum size of the coding unit, which limit the total number of times the height and width of the maximum coding unit can be hierarchically split, may be predetermined.

The coding unit determiner 120 encodes at least one split region obtained by splitting a region of the maximum coding unit according to depths, and determines a depth to output finally encoded image data according to the at least one split region. In other words, the coding unit determiner 120 determines a coded depth by encoding the image data in the deeper coding units according to depths, based on the maximum coding unit of the current video frame, and selecting a depth having the least encoding error. Thus, the encoded image data of the coding unit corresponding to the determined coded depth is output to the output unit 130. Also, the coding units corresponding to the coded depth may be regarded as encoded coding units.

The determined coded depth and the encoded image data according to the coded depth are output to the output unit 130.

The image data in the maximum coding unit is encoded based on the deeper coding units corresponding to at least one depth equal to or below the maximum depth, and results of encoding the image data are compared based on each of the deeper coding units. A depth having the least encoding error may be selected after comparing the encoding errors of the deeper coding units. At least one coded depth may be selected for each maximum coding unit.

The size of the maximum coding unit is split as a coding unit is hierarchically split according to depths, and the number of coding units increases. Also, even when coding units correspond to the same depth in one maximum coding unit, it is determined whether to split each of the coding units corresponding to the same depth to a lower depth by measuring the encoding error of the image data of each coding unit separately. Therefore, even when image data is included in one maximum coding unit, the image data is split into regions according to depths and the encoding errors may differ according to regions within the maximum coding unit, and thus the coded depths may differ according to regions in the image data. Accordingly, one or more coded depths may be determined within the maximum coding unit, and the image data of the maximum coding unit may be divided according to coding units of at least one coded depth.

Accordingly, the coding unit determiner 120 may determine coding units having a tree structure included in the maximum coding unit. The coding units having a tree structure according to an exemplary embodiment include coding units corresponding to a depth determined to be the coded depth, from among all deeper coding units included in the maximum coding unit. A coding unit of the coded depth may be hierarchically determined according to depths in the same region of the maximum coding unit, and may be independently determined in different regions. Similarly, a coded depth in a current region may be determined independently of a coded depth in another region.

A maximum depth according to an exemplary embodiment is an index related to the number of times splitting is performed from the maximum coding unit to the minimum coding unit. A first maximum depth according to an exemplary embodiment may denote the total number of times splitting is performed from the maximum coding unit to the minimum coding unit. A second maximum depth according to an exemplary embodiment may denote the total number of depth levels from the maximum coding unit to the minimum coding unit. For example, when the depth of the maximum coding unit is 0, the depth of a coding unit in which the maximum coding unit is split once may be set to 1, and the depth of a coding unit in which the maximum coding unit is split twice may be set to 2. In this case, if the minimum coding unit is a coding unit in which the maximum coding unit is split four times, 5 depth levels of depths 0, 1, 2, 3 and 4 exist. Thus the first maximum depth may be set to 4, and the second maximum depth may be set to 5.
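
For illustration, a small sketch (with hypothetical names, assuming square power-of-two coding unit sizes) that computes the two notions of maximum depth described above from the maximum and minimum coding unit sizes:

    def maximum_depths(max_cu_size, min_cu_size):
        # Number of times the maximum coding unit must be halved to reach
        # the minimum coding unit (first maximum depth), and the total
        # number of depth levels (second maximum depth).
        splits = 0
        size = max_cu_size
        while size > min_cu_size:
            size //= 2
            splits += 1
        return splits, splits + 1

    # A 64x64 maximum coding unit split down to 4x4 gives depths 0..4:
    print(maximum_depths(64, 4))  # (4, 5)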

Prediction encoding and transformation may be performed according to the maximum coding unit. Prediction encoding and transformation are also performed based on the deeper coding units according to depths equal to or less than the maximum depth, based on the maximum coding unit. Transformation may be performed according to a method of orthogonal transformation or integer transformation.

Since the number of deeper coding units increases whenever the maximum coding unit is split according to depths, encoding, such as prediction encoding and transformation, is performed on all of the deeper coding units generated as the depth deepens. For convenience of description, prediction encoding and transformation are hereinafter described based on a coding unit of a current depth in a maximum coding unit.

The video encoding apparatus 100 may variously select at least one of the size and shape of a data unit for encoding the image data. To encode the image data, operations such as prediction encoding, transformation and entropy encoding are performed, and at this time the same data unit may be used for all operations, or different data units may be used for each operation.

For example, the video encoding apparatus 100 may select not only a coding unit for encoding the image data, but also a data unit different from the coding unit, so as to perform prediction encoding on the image data in the coding unit.

To perform prediction encoding in the maximum coding unit, prediction encoding may be performed based on a coding unit corresponding to a coded depth, i.e., based on a coding unit that is no longer split into coding units corresponding to a lower depth. Hereinafter, the coding unit that is no longer split and becomes a basis unit for prediction encoding is referred to as a prediction unit. A partition obtained by splitting the prediction unit may include the prediction unit, or a data unit obtained by splitting at least one of the height and width of the prediction unit.

For example, when a coding unit of 2N×2N (where N is a positive integer) is no longer split and becomes a prediction unit of 2N×2N, the size of a partition may be 2N×2N, 2N×N, N×2N or N×N. Examples of a partition type include symmetrical partitions obtained by symmetrically splitting at least one of the height and width of the prediction unit, partitions obtained by asymmetrically splitting the height or width of the prediction unit (such as 1:n or n:1), partitions obtained by geometrically splitting the prediction unit, and partitions having arbitrary shapes.
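
A minimal sketch (the names are illustrative, not taken from the disclosure) enumerating the symmetric partition sizes 2N×2N, 2N×N, N×2N and N×N of a 2N×2N prediction unit:

    def symmetric_partitions(n):
        # Partition sizes (width, height) of a 2Nx2N prediction unit.
        two_n = 2 * n
        return [(two_n, two_n), (two_n, n), (n, two_n), (n, n)]

    # For N = 16 (a 32x32 prediction unit):
    print(symmetric_partitions(16))  # [(32, 32), (32, 16), (16, 32), (16, 16)]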

A prediction mode of the prediction unit may be at least one of an intra mode, an inter mode and a skip mode. For example, the intra mode or the inter mode may be performed on a partition of 2N×2N, 2N×N, N×2N or N×N, whereas the skip mode may be performed only on a partition of 2N×2N. Encoding is independently performed on one prediction unit in a coding unit, thereby selecting a prediction mode having the least encoding error.

The video encoding apparatus 100 may also perform transformation on the image data in a coding unit based not only on the coding unit for encoding the image data, but also on a data unit that is different from the coding unit.

To perform transformation in the coding unit, the transformation may be performed based on a data unit having a size smaller than or equal to the coding unit. For example, the data unit for transformation may include a data unit for the intra mode and a data unit for the inter mode.

A data unit used as a basis of transformation is hereinafter referred to as a transform unit. A transformation depth, indicating the number of times splitting is performed to reach the transform unit by splitting the height and width of the coding unit, may also be set in the transform unit. For example, in a current coding unit of 2N×2N, the transformation depth may be 0 when the size of the transform unit is also 2N×2N, may be 1 when each of the height and width of the current coding unit is split into two equal parts, totally split into 4 transform units, so that the size of the transform unit is N×N, and may be 2 when each of the height and width of the current coding unit is split into four equal parts, totally split into 4^2 transform units, so that the size of the transform unit is N/2×N/2. For example, the transform unit may be set according to a hierarchical tree structure, in which a transform unit of an upper transformation depth is split into four transform units of a lower transformation depth according to the hierarchical characteristics of the transformation depth.
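
The relationship between the transformation depth and the size and number of transform units in a 2N×2N coding unit can be sketched as follows (a hypothetical helper, assuming the quadtree splitting described above):

    def transform_units(coding_unit_size, transformation_depth):
        # At transformation depth d, the height and width of the coding
        # unit are each split into 2^d equal parts, giving 4^d transform
        # units of size (coding_unit_size / 2^d) on a side.
        tu_size = coding_unit_size >> transformation_depth
        tu_count = 4 ** transformation_depth
        return tu_size, tu_count

    # For a 64x64 coding unit (2N x 2N with N = 32):
    print(transform_units(64, 0))  # (64, 1)   -> 2N x 2N
    print(transform_units(64, 1))  # (32, 4)   -> N x N
    print(transform_units(64, 2))  # (16, 16)  -> N/2 x N/2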

Similarly to the coding unit, the transform unit in the coding unit may be recursively split into smaller-sized regions, so that the transform unit may be determined independently in units of regions. Thus, residual data in the coding unit may be divided according to transform units having a tree structure according to transformation depths.

Encoding information according to coding units corresponding to a coded depth uses not only information about the coded depth but also information related to prediction encoding and transformation. Accordingly, the coding unit determiner 120 not only determines a coded depth having the least encoding error, but also determines a partition type in a prediction unit, a prediction mode according to prediction units, and a size of a transform unit for transformation.

Coding units according to a tree structure in a maximum coding unit and a method of determining a partition according to exemplary embodiments are described in detail below with reference to Fig.3-12.

The coding unit determiner 120 may measure an encoding error of deeper coding units according to depths by using rate-distortion optimization based on Lagrangian multipliers.
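
Rate-distortion optimization of this kind is commonly expressed as minimizing a Lagrangian cost; the following sketch illustrates the general technique rather than the specific measure of the embodiment, and selects the candidate whose cost J = D + lambda * R is smallest (all names and values are hypothetical):

    def best_candidate(candidates, lam):
        # Each candidate is (label, distortion, rate); pick the one
        # minimizing the Lagrangian cost J = D + lambda * R.
        return min(candidates, key=lambda c: c[1] + lam * c[2])

    # Hypothetical comparison of encoding a region at depth 1 vs depth 2:
    candidates = [("depth 1", 1200.0, 300), ("depth 2", 900.0, 520)]
    print(best_candidate(candidates, lam=0.8))  # ('depth 2', 900.0, 520)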

The output unit 130 outputs, in bitstreams, the image data of the maximum coding unit, which is encoded based on the at least one coded depth determined by the coding unit determiner 120, and information about the encoding mode according to the coded depth.

Encoded image data may be obtained by encoding residual data of the image.

The information about the encoding mode according to the coded depth may include at least one of information about the coded depth, about the partition type in the prediction unit, about the prediction mode, and about the size of the transform unit.

The information about the coded depth may be defined by using split information according to depths, which indicates whether encoding is performed on coding units of a lower depth instead of a current depth. If the current depth of the current coding unit is the coded depth, the image data in the current coding unit is encoded and output; in this case, the split information may be defined not to split the current coding unit to a lower depth. Alternatively, if the current depth of the current coding unit is not the coded depth, the encoding is performed on a coding unit of a lower depth; in this case, the split information may be defined to split the current coding unit to obtain coding units of a lower depth.

If the current depth is not the coded depth, encoding is performed on the coding unit that is split into coding units of the lower depth. In this case, since at least one coding unit of the lower depth exists in one coding unit of the current depth, the encoding is repeatedly performed on each coding unit of the lower depth, and thus the encoding may be recursively performed for coding units having the same depth.
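
As a minimal illustrative sketch (with hypothetical error values; the actual decision uses the encoding error measure of the embodiment), the split information for a coding unit can be viewed as a comparison between the error of encoding the region at the current depth and the combined error of its four lower-depth coding units:

    def choose_split(current_error, sub_unit_errors):
        # Split information: 0 = keep the current depth (it is the coded
        # depth), 1 = split into four coding units of the lower depth.
        split_error = sum(sub_unit_errors)
        return 1 if split_error < current_error else 0

    # Hypothetical errors: coding the region at the current depth vs the
    # summed errors of its four lower-depth coding units.
    print(choose_split(1000.0, [180.0, 220.0, 240.0, 300.0]))  # 1 (split)
    print(choose_split(700.0,  [180.0, 220.0, 240.0, 300.0]))  # 0 (keep)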

Since coding units having a tree structure are determined for one maximum coding unit, and information about at least one encoding mode is determined for a coding unit of a coded depth, information about at least one encoding mode may be determined for one maximum coding unit. Also, the coded depth of the image data of the maximum coding unit may be different according to locations, since the image data is hierarchically split according to depths, and thus information about the coded depth and the encoding mode may be set for the image data.

Accordingly, the output unit 130 may assign encoding information about a corresponding coded depth and an encoding mode to at least one of the coding unit, the prediction unit, and a minimum unit included in the maximum coding unit.

The minimum unit according to an exemplary embodiment is a rectangular data unit obtained by splitting the minimum coding unit of the lowermost depth by 4. Alternatively, the minimum unit may be a maximum rectangular data unit that may be included in all of the coding units, prediction units, partition units and transform units included in the maximum coding unit.

For example, the encoding information output through the output unit 130 may be classified into encoding information according to coding units and encoding information according to prediction units. The encoding information according to coding units may include the information about the prediction mode and about the size of the partitions. The encoding information according to prediction units may include information about an estimated direction of an inter mode, about a reference image index of the inter mode, about a motion vector, about a chroma component of an intra mode, and about an interpolation method of the intra mode. Also, information about a maximum size of the coding unit defined according to video frames, slices or groups of pictures (GOPs), and information about a maximum depth, may be inserted into at least one of a sequence parameter set (SPS) or a header of the bitstream.

In the video encoding apparatus 100, a deeper coding unit may be a coding unit obtained by dividing at least one of the height and width of a coding unit of an upper depth, which is one layer above, by two. For example, when the size of the coding unit of the current depth is 2N×2N, the size of the coding unit of the lower depth may be N×N. Also, the coding unit of the current depth having the size of 2N×2N may include a maximum of 4 coding units of the lower depth.

Accordingly, the video encoding apparatus 100 may form coding units having a tree structure by determining coding units having an optimum shape and an optimum size for each maximum coding unit, based on the maximum size of the coding unit and the maximum depth determined considering characteristics of the current video frame. Also, since encoding may be performed on each maximum coding unit by using any one of various prediction modes and transformations, an optimum encoding mode may be determined based on the characteristics of coding units of various sizes of the image.

Thus, if an image having a high resolution or a large amount of data is encoded in a conventional macroblock, the number of macroblocks per video frame increases significantly. Accordingly, the number of pieces of compressed information generated for each macroblock increases, so it is difficult to transmit the compressed information and data compression efficiency decreases. However, by using the video encoding apparatus 100 according to an exemplary embodiment, image compression efficiency may be increased, since a coding unit is adjusted based on characteristics of an image while the maximum size of the coding unit is increased based on the size of the image.

Fig.2 is a block diagram of a video decoding apparatus 200 according to an exemplary embodiment. As shown in Fig.2, the video decoding apparatus 200 includes a receiver 210, an image data and encoding information extractor 220, and an image data decoder 230. Definitions of various terms, such as a coding unit, a depth, a prediction unit, a transform unit, and information about various encoding modes, for various operations of the video decoding apparatus 200 are similar to those described above with reference to Fig.1.

The receiver 210 receives and parses a bitstream of encoded video. The image data and encoding information extractor 220 extracts encoded image data for each coding unit from the parsed bitstream, wherein the coding units have a tree structure according to each maximum coding unit, and outputs the extracted image data to the image data decoder 230. The image data and encoding information extractor 220 may extract information about a maximum size of a coding unit of the current video frame from a header of the current video frame or the SPS.

Also, the image data and encoding information extractor 220 extracts, from the parsed bitstream, information about a coded depth and an encoding mode for the coding units having a tree structure according to each maximum coding unit. The extracted information about the coded depth and the encoding mode is output to the image data decoder 230. In other words, the image data in the bitstream is split into the maximum coding units, so that the image data decoder 230 decodes the image data for each maximum coding unit.

The information about the coded depth and the encoding mode according to the maximum coding unit may be set for information about at least one coding unit corresponding to the coded depth, and the information about the encoding mode may include information about at least one of a partition type of a corresponding coding unit corresponding to the coded depth, a prediction mode and a size of a transform unit. Also, split information according to depths may be extracted as the information about the coded depth.

The information about the coded depth and the encoding mode according to each maximum coding unit extracted by the image data and encoding information extractor 220 is information about a coded depth and an encoding mode determined to generate a minimum encoding error when an encoder, such as the video encoding apparatus 100 according to an exemplary embodiment, repeatedly performs encoding for each deeper coding unit according to depths for each maximum coding unit. Accordingly, the video decoding apparatus 200 may restore an image by decoding the image data according to the coded depth and the encoding mode that generate the minimum encoding error.

Since encoding information about the coded depth and the encoding mode may be assigned to a predetermined data unit from among a corresponding coding unit, prediction unit and minimum unit, the image data and encoding information extractor 220 may extract the information about the coded depth and the encoding mode according to the predetermined data units. The predetermined data units to which the same information about the coded depth and the encoding mode is assigned may be the data units included in the same maximum coding unit.

The image data decoder 230 restores the image data of the current video frame by decoding the image data in each maximum coding unit based on the information about the coded depth and the encoding mode according to the maximum coding units. For example, the image data decoder 230 may decode the encoded image data based on the extracted information about the partition type, the prediction mode and the transform unit for each coding unit from among the coding units having a tree structure included in each maximum coding unit. The decoding process may include prediction, including intra prediction and motion compensation, and inverse transformation. The inverse transformation may be performed according to a method of inverse orthogonal transformation or inverse integer transformation.

The image data decoder 230 may perform at least one of intra prediction and motion compensation according to a partition and a prediction mode of each coding unit, based on the information about the partition type and the prediction mode of the prediction unit of the coding unit according to coded depths.

Also, the image data decoder 230 may perform inverse transformation according to each transform unit in the coding unit, based on the information about the size of the transform unit of the coding unit according to coded depths, so as to perform the inverse transformation according to maximum coding units.

The image data decoder 230 may determine at least one coded depth of a current maximum coding unit by using split information according to depths. If the split information indicates that the image data is no longer split at the current depth, the current depth is a coded depth. Accordingly, the image data decoder 230 may decode the encoded data of at least one coding unit corresponding to each coded depth in the current maximum coding unit by using at least one of the information about the partition type of the prediction unit, the prediction mode and the size of the transform unit for each coding unit corresponding to the coded depth, and may output the image data of the current maximum coding unit.
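
A minimal sketch of how a decoder might walk a quadtree of split information to find coded depths (the tree representation and names here are hypothetical and are not the bitstream syntax of the embodiment):

    def coded_depths(split_tree, depth=0):
        # split_tree is either 0 (not split: this depth is a coded depth)
        # or a list of four sub-trees for the four lower-depth coding units.
        if split_tree == 0:
            return [depth]
        depths = []
        for sub_tree in split_tree:
            depths.extend(coded_depths(sub_tree, depth + 1))
        return depths

    # A maximum coding unit whose first quadrant is split once more:
    tree = [[0, 0, 0, 0], 0, 0, 0]
    print(coded_depths(tree))  # [2, 2, 2, 2, 1, 1, 1]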

For example, data units containing encoding information that includes the same split information may be gathered by observing the set of encoding information assigned to a predetermined data unit from among the coding unit, the prediction unit and the minimum unit, and the gathered data units may be considered to be one data unit to be decoded by the image data decoder 230 in the same encoding mode.

The video decoding apparatus 200 may obtain information about at least one coding unit that generates the minimum encoding error when encoding is recursively performed for each maximum coding unit, and may use the information to decode the current video frame. In other words, the coding units having a tree structure, determined to be the optimum coding units in each maximum coding unit, may be decoded. Also, the maximum size of a coding unit may be determined based on at least one of a resolution and an amount of image data.

Accordingly, even if image data has a high resolution and a large amount of data, the image data may be efficiently decoded and restored by using the size of a coding unit and an encoding mode, which are adaptively determined according to characteristics of the image data, and by using information about an optimum encoding mode received from an encoder.

A method of determining coding units having a tree structure, a prediction unit and a transform unit according to one or more exemplary embodiments is described below with reference to Fig.3-13.

Fig.3 is a diagram for describing the concept of coding units according to an exemplary embodiment. A size of a coding unit may be expressed as width × height. For example, the size of the coding unit may be 64×64, 32×32, 16×16 or 8×8. A coding unit of 64×64 may be split into partitions of 64×64, 64×32, 32×64 or 32×32, a coding unit of 32×32 may be split into partitions of 32×32, 32×16, 16×32 or 16×16, a coding unit of 16×16 may be split into partitions of 16×16, 16×8, 8×16 or 8×8, and a coding unit of 8×8 may be split into partitions of 8×8, 8×4, 4×8 or 4×4.

As shown in Fig.3, first video data 310 is provided by way of example with a resolution of 1920×1080, a maximum coding unit size of 64 and a maximum depth of 2. Second video data 320 is provided by way of example with a resolution of 1920×1080, a maximum coding unit size of 64 and a maximum depth of 3. Third video data 330 is provided by way of example with a resolution of 352×288, a maximum coding unit size of 16 and a maximum depth of 1. The maximum depth shown in Fig.3 denotes the total number of splits from a maximum coding unit to a minimum coding unit.

If a resolution is high or an amount of data is large, the maximum size of the coding unit may be large so as to not only increase encoding efficiency but also accurately reflect characteristics of an image. Accordingly, the maximum size of the coding unit of the first and second video data 310 and 320 having a higher resolution than the third video data 330 may be 64.

Since the maximum depth of the first video data 310 is 2, coding units 315 of the first video data 310 may include a maximum coding unit having a long-axis size of 64, and coding units having long-axis sizes of 32 and 16, since depths are deepened to two layers by splitting the maximum coding unit twice. Meanwhile, since the maximum depth of the third video data 330 is 1, coding units 335 of the third video data 330 may include a maximum coding unit having a long-axis size of 16, and coding units having a long-axis size of 8, since depths are deepened to one layer by splitting the maximum coding unit once.

Since the maximum depth of the second video data 320 is 3, coding units 325 of the second video data 320 may include a maximum coding unit having a long-axis size of 64, and coding units having long-axis sizes of 32, 16 and 8, since depths are deepened to three layers by splitting the maximum coding unit three times. As a depth deepens, detailed information may be precisely expressed.
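
Illustratively (with a hypothetical helper), the long-axis sizes of the coding units in Fig.3 follow from the maximum coding unit size and the maximum depth by repeated halving:

    def long_axis_sizes(max_cu_size, max_depth):
        # Sizes of the maximum coding unit and of the deeper coding units
        # obtained by splitting it max_depth times.
        return [max_cu_size >> d for d in range(max_depth + 1)]

    print(long_axis_sizes(64, 2))  # [64, 32, 16]    - first video data 310
    print(long_axis_sizes(64, 3))  # [64, 32, 16, 8] - second video data 320
    print(long_axis_sizes(16, 1))  # [16, 8]         - third video data 330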

Fig.4 is a block diagram of an image encoder 400 based on coding units according to an exemplary embodiment. The image encoder 400 may perform operations of the coding unit determiner 120 of the video encoding apparatus 100 according to an exemplary embodiment in order to encode image data. Referring to Fig.4, an intra predictor 410 performs intra prediction on coding units of a current frame 405 in an intra mode, and a motion estimator 420 and a motion compensator 425 perform inter estimation and motion compensation on coding units of the current frame 405 in an inter mode by using the current frame 405 and a reference frame 495.

Data output from the intra predictor 410, the motion estimator 420 and the motion compensator 425 is output as a quantized transform coefficient through a transformer 430 and a quantizer 440. The quantized transform coefficient is restored to data in a spatial domain through an inverse quantizer 460 and an inverse transformer 470, and the restored data in the spatial domain is output as the reference frame 495 after being post-processed through a deblocking unit 480 and a loop filtering unit 490. The quantized transform coefficient may be output as a bitstream 455 through an entropy encoder 450.

In order for the image encoder 400 to be applied in the video encoding apparatus 100, the elements of the image encoder 400, i.e., the intra predictor 410, the motion estimator 420, the motion compensator 425, the transformer 430, the quantizer 440, the entropy encoder 450, the inverse quantizer 460, the inverse transformer 470, the deblocking unit 480 and the loop filtering unit 490, perform operations based on each coding unit from among the coding units having a tree structure while considering the maximum depth of each maximum coding unit.

Specifically, the intra predictor 410, the motion estimator 420 and the motion compensator 425 determine partitions and a prediction mode of each coding unit from among the coding units having a tree structure while considering the maximum size and the maximum depth of a current maximum coding unit, and the transformer 430 determines the size of the transform unit in each coding unit from among the coding units having a tree structure.

Fig.5 is a block diagram of an image decoder 500 based on coding units according to an exemplary embodiment. As shown in Fig.5, a parser 510 parses encoded image data to be decoded and information about encoding used for decoding from a bitstream 505. The encoded image data is output as inverse-quantized data through an entropy decoder 520 and an inverse quantizer 530, and the inverse-quantized data is restored to image data in a spatial domain through an inverse transformer 540.

An intra predictor 550 performs intra prediction on coding units in an intra mode with respect to the image data in the spatial domain, and a motion compensator 560 performs motion compensation on coding units in an inter mode by using a reference frame 585.

The image data in the spatial domain, which passed through the intra predictor 550 and the motion compensator 560, may be output as a restored frame 595 after being post-processed through a deblocking unit 570 and a loop filtering unit 580. Also, the image data that is post-processed through the deblocking unit 570 and the loop filtering unit 580 may be output as the reference frame 585.

In order to decode the image data in the image data decoder 230 of the video decoding apparatus 200 according to an exemplary embodiment, the image decoder 500 may perform operations that are performed after the parser 510. In order for the image decoder 500 to be applied in the video decoding apparatus 200, the elements of the image decoder 500, i.e., the parser 510, the entropy decoder 520, the inverse quantizer 530, the inverse transformer 540, the intra predictor 550, the motion compensator 560, the deblocking unit 570 and the loop filtering unit 580, perform operations based on the coding units having a tree structure for each maximum coding unit.

Specifically, the intra predictor 550 and the motion compensator 560 perform operations based on partitions and a prediction mode for each of the coding units having a tree structure, and the inverse transformer 540 performs operations based on the size of the transform unit for each coding unit.

Fig.6 is a diagram illustrating deeper coding units according to depths, and partitions, according to an exemplary embodiment.

The video encoding apparatus 100 and the video decoding apparatus 200 according to exemplary embodiments use hierarchical coding units so as to consider characteristics of an image. A maximum height, a maximum width and a maximum depth of coding units may be adaptively determined according to the characteristics of the image, or may otherwise be variously set by a user. Sizes of deeper coding units according to depths may be determined according to the preset maximum size of the coding unit.

As shown in Fig.6, in a hierarchical structure 600 of coding units according to an exemplary embodiment, the maximum height and the maximum width of the coding units are each 64, and the maximum depth is 4. Since a depth deepens along the vertical axis of the hierarchical structure 600, each of the height and width of the deeper coding unit is split. Also, a prediction unit and partitions, which are the basis for prediction encoding of each deeper coding unit, are shown along the horizontal axis of the hierarchical structure 600.

In other words, a first coding unit 610 is the maximum coding unit in the hierarchical structure 600, wherein the depth is 0 and the size, i.e., height by width, is 64×64. The depth deepens along the vertical axis, and there exist a second coding unit 620 having a size of 32×32 and a depth of 1, a third coding unit 630 having a size of 16×16 and a depth of 2, a fourth coding unit 640 having a size of 8×8 and a depth of 3, and a fifth coding unit 650 having a size of 4×4 and a depth of 4. The fifth coding unit 650 having the size of 4×4 and the depth of 4 is the minimum coding unit.

The prediction unit and the partitions of a coding unit are arranged along the horizontal axis according to each depth. In other words, if the first coding unit 610 having the size of 64×64 and the depth of 0 is a prediction unit, the prediction unit may be split into partitions included in the first coding unit 610, i.e., a partition 610 having a size of 64×64, partitions 612 having a size of 64×32, partitions 614 having a size of 32×64, or partitions 616 having a size of 32×32.

Similarly, a prediction unit of the second coding unit 620 having the size of 32×32 and the depth of 1 may be split into partitions included in the second coding unit 620, i.e., a partition 620 having a size of 32×32, partitions 622 having a size of 32×16, partitions 624 having a size of 16×32, and partitions 626 having a size of 16×16.

Similarly, a prediction unit of the third coding unit 630 having the size of 16×16 and the depth of 2 may be split into partitions included in the third coding unit 630, i.e., a partition having a size of 16×16 included in the third coding unit 630, partitions 632 having a size of 16×8, partitions 634 having a size of 8×16, and partitions 636 having a size of 8×8.

Similarly, a prediction unit of the fourth coding unit 640 having the size of 8×8 and the depth of 3 may be split into partitions included in the fourth coding unit 640, i.e., a partition having a size of 8×8 included in the fourth coding unit 640, partitions 642 having a size of 8×4, partitions 644 having a size of 4×8, and partitions 646 having a size of 4×4.

The fifth coding unit 650 having the size of 4×4 and the depth of 4 is the minimum coding unit and a coding unit of the lowermost depth. A prediction unit of the fifth coding unit 650 is assigned only to a partition having a size of 4×4.

In order to determine the at least one coded depth of the coding units of the maximum coding unit 610, the coding unit determiner 120 of the video encoding apparatus 100 performs encoding for the coding units corresponding to each depth included in the maximum coding unit 610.

The number of deeper coding units according to depths that include data of the same range and the same size increases as the depth deepens. For example, four coding units corresponding to a depth of 2 are required to cover data that is included in one coding unit corresponding to a depth of 1. Accordingly, in order to compare encoding results of the same data according to depths, the data is encoded using each of the coding unit corresponding to the depth of 1 and the four coding units corresponding to the depth of 2.

To perform the encoding for the current depth from among the depths, the least encoding error may be selected for the current depth by performing encoding for each prediction block in the encoding blocks corresponding to the current depth, along the horizontal axis of the hierarchical structure 600. Alternatively, you can search minimum coding error by comparing the least encoding errors according to depths, by performing encoding for each depth as the depth increases along the vertical axis of the hierarchical structure 600. The depth and the section having the minimum encoding error in the first block 610 encoding can be selected as the coded depth and a partition type of the first unit 610 encoding.
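For illustration only, the following sketch outlines the kind of recursive search described above; the cost function encode_cost is a hypothetical stand-in for the measured encoding error of a partition, and the actual device may use a different traversal or cost measure.

```python
# Minimal sketch of the depth search: compare the best cost achievable at the
# current depth (over the candidate partitions) against the summed cost of the
# four coding units of the next depth, and set a split flag accordingly.

def search(x, y, size, depth, max_depth, encode_cost):
    """Return (least cost, split flag) for the coding unit at (x, y)."""
    cost_here = min(encode_cost(x, y, size, p)
                    for p in ("2Nx2N", "2NxN", "Nx2N", "NxN"))
    if depth == max_depth:
        return cost_here, 0
    half = size // 2
    cost_split = sum(search(x + dx, y + dy, half, depth + 1, max_depth, encode_cost)[0]
                     for dx in (0, half) for dy in (0, half))
    return (cost_split, 1) if cost_split < cost_here else (cost_here, 0)
```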

Fig.7 is a diagram for describing the relationship between a coding unit 710 and transform units 720, according to an exemplary embodiment.

The video encoding device 100 or the video decoding device 200 according to exemplary embodiments encodes or decodes an image according to coding units having sizes smaller than or equal to the maximum coding unit, for each maximum coding unit. The size of the transform units used for transformation during encoding may be selected based on data units that are not larger than the corresponding coding unit.

For example, in the video encoding device 100 or the video decoding device 200, if the size of the coding unit 710 is 64×64, a transform may be performed by using transform units 720 having a size of 32×32.

Also, the data of the coding unit 710 having a size of 64×64 may be encoded by performing the transform on each of the transform units having sizes of 32×32, 16×16, 8×8 and 4×4, which are smaller than 64×64, so that the transform unit having the least encoding error may be selected.

Fig.8 is a diagram for describing encoding information of coding units corresponding to a coded depth, according to an exemplary embodiment. As shown in Fig.8, the output unit 130 of the video encoding device 100 according to an exemplary embodiment may encode and transmit information 800 about a partition type, information 810 about a prediction mode, and information 820 about a size of a transform unit for each coding unit corresponding to a coded depth, as information about an encoding mode.

The information 800 about the partition type indicates the shape of a partition obtained by splitting the prediction unit of a current coding unit, wherein the partition is a data unit for prediction encoding of the current coding unit. For example, a current coding unit CU_0 having a size of 2N×2N may be split into any one of a partition 802 having a size of 2N×2N, a partition 804 having a size of 2N×N, a partition 806 having a size of N×2N, and a partition 808 having a size of N×N. In this case, the information 800 about the partition type is set to indicate one of the partition 804 having a size of 2N×N, the partition 806 having a size of N×2N, and the partition 808 having a size of N×N.

The information 810 about the prediction mode indicates the prediction mode of each partition. For example, the information 810 about the prediction mode may indicate the mode of prediction encoding performed on the partition indicated by the information 800 about the partition type, i.e. an internal mode 812, an external mode 814, or a skip mode 816.

The information 820 about the size of the transform unit indicates the transform unit to be used when the transform is performed on the current coding unit. For example, the transform unit may be a first internal transform unit 822, a second internal transform unit 824, a first external transform unit 826, or a second external transform unit 828.

The image data and encoding information selector 220 of the video decoding device 200 according to an exemplary embodiment may extract and use the information 800, 810 and 820 for decoding, according to each deeper coding unit.

Fig.9 is a diagram of deeper coding units according to depths, according to an exemplary embodiment.

Split information may be used to indicate a change of depth. The split information indicates whether a coding unit of the current depth is split into coding units of a lower depth.

As shown in Fig.9, a prediction unit 910 for prediction encoding of a coding unit 900 having a depth of 0 and a size of 2N_0×2N_0 may include a partition type 912 having a size of 2N_0×2N_0, a partition type 914 having a size of 2N_0×N_0, a partition type 916 having a size of N_0×2N_0, and a partition type 918 having a size of N_0×N_0. Although Fig.9 illustrates only the partition types 912-918, which are obtained by symmetrically splitting the prediction unit 910, it is understood that the partition types are not limited thereto. For example, according to another exemplary embodiment, the partitions of the prediction unit 910 may include asymmetrical partitions, partitions having a predetermined shape, and partitions having a geometrical shape.

Prediction encoding is repeatedly performed on one partition having a size of 2N_0×2N_0, two partitions having a size of 2N_0×N_0, two partitions having a size of N_0×2N_0, and four partitions having a size of N_0×N_0, according to each partition type. Prediction encoding in the internal mode and the external mode may be performed on the partitions having the sizes 2N_0×2N_0, N_0×2N_0, 2N_0×N_0 and N_0×N_0. Prediction encoding in the skip mode is performed only on the partition having the size 2N_0×2N_0.

Encoding errors, including errors of the prediction encoding in the partition types 912-918, are compared, and the least encoding error is determined from among the partition types. If the encoding error is smallest in one of the partition types 912-916, the prediction unit 910 may not be split to a lower depth.

For example, if the encoding error is the smallest in the partition type 918, the depth is changed from 0 to 1 to split the partition type 918 in operation 920, and encoding is repeatedly performed on coding units 930 having a depth of 1 and a size of N_0×N_0 to search for a minimum encoding error.

A prediction unit 940 for prediction encoding of the coding unit 930 having a depth of 1 and a size of 2N_1×2N_1 (=N_0×N_0) may include a partition type 942 having a size of 2N_1×2N_1, a partition type 944 having a size of 2N_1×N_1, a partition type 946 having a size of N_1×2N_1, and a partition type 948 having a size of N_1×N_1.

As an example, if the encoding error is the smallest in the partition type 948, the depth is changed from 1 to 2 to split the partition type 948 in operation 950, and encoding is repeatedly performed on coding units 960 having a depth of 2 and a size of N_2×N_2 to search for a minimum encoding error.

When the maximum depth is d, the split operation according to each depth may be performed until the depth becomes d-1, and split information may be encoded for the depths of 0 to d-2. For example, when encoding is performed until the depth is d-1, after the coding unit corresponding to the depth of d-2 is split in operation 970, a prediction unit 990 for prediction encoding of a coding unit 980 having a depth of d-1 and a size of 2N_(d-1)×2N_(d-1) may include a partition type 992 having a size of 2N_(d-1)×2N_(d-1), a partition type 994 having a size of 2N_(d-1)×N_(d-1), a partition type 996 having a size of N_(d-1)×2N_(d-1), and a partition type 998 having a size of N_(d-1)×N_(d-1).

Prediction encoding may be repeatedly performed on one partition having a size of 2N_(d-1)×2N_(d-1), two partitions having a size of 2N_(d-1)×N_(d-1), two partitions having a size of N_(d-1)×2N_(d-1), and four partitions having a size of N_(d-1)×N_(d-1) from among the partition types 992-998, to search for the partition type having a minimum encoding error.

Even when the partition type 998 has the minimum encoding error, since the maximum depth is d, the coding unit CU_(d-1) having a depth of d-1 is no longer split to a lower depth. In this case, the coded depth for the coding units of the current maximum coding unit 900 is determined to be d-1, and the partition type of the current maximum coding unit 900 may be determined to be N_(d-1)×N_(d-1). Also, since the maximum depth is d and the minimum coding unit 980 having a lowermost depth of d-1 is no longer split to a lower depth, split information for the minimum coding unit 980 is not set.

A data unit 999 may be a minimum unit for the current maximum coding unit. The minimum unit according to an exemplary embodiment may be a rectangular data unit obtained by splitting the minimum coding unit 980 by 4. By performing the encoding repeatedly, the video encoding device 100 according to an exemplary embodiment may select the depth having the least encoding error by comparing encoding errors according to the depths of the coding unit 900 so as to determine the coded depth, and may set the corresponding partition type and prediction mode as the encoding mode of the coded depth.

As such, the minimum encoding errors according to depths are compared in all of the depths of 1 through d, and the depth having the least encoding error may be determined as the coded depth. The coded depth, the partition type of the prediction unit, and the prediction mode may be encoded and transmitted as information about the encoding mode. Also, since a coding unit is split from a depth of 0 to the coded depth, only split information of the coded depth is set to 0, and split information of depths excluding the coded depth is set to 1.
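For illustration only, the following sketch shows how split information could follow from a coded depth as described above (1 for every depth above the coded depth, 0 at the coded depth itself); the flag list is merely an illustrative representation.

```python
# Minimal sketch: split information derived from a coded depth.

def split_flags_for(coded_depth: int):
    """Flags read while descending from depth 0 down to the coded depth."""
    return [1] * coded_depth + [0]

print(split_flags_for(0))   # [0]          -> maximum coding unit is not split
print(split_flags_for(3))   # [1, 1, 1, 0] -> split three times, coded depth 3
```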

The image data and encoding information selector 220 of the video decoding device 200 according to an exemplary embodiment may extract and use the information about the coded depth and the prediction unit of the coding unit 900 to decode the partition 912. The video decoding device 200 may determine the depth at which the split information is 0 as the coded depth by using the split information according to depths, and use the information about the encoding mode of the corresponding depth for decoding.

Figs.10-12 are diagrams for describing the relationship between coding units 1010, prediction units 1060 and transform units 1070, according to one or more exemplary embodiments.

As shown in Fig.10, the coding units 1010 are coding units having a tree structure corresponding to coded depths determined by the video encoding device 100 according to an exemplary embodiment, in a maximum coding unit. As shown in Figs.11 and 12, the prediction units 1060 are partitions of prediction units of each of the coding units 1010, and the transform units 1070 are transform units of each of the coding units 1010.

When the depth of the maximum coding unit is 0 in the coding units 1010, the depths of coding units 1012 and 1054 are 1, the depths of coding units 1014, 1016, 1018, 1028, 1050, and 1052 are 2, the depths of coding units 1020, 1022, 1024, 1026, 1030, 1032 and 1048 are 3, and the depths of coding units 1040, 1042, 1044 and 1046 are 4.

In the prediction units 1060, some coding units 1014, 1016, 1022, 1032, 1048, 1050, 1052 and 1054 are obtained by splitting the coding units of the coding units 1010. In particular, the partition types in the coding units 1014, 1022, 1050, and 1054 have a size of 2N×N, the partition types in the coding units 1016, 1048, and 1052 have a size of N×2N, and the partition type of the coding unit 1032 has a size of N×N. The prediction units and partitions of the coding units 1010 are smaller than or equal to each coding unit.

Transform or inverse transform is performed on the image data of the coding unit 1052 in the transform units 1070, in a data unit that is smaller than the coding unit 1052. Also, the coding units 1014, 1016, 1022, 1032, 1048, 1050 and 1052 in the transform units 1070 are different from those in the prediction units 1060 in terms of sizes and shapes. That is, the video encoding and decoding devices 100 and 200 according to exemplary embodiments may perform intra prediction, motion estimation, motion compensation, transform, and inverse transform individually on a data unit within the same coding unit.

Consequently, encoding is recursively performed on each of the coding units having a hierarchical structure in each region of a maximum coding unit to determine an optimum coding unit, and thus coding units having a recursive tree structure may be obtained. The encoding information may include split information about a coding unit, information about a partition type, information about a prediction mode, and information about a size of a transform unit. Exemplary Table 1 shows the encoding information that may be set by the video encoding and decoding devices 100 and 200.

Table 1
Split information 0 (encoding is performed on a coding unit having a size of 2N×2N and a current depth of d):
- Prediction mode: internal mode, external mode, or skip mode (skip mode only for 2N×2N)
- Partition type:
  - symmetrical partition types: 2N×2N, 2N×N, N×2N, N×N
  - asymmetrical partition types: 2N×nU, 2N×nD, nL×2N, nR×2N
- Size of transform unit:
  - split information 0 of transform unit: 2N×2N
  - split information 1 of transform unit: N×N (symmetrical partition type), N/2×N/2 (asymmetrical partition type)

Split information 1:
- Repeatedly encode the coding units having a lower depth of d+1

The output unit 130 of the video encoding device 100 may output the encoding information about the coding units having a tree structure, and the image data and encoding information selector 220 of the video decoding device 200 may extract the encoding information about the coding units having a tree structure from a received bitstream.

The split information indicates whether a current coding unit is split into coding units of a lower depth. If the split information of the current depth is 0, the depth at which the current coding unit is no longer split to a lower depth is a coded depth, and the information about the partition type, the prediction mode, and the size of the transform unit may be defined for the coded depth. If the current coding unit is further split according to the split information, encoding is independently performed on the split coding units of the lower depth.

The prediction mode may be one of an internal mode, an external mode, and a skip mode. The internal mode and the external mode may be defined in all partition types, and the skip mode may be defined only in the partition type having a size of 2N×2N.

The information about the partition type may indicate symmetrical partition types having sizes of 2N×2N, 2N×N, N×2N and N×N, which are obtained by symmetrically splitting the height or width of the prediction unit, and asymmetrical partition types having sizes of 2N×nU, 2N×nD, nL×2N and nR×2N, which are obtained by asymmetrically splitting the height or width of the prediction unit. The asymmetrical partition types having sizes of 2N×nU and 2N×nD may be obtained by splitting the height of the prediction unit in the ratios 1:3 and 3:1, respectively, and the asymmetrical partition types having sizes of nL×2N and nR×2N may be obtained by splitting the width of the prediction unit in the ratios 1:3 and 3:1, respectively.

The size of the transform unit may be set to be two types in the internal mode and two types in the external mode. For example, if the split information of the transform unit is 0, the size of the transform unit may be 2N×2N, which is the size of the current coding unit. If the split information of the transform unit is 1, the transform units may be obtained by splitting the current coding unit. Also, if a partition type of the current coding unit having a size of 2N×2N is a symmetrical partition type, the size of the transform unit may be N×N, and if the partition type of the current coding unit is an asymmetrical partition type, the size of the transform unit may be N/2×N/2.
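For illustration only, the following sketch expresses the transform-unit sizing rule just described (and summarised in exemplary Table 1); the function and type names are hypothetical.

```python
# Minimal sketch: transform unit size from the TU split information and the
# symmetry of the partition type (2Nx2N, NxN or N/2xN/2).

SYMMETRIC = {"2Nx2N", "2NxN", "Nx2N", "NxN"}
ASYMMETRIC = {"2NxnU", "2NxnD", "nLx2N", "nRx2N"}

def transform_unit_size(cu_size: int, partition_type: str, tu_split_info: int) -> int:
    if tu_split_info == 0:
        return cu_size              # 2Nx2N: same size as the current coding unit
    if partition_type in SYMMETRIC:
        return cu_size // 2         # NxN
    if partition_type in ASYMMETRIC:
        return cu_size // 4         # N/2 x N/2
    raise ValueError("unknown partition type")

print(transform_unit_size(32, "2NxN", 1))    # 16
print(transform_unit_size(32, "nLx2N", 1))   # 8
```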

The encoding information about coding units having a tree structure may be assigned to at least one of the coding unit corresponding to the coded depth, the prediction unit, and the minimum unit. The coding unit corresponding to the coded depth may include at least one of a prediction unit and a minimum unit containing the same encoding information.

Consequently, whether adjacent data units are included in the same coding unit corresponding to the coded depth is determined by comparing the encoding information of the adjacent data units. Also, the corresponding coding unit of the coded depth is determined by using the encoding information of a data unit, and thus the distribution of coded depths in a maximum coding unit may be determined.

Therefore, if a current coding unit is predicted based on encoding information of adjacent data units, encoding information of data units in deeper coding units adjacent to the current coding unit may be directly referred to and used. However, it is understood that another exemplary embodiment is not limited thereto. For example, according to another exemplary embodiment, if a current coding unit is predicted based on encoding information of adjacent data units, data units adjacent to the current coding unit are searched for by using the encoding information of the data units, and the searched adjacent coding units may be referred to for predicting the current coding unit.

Fig.13 is a diagram for describing the relationship between a coding unit, a prediction unit or partition, and a transform unit, according to the encoding mode information of exemplary Table 1, according to an exemplary embodiment.

As shown in Fig.13, a maximum coding unit 1300 includes coding units 1302, 1304, 1306, 1312, 1314, 1316 and 1318 of coded depths. In this case, since the coding unit 1318 is a coding unit of a coded depth, the split information may be set to 0. The information about the partition type of the coding unit 1318 having a size of 2N×2N may be set to be one of a partition type 1322 having a size of 2N×2N, a partition type 1324 having a size of 2N×N, a partition type 1326 having a size of N×2N, a partition type 1328 having a size of N×N, a partition type 1332 having a size of 2N×nU, a partition type 1334 having a size of 2N×nD, a partition type 1336 having a size of nL×2N, and a partition type 1338 having a size of nR×2N.

When the partition type is set to be symmetrical, i.e. the partition type 1322, 1324, 1326, or 1328, a transform unit 1342 having a size of 2N×2N is set if the split information (TU (transform unit) size flag) of the transform unit is 0, and a transform unit 1344 having a size of N×N is set if the TU size flag is 1.

When the partition type is set to be asymmetrical, i.e. the partition type 1332, 1334, 1336, or 1338, a transform unit 1352 having a size of 2N×2N is set if the TU size flag is 0, and a transform unit 1354 having a size of N/2×N/2 is set if the TU size flag is 1.

As shown in Fig.13, the TU size flag is a flag having a value of either 0 or 1, although it is understood that the TU size flag is not limited to 1 bit, and the transform unit may be hierarchically split to have a tree structure while the TU size flag increases from 0.

In this case, the size of the transform unit that has actually been used may be expressed by using the TU size flag of the transform unit according to an exemplary embodiment, together with the maximum size and the minimum size of the transform unit. According to an exemplary embodiment, the video encoding device 100 is capable of encoding maximum transform unit size information, minimum transform unit size information, and a maximum TU size flag. The result of encoding the maximum transform unit size information, the minimum transform unit size information, and the maximum TU size flag may be inserted into an SPS (sequence parameter set). According to an exemplary embodiment, the video decoding device 200 may decode video by using the maximum transform unit size information, the minimum transform unit size information, and the maximum TU size flag.

For example, if the size of a current coding unit is 64×64 and the maximum transform unit size is 32×32, the size of the transform unit may be 32×32 when the TU size flag is 0, may be 16×16 when the TU size flag is 1, and may be 8×8 when the TU size flag is 2.

As another example, if the size of the current coding unit is 32×32 and the minimum transform unit size is 32×32, the size of the transform unit may be 32×32 when the TU size flag is 0. In this case, the TU size flag cannot be set to a value other than 0, since the size of the transform unit cannot be smaller than 32×32.

As another example, if the size of the current coding unit is 64×64 and the maximum TU size flag is 1, the TU size flag may be 0 or 1. In this case, the TU size flag cannot be set to a value other than 0 or 1.

Thus, if it is defined that the maximum TU size flag is MaxTransformSizeIndex, the minimum transform unit size is MinTransformSize, and the transform unit size is RootTuSize when the TU size flag is 0, then the current minimum transform unit size CurrMinTuSize that can be determined in the current coding unit may be determined by equation (1):

CurrMinTuSize = max(MinTransformSize, RootTuSize/(2^MaxTransformSizeIndex))     (1)

Compared to the current minimum transform unit size CurrMinTuSize that can be determined in the current coding unit, the transform unit size RootTuSize when the TU size flag is 0 may denote the maximum transform unit size that can be selected in the system. In equation (1), RootTuSize/(2^MaxTransformSizeIndex) denotes the transform unit size obtained when the transform unit size RootTuSize, at a TU size flag of 0, is split the number of times corresponding to the maximum TU size flag, and MinTransformSize denotes the minimum transform size. Thus, the larger value from among RootTuSize/(2^MaxTransformSizeIndex) and MinTransformSize may be the current minimum transform unit size CurrMinTuSize that can be determined in the current coding unit.
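For illustration only, the following sketch evaluates equation (1) together with the halving behaviour of the TU size flag from the preceding examples; it assumes power-of-two sizes, and the function names are hypothetical.

```python
# Minimal sketch: RootTuSize is the transform unit size when the TU size flag
# is 0; each increment of the flag halves the side length, and the result is
# never allowed below the limit CurrMinTuSize of equation (1).

def curr_min_tu_size(root_tu_size, min_transform_size, max_transform_size_index):
    # Equation (1): CurrMinTuSize = max(MinTransformSize,
    #                                   RootTuSize / 2^MaxTransformSizeIndex)
    return max(min_transform_size, root_tu_size >> max_transform_size_index)

def tu_size(root_tu_size, tu_size_flag, min_transform_size, max_transform_size_index):
    size = root_tu_size >> tu_size_flag      # halve once per flag increment
    return max(size, curr_min_tu_size(root_tu_size, min_transform_size,
                                       max_transform_size_index))

# cf. the example above with a maximum transform unit size of 32x32
print(tu_size(32, 0, 4, 2))   # 32
print(tu_size(32, 1, 4, 2))   # 16
print(tu_size(32, 2, 4, 2))   # 8
```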

According to an exemplary embodiment, the maximum transform unit size RootTuSize may vary according to the type of prediction mode.

For example, if the current prediction mode is an external mode, RootTuSize may be determined by using equation (2) below, in which MaxTransformSize denotes the maximum transform unit size and PUSize denotes the size of the current prediction unit:

RootTuSize = min(MaxTransformSize, PUSize)     (2)

That is, if the current prediction mode is the external mode, the transform unit size RootTuSize when the TU size flag is 0 may be the smaller value from among the maximum transform unit size and the size of the current prediction unit.

If the prediction mode of the current partition unit is an internal mode, RootTuSize may be determined by using equation (3) below, in which PartitionSize denotes the size of the current partition:

RootTuSize = min(MaxTransformSize, PartitionSize)     (3)

That is, if the current prediction mode is the internal mode, the transform unit size RootTuSize when the TU size flag is 0 may be the smaller value from among the maximum transform unit size and the size of the current partition.
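For illustration only, the following sketch evaluates equations (2) and (3); "external" and "internal" correspond to inter and intra prediction, and the function name is hypothetical.

```python
# Minimal sketch: RootTuSize (the transform unit size at TU size flag 0) is
# capped by the maximum transform size and by the current prediction unit
# (external mode) or partition (internal mode).

def root_tu_size(prediction_mode, max_transform_size, pu_size=None, partition_size=None):
    if prediction_mode == "external":     # inter prediction
        # Equation (2): RootTuSize = min(MaxTransformSize, PUSize)
        return min(max_transform_size, pu_size)
    if prediction_mode == "internal":     # intra prediction
        # Equation (3): RootTuSize = min(MaxTransformSize, PartitionSize)
        return min(max_transform_size, partition_size)
    raise ValueError("unknown prediction mode")

print(root_tu_size("external", 32, pu_size=64))         # 32
print(root_tu_size("internal", 32, partition_size=16))  # 16
```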

However, the current maximum transform unit size RootTuSize, which varies according to the type of prediction mode in a partition unit, is merely exemplary, and another exemplary embodiment is not limited thereto.

Fig.14 is a flowchart illustrating a video encoding method according to an exemplary embodiment. As shown in Fig.14, in operation 1210, a current video frame is split into at least one maximum coding unit. A maximum depth indicating the total number of possible splits may be predetermined.

In operation 1220, a coded depth for outputting a final encoding result according to at least one split region, which is obtained by splitting a region of each maximum coding unit according to depths, is determined by encoding the at least one split region, and coding units according to a tree structure are determined.

The maximum coding unit is spatially split whenever the depth increases, and is thus split into coding units of a lower depth. Each coding unit may be split into coding units of another lower depth by being spatially split independently of adjacent coding units. Encoding is repeatedly performed on each coding unit according to depths.

Also, a transform unit according to the partition types having the least encoding error is determined for each deeper coding unit. To determine a coded depth having a minimum encoding error in each maximum coding unit, encoding errors may be measured and compared in all deeper coding units according to depths.

In operation 1230, encoded image data constituting the final encoding result according to the coded depth is output for each maximum coding unit, along with encoding information about the coded depth and the encoding mode. The information about the encoding mode may include at least one of information about the coded depth or split information, information about the partition type of the prediction unit, the prediction mode, and the size of the transform unit. The encoded information about the encoding mode may be transmitted to a decoder together with the encoded image data.

Fig.15 is a flowchart illustrating a video decoding method according to an exemplary embodiment. As shown in Fig.15, in operation 1310, a bitstream of encoded video is received and parsed.

In operation 1320, encoded image data of a current video frame assigned to a maximum coding unit, and information about the coded depth and the encoding mode according to maximum coding units, are extracted from the parsed bitstream. The coded depth of each maximum coding unit is a depth having the least encoding error in each maximum coding unit. In encoding each maximum coding unit, the image data is encoded based on at least one data unit obtained by hierarchically splitting each maximum coding unit according to depths.

According to the information about the coded depth and the encoding mode, the maximum coding unit may be split into coding units having a tree structure. Each of the coding units having the tree structure is determined as a coding unit corresponding to a coded depth, and is optimally encoded so as to output the least encoding error. Consequently, the encoding and decoding efficiency of an image may be improved by decoding each piece of encoded image data in the coding units after determining at least one coded depth according to the coding units.

In operation 1330, the image data of each maximum coding unit is decoded based on the information about the coded depth and the encoding mode according to the maximum coding units. The decoded image data may be reproduced by a reproducing device, stored on a storage medium, or transmitted through a network.

Video encoding and video decoding performed in an operating mode of a coding tool based on the size of a coding unit, according to exemplary embodiments, will now be described with reference to Figs.16-23.

Fig.16 is a block diagram of a video encoding device 1400 based on a coding tool considering the size of a coding unit, according to an exemplary embodiment. As shown in Fig.16, the device 1400 includes a maximum coding unit splitter 1410, a coding unit determiner 1420, and an output unit 1430.

The maximum coding unit splitter 1410 splits a current video frame into at least one maximum coding unit.

The coding unit determiner 1420 encodes the at least one maximum coding unit in coding units corresponding to depths. In this case, the coding unit determiner 1420 may encode a plurality of split regions of the at least one maximum coding unit in the operating modes of the corresponding coding tools, according to the depths of the coding units, respectively, based on the relationship between the depth of a coding unit, a coding tool, and an operating mode.

The coding unit determiner 1420 encodes coding units corresponding to all depths, compares the encoding results with one another, and determines the depth of the coding unit having the highest encoding efficiency as a coded depth. Since, in the split regions of the at least one maximum coding unit, the depth having the highest encoding efficiency may differ according to location, the coded depth of each of the split regions of the at least one maximum coding unit may be determined independently of those of the other regions. Thus, more than one coded depth may be determined within a maximum coding unit.

Examples of the coding tool used for encoding may include quantization, transform, internal prediction, external prediction, motion compensation, entropy encoding and loop filtering, which are encoding techniques. According to an exemplary embodiment, the video encoding device 1400 may perform each of a plurality of coding tools according to at least one operating mode. In this case, the term "operating mode" indicates the manner in which a coding tool is performed.

For example, if the coding tool is external prediction, the operating modes of the coding tool may be classified into a first operating mode in which a median of the motion vectors of adjacent prediction units is selected, a second operating mode in which the motion vector of a prediction unit at a specific location from among the adjacent prediction units is selected, and a third operating mode in which the motion vector of the prediction unit whose template is most similar to the template of the current prediction unit from among the adjacent prediction units is selected.

According to an exemplary embodiment, the video encoding device 1400 may variably set the operating mode of a coding tool according to the size of a coding unit. In the present exemplary embodiment, the video encoding device 1400 may variably set the operating mode of at least one coding tool according to the size of the coding unit. Since the depth of a coding unit corresponds to the size of the coding unit, the operating mode of the at least one coding tool may be determined based on the depth of the coding unit corresponding to the size of the coding unit. Thus, the relationship between the depth of a coding unit, a coding tool, and an operating mode may be set. Similarly, if a coding tool may be performed on a prediction unit or partition of a coding unit, the operating mode of the coding tool may be determined based on the size of the prediction unit or partition.

The video encoding device 1400 may establish the relationship between the depth of a coding unit, a coding tool, and an operating mode before performing encoding. For example, according to another exemplary embodiment, the video encoding device 1400 may establish the relationship between the depth of a coding unit, a coding tool, and an operating mode by encoding the coding units of the at least one maximum coding unit corresponding to the depths in all of the operating modes of a predetermined coding tool, and detecting the operating mode having the highest encoding efficiency from among the operating modes.
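For illustration only, the following sketch shows one way such a relationship could be derived by exhaustive trial, as described above; encode_with is a hypothetical hook returning an encoding cost, and the actual device may measure efficiency differently.

```python
# Minimal sketch: for every depth, try all predefined operating modes of a
# coding tool and keep the cheapest one, producing a depth -> operating-mode
# relationship.

def derive_mode_table(coding_units_by_depth, tool, modes, encode_with):
    table = {}
    for depth, cus in coding_units_by_depth.items():
        costs = {m: sum(encode_with(cu, tool, m) for cu in cus) for m in modes}
        table[depth] = min(costs, key=costs.get)
    return table   # e.g. {0: "third", 1: "second", 2: "second", 3: "first", 4: "first"}
```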

The video encoding device 1400 may assign an operating mode that causes overhead bits to coding units of depths corresponding to sizes greater than or equal to a predetermined size, and may assign an operating mode that does not cause overhead bits to the other coding units that are smaller than the predetermined size.

The video encoding device 1400 may encode and transmit the information regarding the relationship between the depth of a coding unit, a coding tool, and an operating mode in slice units, frame units, video frame units, or GOP units of the image. According to another exemplary embodiment, the video encoding device 1400 may insert the information regarding the encoding and the information regarding the relationship between the depth of a coding unit, a coding tool, and an operating mode into an SPS.

If the coding unit determiner 1420 performs internal prediction, which is a type of coding tool, the operating modes of internal prediction may be classified according to the number of prediction directions, i.e. the directions in which neighbouring information may be referred to. Thus, the operating modes of internal prediction performed by the video encoding device 1400 may include internal prediction modes representing numbers of prediction directions that vary according to the size of the coding unit.

Also, if the coding unit determiner 1420 performs internal prediction, the operating modes of internal prediction may be classified according to whether smoothing is performed, based on the structure of the image. Thus, the operating modes of internal prediction performed by the video encoding device 1400 may represent whether internal prediction is performed according to the size of the coding unit by distinguishing between an internal prediction mode for smoothing the region to be encoded and an internal prediction mode for preserving boundary lines.

If the coding unit determiner 1420 performs external prediction, which is another type of coding tool, the coding unit determiner 1420 may selectively perform at least one method of determining a motion vector. Thus, the operating modes of external prediction performed by the video encoding device 1400 may include external prediction modes representing methods of determining a motion vector that are selectively performed according to the size of the coding unit.

If the coding unit determiner 1420 performs transform, which is another type of coding tool, the coding unit determiner 1420 may selectively perform rotational transform according to the structure of the image. The coding unit determiner 1420 may store a rotational transform matrix, to be multiplied by a data matrix of a predetermined size that is to be transformed, in order to perform the rotational transform effectively. Thus, the operating modes of transform performed by the video encoding device 1400 may include a transform mode representing the index of the rotational transform matrix corresponding to the size of the coding unit.

If the coding unit determiner 1420 performs quantization, which is another type of coding tool, a quantization parameter delta representing the difference between a current quantization parameter and a predetermined representative quantization parameter may be used. Thus, the operating modes of quantization performed by the video encoding device 1400 may include a quantization mode indicating whether the quantization parameter delta is used, which varies according to the size of the coding unit.

If the coding unit determiner 1420 performs interpolation, which is another type of coding tool, an interpolation filter may be used. The coding unit determiner 1420 may selectively set the coefficients or the number of taps of the interpolation filter based on the size of the coding unit, the prediction unit or the partition, and the depth of the coding unit. Thus, the operating modes of interpolation filtering performed by the video encoding device 1400 may include an interpolation mode indicating the coefficients or the number of taps of the interpolation filter, which vary according to the size or depth of the coding unit and the size of the prediction unit or partition.

The output unit 1430 may output a bitstream in which the encoded video data (i.e., the final encoding result received from the coding unit determiner 1420), the information regarding the coded depth, and the encoding mode are included for each of the at least one maximum coding unit. The encoded video data may be a set of a plurality of pieces of video data encoded in the coding units corresponding to the coded depths of the split regions of the at least one maximum coding unit, respectively.

Also, the above-described operating modes of the coding tools for the coding units corresponding to depths may be encoded in the form of information regarding the relationship between the depth of a coding unit, a coding tool, and an operating mode, and may then be inserted into the bitstream.

According to an exemplary embodiment, the video encoding device 1400 may perform coding tools such as quantization, transform, internal prediction, external prediction, motion compensation, entropy encoding and loop filtering. These coding tools may be performed in different operating modes in the coding units corresponding to depths, respectively. The above-described operating modes are only illustrative examples given for convenience of explanation, and the relationship between the depth of a coding unit (or the size of a coding unit), a coding tool, and an operating mode in the video encoding device 1400 is not limited to the above-described exemplary embodiments.

Fig.17 is a block diagram of a video decoding device 1500 based on a coding tool considering the size of a coding unit, according to an exemplary embodiment. As shown in Fig.17, the video decoding device 1500 includes a receiver 1510, a selector 1520 and a decoder 1530.

The receiver 1510 receives and parses a bitstream containing encoded video data. The selector 1520 extracts the encoded video data, the information regarding the encoding, and the information regarding the relationship between the depth of a coding unit, a coding tool, and an operating mode from the bitstream received via the receiver 1510.

The encoded video data is obtained by encoding image data in maximum coding units. The image data in each maximum coding unit is hierarchically split into a plurality of split regions according to depths, and each of the split regions is encoded in a coding unit corresponding to a coded depth. The information regarding the encoding includes information regarding the coded depths of the maximum coding units and the encoding modes.

For example, the information regarding the relationship between the depth of a coding unit, a coding tool, and an operating mode may be set in image data units, for example, maximum coding units, frame units, field units, slice units, or GOP units. In another example, the information regarding the encoding and the information regarding the relationship between the depth of a coding unit, a coding tool, and an operating mode may be extracted from an SPS. The image data encoded in the coding units of the image data units may be decoded in an operating mode of a coding tool selected based on the information regarding the relationship between the depth of a coding unit, a coding tool, and an operating mode, which is determined in the predetermined image data units.

The decoder 1530 may decode the encoded video data in the maximum coding units, in the operating modes of the coding tools in the coding units corresponding to at least one coded depth, respectively, based on the information regarding the encoding and the information regarding the relationship between the depth of a coding unit, a coding tool, and the operating modes, extracted by the selector 1520. The operating mode of a coding tool may be set according to the size of a coding unit. Since the size of the coding unit corresponding to the coded depth corresponds to the coded depth, the operating mode of a coding tool for the coding unit corresponding to the coded depth may be determined based on the coded depth. Similarly, if a coding tool for a coding unit is performed based on a prediction unit or partition of the coding unit, the operating mode of the coding tool may be determined based on the size of the prediction unit or partition.

Even though the relationship between the depth of a coding unit, a coding tool, and an operating mode is set in terms of the coding tools used for encoding, the decoder 1530 may perform the decoding tools corresponding to those coding tools. For example, the decoder 1530 may inversely quantize the bitstream in the coding unit corresponding to the coded depth, based on the information regarding the relationship between the depth of a coding unit, quantization, and an operating mode.

If the decoder 1530 performs internal prediction, which is a type of decoding tool, the decoder 1530 may perform internal prediction on the current coding unit corresponding to the coded depth, based on the information regarding the relationship between the depth of a coding unit, internal prediction, and an internal prediction mode. For example, the decoder 1530 may perform internal prediction on the current coding unit corresponding to the coded depth based on the information regarding the relationship between the depth of a coding unit, internal prediction, and an internal prediction mode, and on neighbouring information according to the number of internal prediction directions corresponding to the size of the current coding unit.

Also, the decoder 1530 may determine whether internal prediction is performed on the current coding unit for smoothing or for preserving boundary lines, by distinguishing between an internal prediction mode for smoothing and an internal prediction mode for preserving boundary lines, based on the information regarding the relationship between the depth of a coding unit, internal prediction, and an internal prediction mode.

If the decoder 1530 performs external prediction, which is another type of decoding tool, the decoder 1530 may perform external prediction on the current coding unit corresponding to the coded depth, based on the information regarding the relationship between the depth of a coding unit, external prediction, and an external prediction mode. For example, the decoder 1530 may perform the external prediction mode on the current coding unit of the coded depth by using a method of determining a motion vector, based on the information regarding the relationship between the depth of a coding unit, external prediction, and an external prediction mode.

If the decoder 1530 performs inverse transform, which is another type of decoding tool, the decoder 1530 may selectively perform inverse rotational transform based on the information regarding the relationship between the depth of a coding unit, transform, and a transform mode. Thus, the decoder 1530 may perform the inverse rotational transform on the current coding unit corresponding to the coded depth by using the rotational transform matrix of the index corresponding to the coded depth, based on the information regarding the relationship between the depth of a coding unit, the transform mode, and the inverse transform.

If the decoder 1530 performs inverse quantization, which is another type of decoding tool, the decoder 1530 may perform inverse quantization on the current coding unit corresponding to the coded depth by using the quantization parameter delta corresponding to the coded depth, based on the information regarding the relationship between the depth of a coding unit, quantization, and a quantization mode.

If the decoder 1530 performs interpolation or extrapolation, which is another type of decoding tool, a filter for interpolation or extrapolation may be used. The decoder 1530 may perform filtering using the filter for interpolation or extrapolation on the current coding unit corresponding to the coded depth, by using the coefficients or the number of taps of the filter for interpolation or extrapolation, based on the operating mode of filtering for interpolation or extrapolation, which indicates the coefficients or the number of taps of the filter. The operating mode of filtering for interpolation or extrapolation may correspond to at least one of the size of the current coding unit and the size of the prediction unit or partition of the current coding unit.

The video decoding device 1500 may restore the original image from the image data decoded by the decoder 1530. The restored image may be reproduced by a display device (not shown) or may be stored on a storage medium (not shown).

In the video encoding device 1400 and the video decoding device 1500 according to exemplary embodiments, the size of a coding unit may vary according to the characteristics of the image and the encoding efficiency of the image. The size of a data unit, such as a coding unit, a prediction unit or a transform unit, may be increased in order to encode a large amount of image data, such as an image with high resolution or high quality. The size of a macroblock having a hierarchical structure according to the H.264 standards may be 4×4, 8×8 or 16×16, but the video encoding device 1400 and the video decoding device 1500 according to one or more exemplary embodiments may extend the size of a data unit to 4×4, 8×8, 16×16, 32×32, 64×64, 128×128 or more.

The larger the data unit, the more image data is included in the data unit and the more diverse the characteristics of the image data within the data units. Thus, it would be inefficient to encode all data units having different sizes by using only one coding tool.

Consequently, the video encoding device 1400 may determine the depth of a coding unit and the operating mode of a coding tool according to the characteristics of the image data in order to improve the encoding efficiency, and may encode the information regarding the relationship between the depth of a coding unit, a coding tool, and an operating mode. In addition, the video decoding device 1500 may restore the original image by decoding the received bitstream based on the information regarding the relationship between the depth of a coding unit, a coding tool, and an operating mode.

Consequently, the video encoding device 1400 and the video decoding device 1500 may effectively encode and decode a large amount of image data, such as an image with high resolution or high quality, respectively.

Fig.18 is a diagram for describing the relationship between the size of a coding unit, a coding tool, and an operating mode, according to an exemplary embodiment.

As shown in Fig.18, according to an exemplary embodiment, in the video encoding device 1400 or the video decoding device 1500, a 4×4 coding unit 1610, an 8×8 coding unit 1620, a 16×16 coding unit 1630, a 32×32 coding unit 1640 and a 64×64 coding unit 1650 may be used as coding units. If the maximum coding unit is the 64×64 coding unit 1650, the depth of the 64×64 coding unit 1650 is 0, the depth of the 32×32 coding unit 1640 is 1, the depth of the 16×16 coding unit 1630 is 2, the depth of the 8×8 coding unit 1620 is 3, and the depth of the 4×4 coding unit 1610 is 4.

The video encoding device 1400 may adaptively determine the operating mode of a coding tool according to the depth of a coding unit. For example, if a first coding tool TOOL1 may be performed in a first operating mode TOOL1-1 1660, a second operating mode TOOL1-2 1662 and a third operating mode TOOL1-3 1664, the video encoding device 1400 may perform the first coding tool TOOL1 in the first operating mode TOOL1-1 1660 with respect to the 4×4 coding unit 1610 and the 8×8 coding unit 1620, may perform the first coding tool TOOL1 in the second operating mode TOOL1-2 1662 with respect to the 16×16 coding unit 1630 and the 32×32 coding unit 1640, and may perform the first coding tool TOOL1 in the third operating mode TOOL1-3 1664 with respect to the 64×64 coding unit 1650.

The relationship between the size of a coding unit, a coding tool, and an operating mode may be determined by encoding the current coding unit in all of the operating modes of the corresponding coding tool and detecting the operating mode that produces the encoding result having the highest encoding efficiency from among the operating modes, while the current coding unit is being encoded. In another exemplary embodiment, the relationship between the size of a coding unit, a coding tool, and an operating mode may be determined in advance, for example, on the basis of at least one of the performance characteristics of the encoding system, user requirements, or environmental conditions.

Since the size of the maximum coding unit is fixed with respect to a predetermined data unit, the size of a coding unit corresponds to the depth of the coding unit. Thus, the relationship between a coding tool adapted to the size of a coding unit and an operating mode may be encoded by using information regarding the relationship between the depth of a coding unit, a coding tool, and an operating mode.

The information regarding the relationship between the depth of a coding unit, a coding tool, and an operating mode may indicate the operating modes of the coding tools in units of the depths of the coding units, respectively.

Table 2
Coding unit depth                       | 4     | 3      | 2      | 1      | 0
Operating mode of the first coding tool | first | first  | second | second | third
Operating mode of the second coding tool| first | second | second | third  | third

According to exemplary Table 2, the operating modes of the first and second coding tools may be varied when encoding coding units having depths of 4, 3, 2, 1 and 0, respectively. The information regarding the relationship between the depth of a coding unit, a coding tool, and an operating mode may be encoded and transmitted in sequence units, GOP units, video frame units, or slice units of the image.
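For illustration only, the following sketch represents the relationship of exemplary Table 2 as a per-tool lookup from coding-unit depth to operating mode; the dictionary form is merely an illustrative representation of the transmitted information.

```python
# Minimal sketch: the relationship of exemplary Table 2, kept as a per-tool
# lookup from coding-unit depth to operating mode.

MODE_TABLE = {
    "tool1": {4: "first", 3: "first", 2: "second", 1: "second", 0: "third"},
    "tool2": {4: "first", 3: "second", 2: "second", 1: "third", 0: "third"},
}

def operating_mode(tool: str, cu_depth: int) -> str:
    return MODE_TABLE[tool][cu_depth]

print(operating_mode("tool1", 2))   # "second"
```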

Various exemplary embodiments of the relationship between the depth of a coding unit, a coding tool, and an operating mode will now be described.

Fig.19 is a diagram for describing the relationship between the depth of a coding unit, a coding tool (for example, external prediction) and an operating mode, according to an exemplary embodiment.

If the video encoding device 1400 according to an exemplary embodiment performs external prediction, at least one method of determining a motion vector may be used. Thus, the operating modes of external prediction, which is a type of coding tool, may be classified according to the method of determining a motion vector.

For example, as shown in Fig.19, in a first operating mode of external prediction, the median of the motion vectors mvpA, mvpB and mvpC of adjacent coding units A, B and C 1710, 1720 and 1730 is selected as the motion vector predictor MVP of the current coding unit 1700, as indicated in equation (4) below:

MVP = median(mvpA, mvpB, mvpC)     (4)

If the first operating mode is used, the amount of computation is small and no overhead bits are required. Thus, even if prediction is performed on coding units of small size in the first operating mode, the amount of computation and the number of bits to be transmitted are small.

For example, in the second operating mode of external prediction, an index of the motion vector of the coding unit that is selected as the motion vector predictor of the current coding unit 1700, from among the motion vectors of the adjacent coding units A, B and C 1710, 1720 and 1730, is directly encoded.

For example, if the video encoding device 1400 performs external prediction on the current coding unit 1700, the motion vector mvpA of the adjacent coding unit A 1710 may be selected as the optimal motion vector predictor of the current coding unit 1700, and the index of the motion vector mvpA may be encoded. Thus, although an overhead is caused on the encoding side by the index representing the motion vector predictor, the amount of computation for performing external prediction in the second operating mode is small on the decoding side.

For example, in the third operating mode of external prediction, pixels 1705 at predetermined locations of the current coding unit 1700 are compared with pixels 1715, 1725 and 1735 at predetermined locations of the adjacent coding units A, B and C 1710, 1720 and 1730; the pixels having the least distortion are detected from among the pixels 1715, 1725 and 1735; and the motion vector of the adjacent coding unit that includes the detected pixels is selected as the motion vector predictor of the current coding unit 1700.
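For illustration only, the following sketch contrasts the three operating modes of external prediction described above; motion vectors are modelled as (x, y) tuples and the template cost as a plain sum of absolute differences, which are assumptions made solely for illustration.

```python
# Minimal sketch of the three operating modes of external (inter) prediction.

def median_mv(mvp_a, mvp_b, mvp_c):
    # First operating mode, equation (4): component-wise median of the
    # motion vectors of the adjacent coding units.
    med = lambda a, b, c: sorted((a, b, c))[1]
    return (med(mvp_a[0], mvp_b[0], mvp_c[0]),
            med(mvp_a[1], mvp_b[1], mvp_c[1]))

def indexed_mv(neighbour_mvs, index):
    # Second operating mode: the encoder transmits the index of the chosen
    # neighbouring motion vector, which the decoder simply looks up.
    return neighbour_mvs[index]

def template_matched_mv(current_template, neighbour_templates, neighbour_mvs):
    # Third operating mode: pick the neighbour whose pixels at predetermined
    # locations are most similar to those of the current coding unit; no
    # index needs to be transmitted.
    def sad(t1, t2):
        return sum(abs(a - b) for a, b in zip(t1, t2))
    best = min(range(len(neighbour_mvs)),
               key=lambda i: sad(current_template, neighbour_templates[i]))
    return neighbour_mvs[best]
```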

Thus, although the amount of computation for detecting the pixels having the least distortion may be large on the decoding side, no overhead bits need to be transmitted on the encoding side. In particular, if external prediction is performed in the third operating mode on an image sequence that includes a specific image structure, the prediction result is more accurate than when the median of the motion vectors of the adjacent coding units is used.

The video encoding device 1400 may encode the information regarding the relationship between the first operating mode, the second operating mode and the third operating mode of external prediction, determined according to the depth of a coding unit. The video decoding device 1500 according to an exemplary embodiment may decode the image data by extracting, from the received bitstream, the information regarding the first operating mode, the second operating mode and the third operating mode of external prediction determined according to the depth of a coding unit, and by performing a decoding tool related to motion compensation and external prediction on the current coding unit of the coded depth, based on the extracted information.

The video encoding device 1400 checks whether overhead bits are to be transmitted in order to determine the operating mode of external prediction according to the size or depth of a coding unit. If a small coding unit is encoded, additional overhead may considerably lower the encoding efficiency, whereas, if a large coding unit is encoded, the encoding efficiency is not significantly affected by the additional overhead.

Therefore, it may be effective to perform external prediction in the third operating mode, which does not cause additional overhead, when a small coding unit is encoded. In this regard, an example of the relationship between the size of a coding unit and the operating mode of external prediction is shown in exemplary Table 3 below:

Table 3
Coding unit size                      | 4     | 8     | 16    | 32     | 64
Operating mode of external prediction | third | third | first | second | second

Fig.20 is a diagram for describing the relationship between the depth of a coding unit, a coding tool (for example, internal prediction) and an operating mode, according to an exemplary embodiment.

The video encoding device 1400 according to an exemplary embodiment may perform directional extrapolation as internal prediction by using restored pixels 1810 adjacent to a current coding unit 1800. For example, the direction of internal prediction may be defined as tan^(-1)(dy/dx), and internal prediction may be performed in various directions according to the settings of the parameters (dx, dy).

An adjacent pixel 1830 lying on a line that extends from a current pixel 1820 of the current coding unit 1800 to be predicted and that is inclined at an angle of tan^(-1)(dy/dx) determined by the values dx and dy from the current pixel 1820, may be used as a predictor of the current pixel 1820. The adjacent pixel 1830 may belong to a coding unit that is located above or to the left of the current coding unit 1800 and that has been previously encoded and restored.
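For illustration only, the following simplified sketch performs the directional extrapolation described above using only the reconstructed row above the block and nearest-neighbour rounding; actual implementations also use the left reference column and sub-pixel interpolation, and the names are hypothetical.

```python
# Minimal sketch: each pixel of the current block copies the reconstructed
# top-row sample reached by a line of direction (dx, dy) through it.

def predict_block(top, size, dx, dy):
    """Directional prediction of a size x size block from the reconstructed
    row 'top' directly above it (top[0] lies above column 0).  Only directions
    that point up into that row are handled here (dy > 0)."""
    if dy <= 0:
        raise ValueError("this sketch only handles directions referencing the top row")
    pred = [[0] * size for _ in range(size)]
    for j in range(size):
        for i in range(size):
            offset = round((j + 1) * dx / dy)          # horizontal shift after (j+1) rows
            k = max(0, min(len(top) - 1, i + offset))  # clamp to the available samples
            pred[j][i] = top[k]
    return pred

# 45-degree prediction of an 8x8 block from a hypothetical reconstructed row.
top_row = list(range(16))
predicted = predict_block(top_row, 8, dx=1, dy=1)
```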

When intra prediction is performed, the encoding device 1400 may adjust the number of intra prediction modes according to the size of the coding unit. Thus, the operating modes of intra prediction, which is a type of coding tool, may be classified according to the number of intra prediction modes.

The number of intra prediction modes may vary according to the size and hierarchical tree structure of the coding unit. The overhead bits used to represent the intra prediction mode may reduce the coding efficiency of a small coding unit, but have no significant effect on the coding efficiency of a large coding unit.

Thus, the encoding device 1400 may encode information concerning the relationship between the depth of the coding unit and the number of intra prediction modes. Also, the video decoding device 1500 according to an exemplary embodiment may decode the image data by extracting, from the received bitstream, the information concerning the relationship between the depth of the coding unit and the number of intra prediction modes, and by performing decoding, related to the intra prediction performed on the current coding unit of a coded depth, based on the extracted information.

The encoding device 1400 considers the image structure of the current coding unit in order to determine the operating mode of intra prediction according to the size or depth of the coding unit. For image regions containing detailed components, intra prediction may be performed by using linear extrapolation, and thus a large number of intra prediction modes may be used. However, for a flat image region the number of intra prediction modes may be relatively small. For example, a planar mode or a bilinear mode, which uses interpolation of reconstructed adjacent pixels, may be used for predicting flat image regions.

Since a large coding unit is likely to be determined in a flat image region, the number of intra prediction modes may be relatively small when intra prediction is performed on a large coding unit. Conversely, since a small coding unit is likely to be determined in a region containing detailed image components, the number of intra prediction modes may be relatively large when intra prediction is performed on a small coding unit. Thus, the relationship between the size of the coding unit and the intra prediction mode may be regarded as the relationship between the size of the coding unit and the number of intra prediction modes. An example of the relationship between the size of the coding unit and the number of intra prediction modes is shown in exemplary Table 4 below:

Table 4
Coding unit size                    4    8    16    32    64
Number of intra prediction modes    9    9    33    17    5

A large coding unit may, however, include image patterns oriented in various directions, and intra prediction may therefore be performed on a large coding unit by using linear extrapolation. In this case, the relationship between the size of the coding unit and the intra prediction mode may be set as shown in exemplary Table 5 below:

Table 5
Coding unit size                    4    8    16    32    64
Number of intra prediction modes    9    9    33    33    17
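
As an illustration only, the two exemplary mappings above can be expressed as lookup tables keyed by coding unit size. Which mapping is in force would itself be part of the encoded relationship information; the names below are assumptions used for the sketch.

```python
# Hedged illustration of exemplary Tables 4 and 5: the number of available
# intra prediction modes follows the coding unit size.
INTRA_MODE_COUNT_TABLE_4 = {4: 9, 8: 9, 16: 33, 32: 17, 64: 5}
INTRA_MODE_COUNT_TABLE_5 = {4: 9, 8: 9, 16: 33, 32: 33, 64: 17}

def intra_mode_count(cu_size, use_table_5=False):
    table = INTRA_MODE_COUNT_TABLE_5 if use_table_5 else INTRA_MODE_COUNT_TABLE_4
    return table[cu_size]
```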

According to an exemplary embodiment, prediction encoding is performed using various intra prediction modes that are set according to the size of the coding unit, thereby compressing the image more effectively according to the characteristics of the image.

Predicted coding units, output from the encoding device 1400 by performing the various intra prediction modes according to the depths of the coding units, have a predetermined directionality according to the type of intra prediction mode. Because of this directionality, the prediction efficiency may be high when the pixels of the current coding unit to be encoded have a predetermined direction, and may be low when they do not. Thus, a new predicted coding unit may be produced by post-processing the predicted coding unit obtained by intra prediction, namely by modifying the pixel values in the predicted coding unit through the use of those pixels and at least one adjacent pixel, thereby improving the efficiency of image prediction.

For example, in the case of a flat image region it may be efficient to perform post-processing that smooths the predicted coding unit obtained by intra prediction. Also, in the case of a region containing detailed image components, it may be efficient to perform post-processing that preserves the detailed components of the predicted coding unit obtained by intra prediction.
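
A hedged sketch of the smoothing variant of this post-processing is shown below: each pixel of the intra-predicted block is replaced by a weighted combination of itself and already-processed neighboring pixels. The specific weights, scan order and function name are illustrative assumptions, not the normative filter.

```python
# Hedged sketch of post-processing the predicted coding unit: blend each
# predicted pixel with its upper and left (already processed) neighbors.
def smooth_predicted_block(pred):
    """pred: 2-D list of predicted pixel values; returns a smoothed copy."""
    h, w = len(pred), len(pred[0])
    out = [row[:] for row in pred]
    for y in range(h):
        for x in range(w):
            if y == 0 and x == 0:
                continue                      # keep the first pixel unchanged
            above = out[y - 1][x] if y > 0 else out[y][x - 1]
            left = out[y][x - 1] if x > 0 else out[y - 1][x]
            # Weighted average with rounding, using integer arithmetic.
            out[y][x] = (2 * pred[y][x] + above + left + 2) // 4
    return out
```

For a region with detailed components the post-processing would instead be skipped (mode 0 in exemplary Table 6 below), leaving the predicted block unchanged.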

Thus, the encoding device 1400 may encode information concerning the relationship between the depth of the coding unit and an operating mode that indicates whether to post-process the predicted coding unit obtained by intra prediction. Also, the video decoding device 1500 may decode the image data by extracting, from the received bitstream, the information concerning the relationship between the depth of the coding unit and the operating mode that indicates whether to post-process the predicted coding unit obtained by intra prediction, and by performing decoding, related to the intra prediction performed on the current coding unit of a coded depth, based on the extracted information.

In the encoding device 1400, an intra prediction mode in which post-processing for smoothing is performed and an intra prediction mode in which post-processing for smoothing is not performed may be applied to a flat image region and to a region containing detailed image components, respectively, as the operating mode indicating whether to post-process the predicted coding unit obtained by intra prediction.

A large coding unit is likely to be determined in a flat image region, and a small coding unit is likely to be determined in a region containing detailed image components. Thus, the encoding device 1400 may determine that the intra prediction mode with post-processing for smoothing is performed on a large coding unit, and that the intra prediction mode without post-processing for smoothing is performed on a small coding unit.

Thus, the relationship between the depth of the coding unit and the operating mode that indicates whether to post-process the predicted coding unit obtained by intra prediction may be regarded as the relationship between the size of the coding unit and whether post-processing is performed. In this regard, an example of the relationship between the size of the coding unit and the operating mode of intra prediction is shown in exemplary Table 6 below:

Table 6
Coding unit size                          4    8    16    32    64
Intra prediction post-processing mode     0    0    1     1     1

When the encoding device 1400 performs transformation, which is a type of coding tool, a rotational transform may be performed selectively according to the image structure. For efficient computation of the rotational transform, rotational transform matrix data may be stored in memory. When the encoding device 1400 performs the rotational transform, or when the video decoding device 1500 performs the inverse rotational transform, the related data may be called up by using an index of the rotational transform data used for the computation. Such rotational transform data may be set in coding units or transform units, or according to the sequence type.

Thus, the encoding device 1400 may set, as the operating mode of transformation, a transformation mode specified by the index of the rotational transform matrix corresponding to the depth of the coding unit. The encoding device 1400 may encode information concerning the relationship between the size of the coding unit and the transformation mode indicating the index of the rotational transform matrix.

The video decoding device 1500 may decode the image data by extracting, from the received bitstream, the information concerning the relationship between the depth of the coding unit and the transformation mode indicating the index of the rotational transform matrix, and by performing the inverse rotational transform on the current coding unit of a coded depth based on the extracted information.

Thus, the relationship between the depth of the coding unit, the rotational transform and the operating mode may be regarded as the relationship between the size of the coding unit and the index of the rotational transform matrix. In this regard, an example of the relationship between the size of the coding unit and the operating mode of the rotational transform is shown in exemplary Table 7 below:

Table 7
Coding unit size                     4      8      16     32     64
Rotational transform matrix index    4-7    4-7    0-3    0-3    0-3
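
The sketch below illustrates one possible reading of this operating mode: the coding unit size constrains the range of matrix indices that may be signalled, and the matrix fetched by index is applied to a block of transform coefficients. The index ranges mirror exemplary Table 7, but the matrices themselves, the applied formula and the function names are assumptions.

```python
# Hedged sketch: select the allowed rotation-matrix index range by coding
# unit size, then apply a stored matrix R to a coefficient block C as R*C*R^T.
ROT_INDEX_RANGE_BY_CU_SIZE = {4: (4, 7), 8: (4, 7), 16: (0, 3), 32: (0, 3), 64: (0, 3)}

def apply_rotational_transform(coeffs, rot_matrix):
    """coeffs, rot_matrix: square 2-D lists of equal size; plain Python loops."""
    n = len(coeffs)
    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]
    r_t = [[rot_matrix[j][i] for j in range(n)] for i in range(n)]  # transpose
    return matmul(matmul(rot_matrix, coeffs), r_t)
```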

When the encoding device 1400 performs quantization, which is a type of coding tool, a delta quantization parameter, representing the difference between the current quantization parameter and a predetermined representative quantization parameter, may be used. The delta quantization parameter may change according to the size of the coding unit. Thus, in the encoding device 1400 the operating mode of quantization may include a quantization mode indicating whether to use a delta quantization parameter that changes according to the size of the coding unit.

Thus, the encoding device 1400 may set, as the operating mode of quantization, a quantization mode indicating whether to use a delta quantization parameter corresponding to the size of the coding unit. The encoding device 1400 may encode information concerning the relationship between the depth of the coding unit and the quantization mode indicating whether to use the delta quantization parameter.

The video decoding device 1500 may decode the image data by extracting, from the received bitstream, the information concerning the relationship between the depth of the coding unit and the quantization mode indicating whether to use the delta quantization parameter, and by performing inverse quantization on the current coding unit of a coded depth based on the extracted information.

Thus, the relationship between the depth of the coding unit, the quantization and the operating mode may be regarded as the relationship between the size of the coding unit and whether to use the delta quantization parameter. In this regard, an example of the relationship between the size of the coding unit and the operating mode of quantization is shown in exemplary Table 8 below:

Table 8
Coding unit size               4        8        16      32       64
Delta quantization parameter   false    false    true    false    false
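
The following sketch illustrates how an encoder and decoder could honour such a mode: for coding unit sizes where the mode is enabled, only the difference between the current and the representative quantization parameter is transmitted; otherwise the representative value is used as-is. The bitstream callbacks and function names are assumptions.

```python
# Hedged sketch of the delta-QP operating mode from exemplary Table 8.
USE_DELTA_QP_BY_CU_SIZE = {4: False, 8: False, 16: True, 32: False, 64: False}

def encode_qp(cu_size, current_qp, representative_qp, write_se):
    """write_se: assumed callback writing a signed value to the bitstream."""
    if USE_DELTA_QP_BY_CU_SIZE[cu_size]:
        write_se(current_qp - representative_qp)   # signal only the delta
        return current_qp
    return representative_qp                       # no delta transmitted

def decode_qp(cu_size, representative_qp, read_se):
    """read_se: assumed callback reading a signed value from the bitstream."""
    if USE_DELTA_QP_BY_CU_SIZE[cu_size]:
        return representative_qp + read_se()
    return representative_qp
```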

Fig.21 illustrates the syntax of a sequence parameter set 1900 into which the information concerning the relationship between the depth of the coding unit, the coding tool and the operating mode is inserted, according to an exemplary embodiment.

In Fig.21, sequence_parameter_set denotes the syntax of the sequence parameter set 1900 for the current slice. As shown in Fig.21, the information concerning the relationship between the depth of the coding unit, the coding tool and the operating mode is inserted into the syntax of the sequence parameter set 1900 for the current slice.

In addition, in Fig.21, picture_width denotes the width of the input image, picture_height denotes the height of the input image, max_coding_unit_size denotes the size of the maximum coding unit, and max_coding_unit_depth denotes the maximum depth.

According to an exemplary embodiment, the syntax elements use_independent_cu_decode_flag, indicating whether decoding is performed independently in coding units, use_independent_cu_parse_flag, indicating whether parsing is performed independently in coding units, use_mv_accuracy_control_flag, indicating whether motion vector accuracy is to be controlled, use_arbitrary_direction_intra_flag, indicating whether intra prediction is performed in arbitrary directions, use_frequency_domain_prediction_flag, indicating whether prediction encoding/decoding is to be performed in the frequency transform domain, use_rotational_transform_flag, indicating whether to perform the rotational transform, use_tree_significant_map_flag, indicating whether encoding/decoding is to be performed using a significant map tree, use_multi_parameter_intra_prediction_flag, indicating whether intra prediction encoding is to be performed using multiple parameters, use_advanced_motion_vector_prediction_flag, indicating whether advanced motion vector prediction is to be performed, use_adaptive_loop_filter_flag, indicating whether to perform adaptive loop filtering, use_quadtree_adaptive_loop_filter_flag, indicating whether to perform quadtree adaptive loop filtering, use_delta_qp_flag, indicating whether quantization is to be performed using a delta quantization parameter, use_random_noise_generation_flag, indicating whether to perform random noise generation, and use_asymetric_motion_partition_flag, indicating whether motion estimation is performed on asymmetric prediction blocks, may be used as sequence parameters for a slice. The current slice can be encoded or decoded efficiently by setting, through the use of these syntax elements, whether the aforementioned operations are to be used.

In particular, the adaptive loop filter length alf_filter_length, the adaptive loop filter type alf_filter_type, the reference value for quantizing the adaptive loop filter coefficients alf_qbits, and the number of color components of adaptive loop filtering alf_num_color may be set in the sequence parameter set 1900 based on use_adaptive_loop_filter_flag and use_quadtree_adaptive_loop_filter_flag.

The information concerning the relationship between the depth of the coding unit, the coding tool and the operating mode used in the encoding device 1400 and the video decoding device 1500 according to exemplary embodiments may specify the operating mode of inter prediction corresponding to the coding unit depth uiDepth, mvp_mode[uiDepth], and the operating mode significant_map_mode[uiDepth] indicating the type of significance map from among significant map trees. That is, either the relationship between inter prediction (motion vector prediction) and the corresponding operating mode according to the depth of the coding unit, or the relationship between encoding/decoding using the significant map tree and the corresponding operating mode according to the depth of the coding unit, may be set in the sequence parameter set 1900.

The bit depth of the input samples, input_sample_bit_depth, and the bit depth of internal samples, internal_sample_bit_depth, may also be set in the sequence parameter set 1900.
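
A hedged sketch of parsing such a sequence parameter set is given below. The callbacks, the exact field order and the subset of fields shown are assumptions; the point illustrated is that the per-depth operating modes (for example mvp_mode[uiDepth] and significant_map_mode[uiDepth]) are read once per coding unit depth up to the maximum depth, so the decoder can later select a mode simply by indexing with the depth of a coding unit.

```python
# Hedged sketch of reading relationship information from an SPS like Fig.21.
def parse_sequence_parameter_set(read_flag, read_uint):
    """read_flag / read_uint: assumed bitstream-reading callbacks."""
    sps = {}
    sps['picture_width'] = read_uint()
    sps['picture_height'] = read_uint()
    sps['max_coding_unit_size'] = read_uint()
    sps['max_coding_unit_depth'] = read_uint()
    sps['use_advanced_motion_vector_prediction_flag'] = read_flag()
    sps['use_tree_significant_map_flag'] = read_flag()
    sps['use_delta_qp_flag'] = read_flag()
    # One operating mode per coding unit depth.
    sps['mvp_mode'] = []
    sps['significant_map_mode'] = []
    for _ in range(sps['max_coding_unit_depth'] + 1):
        sps['mvp_mode'].append(read_uint())
        sps['significant_map_mode'].append(read_uint())
    return sps
```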

The information concerning the relationship between the depth of the coding unit, the coding tool and the operating mode, encoded by the encoding device 1400 or decoded by the video decoding device 1500 according to an exemplary embodiment, is not limited to the information inserted into the sequence parameter set 1900 shown in Fig.21. For example, the information may be encoded or decoded in units of maximum coding units, slices, frames, video frames or GOPs of the image.

Fig.22 is a flowchart illustrating a video encoding method based on a coding tool that takes into account the size of the coding unit, according to an exemplary embodiment. As shown in Fig.22, in operation 2010 the current video frame is divided into at least one maximum coding unit.

In operation 2020, a coded depth is determined by encoding the at least one maximum coding unit in coding units corresponding to depths, in the respective operating modes of the coding tools, based on the relationship between the depth of at least one coding unit of the at least one maximum coding unit, the coding tool and the operating mode. Thus, the at least one maximum coding unit includes coding units corresponding to at least one coded depth.

The relationship between the depth of at least one coding unit of the at least one maximum coding unit, the coding tool and the operating mode may be set in units of slices, frames, GOPs or sequences of image frames. The relationship may be determined by comparing with each other the results of encoding the coding units corresponding to depths in at least one operating mode of a matching coding tool, and selecting the operating mode with the highest coding efficiency from among the at least one operating mode, during encoding of the at least one maximum coding unit. Alternatively, the relationship may be determined in such a way that coding units whose size is smaller than or equal to a predetermined size correspond to an operating mode that causes no overhead bits to be inserted into the encoded data stream, while coding units whose size is greater than the predetermined size correspond to an operating mode that does cause overhead bits.

In operation 2030, a bitstream is output comprising the encoded video data of the at least one coded depth, information regarding encoding, and the information concerning the relationship between the depth of at least one coding unit of the at least one maximum coding unit, the coding tool and the operating mode, for the at least one maximum coding unit. The information regarding encoding may include the at least one coded depth and information regarding the encoding mode for the at least one maximum coding unit. The information concerning the relationship between the depth of at least one coding unit, the coding tool and the operating mode may be inserted in units of slices, frames, GOPs or sequences of image frames.
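
The sketch below illustrates the recursive decision described above: at every depth the encoder codes the unit in the operating mode associated with that depth and compares the cost against coding its sub-units one depth deeper, keeping whichever is cheaper. The callbacks (encode_with_mode, rd_cost, split) and the cost model are assumptions standing in for the real encoder.

```python
# Hedged sketch of the encoding flow of Fig.22 for one maximum coding unit.
def encode_max_cu(block, depth, max_depth, mode_by_depth,
                  encode_with_mode, rd_cost, split):
    """Returns (cost, decision); split(block) is assumed to yield sub-blocks."""
    mode = mode_by_depth[depth]               # operating mode follows the depth
    coded = encode_with_mode(block, mode)
    best_cost = rd_cost(coded)
    best = {'depth': depth, 'mode': mode, 'split': None}
    if depth < max_depth:
        sub_results = [encode_max_cu(sb, depth + 1, max_depth, mode_by_depth,
                                     encode_with_mode, rd_cost, split)
                       for sb in split(block)]
        split_cost = sum(c for c, _ in sub_results)
        if split_cost < best_cost:            # deeper coding units win
            best_cost = split_cost
            best = {'depth': depth, 'mode': mode,
                    'split': [d for _, d in sub_results]}
    return best_cost, best
```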

Fig.23 is a flowchart illustrating a video decoding method based on a coding tool that takes into account the size of the coding unit, according to an exemplary embodiment. As shown in Fig.23, in operation 2110 a bitstream comprising encoded video data is received and parsed.

In operation 2120, the encoded video data, the information regarding encoding, and the information concerning the relationship between the depth of the coding unit, the coding tool and the operating mode are extracted from the bitstream. The information concerning the relationship between the depth of the coding unit, the coding tool and the operating mode may be extracted from the bitstream in units of maximum coding units, slices, frames, GOPs or sequences of image frames.

In operation 2130, the encoded video data are decoded in units of maximum coding units according to the operating mode of the coding tool matched with the coding unit corresponding to at least one coded depth, based on the information regarding encoding and the information concerning the relationship between the depth of the coding unit, the coding tool and the operating mode extracted from the bitstream.
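
A minimal sketch of this decoding step follows. The structure of the coding unit list and relationship information, and the decode_tool callback, are assumptions; the point illustrated is that the operating mode of each coding tool is looked up from the coding unit's depth before the tool is applied.

```python
# Hedged sketch of the decoding flow of Fig.23 for one maximum coding unit.
def decode_max_cu(cu_list, relationship_info, decode_tool):
    """cu_list: coding units of the maximum coding unit, each a dict with
    'depth' and 'payload'.  relationship_info: dict mapping a tool name to a
    per-depth list of operating modes.  decode_tool(payload, tool, mode) is
    an assumed callback performing the actual tool decoding."""
    decoded = []
    for cu in cu_list:
        data = cu['payload']
        for tool, modes_by_depth in relationship_info.items():
            mode = modes_by_depth[cu['depth']]   # operating mode follows depth
            data = decode_tool(data, tool, mode)
        decoded.append(data)
    return decoded
```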

Although not limited thereto, one or more exemplary embodiments may be written as computer programs and may be implemented in general-purpose digital computers that execute the programs using a computer-readable recording medium. Examples of computer-readable recording media include magnetic storage media (e.g., read-only memory (ROM), floppy disks, hard disks, etc.) and optical recording media (e.g., compact discs (CD-ROM) or digital versatile discs (DVD)). In addition, although not required in all exemplary embodiments, one or more units of the encoding device 100 or 1400, the video decoding device 200 or 1500, the image encoder 400 and the image decoder 500 may include a processor or microprocessor executing a computer program stored on a computer-readable medium.

Although exemplary embodiments have been particularly shown and described with reference to the drawings above, those skilled in the art will understand that various changes in form and detail may be made therein without departing from the spirit and scope of the inventive concept as defined by the appended claims. The exemplary embodiments should be considered in a descriptive sense only and not for purposes of limitation. Therefore, the scope of the invention is defined not by the detailed description of exemplary embodiments but by the appended claims, and all differences within that scope will be construed as being included in the present invention.

1. A device for decoding video, comprising:
a receiver that receives and syntactically parses a bitstream of an encoded image;
a processor that determines at least one coding unit included in a maximum coding unit having a hierarchical structure, by using information indicating that hierarchical structure parsed from the received bitstream, and determines at least one sub-block for prediction of a coding unit from the said at least one coding unit, by using information about prediction blocks of the said at least one coding unit parsed from the received bitstream, wherein the said at least one sub-block contains at least two partitions obtained by splitting at least one of the height and the width of the said at least one coding unit according to one of a symmetric ratio and an asymmetric ratio; and
a decoder that reconstructs the image by performing decoding, including motion compensation using the said at least two partitions of the said at least one coding unit, by using encoding information parsed from the received bitstream,
wherein the maximum coding unit is hierarchically divided into the said at least one coding unit, and the said at least one coding unit is divided according to depths and independently of neighboring coding units among the said at least one coding unit in the maximum coding unit.

2. The device according to claim 1, wherein the processor determines a partition type and a prediction mode for the current coding unit of the said at least one coding unit based on the information about prediction blocks, and, if the determined prediction mode indicates inter prediction and the determined partition type is a partition type for inter prediction obtained by dividing the current coding unit according to one of the symmetric ratio and the asymmetric ratio, determines the said at least two partitions obtained by splitting at least one of the height and the width of the current coding unit according to one of the symmetric ratio and the asymmetric ratio.

3. The device according to claim 1, wherein the decoder selectively determines whether to perform motion compensation using the said at least two partitions, obtained by splitting the said at least one coding unit according to one of the symmetric ratio and the asymmetric ratio, on the basis of information indicating the partition type and the prediction mode of inter prediction.

4. The device according to claim 1, wherein the said at least two partitions comprise a prediction block having a size equal to the size of the current coding unit, or partitions, wherein first partitions are obtained by symmetrically splitting one of the height and the width of the current coding unit, and second partitions are obtained by asymmetrically splitting one of the height and the width of the current coding unit.

5. The device according to claim 1, wherein the processor determines the maximum coding unit, into which the image is divided, based on information about the maximum size of the said at least one coding unit, and determines the said at least one coding unit on the basis of information about at least one depth of the said at least one coding unit, which hierarchically divides the maximum coding unit,
wherein the information about the maximum size of the said at least one coding unit and the information about the depth of the said at least one coding unit are parsed from the received bitstream.



 
