Device for coding dynamic images, device for decoding dynamic images, method for coding dynamic images and method for decoding dynamic images

FIELD: information technologies.

SUBSTANCE: coding and decoding are performed uniformly for multiple color formats. Based on a control signal indicating the type of color format of the input dynamic image signal, if the color format is 4:2:0 or 4:2:2, the first intra prediction mode determination block and the first intra prediction image forming block are applied to the luminance component of the input dynamic image signal, and the second intra prediction mode determination block and the second intra prediction image forming block are applied to the chrominance components. If the color format is 4:4:4, the first intra prediction mode determination block and the first intra prediction image forming block are applied to all color components to perform coding, and the variable-length coding block multiplexes the control signal, as coding data to be applied to an element of the dynamic image sequence, into the bit stream.

EFFECT: improved mutual compatibility between coded video data of various color formats.

12 cl, 24 dwg

 

The technical field to which the invention relates

The present invention relates to a device for coding digital image signals, a device for decoding digital image signals, a method of coding digital image signals and a method of decoding digital image signals used in image compression coding techniques and in techniques for transmitting compressed image data.

The level of technology

The international-standard video coding systems, such as MPEG or ITU-T H.26x (for example, "Information Technology Coding of Audio-Visual Objects Part 10: Advanced Video Coding", ISO/IEC 14496-10, 2003; hereinafter referred to as non-patent document 1), have traditionally been based on the use of a standardized input format called the 4:2:0 format. The 4:2:0 format is a format in which a color dynamic image signal in RGB or a similar form is converted into a luminance component (Y) and two chrominance components (Cb, Cr), and the number of chrominance samples is reduced to half the number of luminance samples in both the horizontal and vertical directions. Since the chrominance components are visually less conspicuous than the luminance component, the traditional international-standard video coding systems were based on the premise that the amount of initial information to be coded is reduced by downsampling the chrominance components before coding, as mentioned above. When coding video for professional purposes, such as broadcast video, the 4:2:2 format may be used, in which the Cb and Cr components are downsampled to half the number of luminance samples in the horizontal direction only. Thus, the chrominance resolution in the vertical direction becomes equal to that of the luminance, which increases the color reproducibility in comparison with the 4:2:0 format. On the other hand, the recent increase in the resolution and in the number of tones of video has been accompanied by studies of systems that perform coding while keeping the number of chrominance samples equal to the number of luminance samples, without downsampling of the chrominance components. The format in which the numbers of luminance and chrominance samples are fully equal is called the 4:4:4 format. The traditional 4:2:0 format was limited to the Y, Cb, Cr color space definitions because of the premise of chrominance downsampling. However, in the case of the 4:4:4 format, since there is no difference in the sample ratio between the color components, the R, G and B components can be used directly in addition to Y, Cb and Cr, and many color space definitions can be used. An example of a video coding system aimed at the 4:4:4 format is the publication of Woo-Shik Kim, Dae-Sung Cho and Hyun Mun Kim, "INTER-PLANE PREDICTION FOR RGB VIDEO CODING", ICIP 2004, October 2004 (hereinafter referred to as non-patent document 2).

In the 4:2:0 format of the AVC coding of non-patent document 1, in a macroblock region composed of luminance components of 16×16 pixels, the corresponding chrominance components are blocks of 8×8 pixels for each of the Cb and Cr components. In motion-compensated prediction in the 4:2:0 format, the block size information serving as the element of motion-compensated prediction, the reference image information used for prediction and the motion vector information of each block are multiplexed for the luminance components only, and motion-compensated prediction for the chrominance components is performed using the same information as for the luminance components. The 4:2:0 format has such characteristics in the color space definition that almost all of the structural information of the image (texture) is integrated in the luminance component, that the distortion of the chrominance components is visually less conspicuous than that of the luminance component and that their contribution to the reproducibility of the signal is small, and prediction and coding in the 4:2:0 format are based on these characteristics of the format. On the other hand, in the case of the 4:4:4 format the three color components carry the same texture information. A system performing motion-compensated prediction with the inter prediction mode, the reference image information and the motion vector information depending on only one component is not necessarily the optimal method in the 4:4:4 format, where the color components make equal contributions to the representation of the structure of the image signal. Thus, the coding system intended for the 4:2:0 format performs signal processing different from that of the coding system intended for the 4:4:4 format in order to perform optimal coding, and the definitions of the information elements multiplexed into the coded bit stream differ. In order to create a device for decoding capable of decoding compressed video data of many different formats, a construction would have to be used in which the bit streams of the individual signal formats are interpreted individually, and the device therefore becomes inefficient.

DISCLOSURE OF THE INVENTION

Therefore, an object of the present invention is to provide a method of forming a bit stream that ensures compatibility between a bit stream coded in the Y, Cb, Cr space, as in the case of the traditional 4:2:0 format, and a bit stream having no differences in the sample ratio between the color components, as in the case of the 4:4:4 format, and obtained by video compression with a free color space definition, as well as a corresponding decoding method.

A device for coding a dynamic image, which receives, compresses and codes a digital dynamic image signal, includes: a first intra prediction mode determination block for performing intra prediction on the signal component corresponding to the luminance component when the color format of the input dynamic image signal is 4:2:0 or 4:2:2; a second intra prediction mode determination block for performing intra prediction on the signal components corresponding to the chrominance components when the color format of the input dynamic image signal is 4:2:0 or 4:2:2; a variable-length coding block for variable-length coding of the first intra prediction mode determined by the first intra prediction mode determination block, or of the second intra prediction mode determined by the second intra prediction mode determination block; a first intra prediction image forming block for forming the first intra prediction image on the basis of the first intra prediction mode; a second intra prediction image forming block for forming the second intra prediction image on the basis of the second intra prediction mode; and a coding block for performing transformation and coding of the prediction error signal obtained as the difference between the first intra prediction image or the second intra prediction image and the corresponding color component signals included in the input dynamic image signal. On the basis of a control signal indicating the type of the color format of the input dynamic image signal, in the case of the 4:2:0 or 4:2:2 color format the first intra prediction mode determination block and the first intra prediction image forming block are applied to the luminance component of the input dynamic image signal, and the second intra prediction mode determination block and the second intra prediction image forming block are applied to the chrominance components of the input dynamic image signal. In the case of the 4:4:4 color format the first intra prediction mode determination block and the first intra prediction image forming block are applied to all color components of the input dynamic image signal to perform coding; and the variable-length coding block multiplexes the control signal, as coding data to be applied to an element of the dynamic image sequence, into the bit stream.

Coding/decoding can be performed uniformly for a number of different color formats, such as the 4:2:0, 4:2:2 and 4:4:4 formats, with an efficient device configuration, and the mutual compatibility between coded video data can be increased.

BRIEF DESCRIPTION OF DRAWINGS

In the accompanying drawings:

Fig.1 is an explanatory diagram showing the relationship between a sequence, an image, a section and a macroblock;

Fig.2 is an explanatory diagram showing the common coding process;

Fig.3 is an explanatory diagram showing the independent coding process;

Fig.4 is a block diagram showing the configuration of a device for coding in accordance with the first variant of embodiment of the present invention;

Fig.5 is an explanatory diagram showing the intra prediction modes for the size N×N (N=4 or 8);

Fig.6 is an explanatory diagram showing the intra prediction modes for the size 16×16;

Fig.7 is an explanatory diagram showing the intra prediction modes for the Cb/Cr components of the 4:2:0/4:2:2 formats;

Figs.8A-8H are explanatory diagrams showing the individual macroblock elements;

Fig.9 is an explanatory diagram showing the process of forming a predicted image by motion compensation for the Y component in the 4:2:0/4:2:2 and 4:4:4 formats;

Fig.10 is an explanatory diagram showing the process of forming a predicted image by motion compensation for the Cb/Cr components of the 4:2:0/4:2:2 formats;

Fig.11 is an explanatory diagram showing the process of coding the prediction difference for the Y component in the 4:2:0 and 4:2:2 formats;

Fig.12 is an explanatory diagram showing the process of coding the prediction difference for the Cb/Cr components of the 4:2:0 and 4:2:2 formats;

Fig.13 is an explanatory diagram showing the bit stream;

Fig.14 is an explanatory diagram showing the structure of a section;

Figs.15A and 15B are explanatory diagrams showing a section in the 4:4:4 format coded using the common and the independent coding;

Fig.16 is a block diagram showing the configuration of a device for decoding in accordance with the first variant of embodiment of the present invention;

Fig.17 is an explanatory diagram showing the internal macroblock-level process of the variable-length decoding block;

Fig.18 is an explanatory diagram showing the switching of intra prediction in accordance with the chrominance format for the Cb/Cr components;

Fig.19 is an explanatory diagram showing the switching of motion compensation in accordance with the chrominance format for the Cb/Cr components;

Fig.20 is an explanatory diagram showing the process of coding the prediction difference for the Y component in the 4:2:0, 4:2:2 and 4:4:4 formats;

Figs.21A and 21B are explanatory diagrams showing the process of coding the prediction difference for the Cb/Cr components of the 4:2:0 and 4:2:2 formats;

Fig.22 is an explanatory diagram showing the internal configuration of the block for decoding the prediction difference for the components C1 and C2; and

Fig.23 is an explanatory diagram showing the formats.

IMPLEMENTATION OF THE INVENTION

The first variant of embodiment

The first variant of embodiment of the present invention relates to a device for coding, which receives either a video signal in the 4:2:0 or 4:2:2 color format, defined in the (Y, Cb, Cr) color space, or a video signal in the 4:4:4 color format, defined in the (R, G, B), (Y, Cb, Cr) or (X, Y, Z) color space, performs video coding and outputs a bit stream, and to a device for decoding, which receives the coded bit stream generated by the device for coding and restores the image signal. Below in the description the three color components will in general be called the components (C0, C1, C2), and in the case of the 4:2:0 and 4:2:2 color formats the components C0, C1 and C2 will be treated as the Y component, the Cb component and the Cr component, respectively.

As shown in figure 1, the device for coding of the first variant of embodiment of the present invention receives a video signal represented as a time sequence of display information (hereinafter called images), defined in frame or field elements by sequential sampling. A data element comprising the images sequential in time is called a sequence. A sequence can be divided into several groups of pictures (GOP). Groups of pictures (GOP) are used to ensure that decoding can be performed completely, starting from an arbitrary initial GOP, independently of other GOPs, and to provide random access to the bit stream. An image is further divided into square blocks, called macroblocks, and the prediction, transformation and quantization processes are applied at the macroblock level to compress the video signal. An element formed by concatenating multiple macroblocks is called a section. A section is a data element that can be coded or decoded independently of other sections. For example, when video signals with a resolution equal to or greater than that of high-definition television (HDTV) are processed in real time, the image is divided into sections so that the divided sections can be coded or decoded in parallel, and thus the computation time is reduced. When the bit stream is transmitted over a line having a high error rate, even if a certain section is corrupted by an error and the decoded image is distorted, the correct decoding process is restored from the next section. In the general case, at a section boundary prediction using the signal dependence on the adjacent section cannot be used. Thus, increasing the number of sections increases the flexibility of parallel processing and the error tolerance, while the coding performance decreases.

The macroblock in the case of each of the 4:2:0, 4:2:2 and 4:4:4 color formats is defined as a block of pixels with dimensions W=H=16, as shown in Fig.23. To compress the video signal through the prediction, transformation and quantization processes at the macroblock level, the coded macroblock data multiplexed in the bit stream mainly contain two types of information. One of them is auxiliary information that differs from the video signal itself, such as the prediction mode, the motion prediction information or the quantization parameter, and these information elements are collectively referred to as the macroblock header. The other is the information of the video signal itself. In accordance with the first variant of embodiment of the present invention, the video signal to be coded is the compressed data of the prediction error signal obtained by performing prediction, transformation and quantization on the basis of the macroblock header information, presented in the form of quantized transform coefficients. Hence this signal will further be called the quantization coefficient data.

In the following, the process of coding the three color components of one frame or one field on the basis of a common macroblock header will be called "the common coding process", and the process of coding the signals of the three color components of one frame or one field on the basis of individual independent macroblock headers will be called "the independent coding process". Similarly, the process of decoding image data from a bit stream obtained by coding the three color components of one frame or one field on the basis of a common macroblock header will be called "the common decoding process", and the process of decoding image data from a bit stream obtained by coding the three color components of one frame or one field on the basis of individual independent macroblock headers will be called "the independent decoding process". The device for coding of the first variant of embodiment of the present invention is configured to code a 4:4:4 chrominance signal by a process selected from the common coding process and the independent coding process. In the common coding process the three color components of one frame or one field are together defined as a single image, and the image is divided into macroblocks which combine the three color components (figure 2). In figure 2 and in the description below the three color components will be called the components C0, C1 and C2. On the other hand, in the independent coding process the input video signal of one frame or one field is divided into the three color components, each of which is defined as an image, and each image is divided into macroblocks containing the signal of one color component (figure 3). In other words, a macroblock to be subjected to the common coding process contains samples (pixels) of the three color components C0, C1 and C2, while a macroblock to be subjected to the independent coding process contains samples (pixels) of only one of the components C0, C1 and C2. In the device for coding of the first variant of embodiment of the present invention the macroblock definition of figure 2 is always used for the 4:2:0 and 4:2:2 color formats, and the coding process used is equivalent to "the common coding process" or "the common decoding process".

Device for coding

Figure 4 shows the configuration of the device for coding in accordance with the first variant of embodiment of the present invention. In the following, the information designating the color format of the input video signal to be coded will be called color format identification information 1, and the identification information indicating whether coding is performed by means of the common coding process or by means of the independent coding process will be called common coding/independent coding identification information 2.

The input video signal 3 is first divided into the macroblock data shown in figure 2 or figure 3 on the basis of the color format identification information 1 and the common coding/independent coding identification information 2. In accordance with the intra-only coding instruction information 4, the intra prediction process (the block 5 for determining the intra prediction mode for the component C0, the block 6 for determining the intra prediction mode for the components C1/C2, the block 7 for forming the intra prediction image for the component C0 and the block 8 for forming the intra prediction image for the components C1/C2) and the motion-compensated prediction process (the block 9 of motion detection for the component C0, the block 10 of motion detection for the components C1/C2, the block 11 of motion compensation for the component C0 and the block 12 of motion compensation for the components C1/C2) are performed to choose the prediction mode most effective for coding the macroblock (the block 14 of coding mode selection), the prediction difference is transformed and quantized (the block 18 of coding the prediction difference for the component C0, the block 19 of coding the prediction difference for the component C1 and the block 20 of coding the prediction difference for the component C2), and the auxiliary information, such as the prediction mode or the motion information, and the quantized transform coefficients are coded with variable length to form the bit stream 30 (the block 27 of variable-length coding). The quantized transform coefficients are locally decoded (the block 24 of local decoding for the component C0, the block 25 of local decoding for the component C1 and the block 26 of local decoding for the component C2), and the predicted image obtained on the basis of the auxiliary information and the reference image data is added to them to obtain a locally decoded image. If necessary, the deblocking filter (the block 28 of the deblocking filter) is applied to suppress the noise at the block boundaries accompanying quantization, and then the locally decoded image is stored in the frame memory 13 and/or the line memory 29 for use in subsequent prediction processes. When the intra-only coding instruction information 4 indicates "perform only intra coding", only the intra prediction process is executed, without performing the motion-compensated prediction process.

Next, the characteristic features of the first variant of embodiment of the present invention will be described, i.e. the intra prediction process, the motion-compensated prediction process, the prediction difference coding process and the variable-length coding process (and the resulting bit stream configuration), which are switched on the basis of the color format identification information 1, the common coding/independent coding identification information 2, the intra-only coding instruction information 4, etc.

(1) The intra prediction process

The intra prediction process is performed by the block 5 for determining the intra prediction mode for the component C0, the block 6 for determining the intra prediction mode for the components C1/C2, the block 7 for forming the intra prediction image for the component C0 and the block 8 for forming the intra prediction image for the components C1/C2, shown in figure 4.

In the case of the 4:2:0 and 4:2:2 color formats, for the Y component signal the intra prediction mode 100 for the component C0 is determined by the block 5 for determining the intra prediction mode for the component C0. In this case there are three selectable types of modes: the 4×4 intra prediction mode, the 8×8 intra prediction mode and the 16×16 intra prediction mode. For the 4×4 intra prediction mode and the 8×8 intra prediction mode the macroblock is divided into blocks of 4×4 pixels or 8×8 pixels, and spatial prediction using neighboring pixels is performed for each block, as shown in figure 5. This prediction method has nine variants. Information indicating which of these nine methods was used to perform the prediction is coded as one element of auxiliary information in the form of the intra prediction mode. The pixels enclosed in the 4×4 rectangle in figure 5 are the pixels to be predicted, and the pixels marked with a cross-hatch pattern are the reference pixels for forming the predicted image. The arrow indicates the direction in which the reference pixel affects the predicted value. In mode 2 the average value of the reference pixels becomes the predicted value. Figure 5 shows the example of the 4×4 block size; for the 8×8 pixel block size the modes are defined in the same way. By means of spatial prediction with such a direction, effective prediction can be made for the structural information of the image, such as the contour of an object or the pattern of a texture.
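As an illustration of the directional spatial prediction just described, the following sketch (in Python, assuming numpy) reproduces three of the nine 4×4 prediction variants: vertical, horizontal and the mode 2 averaging. Rounding and the handling of unavailable reference pixels are simplified and do not follow the normative AVC procedure.

```python
import numpy as np

def intra4x4_predict(mode, top, left):
    """Illustrative sketch of three of the nine 4x4 intra prediction modes
    (vertical, horizontal, DC); 'top' and 'left' are the reconstructed
    reference pixels above and to the left of the block."""
    pred = np.empty((4, 4), dtype=np.int32)
    if mode == 0:                       # vertical: copy the pixels above downwards
        pred[:] = top[np.newaxis, :4]
    elif mode == 1:                     # horizontal: copy the pixels on the left rightwards
        pred[:] = left[:4, np.newaxis]
    elif mode == 2:                     # DC: average of the available reference pixels
        pred[:] = (int(top[:4].sum()) + int(left[:4].sum()) + 4) >> 3
    else:
        raise NotImplementedError("only modes 0..2 are sketched here")
    return pred

# usage: predict a block from dummy reference rows/columns
top = np.array([100, 102, 104, 106])
left = np.array([98, 99, 101, 103])
print(intra4x4_predict(2, top, left))
```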

The 16×16 intra prediction mode is used as a mode in which intra prediction is performed on the 16×16 block without subdividing the macroblock (figure 6). In this case one of the four types of spatial prediction shown in figure 6 can be selected. This mode is effective as a mode which increases the prediction efficiency with a small amount of auxiliary information for an image area where the picture is uniform.

For the Cb and Cr components the intra prediction mode 101 for the components C1/C2, different from that of the Y component, is determined by the block 6 for determining the intra prediction mode for the components C1/C2. (The modes corresponding to the components C1 and C2 are the modes 101a and 101b, respectively. It should be noted that the modes 101a and 101b always have equal values in the 4:2:0 and 4:2:2 formats, and only one of the modes 101a and 101b is multiplexed into the bit stream. The decoder sets the decoded value as both 101a and 101b.) Figure 7 shows the intra prediction modes for the Cb and Cr components which can be selected in the case of the 4:2:0 and 4:2:2 color formats. Figure 7 shows the case of 4:2:0, and the same modes are used for the 4:2:2 format. Of these four modes, only in mode 0 the area equivalent to the macroblock of the Cb and Cr components (a block of 8×8 pixels in the case of the 4:2:0 format and a block of 8×16 pixels in the case of the 4:2:2 format) is divided into blocks of size 4×4, and an average value is predicted for each 4×4 block. For example, for the 4×4 block in the upper left corner either all 8 pixels of the areas "a" and "x" are averaged, or only the 4 pixels of the area "a" or of the area "x" are averaged, and one of these average values is used as the predicted value. For modes 1, 2 and 3, as in the cases shown in figures 5 and 6, directional spatial prediction is performed. In the case of the 4:2:0 and 4:2:2 color formats the elements of structural information, such as the texture of the image, are integrated in the Y component, while the Cb and Cr components, which are the chrominance signal components, retain almost no structural information of the image. Accordingly, effective prediction is performed by the above-described simple prediction modes.

In the case of the 4:4:4 color format the components C0, C1 and C2 are not fixed as the Y, Cb and Cr components, and structural information of the image equivalent to that of the Y component is contained in each color component of the R, G, B color space. Thus, a satisfactory prediction efficiency cannot be obtained by the prediction intended for the Cb and Cr components. Therefore, in the device for coding of the first variant of embodiment of the present invention, in the case of the 4:4:4 color format the intra prediction mode for the components C0, C1 and C2 is selected by a process equivalent to the block 5 for determining the intra prediction mode for the component C0. More specifically, if the common coding/independent coding identification information 2 indicates "the common coding process", the components C0, C1 and C2 are predicted only in one common intra prediction mode. On the other hand, if the common coding/independent coding identification information indicates "the independent coding process", the components C0, C1 and C2 are predicted in individually obtained intra prediction modes. In other words, if the color format is the 4:4:4 format and the common coding/independent coding identification information 2 indicates "the common coding process", all of the components C0, C1 and C2 are subjected to intra prediction in the intra prediction mode 100 for the component C0. If the color format is the 4:4:4 format and the common coding/independent coding identification information 2 indicates "the independent coding process", the components C1 and C2 are subjected to intra prediction in the intra prediction modes 101a and 101b for the components C1 and C2, obtained independently of the component C0 and corresponding to the intra prediction modes for the component C0 shown in figure 5 or 6.

In accordance with the configuration of the device for coding shown in figure 4, if the color format is the 4:4:4 format and the common coding/independent coding identification information 2 indicates "the common coding process", the prediction mode is determined for the component C0 by the block 5 for determining the intra prediction mode for the component C0, and the prediction mode of the component C0 is used directly or in combination with the block 6 for determining the intra prediction mode for the components C1/C2 to determine the single intra prediction mode optimal for all of the components C0, C1 and C2. If the color format is the 4:4:4 format and the common coding/independent coding identification information 2 indicates "the independent coding process", the prediction mode is determined for the component C0 by the block 5 for determining the intra prediction mode for the component C0, and the optimal intra prediction modes are determined individually for the components C1 and C2 by the block 6 for determining the intra prediction mode for the components C1/C2.
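The switching between a common mode and component-specific modes described above can be summarized in the following sketch; the helper callables decide_luma_mode, decide_chroma_mode and decide_component_mode are hypothetical stand-ins for the decisions made by the blocks 5 and 6.

```python
def select_intra_modes(color_format, independent_coding, decide_luma_mode,
                       decide_chroma_mode, decide_component_mode):
    """Sketch of the intra mode switching; returns (mode_c0, mode_c1, mode_c2)."""
    if color_format in ("4:2:0", "4:2:2"):
        mode_c0 = decide_luma_mode()               # modes of Fig.5/Fig.6 for Y
        mode_c1 = mode_c2 = decide_chroma_mode()   # one common Cb/Cr mode (Fig.7)
    elif color_format == "4:4:4" and not independent_coding:
        # common coding process: a single mode shared by C0, C1 and C2
        mode_c0 = mode_c1 = mode_c2 = decide_luma_mode()
    else:
        # independent coding process: each component gets its own mode
        mode_c0 = decide_component_mode(0)
        mode_c1 = decide_component_mode(1)
        mode_c2 = decide_component_mode(2)
    return mode_c0, mode_c1, mode_c2
```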

In all the intra prediction mode processes the value of the peripheral pixel which becomes the reference pixel must be taken from a locally decoded image which has not yet been subjected to deblocking filtering. Therefore, the pixel values before the deblocking filtering process, obtained by summing the locally decoded prediction difference signal 17b, which is the output of the block 24 of local decoding for the component C0, the block 25 of local decoding for the component C1 and the block 26 of local decoding for the component C2, and the predicted image 34, are stored in the line memory 29 for use in intra prediction.

On the basis of the intra prediction modes of the corresponding color components determined by the above process, the predicted image is formed by the block 7 for forming the intra prediction image for the component C0 and the block 8 for forming the intra prediction image for the components C1/C2. Since the block 7 for forming the intra prediction image for the component C0 and the block 8 for forming the intra prediction image for the components C1/C2 use elements common with the device for decoding, their detailed description will be given together with the description of the device for decoding.

(2) The motion-compensated prediction process

The motion-compensated prediction process is performed by the block 9 of motion detection for the component C0, the block 10 of motion detection for the components C1/C2, the block 11 of motion compensation for the component C0 and the block 12 of motion compensation for the components C1/C2, shown in figure 4.

In the case of the 4:2:0 and 4:2:2 color formats, for the Y component signal the motion information is determined by the block 9 of motion detection for the component C0. The motion information includes the reference image index, which specifies which reference image among the reference image data stored in the frame memory 13 is used for prediction, and the motion vector applied to the reference image designated by the reference image index.

In the block 9 of motion detection for the component C0 the reference image is selected from the motion-compensated prediction reference image data stored in the frame memory 13 in order to perform the motion-compensated prediction process in macroblock elements for the Y component. The frame memory 13 stores a plurality of reference image data elements for the immediately preceding time or for many past/future time points, and the optimal reference image is selected among these data elements in macroblock elements to perform motion prediction. Seven types of block sizes serving as prediction elements are prepared for actually performing motion-compensated prediction. First, as shown in Figs.8A-8D, any of the sizes 16×16, 16×8, 8×16 and 8×8 is selected for the macroblock. Further, when the size 8×8 is selected, any of the sizes 8×8, 8×4, 4×8 and 4×4 is selected for each 8×8 block, as shown in Figs.8E-8H. For all or some of the block sizes/sub-block sizes of Figs.8A-8H, the motion vectors within a predetermined search range and one or more usable reference images, the motion-compensated prediction process is performed for each macroblock to obtain the motion information 102 (the motion vector and the reference image index) for the Y component. For the Cb and Cr components the same reference image index as for the Y component and the motion vector of the Y component are used to obtain the motion information 103 for the Cb/Cr components (in particular, this information corresponds to the ratio of the numbers of samples of the Y component and the Cb and Cr components, and is obtained by scaling the motion vector of the Y component). This process is performed by the block 10 of motion detection for the components C1/C2.
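A minimal sketch of the motion vector scaling implied by the sample ratios described above; the fractional-precision bookkeeping of a real coder is omitted.

```python
def scale_mv_for_chroma(mv, color_format):
    """Illustrative sketch only: derive the chrominance motion vector from the
    luminance motion vector by the sample-ratio scaling mentioned above.
    mv is a (horizontal, vertical) pair in luminance sample units."""
    h, v = mv
    if color_format == "4:2:0":
        return (h / 2, v / 2)   # half the samples in both directions
    if color_format == "4:2:2":
        return (h / 2, v)       # half the samples in the horizontal direction only
    return (h, v)               # 4:4:4: no scaling is performed
```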

It should be noted that the methods of forming the motion-compensated prediction image candidates to be evaluated by the motion detection block, and of forming the predicted image by the motion compensation block, differ between the Y component and the Cb and Cr components as follows.

For the Y component, not only the pixels (integer pixels) of the positions actually input into the device for coding, but also virtual pixels at 1/2-pixel positions, which are the midpoints between integer pixels, and pixels at 1/4-pixel positions, which are the midpoints between 1/2 pixels, are created by an interpolation process and used for forming the predicted image. This situation is shown in Fig.9. In Fig.9, to obtain a pixel value at a 1/2-pixel position, the 6 surrounding pixels are used to perform interpolation filtering, and the pixel value is thus obtained. To obtain a pixel value at a 1/4-pixel position, the 2 surrounding pixels are used to perform linear interpolation by an averaging process, and the pixel value is thus obtained. The motion vector is represented with an accuracy of up to 1/4 pixel. On the other hand, when the predicted image is formed for the Cb and Cr components, as shown in figure 10, the pixel value at the pixel position indicated by the motion vector obtained by scaling the corresponding motion vector of the Y component is computed from the values of the 4 neighboring integer pixels by a weighted linear interpolation process in accordance with the distances between the pixels.
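The interpolation steps of Fig.9 and figure 10 can be sketched as follows. The 6-tap filter coefficients are an assumption based on common AVC practice, since the text above only states that 6 surrounding pixels are used.

```python
import numpy as np

def half_pel(p):
    """Half-pixel value from 6 neighboring integer pixels on one line,
    using the 6-tap filter (1,-5,20,20,-5,1)/32 (assumed taps)."""
    taps = np.array([1, -5, 20, 20, -5, 1])
    return int(np.clip((int(np.dot(taps, p)) + 16) >> 5, 0, 255))

def quarter_pel(a, b):
    """Quarter-pixel value as the rounded average of the 2 surrounding pixels."""
    return (a + b + 1) >> 1

def chroma_bilinear(p00, p01, p10, p11, fx, fy, s=8):
    """Cb/Cr sample from the 4 neighboring integer pixels, weighted linearly
    by the fractional distances fx, fy (given in 1/s pixel units)."""
    return ((s - fx) * (s - fy) * p00 + fx * (s - fy) * p01 +
            (s - fx) * fy * p10 + fx * fy * p11 + s * s // 2) // (s * s)
```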

In the case of the 4:4:4 format, structural information of the image equivalent to that of the Y component is contained in each color component of the R, G, B color space, while the components C0, C1 and C2 are not fixed as the Y, Cb and Cr components. Thus, a satisfactory prediction efficiency cannot be obtained by the predicted image formation intended for the Cb and Cr components. Therefore, in the device for coding of the first variant of embodiment, in the case of the 4:4:4 color format the predicted image candidates or the predicted image are formed by the process of the block 9 of motion detection for the component C0 and the block 11 of motion compensation for the component C0 for all of the components C0, C1 and C2 when obtaining the motion information. More specifically, if the common coding/independent coding identification information 2 indicates "the common coding process", only the common motion information 102 is obtained for the components C0, C1 and C2; the scaling process applied when the motion vector of a certain color component is applied to another component, as in the case of the 4:2:0 and 4:2:2 formats, is not performed. On the other hand, if the common coding/independent coding identification information 2 indicates "the independent coding process", the motion information is obtained independently for each of the components C0, C1 and C2. In accordance with the configuration of the device for coding in figure 4, if the color format is the 4:4:4 format and the common coding/independent coding identification information 2 indicates "the common coding process", the motion information 102 of the component C0 is determined for the component C0 by the block 9 of motion detection for the component C0. For the components C1 and C2 the motion information of the component C0 is used directly, or, in combination, the single element of motion information 102 optimal for all of the components C0, C1 and C2 is determined with the use of the block 10 of motion detection for the components C1/C2. If the color format is the 4:4:4 format and the common coding/independent coding identification information 2 indicates "the independent coding process", the motion information 102 for the component C0 is determined by the block 9 of motion detection for the component C0, and for the components C1 and C2 the individual elements of optimal motion information 103a and 103b are determined by the block 10 of motion detection for the components C1/C2.

On the basis of the motion information determined for each color component by the above process, the predicted image is formed by the block 11 of motion compensation for the component C0 and the block 12 of motion compensation for the components C1/C2. Since the block 11 of motion compensation for the component C0 and the block 12 of motion compensation for the components C1/C2 use elements common with the device for decoding, their detailed description will be given together with the description of the device for decoding.

(3) The prediction difference coding process

The optimal intra prediction mode and the corresponding predicted image obtained by the intra prediction process, and the optimal motion information (motion vector/reference image index) and the corresponding predicted image obtained by the motion-compensated prediction process, are evaluated by the block 14 of coding mode selection to select the optimal coding mode 15. If the coding mode 15 is intra prediction, the difference between the input video signal 3 and the intra prediction image is calculated by the subtractor 16 to obtain the prediction difference signal 17a. If the coding mode 15 is motion-compensated prediction, the difference between the input video signal 3 and the motion-compensated prediction image is calculated by the subtractor 16 to obtain the prediction difference signal 17a.

The obtained prediction difference signal 17a is transformed and quantized by the block 18 of coding the prediction difference for the component C0, the block 19 of coding the prediction difference for the component C1 and the block 20 of coding the prediction difference for the component C2, in order to compress the information. In the block 19 of coding the prediction difference for the component C1 and the block 20 of coding the prediction difference for the component C2 the process for the components C1/C2 is switched in accordance with the color format identification information 1 and the common coding/independent coding identification information 2.

For the Y component in the case of the 4:2:0 and 4:2:2 color formats and for the component C0 in the case of the 4:4:4 color format, the prediction difference coding process of figure 11 is performed by the block 18 of coding the prediction difference for the component C0. First, in accordance with this process, if the coding mode 15 is the 8×8 intra prediction mode, or if the mode of performing the 8×8 integer transform of the above prediction difference signal 17a is selected, the 8×8 integer transform is performed on the 8×8 blocks into which the macroblock is divided into four parts, and the quantization process is performed in accordance with the quantization parameter 32 to obtain the quantization coefficient data 21. If the coding mode 15 differs from the above, the 4×4 integer transform is performed first. Then, if the coding mode 15 is the 16×16 intra prediction mode, only the DC components of the transform coefficients of the 4×4 blocks are gathered to compose a 4×4 block, and the Hadamard transform is performed on it. The DC components of the Hadamard transform coefficients are quantized in accordance with the quantization parameter 32, and the quantization processes are performed individually for the 15 AC components of the remaining coefficients of each 4×4 block. If the coding mode 15 is not the 16×16 intra prediction mode, the quantization process is performed simultaneously for the 16 transform coefficients in accordance with the quantization parameter 32.
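For the 16×16 intra case, the gathering of the sixteen DC coefficients and their additional transform can be sketched as follows (a plain Hadamard transform, without the normalization and rounding used in a real coder).

```python
import numpy as np

def hadamard_4x4(dc):
    """Sketch of the 4x4 Hadamard transform applied to the 4x4 array of DC
    coefficients gathered from the sixteen 4x4 blocks of a 16x16 intra
    macroblock (normalization and rounding omitted)."""
    H = np.array([[1,  1,  1,  1],
                  [1,  1, -1, -1],
                  [1, -1, -1,  1],
                  [1, -1,  1, -1]])
    return H @ dc @ H.T

# usage: transform a dummy 4x4 array of DC coefficients
dc = np.arange(16).reshape(4, 4)
print(hadamard_4x4(dc))
```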

For the Cb component in the case of the 4:2:0 and 4:2:2 color formats and for the component C1 in the case of the 4:4:4 color format, prediction difference coding is performed by the block 19 of coding the prediction difference for the component C1. In this case, since prediction difference coding is performed by the processes shown in figure 12 when the color format is 4:2:0 or 4:2:2 and by the processes shown in figure 11 when the color format is 4:4:4, only the process for the 4:2:0 and 4:2:2 color formats is described below. In accordance with this process, independently of the coding mode 15, the Cb component signal of the macroblock is divided into 4×4 blocks to perform the integer transform, and the quantization process is performed in accordance with the quantization parameter 32 to obtain the quantization coefficient data 22. First the integer transform is performed on the 4×4 blocks, and then the DC components of the 4×4 blocks are gathered to compose a 2×2 block (when the color format is 4:2:0) or a 2×4 block (when the color format is 4:2:2), and the Hadamard transform is performed on it. The DC components of the Hadamard transform coefficients are quantized in accordance with the quantization parameter 32, and the quantization process is performed individually for the 15 AC components of the remaining coefficients of each 4×4 block in accordance with the quantization parameter 32.

For the Cr component in the case of the 4:2:0 and 4:2:2 color formats and for the component C2 in the case of the 4:4:4 color format, prediction difference coding is performed by the block 20 of coding the prediction difference for the component C2. In this case prediction difference coding is performed by the processes shown in figure 12 when the color format is 4:2:0 or 4:2:2 and by the processes shown in figure 11 when the color format is 4:4:4, to obtain the output quantization coefficient data 23.

For each color component, at the time of quantization a coded block pattern (CBP) is formed, which indicates whether there are valid (non-zero) coefficients in each 8×8 block, and it is multiplexed as one information element of the macroblock header into the bit stream. The definition of the coded block pattern (CBP) depends on the color format identification information 1 and the common coding/independent coding identification information 2, and its details will be described in the description of the device for decoding.
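A minimal sketch of forming the coded block pattern as a per-8×8-block significance mask; how the bits of the luminance and chrominance components are grouped depends, as stated above, on the color format and on the common coding/independent coding identification information.

```python
def coded_block_pattern(blocks_8x8):
    """Sketch: build a bit mask with one bit per 8x8 block, set when the
    block contains at least one non-zero quantized coefficient."""
    cbp = 0
    for i, block in enumerate(blocks_8x8):
        if any(c != 0 for c in block):
            cbp |= 1 << i
    return cbp

# usage: four 8x8 blocks, only the first and last carry residual coefficients
print(coded_block_pattern([[1, 0], [0, 0], [0, 0], [0, 3]]))  # -> 9 (0b1001)
```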

(4) The variable-length coding process

The color format identification information 1, the common coding/independent coding identification information 2, the intra-only coding instruction information 4 and the image size information 31 are fed to the input of the block 27 of variable-length coding as elements of sequence-level header information. When the common coding/independent coding identification information 2 indicates "the independent coding process", the color component identification information indicating to which color component the currently coded image belongs is also supplied, and on its basis the color component identification flag 33 is added to the beginning of the section being coded. Owing to this, the device for decoding can determine which color component's coded data the received section contains. As the macroblock-level coded data, the coding mode 15, the intra prediction mode 100/101, the motion information 102/103, the quantization parameter 32, the transform block size instruction information 104 and the quantization coefficient data 21/22/23 are input, entropy-coded by Huffman coding or arithmetic coding and multiplexed into the bit stream 30. The bit stream 30 is formed as packets in elements of sections, each of which contains one or several macroblocks (in AVC coding this is also called packetization into NAL units), and is output.

Fig.13 shows the whole bit stream 30. The color format identification information 1, the common coding/independent coding identification information 2, the intra-only coding instruction information 4 and the image size information 31 are multiplexed into the sequence parameter set (SPS), which represents the sequence-level header information. Since the common coding/independent coding identification information 2 is required only when the color format is the 4:4:4 format, it is multiplexed only when the color format identification information 1 indicates the 4:4:4 format. The initial value of the quantization parameter 32 used at the beginning of the image is multiplexed into the picture parameter set (PPS), which represents the image-level header information. The coded image data are multiplexed in sections or smaller elements, and the data formats change, as shown in Fig.14 and Figs.15A and 15B, in accordance with the values of the color format identification information 1 and the common coding/independent coding identification information 2.

When the color format identification information 1 indicates that the color format is 4:2:0 or 4:2:2, the section structure shown in Fig.14 is obtained. In Fig.14, SH is the section header, MB is the macroblock coded data, MBH is the macroblock header and Tx is the quantization coefficient data of the x component. In this case, in the configuration shown in figure 2, the section contains macroblock coded data consisting of the pixels of the Y, Cb and Cr components in accordance with the sample ratio of the color format, and the header MBH contains the macroblock type equivalent to the coding mode 15. If the macroblock type indicates the intra prediction mode, the header contains the intra prediction mode 100 for the component C0, i.e. for the Y component, the common intra prediction mode 101 for the components C1 and C2, i.e. for the Cb and Cr components, and the quantization parameter 32 used for the quantization/inverse quantization of the quantization coefficient data. If the macroblock type indicates the motion-compensated prediction (inter) mode, the header contains the motion information 102 (the motion vector and the reference image index) for the component C0, i.e. for the Y component, and the quantization parameter 32 used for the quantization/inverse quantization of the quantization coefficient data.

When the color format identification information 1 indicates that the color format is the 4:4:4 format, the section structure shown in Fig.15A or 15B is obtained in accordance with the value of the common coding/independent coding identification information 2. If the common coding/independent coding identification information 2 indicates "the common coding process" (Fig.15A), in the configuration shown in figure 2 the section contains macroblock coded data consisting of the pixels of the components C0, C1 and C2 in accordance with the sample ratio of the color format, and the header MBH contains the macroblock type equivalent to the coding mode 15. If the macroblock type indicates the intra prediction mode, the header contains the intra prediction mode 100 common to all of the components C0, C1 and C2, and the quantization parameter 32 used for the quantization/inverse quantization of the quantization coefficient data. If the macroblock type indicates the motion-compensated prediction (inter) mode, the header contains the motion information 103 (the motion vector and the reference image index) common to all of the components C0, C1 and C2, and the quantization parameter 32 used for the quantization/inverse quantization of the quantization coefficient data.

If the common coding/independent coding identification information 2 indicates "the independent coding process" (Fig.15B), in the configuration shown in figure 3 the section contains macroblock coded data consisting of the pixels of one color component Ck of the color components C0, C1 and C2. As information indicating which of the color components C0, C1 and C2 is the component Ck, the color component identification flag 33 is added to the beginning of the section. The header MBH contains the macroblock type equivalent to the coding mode 15. If the macroblock type indicates the intra prediction mode, the header contains the intra prediction mode 100 for the component Ck and the quantization parameter 32 used for the quantization/inverse quantization of the quantization coefficient data. If the macroblock type indicates the motion-compensated prediction (inter) mode, the header contains the motion information 102 (the motion vector and the reference image index) for the component Ck and the quantization parameter 32 used for the quantization/inverse quantization of the quantization coefficient data.
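The macroblock header contents of Fig.14 and Figs.15A and 15B can be summarized in the following illustrative data structure; the field names are hypothetical and are not the syntax element names of the bit stream.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MacroblockHeader:
    """Illustrative summary of the MBH contents described above."""
    mb_type: int                                        # equivalent to coding mode 15
    quant_param: int                                    # quantization parameter 32
    intra_mode_c0: Optional[int] = None                 # mode 100 (intra macroblocks)
    intra_mode_c1c2: Optional[int] = None               # mode 101, present for 4:2:0/4:2:2
    motion_info: Optional[Tuple[int, int, int]] = None  # (mv_x, mv_y, ref_idx) for inter macroblocks
```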

Although not shown, if necessary a unique word (the access unit delimiter of AVC coding, the picture start code of the MPEG-2 standard or the video object plane (VOP) start code of MPEG-4) can be inserted, indicating a boundary between access units (one image in the case of the 4:2:0 and 4:2:2 color formats or of the common coding process, and three images in the case of the independent coding process).

With such a bit stream configuration, even when video of many different color formats, such as 4:2:0, 4:2:2 and 4:4:4, is coded, the method of forming the coded data of the prediction modes and the motion information and the semantics of the coded data can be shared. Thus, the configuration of the device for coding can be made efficient. In addition, since coded video data of a number of different color formats, such as 4:2:0, 4:2:2 and 4:4:4, can be represented by a single bit stream format, the bit stream 30 output from the device for coding of the first variant of embodiment can satisfy high mutual compatibility in transmission/recording for processing a variety of different color formats.

The device for coding in figure 4 is configured to control the coding process on the basis of the intra-only coding instruction information 4. The intra-only coding instruction information 4 is a signal specifying whether the device for coding is to perform the temporal prediction process based on motion-compensated prediction. If this signal indicates "only intra coding", coding closed within the frame (coding by intra prediction only) is performed for all the images of the input video signal 3, without performing temporal prediction based on motion-compensated prediction. At the same time the deblocking filter is disabled in the image coding block. If the intra-only coding instruction information 4 indicates "not only intra coding", coding of the images of the input video signal 3 is performed using all the correlations within a frame and between frames, also using temporal prediction based on motion-compensated prediction. The intra-only coding instruction information 4 is added to the sequence parameter set and multiplexed into the bit stream 30 by the block 27 of variable-length coding. Owing to this, the device for decoding which receives the bit stream 30 can recognize that the bit stream 30 was coded with intra coding only by decoding the intra-only coding instruction information 4 of the sequence parameter set and checking its value. Thus, if only intra coding was applied, the amount of computation of the device for decoding can be reduced by not performing the deblocking filtering process. If the intra-only coding instruction information 4 indicates only intra coding, motion-compensated prediction is not performed, and thus the reference image is not written into the frame memory 13. With this configuration the memory accesses are reduced.
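A sketch of the control decisions tied to the intra-only coding instruction information 4 (the flag names are illustrative, not elements of the invention):

```python
def configure_for_intra_only(intra_only: bool):
    """Sketch of the control described above: with intra-only coding the
    motion-compensated prediction, the deblocking filter and the writing of
    reference frames are all switched off."""
    return {
        "use_motion_compensation": not intra_only,
        "run_deblocking_filter": not intra_only,
        "write_reference_frames": not intra_only,
    }
```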

The device for coding is also configured to control the coding process on the basis of the frame size information 31 of the input video signal 3. The frame size information 31 indicates the number of macroblocks in an image of the input video signal 3. If this value exceeds a predetermined threshold value, an upper limit is set on the number of macroblocks included in a section, and control is performed so that the section does not contain more macroblocks than this limit. In particular, the frame size information 31 is input to the block 27 of variable-length coding. The block 27 of variable-length coding sets the upper limit value for the number of macroblocks included in a section on the basis of the frame size information 31. The block 27 of variable-length coding counts the number of coded macroblocks in advance and, when the number of macroblocks included in the section reaches the upper limit value, closes the section data packet so as to form the subsequent macroblocks into a packet as a new section. The frame size information 31 is added to the sequence parameter set and multiplexed into the bit stream 30 by the block 27 of variable-length coding. Owing to this, when the frame size of the input video signal 3 is large (the spatial resolution is high), the devices for coding and decoding can determine the elements to be processed in parallel and perform uniform task distribution.
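The section splitting rule can be sketched as follows; the threshold and the per-section macroblock limit are illustrative values, not values defined by the invention.

```python
def split_into_sections(num_macroblocks, frame_size_threshold, mb_limit):
    """Sketch of the rule described above: when the frame contains more
    macroblocks than a threshold, close the current section every time the
    per-section macroblock count reaches the upper limit."""
    if num_macroblocks <= frame_size_threshold:
        return [list(range(num_macroblocks))]        # a single section
    sections, current = [], []
    for mb_index in range(num_macroblocks):
        current.append(mb_index)
        if len(current) == mb_limit:                 # close the section packet
            sections.append(current)
            current = []
    if current:
        sections.append(current)
    return sections

# usage: 8160 macroblocks of a 1920x1088 frame, at most 1000 per section
print(len(split_into_sections(8160, 4096, 1000)))    # -> 9 sections
```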

The device for decoding

Fig.16 shows the configuration of the device for decoding in accordance with the first variant of embodiment. The device for decoding in Fig.16 is configured to receive the bit stream 30, switch the internal decoding process on the basis of the color format identification information 1 contained in the bit stream, and decode coded data of many different color formats.

The input bit stream 30 is first subjected to the variable-length decoding process (the block 200 of variable-length decoding), and the color format identification information 1, the common coding/independent coding identification information 2, the intra-only coding instruction information 4 and the frame size information 31 are decoded as upper-level header information elements which are retained during the decoding of the sequence. Then the data of each macroblock are decoded on the basis of the section structures shown in Fig.14 and Figs.15A and 15B, determined by the color format identification information 1 and the common coding/independent coding identification information 2, and on the basis of the macroblock structure shown in figure 2 or 3. During the decoding of a macroblock, in accordance with the decoded coding mode 15 the process of forming the intra prediction image (the block 7 for forming the intra prediction image for the component C0 and the block 8 for forming the intra prediction image for the components C1/C2) and the motion compensation process (the block 11 of motion compensation for the component C0 and the block 12 of motion compensation for the components C1/C2) are performed to form the predicted image of the macroblock. The inverse quantization/inverse integer transform process is performed on the quantization coefficient data decoded as part of the macroblock coded data, to decode the prediction difference signal 17b (the block 24 of decoding the prediction difference for the component C0, the block 25 of decoding the prediction difference for the component C1 and the block 26 of decoding the prediction difference for the component C2). Then the predicted image 34 and the prediction difference signal 17b are added together to obtain a temporary decoded image. If necessary, the deblocking filter (the block 28 of the deblocking filter) is applied to suppress the noise at the block boundaries accompanying quantization, and then the decoded image is stored in the frame memory 201 and/or the line memory 202 for use in the subsequent processes of forming a predicted image. When the intra-only coding instruction information 4 indicates "perform only intra coding", the predicted image is formed only by intra prediction, without performing motion compensation.

Next, the variable-length decoding process, the process of forming the intra prediction image, the motion compensation process and the prediction difference decoding process, which are switched on the basis of the color format identification information 1, the common coding/independent coding identification information 2 and the intra-only coding instruction information 4 and which are the features of the first variant of embodiment, will be described.

(1) Variable-length decoding process

The bit stream 30 is fed to the input of the variable-length decoding block 200, and the higher-level headers, such as the sequence parameter set and the picture parameter set shown in Fig., are analyzed. Through this process, the color format identification information 1, the common encoding/independent encoding identification information 2, the intra-only designation information 4 and the frame size information 31 are decoded. The common encoding/independent encoding identification information 2 is extracted from the bit stream 30 only when the color format identification information 1 specifies the 4:4:4 color format. These parameters are held in the internal memory of the variable-length decoding block 200 while the sequence is being decoded.
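A sketch of this header parsing under an assumed bit layout (the actual syntax element sizes are defined by the patent's figures, not by this example); read_bits() is a placeholder bitstream reader:

enum { FMT_420 = 0, FMT_422 = 1, FMT_444 = 2 };

typedef struct {
    int chroma_format;       /* information 1  */
    int independent_coding;  /* information 2 (present only for 4:4:4) */
    int intra_only;          /* information 4  */
    int frame_size_in_mbs;   /* information 31 */
} seq_params_t;

extern unsigned read_bits(void *bs, int n);   /* assumed helper */

void parse_sequence_header(void *bs, seq_params_t *sp)
{
    sp->chroma_format      = (int)read_bits(bs, 2);
    sp->independent_coding = 0;
    if (sp->chroma_format == FMT_444)          /* info 2 exists only for 4:4:4 */
        sp->independent_coding = (int)read_bits(bs, 1);
    sp->intra_only         = (int)read_bits(bs, 1);
    sp->frame_size_in_mbs  = (int)read_bits(bs, 16);
}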

Next, the NAL unit at the slice level is decoded. First, only when the color format identification information 1 specifies the 4:4:4 color format and the common encoding/independent encoding identification information 2 specifies the independent encoding process, the color component identification flag 33 is decoded in order to recognize which color component's encoded data the current slice contains. Then the slice header is decoded, and the process proceeds to decoding the encoded data of the macroblocks belonging to the slice.

With reference to Fig., the data array of an encoded macroblock and the sequence of the analysis/decoding process will be described. A macroblock is decoded as follows.

(a) First, as shown in Fig. and Figs. 15A and 15B, the macroblock type (variable mb_type in Fig.), which is equivalent to the coding mode 15, is decoded.

(b) The variable mb_type is evaluated at switch SW1. If mb_type indicates the PCM mode (a mode of multiplexing the pixel values directly into the bit stream without compression), uncompressed data equivalent to the number of pixels in the macroblock are extracted directly from the bit stream, and the decoding process for the macroblock ends.

(c) If mb_type does not indicate PCM at switch SW1, it is evaluated at switch SW2 whether mb_type indicates a motion-compensated prediction mode with a block size equal to or smaller than 8×8. If so, the sub-macroblock type (sub_mb_type) of each block equal to or smaller than 8×8 is decoded, and the motion information (motion vector/reference image index) of each sub-block is decoded. The process then proceeds to switch SW4.

(d) If mb_type does not satisfy the condition of (c) at switch SW2, mb_type is evaluated at switch SW3. If the mode is an inter coding mode in which 8×8 blocks can be selected as the transform block size for the prediction difference signal 17b, the transform block size designation information 104 is decoded, and then the motion information is decoded. The motion information is decoded as follows, depending on the color format identification information 1 and the common encoding/independent encoding identification information 2 (a code sketch of these cases, together with the cases of step (e), is given after step (h) below). The process then proceeds to switch SW4.

(1) When the color format is the 4:2:0 or 4:2:2 format, the motion information to be decoded is decoded as the motion information 102 for the component C0.

(2) When the color format is the 4:4:4 format and the process is the common encoding process, the motion information to be decoded is decoded as the motion information elements 102 and 103 used for the components C0, C1 and C2.

(3) When the color format is the 4:4:4 format and the process is the independent encoding process, the motion information to be decoded is decoded as the motion information used for the component Ck indicated by the color component identification flag 33.

(e) If mb_type does not satisfy the condition of (c) at switch SW2, mb_type is evaluated at switch SW3. If the mode is an intra prediction coding mode with a size of 4×4 or 8×8, the intra prediction mode information is decoded. The intra prediction mode information is decoded as follows, depending on the color format identification information 1 and the common encoding/independent encoding identification information 2 (see the code sketch after step (h) below). The process then proceeds to switch SW4.

(1) When the color format is the 4:2:0 or 4:2:2 format, the intra prediction mode 100 for the component C0, defined in units of 4×4 or 8×8 blocks, is decoded for the component Y, and the independently encoded intra prediction mode 101 for the components C1/C2 is decoded for the components Cb/Cr.

(2) When the color format is the 4:4:4 format and the process is the common encoding process, the intra prediction mode information to be decoded is decoded as the intra prediction mode information elements 100 and 101 used for the components C0, C1 and C2.

(3) When the color format is the 4:4:4 format and the process is the independent encoding process, the intra prediction mode information to be decoded is decoded as the intra prediction mode information used for the component Ck indicated by the color component identification flag 33.

(f) If mb_type does not satisfy the conditions of (d) or (e) at switch SW3, mb_type is evaluated at switch SW4. If the mode is the intra prediction coding mode with a size of 16×16, the intra 16×16 prediction mode included in the variable mb_type is decoded. Then, if the color format identification information 1 specifies the 4:2:0 or 4:2:2 color format, the intra prediction mode 101 for the components C1/C2 is decoded independently of the component Y, in accordance with case (1) of step (e). The quantization parameter is then decoded.

(g) If mb_type does not satisfy the condition of (f) at switch SW4, the coded block pattern (CBP) is decoded. Based on the value of the coded block pattern (CBP), the decoded quantized coefficient data are all set to 0 for those 8×8 blocks for which the CBP indicates that all coefficients are equal to 0. If the coded block pattern (CBP) indicates that there is a valid coefficient in at least one of the 8×8 blocks of the macroblock (switch SW5), the quantization parameter is decoded. If the color format identification information 1 specifies the 4:2:0 or 4:2:2 color format, the coded block pattern (CBP) to be decoded is decoded as information determining the presence of valid coefficient data for the four 8×8 blocks of the luminance component and for the N 8×8 blocks (N=2 for the 4:2:0 format, N=4 for the 4:2:2 format) of the chrominance components. If the color format identification information 1 specifies the 4:4:4 color format and the common encoding/independent encoding identification information 2 specifies common encoding, the coded block pattern (CBP) is decoded as information determining, for each of the four 8×8 block positions, whether there is a valid coefficient in any of the co-located 8×8 blocks of the components C0, C1 and C2. If the common encoding/independent encoding identification information 2 indicates independent encoding, the coded block pattern (CBP) is decoded for each of the components C0, C1 and C2 with the same definition as for the luminance component when the color format is the 4:2:0 or 4:2:2 format.

(h) The quantized coefficient data are decoded for a macroblock whose quantization parameter has been decoded. In this case, the quantized coefficient data are decoded in accordance with the slice and macroblock data structures shown in Fig. and Figs. 15A and 15B, defined on the basis of the color format identification information 1 and the common encoding/independent encoding identification information 2.
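The format-dependent branching of steps (d) and (e) can be summarized by the following sketch (illustrative C only; decode_motion_info() and decode_intra_mode() are assumed placeholders, and the per-4×4/8×8-block mode signalling of the luminance component in case (1) is collapsed into a single call for brevity):

enum { FMT_420 = 0, FMT_422 = 1, FMT_444 = 2 };

typedef struct { int mvx, mvy, ref_idx; } motion_info_t;

extern motion_info_t decode_motion_info(void *bs);  /* assumed VLC helper */
extern int decode_intra_mode(void *bs);             /* assumed VLC helper */

/* Step (d): selection of the motion information to decode. */
void decode_mb_motion(void *bs, int chroma_format, int independent_coding,
                      int color_component_flag,     /* flag 33     */
                      motion_info_t mi[3])          /* C0, C1, C2  */
{
    if (chroma_format != FMT_444) {
        mi[0] = decode_motion_info(bs);                 /* case (1): info 102 for C0 (= Y); */
                                                        /* chroma motion derived later      */
    } else if (!independent_coding) {
        mi[0] = mi[1] = mi[2] = decode_motion_info(bs); /* case (2): common to C0-C2 */
    } else {
        mi[color_component_flag] = decode_motion_info(bs); /* case (3): only Ck */
    }
}

/* Step (e): selection of the intra prediction mode information to decode. */
void decode_mb_intra_modes(void *bs, int chroma_format, int independent_coding,
                           int color_component_flag, /* flag 33     */
                           int modes[3])             /* C0, C1, C2  */
{
    if (chroma_format != FMT_444) {
        modes[0] = decode_intra_mode(bs);              /* case (1): mode 100 for Y    */
        modes[1] = modes[2] = decode_intra_mode(bs);   /* mode 101 shared by Cb/Cr    */
    } else if (!independent_coding) {
        modes[0] = modes[1] = modes[2] = decode_intra_mode(bs); /* case (2) */
    } else {
        modes[color_component_flag] = decode_intra_mode(bs);    /* case (3): only Ck */
    }
}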

(2) Intra prediction image generation process

The intra prediction image generation process is performed by the intra prediction image generation block 7 for the component C0 and the intra prediction image generation block 8 for the components C1/C2, shown in Fig. These blocks are shared with the encoding device shown in Fig. 4.

In the case of the 4:2:0 and 4:2:2 color formats, the predicted image for the Y component signal is generated by the intra prediction image generation block 7 for the component C0 on the basis of the intra prediction mode 100 for the component C0 provided from the variable-length decoding block 200. The intra prediction mode 100 for the component C0 has three selectable mode types: the intra 4×4 prediction mode, the intra 8×8 prediction mode and the intra 16×16 prediction mode. For the intra 4×4 prediction mode and the intra 8×8 prediction mode, the macroblock is divided into blocks of 4×4 pixels or 8×8 pixels, and spatial prediction using neighbouring reference pixels is performed for each block, as shown in Fig. 5, to form the predicted image. There are nine variants of this predicted-image formation method. Information about which of these nine methods is used to generate the predicted image is represented as the intra prediction mode 100 for the component C0 and supplied to the intra prediction image generation block 7 for the component C0. Fig. 5 shows an example for the 4×4 block size; for the 8×8 pixel block size, the modes are defined in the same way. The effects of such directional spatial prediction methods are as described above.
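For illustration, two of the nine spatial prediction methods for a 4×4 block (vertical and DC prediction) could look roughly like this; this is a simplified sketch that ignores the availability checks for the neighbouring reference pixels:

/* Vertical prediction: copy the row of reference pixels above the block downwards. */
void intra4x4_vertical(const unsigned char top[4], unsigned char pred[4][4])
{
    for (int y = 0; y < 4; y++)
        for (int x = 0; x < 4; x++)
            pred[y][x] = top[x];
}

/* DC prediction: fill the block with the rounded average of the reference pixels
 * above and to the left of the block. */
void intra4x4_dc(const unsigned char top[4], const unsigned char left[4],
                 unsigned char pred[4][4])
{
    int sum = 0;
    for (int i = 0; i < 4; i++)
        sum += top[i] + left[i];
    unsigned char dc = (unsigned char)((sum + 4) >> 3);
    for (int y = 0; y < 4; y++)
        for (int x = 0; x < 4; x++)
            pred[y][x] = dc;
}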

There is also the intra 16×16 prediction mode, which performs intra prediction on a 16×16 block without dividing the macroblock (Fig. 6). In this case, the predicted-image formation method is selected from the four types of spatial prediction shown in Fig. 6. The effects of this type of spatial prediction are as described above.

For the components Cb and Cr, the intra prediction image is formed by the intra prediction image generation block 8 for the components C1/C2, independently of the component Y. Fig. shows the internal configuration of the intra prediction image generation block 8 for the components C1/C2 of the first embodiment. If the color format identification information 1 specifies the 4:2:0 or 4:2:2 color format, the intra prediction mode 101 for the components C1/C2 specifies one of the four mode types shown in Fig. 7. Depending on the number of blocks for which the predicted image is to be formed, the process branches to the intra prediction image generation block 8a for the components Cb/Cr in the 4:2:0 format or the intra prediction image generation block 8b for the components Cb/Cr in the 4:2:2 format, in accordance with the color format. In the case of the 4:4:4 color format, since the intra prediction mode 101 for the components C1/C2 has exactly the same definition as the intra prediction mode for the component C0 used for processing the component Y, the process proceeds to the Y-type intra prediction image generation block 8c. The Y-type intra prediction image generation block 8c can be configured using substantially the same elements as the intra prediction image generation block 7 for the component C0. The difference is that the predicted image is formed for both components C1 and C2 when the common encoding/independent encoding identification information 2 specifies the common encoding process, whereas in the case of the independent encoding process the predicted image is formed from the intra prediction mode (101a or 101b) corresponding only to the component Ck indicated by the color component identification flag 33.
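A minimal sketch of this format-dependent dispatch (illustrative only; the routines stand in for the sub-blocks 8a, 8b and 8c, and the mapping of flag 33 values to C1/C2 is an assumption):

enum { FMT_420 = 0, FMT_422 = 1, FMT_444 = 2 };

/* Assumed placeholder prediction routines for the sub-blocks 8a, 8b and 8c. */
extern void predict_chroma_420(int mode);   /* block 8a */
extern void predict_chroma_422(int mode);   /* block 8b */
extern void predict_y_type(int mode);       /* block 8c (same structure as block 7) */

void intra_predict_c1_c2(int chroma_format, int independent_coding,
                         int color_component_flag,   /* flag 33          */
                         int mode_c1, int mode_c2)   /* modes 101a/101b  */
{
    if (chroma_format == FMT_420) {
        predict_chroma_420(mode_c1);         /* one mode shared by Cb and Cr */
    } else if (chroma_format == FMT_422) {
        predict_chroma_422(mode_c1);
    } else {                                 /* 4:4:4: Y-type prediction     */
        if (!independent_coding) {
            predict_y_type(mode_c1);         /* applied to C1 ...            */
            predict_y_type(mode_c2);         /* ... and to C2                */
        } else if (color_component_flag != 0) {
            /* independent encoding: only the component Ck of the current slice. */
            predict_y_type(color_component_flag == 1 ? mode_c1 : mode_c2);
        }
    }
}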

In all intra prediction image generation processes, the peripheral pixel values used as reference pixels must be taken from a decoded image that has not yet been subjected to the deblocking filter. Therefore, the pixel values before the deblocking filter process, obtained by summing the decoded prediction difference signal 17b, which is the output of the prediction difference decoding block 24 for the component C0, the prediction difference decoding block 25 for the component C1 and the prediction difference decoding block 26 for the component C2, with the predicted image 34, are stored in the line memory 202 for use in intra prediction image generation.

(3) Motion compensation process

The motion compensation process is performed by the motion compensation block 11 for the component C0 and the motion compensation block 12 for the components C1/C2, shown in Fig. These blocks are shared with the encoding device shown in Fig. 4.

In the case of the 4:2:0 and 4:2:2 color formats, the predicted image for the Y component signal is formed by the motion compensation block 11 for the component C0 on the basis of the motion information 102 for the component Y, decoded as part of the encoded macroblock data. The motion information includes a reference image index, indicating which reference image is used among the one or more reference images stored in the frame memory 201, and a motion vector applied to the reference image designated by the reference image index.

The motion information 102 for the component Y is decoded in accordance with the seven block sizes that serve as unit elements of motion-compensated prediction, shown in Figs. 8A-8H. Which of the block sizes of Figs. 8A-8H is used to perform motion compensation is determined by the coding mode 15 and the sub-macroblock type (sub_mb_type) described in the description of the variable-length decoding process. The motion information 102 for the component Y is allocated to each block that becomes a unit element of motion compensation, and the motion vector is applied to the reference image designated by the reference image index in the frame memory 201 to obtain the predicted image. For the components Cb and Cr, as shown in Fig., the variable-length decoding block 200 allocates the same reference image index as for the component Y, and the motion vector for the Y component is used to derive the motion information 103 for the components Cb/Cr (specifically, it is obtained by scaling the motion vector for the component Y by the ratio of the numbers of samples of the components Y, Cb and Cr).
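The derivation of the chroma motion information by scaling could be sketched as follows (simplified; the handling of fractional-sample precision and rounding is omitted and would follow the patent's figures):

typedef struct { int mvx, mvy; } mv_t;

enum { FMT_420 = 0, FMT_422 = 1, FMT_444 = 2 };

mv_t derive_chroma_mv(mv_t luma_mv, int chroma_format)
{
    mv_t c = luma_mv;
    switch (chroma_format) {
    case FMT_420:            /* half the samples horizontally and vertically */
        c.mvx /= 2;
        c.mvy /= 2;
        break;
    case FMT_422:            /* half the samples horizontally only */
        c.mvx /= 2;
        break;
    default:                 /* 4:4:4: same sampling, same vector */
        break;
    }
    return c;
}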

As described above with reference to Figs. 9 and 10, the method of forming the predicted image in motion compensation differs between the Y component and the Cb/Cr components. The motion compensation process for the Cb/Cr components is performed by the motion compensation block 12 for the components C1/C2.

Fig. shows the internal configuration of the motion compensation block 12 for the components C1/C2 of the first embodiment. If the color format identification information 1 specifies the 4:2:0 or 4:2:2 color format, the motion information 103 for the components Cb/Cr is formed by the variable-length decoding block 200 on the basis of the motion information 102 for the component C0, that is, the Y component, as described above, and is fed to the input of the motion compensation block 12 for the components C1/C2. Depending on the number of blocks for which the predicted image is to be formed, the process branches to the motion compensation block 12a for the components Cb/Cr in the 4:2:0 format or the motion compensation block 12b for the components Cb/Cr in the 4:2:2 format, in accordance with the color format. In the case of the 4:4:4 color format, since the motion information 103 for the components Cb/Cr has the same definition as the motion information 102 for the Y component used for processing the component Y, the process proceeds to the Y-type motion compensation block 12c. The Y-type motion compensation block 12c can be configured using substantially the same elements as the motion compensation block 11 for the component C0. The difference is that the predicted image is formed for both components C1 and C2 when the common encoding/independent encoding identification information 2 specifies the common encoding process, whereas in the case of the independent encoding process the predicted image is formed from the motion information (103a or 103b) corresponding only to the component Ck indicated by the color component identification flag 33.

(4) Prediction difference decoding process

The prediction difference decoding process is performed by the prediction difference decoding block 24 for the component C0, the prediction difference decoding block 25 for the component C1 and the prediction difference decoding block 26 for the component C2, shown in Fig. They are shared with the local decoding block 24 for the component C0, the local decoding block 25 for the component C1 and the local decoding block 26 for the component C2 of the encoding device shown in Fig. 4.

The prediction difference decoding process is a process of restoring the prediction difference signal 17b by performing inverse quantization/inverse transform on the quantized coefficient data 21-23 of the components C0-C2 of each macroblock, output from the variable-length decoding block 200. In the prediction difference decoding block 25 for the component C1 and the prediction difference decoding block 26 for the component C2, the process for the components C1/C2 is switched in accordance with the color format identification information 1 and the common encoding/independent encoding identification information 2.

For the component Y in the case of the 4:2:0 and 4:2:2 color formats, and for the component C0 in the case of the 4:4:4 color format, the prediction difference decoding process shown in Fig. is performed by the prediction difference decoding block 24 for the component C0. First, if the coding mode 15 is the intra 8×8 prediction mode, or if the transform block size designation information 104 specifies an integer transform on 8×8 blocks, the quantized coefficient data 21 are processed in the 8×8 blocks obtained by dividing the macroblock into four parts. After an inverse quantization process performed on the 8×8 blocks in accordance with the quantization parameter 32, an inverse 8×8 integer transform is performed to obtain the restored values 17b of the prediction difference signal.

If the coding mode 15 differs from the above, the process is switched depending on whether the coding mode 15 is the intra 16×16 prediction mode. In the case of the intra 16×16 prediction mode, an inverse quantization process is first performed, in accordance with the quantization parameter 32, only on the transform coefficients corresponding to the DC components of the 4×4 blocks in the quantized coefficient data 21, and then an inverse 4×4 Hadamard transform is performed. In this way, the restored DC-component values of the 4×4 blocks of the macroblock are obtained. Inverse quantization is also performed on the remaining 15 AC coefficients in accordance with the quantization parameter 32, and the inverse-quantized coefficients of each 4×4 block unit are obtained by combining them with the obtained DC components. Finally, by performing an inverse 4×4 integer transform on them, the prediction difference signal 17b is restored.
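For reference, the 4×4 inverse Hadamard transform applied to the 16 DC coefficients can be written with the usual butterfly structure; normalization and the interleaving with inverse quantization are omitted in this sketch:

static void inverse_hadamard_4x4(const int in[4][4], int out[4][4])
{
    int tmp[4][4];

    /* Transform rows. */
    for (int i = 0; i < 4; i++) {
        int a = in[i][0] + in[i][2];
        int b = in[i][0] - in[i][2];
        int c = in[i][1] + in[i][3];
        int d = in[i][1] - in[i][3];
        tmp[i][0] = a + c;
        tmp[i][1] = b + d;
        tmp[i][2] = b - d;
        tmp[i][3] = a - c;
    }
    /* Transform columns. */
    for (int j = 0; j < 4; j++) {
        int a = tmp[0][j] + tmp[2][j];
        int b = tmp[0][j] - tmp[2][j];
        int c = tmp[1][j] + tmp[3][j];
        int d = tmp[1][j] - tmp[3][j];
        out[0][j] = a + c;
        out[1][j] = b + d;
        out[2][j] = b - d;
        out[3][j] = a - c;
    }
}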

If the coding mode 15 is not the intra 16×16 prediction mode, the inverse quantization/inverse integer transform is performed for each 4×4 block to restore the prediction difference signal 17b.

For the components Cb and Cr in the 4:2:0 and 4:2:2 chroma formats, and for the components C1 and C2 in the case of the 4:4:4 color format, the prediction difference decoding processes are performed by the prediction difference decoding block 25 for the component C1 and the prediction difference decoding block 26 for the component C2.

For the components Cb and Cr in the 4:2:0 and 4:2:2 chroma formats, the prediction difference decoding process is performed according to the sequence shown in Figs. 21A and 21B. The difference between the 4:2:0 and 4:2:2 formats is whether the unit block for performing the Hadamard transform is a 2×2 block or a 2×4 block. First, only the DC components of the transform coefficients of the 4×4 blocks are collected to form the block subjected to the inverse Hadamard transform, and the inverse Hadamard transform is performed after inverse quantization. For the remaining 15 AC components, inverse quantization is performed individually, and an inverse 4×4 integer transform is performed after combining them with the DC components. In this way, the prediction difference signal 17b is restored.
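The corresponding 2×2 inverse Hadamard transform used for the chroma DC coefficients in the 4:2:0 case (the 4:2:2 case uses a 2×4 unit) is, ignoring normalization:

static void inverse_hadamard_2x2(const int in[2][2], int out[2][2])
{
    int a = in[0][0] + in[0][1];
    int b = in[0][0] - in[0][1];
    int c = in[1][0] + in[1][1];
    int d = in[1][0] - in[1][1];

    out[0][0] = a + c;
    out[0][1] = b + d;
    out[1][0] = a - c;
    out[1][1] = b - d;
}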

Fig. shows the internal configuration of the prediction difference decoding block 25 for the component C1 and the prediction difference decoding block 26 for the component C2. The processes for the components Cb and Cr in the 4:2:0 and 4:2:2 chroma formats are shown separately in Fig. as the prediction difference decoding block 25a for the component Cb or Cr in the 4:2:0 format and the prediction difference decoding block 25b for the component Cb or Cr in the 4:2:2 format, and the difference between them lies only in the processing described above. In the case of the 4:4:4 color format, since the quantized coefficient data 22 and 23 for the components C1/C2 are encoded in exactly the same way as the quantized coefficient data 21 for the component Y in the 4:2:0 and 4:2:2 formats, the process proceeds to the Y-type prediction difference decoding block 25c. The Y-type prediction difference decoding block 25c can be configured using the same elements as the prediction difference decoding block 24 for the component C0.

If the intra-only designation information 4 indicates "intra coding only", all images in the bit stream 30 are encoded by closed coding within a frame (intra prediction only), without prediction in the time direction based on motion-compensated prediction. In this case, the processing of the deblocking filter block 28 is disabled. Accordingly, in the decoding device that receives the bit stream 30, the deblocking filter process is not performed when only intra coding is used, and thus the amount of computation in the decoding device can be reduced. In addition, if the intra-only designation information 4 indicates "intra coding only", since motion-compensated prediction is not performed, no reference image is written to the frame memory 201. With this configuration, memory accesses are reduced.

The decoding device of the first embodiment has been described with a configuration in which decoding is performed upon receiving the bit stream 30 output from the encoding device described above. However, as long as a bit stream conforms to the format of the bit stream 30, bit streams output from an encoding device that performs encoding using only the 4:2:0 color format, or from encoding devices of other specifications that use only the 4:2:2 color format or both the 4:2:0 and 4:2:2 color formats, can also be correctly decoded.

With the encoding and decoding devices of the first embodiment, encoding and decoding can be performed jointly for a plurality of different color formats, such as 4:2:0, 4:2:2 and 4:4:4, with an efficient device configuration, and the mutual compatibility of encoded video data can be increased.

The first embodiment has been described for the case where the color space of the 4:2:0 and 4:2:2 color formats is the Y, Cb, Cr space. However, the same effects can be achieved when other color spaces, such as Y, Pb, Pr, are used.

1. A dynamic image encoding device for performing compression encoding on an input digital dynamic image signal, comprising:
a first intra prediction mode determination block for performing intra prediction on a component signal corresponding to a luminance component when the color format of the input dynamic image signal is one of the 4:2:0 and 4:2:2 formats;
a second intra prediction mode determination block for performing intra prediction on a component signal corresponding to a chrominance component when the color format of the input dynamic image signal is one of the 4:2:0 and 4:2:2 formats;
a variable-length coding block for variable-length encoding one element of the set consisting of a first intra prediction mode determined by the first intra prediction mode determination block and a second intra prediction mode determined by the second intra prediction mode determination block;
a first intra prediction image generation block for generating a first intra prediction image on the basis of the first intra prediction mode;
a second intra prediction image generation block for generating a second intra prediction image on the basis of the second intra prediction mode; and
an encoding block for performing transform and encoding on a prediction error signal obtained as the difference between one element of the set consisting of the first intra prediction image and the second intra prediction image and the corresponding color component signal included in the input dynamic image signal, wherein,
on the basis of a control signal providing the type of chrominance format of the input dynamic image signal, when the color format is one of the 4:2:0 and 4:2:2 formats, the first intra prediction mode determination block and the first intra prediction image generation block are applied to the luminance component of the input dynamic image signal, and the second intra prediction mode determination block and the second intra prediction image generation block are applied to the chrominance component of the input dynamic image signal;
when the color format is the 4:4:4 format, the first intra prediction mode determination block and the first intra prediction image generation block are applied to all color components of the input dynamic image signal to perform the encoding; and
the variable-length coding block multiplexes the control signal into the bit stream as encoding data to be applied to a unit of the dynamic image sequence.

2. The dynamic image encoding device according to claim 1, wherein: when the color format is the 4:4:4 format and another control signal distinguishing between common encoding and independent encoding indicates common encoding, the first intra prediction mode determination block evaluates some or all of the color components included in the input dynamic image signal and determines the first intra prediction mode to be obtained as a prediction mode common to all the color components included in the input dynamic image signal; and when the control signal indicates independent encoding, the first intra prediction mode determination block evaluates the signals of the respective color components included in the input dynamic image signal and determines the first intra prediction mode independently for each of the color components to perform the encoding.

3. A dynamic image encoding device for performing compression encoding on an input digital dynamic image signal, comprising:
a first motion detection block for determining first motion information with respect to a component signal corresponding to a luminance component when the color format of the input dynamic image signal is one of the 4:2:0 and 4:2:2 formats;
a second motion detection block for determining second motion information with respect to a component signal corresponding to a chrominance component when the color format of the input dynamic image signal is one of the 4:2:0 and 4:2:2 formats;
a variable-length coding block for variable-length encoding the first motion information determined by the first motion detection block;
a first motion compensation block for generating a first predicted image on the basis of the first motion information;
a second motion compensation block for generating a second predicted image on the basis of the second motion information; and
an encoding block for performing transform and encoding on a prediction error signal obtained as the difference between one element of the set consisting of the first predicted image and the second predicted image and the corresponding color component signal included in the input dynamic image signal, wherein,
on the basis of a control signal providing the type of chrominance format of the input dynamic image signal, when the color format is one of the 4:2:0 and 4:2:2 formats, the first motion detection block and the first motion compensation block are applied to the luminance component of the input dynamic image signal, and the second motion detection block and the second motion compensation block are applied to the chrominance component of the input dynamic image signal;
when the color format is the 4:4:4 format, the first motion detection block and the first motion compensation block are applied to all color components of the input dynamic image signal to perform the encoding; and
the variable-length coding block multiplexes the control signal into the bit stream as encoding data to be applied to a unit of the dynamic image sequence.

4. The dynamic image encoding device according to claim 3, wherein: when the color format is the 4:4:4 format and another control signal distinguishing between common encoding and independent encoding indicates common encoding, the first motion detection block evaluates some or all of the color components included in the input dynamic image signal and determines the first motion information as motion information common to all the color components included in the input dynamic image signal; and when the control signal indicates independent encoding, the first motion detection block evaluates the signals of the respective color components included in the input dynamic image signal and determines the first motion information independently for each of the color components.

5. A dynamic image decoding device for decoding a digital dynamic image signal on the basis of an input bit stream generated by performing compression encoding on the digital dynamic image signal, the dynamic image decoding device comprising:
a first intra prediction image generation block for generating a first intra prediction image with respect to a component signal corresponding to a luminance component when the color format of the dynamic image signal is one of the 4:2:0 and 4:2:2 formats;
a second intra prediction image generation block for generating a second intra prediction image with respect to a component signal corresponding to a chrominance component when the color format of the dynamic image signal is one of the 4:2:0 and 4:2:2 formats;
a variable-length decoding block for decoding color format identification information included in the input bit stream as information relating to a unit of the dynamic image sequence, the color format identification information specifying the type of color format of the compression-encoded dynamic image signal, for analyzing the input bit stream at the macroblock level on the basis of the color format identification information, and for decoding quantized coefficient data included in the input bit stream, the quantized coefficient data being obtained by transforming and encoding a prediction error signal associated either with the first intra prediction mode used for generating the first intra prediction image and with the first intra prediction image, or with the second intra prediction mode used for generating the second intra prediction image and with the second intra prediction image; and
a prediction error signal decoding block for decoding the quantized coefficient data into a prediction error signal by inverse quantization and inverse transform of the quantized coefficient data, wherein,
in the case where the color format identification information specifies that the color format is one of the 4:2:0 and 4:2:2 formats, the intra prediction image of the luminance component is generated on the basis of the first intra prediction image generation block and the first intra prediction mode, and the intra prediction image of the chrominance component is generated on the basis of the second intra prediction image generation block and the second intra prediction mode;
in the case where the color format identification information specifies that the color format is the 4:4:4 format, the intra prediction image of all color components is generated on the basis of the first intra prediction image generation block and the first intra prediction mode; and
the dynamic image signal is decoded by summing the generated intra prediction image with the output of the prediction error signal decoding block.

6. The dynamic image decoding device according to claim 5, wherein: when the color format is the 4:4:4 format, the variable-length decoding block additionally decodes, as information relating to a unit of the dynamic image sequence, a common encoding/independent encoding identification signal distinguishing between common encoding and independent encoding; when the common encoding/independent encoding identification signal specifies common encoding, the variable-length decoding block decodes a common prediction mode as the first intra prediction mode with respect to the signals of all the color components included in the dynamic image signal to be decoded; and when the common encoding/independent encoding identification signal specifies independent encoding, the variable-length decoding block decodes an independent prediction mode as the first intra prediction mode with respect to each of the signals of the respective color components included in the dynamic image data to be decoded.

7. A dynamic image decoding device for decoding a digital dynamic image signal on the basis of an input bit stream generated by performing compression encoding on the digital dynamic image signal, the dynamic image decoding device comprising:
a first motion compensation block for generating a first predicted image with respect to a component signal corresponding to a luminance component when the color format of the dynamic image signal is one of the 4:2:0 and 4:2:2 formats;
a second motion compensation block for generating a second predicted image with respect to a component signal corresponding to a chrominance component when the color format of the dynamic image signal is one of the 4:2:0 and 4:2:2 formats;
a variable-length decoding block for decoding color format identification information included in the input bit stream as information relating to a unit of the dynamic image sequence, the color format identification information specifying the type of color format of the compression-encoded dynamic image signal, for analyzing the input bit stream at the macroblock level on the basis of the color format identification information, and for decoding quantized coefficient data included in the input bit stream, the quantized coefficient data being obtained by transforming and encoding a prediction error signal associated either with the first motion information used for generating the first predicted image and with the first predicted image, or with the second motion information used for generating the second predicted image and with the second predicted image; and
a prediction error signal decoding block for decoding the quantized coefficient data into a prediction error signal by inverse quantization and inverse transform of the quantized coefficient data, wherein:
in the case where the color format identification information specifies that the color format is one of the 4:2:0 and 4:2:2 formats, the value of the second motion information to be decoded is generated on the basis of the first motion information, the predicted image for the luminance component is generated on the basis of the first motion compensation block and the first motion information, and the predicted image for the chrominance component is generated on the basis of the second motion compensation block and the second motion information;
in the case where the color format identification information specifies that the color format is the 4:4:4 format, the predicted image for all color components is generated on the basis of the first motion compensation block and the first motion information; and
the dynamic image signal is decoded by summing the generated predicted image with the output of the prediction error signal decoding block.

8. The dynamic image decoding device according to claim 7, wherein: when the color format is the 4:4:4 format, the variable-length decoding block additionally decodes, as information relating to a unit of the dynamic image sequence, a common encoding/independent encoding identification signal distinguishing between common encoding and independent encoding; when the common encoding/independent encoding identification signal specifies common encoding, the variable-length decoding block decodes common motion information as the first motion information with respect to the signals of all the color components included in the dynamic image signal to be decoded; and when the common encoding/independent encoding identification signal specifies independent encoding, the variable-length decoding block decodes independent motion information as the first motion information with respect to each of the signals of the respective color components included in the dynamic image data to be decoded.

9. A dynamic image encoding method for performing compression encoding on an input digital dynamic image signal, comprising:
a first intra prediction mode determination step of performing intra prediction on a component signal corresponding to a luminance component when the color format of the input dynamic image signal is one of the 4:2:0 and 4:2:2 formats;
a second intra prediction mode determination step of performing intra prediction on a component signal corresponding to a chrominance component when the color format of the input dynamic image signal is one of the 4:2:0 and 4:2:2 formats;
a variable-length coding step of variable-length encoding one element of the set consisting of a first intra prediction mode determined in the first intra prediction mode determination step and a second intra prediction mode determined in the second intra prediction mode determination step;
a first intra prediction image generation step of generating a first intra prediction image on the basis of the first intra prediction mode;
a second intra prediction image generation step of generating a second intra prediction image on the basis of the second intra prediction mode; and
an encoding step of performing transform and encoding on a prediction error signal obtained as the difference between one element of the set consisting of the first intra prediction image and the second intra prediction image and the corresponding color component signal included in the input dynamic image signal, wherein,
on the basis of a control signal providing the type of chrominance format of the input dynamic image signal, when the color format is one of the 4:2:0 and 4:2:2 formats, the first intra prediction mode determination step and the first intra prediction image generation step are applied to the luminance component of the input dynamic image signal, and the second intra prediction mode determination step and the second intra prediction image generation step are applied to the chrominance component of the input dynamic image signal;
when the color format is the 4:4:4 format, the first intra prediction mode determination step and the first intra prediction image generation step are applied to all color components of the input dynamic image signal to perform the encoding; and
in the variable-length coding step, the control signal is multiplexed into the bit stream as encoding data to be applied to a unit of the dynamic image sequence.

10. A dynamic image encoding method for performing compression encoding on an input digital dynamic image signal, comprising:
a first motion detection step of determining first motion information with respect to a component signal corresponding to a luminance component when the color format of the input dynamic image signal is one of the 4:2:0 and 4:2:2 formats;
a second motion detection step of determining second motion information with respect to a component signal corresponding to a chrominance component when the color format of the input dynamic image signal is one of the 4:2:0 and 4:2:2 formats;
a variable-length coding step of variable-length encoding the first motion information determined in the first motion detection step;
a first motion compensation step of generating a first predicted image on the basis of the first motion information;
a second motion compensation step of generating a second predicted image on the basis of the second motion information; and
an encoding step of performing transform and encoding on a prediction error signal obtained as the difference between one element of the set consisting of the first predicted image and the second predicted image and the corresponding color component signal included in the input dynamic image signal, wherein,
on the basis of a control signal providing the type of chrominance format of the input dynamic image signal, when the color format is one of the 4:2:0 and 4:2:2 formats, the first motion detection step and the first motion compensation step are applied to the luminance component of the input dynamic image signal, and the second motion detection step and the second motion compensation step are applied to the chrominance component of the input dynamic image signal;
when the color format is the 4:4:4 format, the first motion detection step and the first motion compensation step are applied to all color components of the input dynamic image signal to perform the encoding; and
in the variable-length coding step, the control signal is multiplexed into the bit stream as encoding data to be applied to a unit of the dynamic image sequence.

11. A dynamic image decoding method for decoding a digital dynamic image signal on the basis of an input bit stream generated by performing compression encoding on the digital dynamic image signal, the dynamic image decoding method comprising:
a first intra prediction image generation step of generating a first intra prediction image with respect to a component signal corresponding to a luminance component when the color format of the dynamic image signal is one of the 4:2:0 and 4:2:2 formats;
a second intra prediction image generation step of generating a second intra prediction image with respect to a component signal corresponding to a chrominance component when the color format of the dynamic image signal is one of the 4:2:0 and 4:2:2 formats;
a variable-length decoding step of decoding color format identification information included in the input bit stream as information relating to a unit of the dynamic image sequence, the color format identification information specifying the type of color format of the compression-encoded dynamic image signal, of analyzing the input bit stream at the macroblock level on the basis of the color format identification information, and of decoding quantized coefficient data included in the input bit stream, the quantized coefficient data being obtained by transforming and encoding a prediction error signal associated either with the first intra prediction mode used for generating the first intra prediction image and with the first intra prediction image, or with the second intra prediction mode used for generating the second intra prediction image and with the second intra prediction image; and
a prediction error decoding step of decoding the quantized coefficient data into a prediction error signal by inverse quantization and inverse transform of the quantized coefficient data, wherein,
in the case where the color format identification information specifies that the color format is one of the 4:2:0 and 4:2:2 formats, the intra prediction image of the luminance component is generated on the basis of the first intra prediction image generation step and the first intra prediction mode, and the intra prediction image of the chrominance component is generated on the basis of the second intra prediction image generation step and the second intra prediction mode;
in the case where the color format identification information specifies that the color format is the 4:4:4 format, the intra prediction image of all color components is generated on the basis of the first intra prediction image generation step and the first intra prediction mode; and
the dynamic image signal is decoded by summing the generated intra prediction image with the output obtained in the prediction error decoding step.

12. A dynamic image decoding method for decoding a digital dynamic image signal on the basis of an input bit stream generated by performing compression encoding on the digital dynamic image signal, the dynamic image decoding method comprising:
a first motion compensation step of generating a first predicted image with respect to a component signal corresponding to a luminance component when the color format of the dynamic image signal is one of the 4:2:0 and 4:2:2 formats;
a second motion compensation step of generating a second predicted image with respect to a component signal corresponding to a chrominance component when the color format of the dynamic image signal is one of the 4:2:0 and 4:2:2 formats;
a variable-length decoding step of decoding color format identification information included in the input bit stream as information relating to a unit of the dynamic image sequence, the color format identification information specifying the type of color format of the compression-encoded dynamic image signal, of analyzing the input bit stream at the macroblock level on the basis of the color format identification information, and of decoding quantized coefficient data included in the input bit stream, the quantized coefficient data being obtained by transforming and encoding a prediction error signal associated either with the first motion information used for generating the first predicted image and with the first predicted image, or with the second motion information used for generating the second predicted image and with the second predicted image; and
a prediction error decoding step of decoding the quantized coefficient data into a prediction error signal by inverse quantization and inverse transform of the quantized coefficient data, wherein,
in the case where the color format identification information specifies that the color format is one of the 4:2:0 and 4:2:2 formats, the value of the second motion information to be decoded is generated on the basis of the first motion information, the predicted image for the luminance component is generated on the basis of the first motion compensation step and the first motion information, and the predicted image for the chrominance component is generated on the basis of the second motion compensation step and the second motion information;
in the case where the color format identification information specifies that the color format is the 4:4:4 format, the predicted image for all color components is generated on the basis of the first motion compensation step and the first motion information; and
the dynamic image signal is decoded by summing the generated predicted image with the output obtained in the prediction error decoding step.



 

Same patents:

FIELD: information technology.

SUBSTANCE: invention relates to encoding and decoding digital images. A device is proposed for encoding/decoding a dynamic image, in which during compressed encoding through input of data signals of the dynamic image in 4:4:4 format, the first encoding process is used for encoding three signals of colour components of input signals of the dynamic image in general encoding mode and the second encoding process is used for encoding three signals of colour components input signals of the dynamic image in corresponding independent encoding modes. The encoding process is carried out by selecting any of the first and second encoding processes, and compressed data contain an identification signal for determining which process was selected.

EFFECT: more efficient encoding dynamic image signals, without distinction of the number of readings between colour components.

18 cl, 15 dwg

FIELD: information technologies.

SUBSTANCE: method is suggested for selection and processing of video content, which includes the following stages: quantisation of colour video space; making selection of dominating colour with application of mode, median of average or weighted average of pixel colorations; application of perception laws for further production of dominating colorations by means of the following steps: transformation of colorations; weighted average with application of pixel weight function affected by scene content; and expanded selection of dominating colour, where pixel weighing is reduced for majority pixels; and transformation of selected dominating colour into colour space of surrounding light with application of three-colour matrices. Colour of interest may additionally be analysed for creation of the right dominating colour, at that former video frames may control selection of dominating colours in the future frames.

EFFECT: creation of method to provide imitating surrounding lighting by means of dominating colour separation from selected video areas, with application of efficient data traffic, which codes averaged or characteristic values of colours.

20 cl, 43 dwg

FIELD: information technologies.

SUBSTANCE: invention concerns systems of coding/decoding of the squeezed image with the use of orthogonal transformation and forecasting/neutralisation of a motion on the basis of resolving ability of builders of colour and colour space of an input picture signal. The device (10) codings of the information of the image the forecastings (23) block with interior coding is offered is intended for an adaptive dimensional change of the block at generating of the predicted image, on the basis of the signal of a format of chromaticity specifying, whether is resolving ability of builders of colour one of the format 4:2:0, a format 4:2:2 and a format 4:4:4, and a signal of the colour space specifying, whether the colour space one of YCbCr, RGB and XYZ is. The block (14) orthogonal transformations and the quantization block (15) are intended for change of a procedure of orthogonal transformation and quantization procedure according to a signal of a format of chromaticity and a signal of colour space. The block (16) of return coding codes a signal of a format of chromaticity and a signal of colour space for insert of the coded signals gained, thus, in the squeezed information of the image.

EFFECT: increase of image coding and decoding efficiency.

125 cl, 12 dwg, 1 tbl

FIELD: information technologies.

SUBSTANCE: device and method are suggested which are intended for effective correction of wrong colour, such as purple fringe, created as a result of chromatic aberration, and for generating and output of high quality image data. Pixel with saturated white colour is detected from image data, at that in the area around detected pixel having saturated white colour the pixel of wrong colour and pixels having colour corresponding to wrong colour such as purple fringe are detected out of specified area. Detected pixels are determined as wrong colour pixels, and correction processing on the base of surrounding pixels values is performed over detected wrong colour pixels.

EFFECT: design of image processing device which allows to detect effectively an area of wrong colour.

25 cl, 22 dwg

FIELD: physics.

SUBSTANCE: invention concerns image processing technology, particularly YCbCr-format colour image data coding/decoding to smaller data volume by finding correlation between Cb and Cr chroma signal components of colour image data. The invention claims colour image coding method involving stages of: chroma signal component conversion in each of two or more mutual prediction modes; cost calculation for conversion values in each of two or more mutual prediction modes with the help of cost function defined preliminarily; selection of one or more mutual prediction modes on the basis of calculation result and conversion value output for the selected mutual prediction mode; entropic coding of output conversion values, where preliminarily defined cost function is selected out of cost function defining distortion in dependence of transfer rate, function of absolute subtract value amount, function of absolute converted subtract, function of square subtract sum and function of average absolute subtract.

EFFECT: increased efficiency of image coding.

88 cl, 23 dwg

FIELD: image processing systems, in particular, methods and systems for encoding and decoding images.

SUBSTANCE: in accordance to the invention, input image is divided onto several image blocks (600), containing several image elements (610), further image blocks (600) are encoded to form encoded representations (700) of blocks, which contains color code word (710), intensity code word (720) and intensity representations series (730). Color code word (710) is a representation of colors of elements (610) of image block (600). Intensity code word (720) is a representation of a set of several intensity modifiers for modification of intensity of elements (610) in image block (600), and series (730) of representations includes representation of intensity for each element (610) in image block (600), where the series identifies one of intensity modifiers in a set of intensity modifiers. In process of decoding, code words (710, 720) of colors and intensity and intensity representation (730) are used to generate decoded representation of elements (610) in image block (600).

EFFECT: increased efficiency of processing, encoding/decoding of images for adaptation in mobile devices with low volume and productivity of memory.

9 cl, 21 dwg, 3 tbl

FIELD: method and device for video encoding and decoding which is scalable across color space.

SUBSTANCE: in the method, encoder may inform decoder about position of brightness data in bit stream, and decoder may transform colored image to halftone image when necessary. In accordance to the invention, brightness data are serially inserted from all macro-blocks contained in a section, into bit stream, chromaticity data are inserted serially from all macro-blocks contained in a section, into bit stream, after inserted brightness data and bit stream which contains inserted brightness data and chromaticity data is transmitted.

EFFECT: creation of method for video encoding and decoding which is scalable across color space.

4 cl, 12 dwg
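
A minimal sketch of the bitstream ordering described above, with hypothetical macroblock records carrying separate luma and chroma payloads; a grayscale-only decoder could stop reading after the luma run, whose end position the encoder can signal.

    def build_slice_bitstream(macroblocks):
        """Insert luma data of every macroblock in the slice first, then all chroma data."""
        stream = []
        for mb in macroblocks:                      # brightness (luma) data of all macroblocks
            stream.append(("luma", mb["luma"]))
        luma_end = len(stream)                      # position the encoder can signal to the decoder
        for mb in macroblocks:                      # chromaticity data follows the luma run
            stream.append(("chroma", mb["chroma"]))
        return stream, luma_end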

FIELD: systems for analyzing digital images and, in particular, systems for showing hidden objects in digital images.

SUBSTANCE: according to the invention, a method is claimed for visually displaying a first object hidden by a second object, where the first object has a colour contrasting with the colour of the second object, and the second object is made of a material that passes visible light, but the amount of visible light passing through the second object is insufficient for the first object to be visible to the human eye. The method includes capturing a digital image of the first and second objects using a visible light sensor. The digital image data received by the computer system contain both data of the first object and data of the second object, where both contain colour information, and the contrast between the first and second objects amounts to approximately 10% of full scale, so that on a 256-level colour scale the difference equals approximately 25 levels. The data of the second object are then filtered, after which the values associated with the data of the first object are increased until they become discernible when reproduced on a display.

EFFECT: method for showing hidden objects in a digital image without exposing the object to special signals.

3 cl, 6 dwg
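
A loosely related sketch of the enhancement idea, assuming an 8-bit grayscale image in which the occluding second object varies slowly; the box filter and gain are illustrative choices, not the filtering defined in the patent.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def reveal_hidden_object(gray, kernel=9, gain=8.0):
        """Suppress the slowly varying occluder and amplify the faint ~25-level contrast."""
        img = gray.astype(float)
        background = uniform_filter(img, size=kernel)   # estimate of the second (occluding) object
        detail = img - background                       # residual dominated by the first object's data
        return np.clip(128 + gain * detail, 0, 255).astype(np.uint8)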

FIELD: radio communications; color television sets.

SUBSTANCE: the novelty is that the proposed colour television set, which has a radio channel unit, horizontal sweep unit, vertical sweep unit, chrominance unit, sound accompaniment unit and colour picture tube, is additionally provided with three identical line-doubling channels, a pulse generator and a switch, and a second set of three planar cathodes mounted above the first set and a second set of three cathode heaters are introduced into its colour picture tube. The reproduced frame has 1156 active lines and 1 664 640.5 resolving elements.

EFFECT: enhanced resolving power.

1 cl, 5 dwg, 1 tbl

The invention relates to colour television techniques and can be used in the SECAM decoder of colour TV sets and video devices.

FIELD: information technologies.

SUBSTANCE: the video coding method consists in selecting data for coding in the first and second layers so that the data can be decoded in a single joint layer, and then coding the selected data in the first and second layers by coding a coefficient in the first layer and coding a differential refinement of the first-layer coefficient in the second layer.

EFFECT: reduced computational complexity and memory requirements when decoding scalable video data.

58 cl, 11 dwg
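
A minimal sketch of two-layer coefficient coding with a coarse base-layer quantiser and a finer refinement step in the second layer; the quantisation step sizes are illustrative assumptions.

    def encode_two_layers(coeff, q_base=8, q_enh=2):
        """Code the coefficient in the first layer and only its differential refinement in the second layer."""
        base_level = round(coeff / q_base)                           # first-layer code
        refinement = round((coeff - base_level * q_base) / q_enh)    # second-layer differential refinement
        return base_level, refinement

    def decode_joint(base_level, refinement, q_base=8, q_enh=2):
        """The decoder combines both layers in a single joint reconstruction."""
        return base_level * q_base + refinement * q_enh

    # Example: coefficient 37 -> base level 5 (reconstructs 40), refinement -2, joint reconstruction 36.
    print(encode_two_layers(37), decode_joint(*encode_two_layers(37)))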

FIELD: information technologies.

SUBSTANCE: a method is proposed for modelling the context of video signal coding information for compression or decompression of the coding information. The initial value of the function for probabilistic coding of the coding information of the enhanced-layer video signal is determined on the basis of the coding information of the corresponding base-layer video signal.

EFFECT: method of modelling the coding information context that increases the data compression ratio by using context-adaptive binary arithmetic coding, the entropy coding scheme of the improved video codec, when the scalable coding scheme is combined with MPEG-4 AVC.

6 cl, 7 dwg
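
A minimal sketch of deriving an initial probability for an enhanced-layer context from base-layer coding information; the mapping from base-layer flag statistics to an initial probability is an illustrative assumption.

    def init_enhancement_context(base_layer_flags):
        """Set the initial probability of 'bin = 1' for an enhanced-layer context from base-layer statistics."""
        if not base_layer_flags:
            return 0.5                                    # no base-layer information: uninformative start
        p_one = sum(base_layer_flags) / len(base_layer_flags)
        return min(max(p_one, 0.01), 0.99)                # keep the arithmetic coder adaptable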

FIELD: information technology.

SUBSTANCE: the invention relates to encoding and decoding of digital images. A device is proposed for encoding/decoding a dynamic image in which, during compressed encoding of input dynamic image signals in 4:4:4 format, a first encoding process encodes the three colour component signals of the input dynamic image signals in a common encoding mode, and a second encoding process encodes the three colour component signals in corresponding independent encoding modes. Encoding is carried out by selecting either of the first and second encoding processes, and the compressed data contain an identification signal indicating which process was selected.

EFFECT: more efficient encoding of dynamic image signals having no difference in the number of samples between colour components.

18 cl, 15 dwg
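
A minimal sketch of the process switch, with hypothetical encode_common and encode_independent helpers standing in for the two encoding processes; the identification signal is stored with the compressed data so the decoder can apply the matching process.

    def encode_444_frame(components, use_common_mode, encode_common, encode_independent):
        """Encode the three 4:4:4 colour component signals with either process."""
        if use_common_mode:
            payload = encode_common(components)                     # one common mode for all three components
        else:
            payload = [encode_independent(c) for c in components]   # each component coded independently
        identification_signal = 1 if use_common_mode else 0         # written into the compressed data
        return {"id_signal": identification_signal, "payload": payload}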

FIELD: physics; image processing.

SUBSTANCE: the present invention relates to methods of reducing visual distortions in a frame of a digital video signal. The method of reducing visual distortions arising at block boundaries between decoded image blocks in a frame of a digital video signal involves adaptive filtering of the block boundary formed between a first decoded image block on one side of the boundary and a second decoded image block on the other side. The first decoded image block is encoded using a first type of encoding method and the second decoded image block is encoded using a second type of encoding method. The value of at least one parameter of the adaptive boundary filtering operation performed at the block boundary is determined by analysing the types of the first and second encoding methods.

EFFECT: reduction of blocking distortions.

59 cl, 6 dwg
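
A minimal sketch of deriving a boundary filtering parameter from the types of the coding methods on the two sides of a block boundary; the strength values and the classification rule are illustrative assumptions.

    def boundary_filter_strength(type_left, type_right):
        """Pick a filtering strength for the boundary between two decoded blocks."""
        if "intra" in (type_left, type_right):
            return 4                    # strongest filtering next to intra-coded blocks (illustrative)
        if type_left != type_right:
            return 2                    # different coding methods meet at the boundary
        return 1                        # same method on both sides, mild filtering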

FIELD: information technologies.

SUBSTANCE: the invention refers to communication systems using multimedia data compression. Digital multimedia data include intraframe information and interframe information. In addition to the transmission of separate complete frames defined as interframes, some interframes ("hybrid" frames) contain partial intraframe information, so that at least part of the intraframe information can be recovered from the hybrid frames if an entire intraframe is lost.

EFFECT: multimedia data compression using P and B frames with enhanced fault tolerance.

38 cl, 4 dwg
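
A minimal sketch of one way to spread partial intraframe information over the interframes between two full intraframes; the stripe-refresh pattern and its parameters are illustrative assumptions, not the scheme claimed in the patent.

    def plan_hybrid_frames(num_frames, intra_period=12, stripes=4):
        """Decide which interframes carry partial intraframe information (one stripe refreshed per frame)."""
        plan = []
        for i in range(num_frames):
            if i % intra_period == 0:
                plan.append(("intra", None))        # full intraframe
            else:
                stripe = (i - 1) % stripes          # which stripe gets intra data in this hybrid frame
                plan.append(("hybrid", stripe))
        return plan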

FIELD: physics, video technics.

SUBSTANCE: the invention concerns video signal encoding/decoding, particularly the adaptive choice of a context model for entropy encoding, and a video decoder. The invention claims a method of encoding a residual prediction flag that indicates whether the residual data of an enhanced-layer block of a multilayer video signal are predicted from the residual data of the base-layer block correlated with it. The method involves the stages of calculating the energy of the residual data of the base-layer block, determining the residual prediction flag encoding method according to that energy, and encoding the residual prediction flag by the determined encoding method.

EFFECT: method and device for efficient flag compression.

66 cl, 17 dwg
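
A minimal sketch of choosing the residual prediction flag coding from the base-layer residual energy; the energy threshold, the inference rule and the context split are illustrative assumptions.

    import numpy as np

    def encode_residual_prediction_flag(flag, base_residual, energy_thresh=16):
        """Choose how to code the flag from the energy of the base-layer residual."""
        energy = float(np.square(base_residual).sum())
        if energy < energy_thresh:
            # residual prediction brings nothing: the flag is inferred as 0 and not coded at all
            return {"coded": False, "inferred": 0}
        # otherwise code the flag with a context selected by the energy (illustrative split)
        context = 0 if energy < 4 * energy_thresh else 1
        return {"coded": True, "context": context, "bit": flag}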

FIELD: information technology.

SUBSTANCE: a motion prediction unit in the image encoding device calculates motion vectors (MV) using motion estimation against the previous frame, which serves as the key frame for the incoming current frame; a rotation and correlation module calculates rotation angles of the current frame; an extraction device selects the key frame according to the rotation angles; a motion compensator reconstructs the motion-predicted frame, and the coder generates a difference signal between the current frame and the motion-predicted frame and encodes the difference signal, the MV vectors and the rotation angles.

EFFECT: increased data compression ratio.

18 cl, 10 dwg
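
A minimal sketch of the encoder-side flow, with estimate_mv, estimate_rotation and compensate as hypothetical helpers; only the data flow between motion vectors, rotation angle and difference signal is shown, assuming the frames are NumPy-like arrays.

    def encode_with_rotation(current, key_frame, estimate_mv, estimate_rotation, compensate):
        """Form the motion-predicted frame and the difference signal to be coded (hypothetical helpers)."""
        angle = estimate_rotation(current, key_frame)    # rotation angle of the current frame
        mvs = estimate_mv(current, key_frame)            # motion vectors against the key frame
        predicted = compensate(key_frame, mvs, angle)    # reconstructed motion-predicted frame
        residual = current - predicted                   # coded together with the MV vectors and angle
        return residual, mvs, angle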

FIELD: coding and compression of video signal.

SUBSTANCE: a method and system for image coding using context-based adaptive variable-length coding, according to which the transform coefficients are divided into blocks of size 4n×4m (where n, m are positive integers equal to or greater than 1); every block is scanned in a zigzag manner to obtain an ordered vector of coefficients of length 16·n·m; the ordered vector is sub-sampled in an alternating manner to obtain sub-sampled sequences of transform coefficients before the transform coefficients are coded with an entropy coder.

EFFECT: reduction of the number of bits required to represent the quantised coefficients produced by block transforms larger than four by four.

20 cl, 23 dwg
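
A minimal sketch of the zigzag scan of a 4n×4m coefficient block followed by alternating sub-sampling into sub-sequences; the exact scan order and the number of sub-sequences are illustrative assumptions.

    def zigzag_indices(rows, cols):
        """Zigzag scan order for a rows x cols block of transform coefficients."""
        return sorted(((r, c) for r in range(rows) for c in range(cols)),
                      key=lambda rc: (rc[0] + rc[1],
                                      -rc[1] if (rc[0] + rc[1]) % 2 else rc[1]))

    def scan_and_subsample(block, num_subsequences=4):
        """Order a 4n x 4m block in zigzag fashion, then split it into alternating sub-sequences."""
        rows, cols = len(block), len(block[0])
        ordered = [block[r][c] for r, c in zigzag_indices(rows, cols)]   # length 16*n*m
        return [ordered[i::num_subsequences] for i in range(num_subsequences)]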

FIELD: video processing.

SUBSTANCE: the encoding method includes determining whether all flags of the current layer included in the specified block area are equal to the flags of the base layer, setting a predefined prediction flag according to the result of the determination, and, if the flags of the current layer are determined to be equal to the flags of the base layer, omitting the flags of the current layer and inserting the flags of the base layer and said prediction flag into the bit stream.

EFFECT: improved encoding efficiency of the various flags used in a multilayer scalable video codec, based on interlayer correlation; a method and device for such efficient flag encoding are suggested.

21 cl, 12 dwg
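
A minimal sketch of the flag-prediction rule: when the current-layer flags repeat the base-layer flags, only the prediction flag is written; the record layout is an illustrative assumption.

    def encode_layer_flags(current_flags, base_flags):
        """If the enhancement-layer flags repeat the base-layer flags, send only the prediction flag."""
        if current_flags == base_flags:
            return {"flag_prediction": 1}                    # current-layer flags omitted
        return {"flag_prediction": 0, "flags": current_flags}

    def decode_layer_flags(entry, base_flags):
        """Recover the enhancement-layer flags, reusing the base layer when prediction is signalled."""
        if entry["flag_prediction"] == 1:
            return list(base_flags)
        return entry["flags"]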

FIELD: image encoding and, in particular, encoding of video frame blocks.

SUBSTANCE: a method and device are suggested for encoding a digital image using block prediction in intra mode, where a list of prediction modes is produced for each combination of the prediction modes of adjacent blocks. The modes for each combination of prediction modes may be divided into two groups. The first group contains the m most probable prediction modes, and the second group contains (n-m) prediction modes, where n is the total number of available prediction modes. The modes in the first group are ordered according to their probability; this order may be given as a list of modes from the most probable to the least probable. The modes belonging to the second group may be ordered using a predetermined rule, which may depend on information already available in the decoder.

EFFECT: reduced memory volume with minimal loss of encoding efficiency.

6 cl, 6 tbl, 8 dwg
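
A minimal sketch of building the ordered mode list for one combination of neighbouring modes; probability_table, the total number of modes n and the choice m = 1 are illustrative assumptions.

    def build_mode_list(mode_above, mode_left, probability_table, n_modes=9, m=1):
        """Order the prediction modes for one combination of neighbouring block modes."""
        # first group: the m most probable modes for this neighbour combination
        ranked = probability_table[(mode_above, mode_left)]     # modes sorted by probability
        first_group = ranked[:m]
        # second group: the remaining (n - m) modes in a fixed rule known to the decoder,
        # here simply ascending mode number (illustrative choice)
        second_group = sorted(mode for mode in range(n_modes) if mode not in first_group)
        return first_group + second_group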
