Image encoding method, image decoding method, image encoder and image decoder

FIELD: information technology.

SUBSTANCE: in an image encoding system, compression processing is applied to an input image signal comprising multiple colour components; the encoded data obtained by independently encoding the input image signal for each of the colour components, and a parameter indicating which colour component the encoded data corresponds to, are multiplexed into the bit stream.

EFFECT: higher encoding efficiency, the possibility of including the data for one image in one access unit, and the possibility of setting identical time information and a single encoding mode for the corresponding colour components.

6 cl, 25 dwg

 

The technical field to which the invention relates.

The present invention relates to an image encoding method and an image encoder for applying compression processing to input image signals consisting of a set of color components, to an image decoding method and an image decoder that receive a bit stream in which image signals consisting of a set of color components have been compressed and perform decoding processing, to the bit stream of the encoded image, and to a recording medium.

Background art

Conventional coding systems in accordance with international standards such as MPEG and ITU-T H.26x have mainly assumed an input signal format called the "4:2:0 format". The 4:2:0 format converts a color image signal such as RGB (red-green-blue) into a luminance component (Y) and two chrominance components (CB and CR), and reduces the number of samples of the chrominance components by half in both the horizontal and vertical directions relative to the number of samples of the luminance component. Because of the characteristics of human vision, the visibility of the chrominance components is low compared with the luminance component, so conventional coding systems based on the international standards have reduced the amount of information to be coded by reducing the number of samples of the chrominance components before performing the encoding.

On the other hand, due to the increased resolution and increased gradation of displays in recent years, systems have been investigated that encode the image with a number of samples identical to that of the luminance component, without subsampling the chrominance components. The format in which the number of samples of the luminance component and the number of samples of the chrominance components are identical is called the "4:4:4 format". For input in the 4:4:4 format, the "high 444 profile" has been defined (see, for example, non-patent document 1).

Whereas the conventional 4:2:0 format assumes that the chrominance components are subsampled and is limited to the Y, CB, CR color space, in the 4:4:4 format there is no difference in the sampling ratio among the components, so it is possible to use R, G and B directly instead of Y, CB and CR, and to define and use other color spaces.

Non-patent document 1: ISO/IEC 14496-10 | ITU-T H.264 standard (Advanced Video Coding: AVC)

The invention

Problems to be solved by the invention

When using the high 444 profile defined in ISO/IEC 14496-10 | ITU-T H.264 (2005) (hereinafter called AVC), as in the conventional coding system, it is necessary to perform encoding processing and decoding processing using the macroblock as a unit.

In other words, because the three color components are included in one macroblock, the data of the respective color components are processed in order within a single macroblock unit. This approach is not suitable for performing the encoding and decoding processing in parallel.

On the other hand, AVC defines the 4:0:0 format. This format was originally intended for encoding an image consisting only of the luminance component, that is, a monochrome image. It is also possible to generate three independent streams of encoded data by applying encoding processing to the respective three color components of the 4:4:4 format using the 4:0:0 format. In this case, since the respective color components are processed independently, parallel processing becomes possible.

However, since the respective color components are processed independently, under this standard it becomes impossible to set the same time information and use a uniform encoding mode for the respective color components. Therefore, there is a problem in that random-access playback (fast forward, reverse playback, etc.) and editing processing cannot easily be performed in units of pictures.

This problem is explained below. The various data defined in AVC are arranged in the order of the access unit delimiter (AUD), the sequence parameter set (SPS), the picture parameter set (PPS) and the picture data. Data not associated with the present invention is not explained here.

In AVC it is specified that one access unit (AU) consists of a single picture (equivalent to one frame or one field). The boundary between access units can be detected using the access unit delimiter (AUD). For example, in the AVC baseline profile, since access unit delimiters are located at the boundaries of the respective pictures, it becomes possible to independently and easily extract one access unit by detecting the access unit delimiters. This allows the data for a single picture to be decoded.
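
As an illustration, AUD-based splitting of a stream into access units can be sketched as follows. This is a simplified sketch, not part of the standard: it assumes 4-byte start codes (00 00 00 01) and ignores emulation-prevention bytes; nal_unit_type 9 is the AUD in AVC.

```python
def split_access_units(stream: bytes):
    """Split an AVC byte stream into access units at AUD NAL units.

    Simplified sketch: assumes 4-byte start codes and no
    emulation-prevention handling; nal_unit_type 9 is the AUD.
    """
    START = b"\x00\x00\x00\x01"
    units, current = [], bytearray()
    pos = 0
    while pos < len(stream):
        nxt = stream.find(START, pos + 1)
        if nxt == -1:
            nxt = len(stream)
        nal = stream[pos:nxt]               # start code + NAL unit
        nal_unit_type = nal[4] & 0x1F       # low 5 bits of the header byte
        if nal_unit_type == 9 and current:  # an AUD opens a new access unit
            units.append(bytes(current))
            current = bytearray()
        current += nal
        pos = nxt
    if current:
        units.append(bytes(current))
    return units
```

Each returned element then contains everything from one AUD up to (but not including) the next, which is exactly the property the restriction discussed below relies on.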

On the other hand, when the three color components are encoded in the 4:0:0 format using the existing AVC scheme, an access unit is defined for each of the color components. Accordingly, one picture is composed of three access units. Therefore, it becomes impossible to separate the data for one picture simply by detecting access unit delimiters, and random-access playback and editing processing in units of pictures cannot easily be performed. Moreover, because the encoding processing is performed independently for each of the color components, it is difficult to set the same time information and use a single encoding mode.

Thus, it is an object of the present invention to provide an image encoding method and an image decoding method, an image encoder and an image decoder, a bit stream of the encoded image, and a recording medium that allow the data for one picture to be included in a single access unit by extending AVC, even when encoding processing is applied to the respective three color components of the 4:4:4 format using the 4:0:0 format, and that allow the same time information and a single encoding mode to be set for the respective color components.

Means of solving the problem

In accordance with the present invention, in an image encoding method for applying compression processing to an input image signal that includes a set of color components, the encoded data obtained by independently encoding the input image signal of each color component, and a parameter indicating which color component the encoded data corresponds to, are multiplexed into the bit stream.

In addition, in accordance with the present invention, in an image decoding method for performing decoding processing based on an input bit stream generated by compressing an image signal that includes a set of color components, the decoding processing for the encoded data of each of the color components is performed by using a parameter indicating which color component the encoded data corresponds to.

In addition, in accordance with the present invention, an image encoder for applying compression processing to an input image signal that includes a set of color components includes multiplexing means for multiplexing into the bit stream the encoded data obtained by independently encoding the input image signal of each color component, and a parameter indicating which color component the encoded data corresponds to.

In addition, in accordance with the present invention, an image decoder for performing decoding processing on the basis of an input bit stream generated by compressing an image signal that includes a set of color components includes detection means for detecting a parameter indicating which color component the encoded data corresponds to.

In addition, in accordance with the present invention, in a bit stream generated by compression-encoding an input image signal that includes a set of color components, the compressed data of the image signal of each of the color components is arranged in units of slices, and a parameter indicating which color component the compressed data included in the slice data corresponds to is multiplexed into the header area of the slice.

In addition, the present invention is directed to a recording medium on which is recorded a bit stream generated by compression-encoding an input image signal that includes a set of color components, in which the compressed data of the image signal of each color component is arranged in units of slices, and a parameter indicating which color component the compressed data included in the slice data corresponds to is multiplexed into the header area of the slice.

The effect of the invention

In accordance with the present invention, it becomes possible to easily perform random-access playback and editing processing in units of pictures using the AUD. The data for one picture can be included in one access unit even when encoding processing is applied to the three color components using the 4:0:0 format. In addition, it becomes possible to set the same time information and use a single encoding mode among the respective color components.

Brief description of drawings

Fig. 1 is a diagram of the parts relating to the present invention, selected from the syntax of the encoded bit stream generated by the image encoder in accordance with the present invention.

Fig. 2 is a diagram intended to explain the definition of the parameter colour_id as another way of ensuring compatibility with the existing standard.

Fig. 3 is an explanatory diagram in which the data of all the color components constituting one picture are included in one access unit (AU) between one AUD and the next AUD.

Fig. 4 is an explanatory diagram in which the data of the color components are separated for each color component using a delimiter and combined together into a single access unit.

Fig. 5 is an explanatory diagram in which the 4:0:0 and 4:4:4 encoding modes are switched in an arbitrary unit.

Fig. 6 is a diagram illustrating the common encoding processing in accordance with the seventh embodiment of the present invention.

Fig. 7 is a diagram illustrating the independent encoding processing in accordance with the seventh embodiment of the present invention.

Fig. 8 is a diagram representing the reference relationship among pictures in motion prediction in the time direction in the image encoder and decoder in accordance with the seventh embodiment of the present invention.

Fig. 9 is a diagram representing an example of the structure of the bit stream generated by the encoder and input to and decoded by the decoder in accordance with the seventh embodiment of the present invention.

Fig. 10 is a diagram representing the structure of the slice data in the bit stream in the cases of common encoding processing and independent encoding processing in accordance with the seventh embodiment of the present invention.

Fig. 11 is a block diagram representing a schematic structure of an encoder in accordance with the seventh embodiment of the present invention.

Fig. 12 is a diagram illustrating the bit stream 106 multiplexed and output by the multiplexing unit 105 shown in Fig. 11.

Fig. 13 is a block diagram representing the internal structure of the first picture encoding unit 102 shown in Fig. 11.

Fig. 14 is a block diagram representing the internal structure of the second picture encoding unit 104 shown in Fig. 11.

Fig. 15 is a block diagram representing a schematic structure of a decoder in accordance with the seventh embodiment of the present invention.

Fig. 16 is a block diagram representing the internal structure of the first picture decoding unit 302 shown in Fig. 15.

Fig. 17 is a block diagram representing the internal structure of the second picture decoding unit 304 shown in Fig. 15.

Fig. 18 is a block diagram representing a modification of the encoder shown in Fig. 11.

Fig. 19 is a block diagram representing another modification of the encoder shown in Fig. 11.

Fig. 20 is a block diagram representing a decoder corresponding to the encoder shown in Fig. 18.

Fig. 21 is a block diagram representing a decoder corresponding to the encoder shown in Fig. 19.

Fig. 22 is a diagram representing the structure of the encoded data of the macroblock header information included in a conventional YUV 4:2:0 bit stream.

Fig. 23 is a diagram representing the internal structure of the prediction unit 311 of the first picture decoding unit 302, which ensures compatibility with the conventional YUV 4:2:0 bit stream.

Fig. 24 is a diagram representing another example of the structure of the bit stream.

Fig. 25 is a diagram representing another example of the structure of the bit stream.

Detailed description of the preferred embodiments

First embodiment

Fig. 1 shows the parts relating to the present invention, selected from the syntax of the encoded bit stream generated by the image encoder in accordance with the present invention. In Fig. 1, part (a) shows the syntax of the NAL (network abstraction layer) unit header information, part (b) shows the syntax of the SPS (sequence parameter set), part (c) shows the syntax of the PPS (picture parameter set), and part (d) shows the syntax of the slice header. The parts other than the shaded parts represent syntax defined by the existing AVC standard. The shaded parts represent syntax that is defined by the existing AVC standard but to which new functions are added in accordance with the present invention, or syntax that is not defined in the existing AVC standard and is newly added in accordance with the present invention.

The parameters defined in AVC are briefly described below. In the part shown at (a) in Fig. 1, nal_ref_idc of the NAL unit is a parameter that indicates whether the data of the NAL unit is image data used for prediction reference. In addition, nal_unit_type is a parameter that indicates whether the data of the NAL unit is slice data, an SPS, a PPS, or an access unit delimiter (AUD).
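
These two fields live in the single NAL unit header byte, which can be decoded as follows (bit layout per the AVC standard: one forbidden_zero_bit, two nal_ref_idc bits, five nal_unit_type bits):

```python
def parse_nal_header(first_byte: int):
    """Decode the one-byte AVC NAL unit header.

    Bit 7 is forbidden_zero_bit, bits 6-5 are nal_ref_idc,
    bits 4-0 are nal_unit_type.
    """
    forbidden_zero_bit = (first_byte >> 7) & 0x1
    nal_ref_idc = (first_byte >> 5) & 0x3
    nal_unit_type = first_byte & 0x1F
    return forbidden_zero_bit, nal_ref_idc, nal_unit_type
```

For example, the byte 0x65 decodes to nal_ref_idc = 3 and nal_unit_type = 5 (an IDR slice), and 0x09 decodes to nal_unit_type = 9 (an AUD).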

In the part shown at (b) in Fig. 1, profile_idc in the SPS indicates the profile of the encoded sequence. The baseline, main, high, high 444, etc. profiles are defined in AVC. seq_parameter_set_id indicates the ID of the SPS; multiple SPSs are defined within one encoded sequence and are managed by their IDs. In addition, chroma_format_idc is used only in the case of the high 444 profile and is a parameter indicating which of the formats 4:0:0, 4:2:0, 4:2:2 and 4:4:4 the encoded sequence uses.

In the part shown at (c) in Fig. 1, pic_parameter_set_id in the PPS indicates the ID of the PPS; multiple PPSs are defined within one encoded sequence and are managed by their IDs. seq_parameter_set_id in the PPS is a parameter indicating which SPS this PPS belongs to.

In the part shown at (d) in Fig. 1, first_mb_in_slice in the slice header is a parameter indicating the position on the screen of the leading macroblock of the slice data. In addition, slice_type is a parameter indicating which of intra-frame coding, predictive coding and bi-predictive coding is used for the slice data. In addition, pic_parameter_set_id is a parameter indicating which PPS the slice data belongs to.

Next, the operation is explained. When encoding processing is applied to the image signal of the three color components independently for each color component using the 4:0:0 format, information indicating the independent encoding of the three color components using the 4:0:0 format is newly defined in profile_idc, which is one of the parameters included in the SPS shown in part (b) of Fig. 1. A parameter colour_id is newly provided in the slice header, shown in part (d) of Fig. 1, to indicate which of the three color components the encoded data included in the slice data belongs to.

When encoding processing is performed in the existing 4:0:0 (monochrome), 4:2:0, 4:2:2 or 4:4:4 format, the parameter colour_id shown in part (d) of Fig. 1 is not used. The parameter colour_id is used only in the mode of independent encoding of the data of the three color components using the 4:0:0 format, which is newly defined in accordance with the present invention; in this way it is possible to avoid affecting the existing standard.

In the mode of independent encoding of the data of the three color components using the 4:0:0 format, newly defined in accordance with the present invention, the parameter colour_id is used, as shown in Fig. 3, to include the data of the three color components in a single access unit (AU) and to place the data of all the color components constituting one picture between one AUD and the next AUD.

As another way of ensuring compatibility with the existing standard, the parameter colour_id can be defined as shown in Fig. 2. When colour_id is defined in this way, colour_id = 0 indicates slice data encoded in the format in which the data of the three color components are included in one macroblock, as in the existing standard. Other values identify slice data encoded by the independent encoding of the data of the three color components using the 4:0:0 format, as described in this first embodiment.
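
A minimal sketch of this interpretation of colour_id (the non-zero value assignment 1..3 for C0..C2 is an illustrative assumption, not the normative Fig. 2 table):

```python
def classify_slice(colour_id: int) -> str:
    """Interpret colour_id under a Fig. 2 style definition.

    Assumed mapping for illustration: 0 = combined coding (three
    components in one macroblock, as in the existing standard);
    1..3 = independently coded slice of one component in 4:0:0 format.
    """
    if colour_id == 0:
        return "combined"
    if colour_id in (1, 2, 3):
        return f"independent C{colour_id - 1}"
    raise ValueError("reserved colour_id value")
```

A decoder can thus route each slice to the combined or the independent decoding path from the header alone.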

This allows a bit stream covering both the existing system and the system described in the first embodiment, which is useful for maintaining compatibility with the existing system. When the number of slices increases and the overhead associated with the code amount of the parameter colour_id affects the coding efficiency, the code amount of the parameter colour_id can be reduced by performing appropriate variable-length coding based on a judgment as to which of the existing system and the system described in the first embodiment is more likely to be selected.

Thus, in an image coding system for applying compression processing to input image signals consisting of a set of color components, the encoded data obtained by independently applying encoding processing to the input image signal of each color component, and a parameter indicating from which color component the encoded data was obtained, are multiplexed into the bit stream. This enables easy random-access playback and editing processing in units of pictures, using the AUD.

In an image decoding system that receives an input bit stream in which an image signal consisting of a set of color components has been compressed and performs decoding processing, it becomes possible to easily perform the decoding processing for the encoded data of the respective color components using the parameter indicating from which color component the encoded data was obtained.

Because the data of the three color components is included in one access unit, the data of these three color components is simultaneously coded as an IDR (instantaneous decoding refresh) picture.

The IDR picture is defined in AVC. Normal decoding processing can be started instantly from an IDR picture; IDR pictures are provided on the assumption that they are used as the starting point of random-access playback.

When only one of the three color components is to be selected, this can be easily implemented by extracting only the slice data whose colour_id has the specific value.
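
For example, the extraction could be sketched like this (slices are modelled here as (colour_id, payload) pairs, a simplification of the real slice syntax, for illustration only):

```python
def select_component(slices, target_colour_id):
    """Keep only the slice payloads whose colour_id matches the target.

    Slices are modelled as (colour_id, payload) pairs -- an
    illustrative simplification of the real slice syntax.
    """
    return [payload for cid, payload in slices if cid == target_colour_id]
```

All other slices can be skipped without parsing their payloads, which is the point of carrying colour_id in the slice header.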

In Fig. 1 the parameter colour_id is provided at the top of the slice header. However, it is not always necessary to place colour_id at the top of the slice header; the same effect can be obtained as long as the parameter colour_id is included in the slice header.

Second embodiment

As in the first embodiment, the coded data of the three color components is included in one access unit. However, while the data (R, B, G) of the respective color components are arranged in order in accordance with the first embodiment, as shown in Fig. 3, it is also possible to arrange the data of the same color component R, B or G together, respectively, as shown in Fig. 4. In addition, it is also possible to easily select only the data of a specified color component by inserting a "delimiter", which is not defined in the existing AVC standard.

Therefore, for example, it becomes possible to easily assign different processors to the respective color components to perform parallel processing. The "delimiter" described in the present invention can be implemented without affecting the existing standard by extending the SEI (supplemental enhancement information) message in AVC. Needless to say, the same effects can be obtained when the "delimiter" is defined by other methods.

Third embodiment

The same effects as in the first embodiment can also be obtained by inserting a parameter indicating the color component in the NAL unit part, instead of colour_id in the slice header. In AVC, since the slice header and the slice data following the slice header are defined as the payload of the NAL unit, the parameter nal_unit_type of the NAL unit is extended so that this parameter indicates which color component of the video is included in the payload of the NAL unit. In addition, by including the data of the three color components in a single access unit (AU), all the data constituting one picture is placed between one AUD and the next AUD.

Consequently, as in the first embodiment, it becomes possible to easily perform random-access playback and editing processing in units of pictures. In addition, when only one of the three color components is to be selected, that component can be separated using only the data of the NAL unit header, without parsing the slice header.

Fourth embodiment

In the first to third embodiments, a restriction is introduced such that the same value is always set for first_mb_in_slice in the slice headers of the encoded data of the three color components. The parameter first_mb_in_slice indicates the position on the screen of the first data of the slice data.

In a coding system corresponding to conventional AVC, since an arbitrary slice structure can be used, different slice patterns could be applied among the respective color components. However, by establishing this restriction, it becomes possible to decode and display a part of the picture with the correct color, by collecting the data of the three slices that have the same value of first_mb_in_slice.

As a result, when only a certain part of the screen, for example only the center, is to be displayed, it becomes possible to perform decoding and display processing using only the slice data of that part of the screen, instead of the entire screen. When the restriction is not provided, the value of first_mb_in_slice differs among the respective color components, so it is impossible to combine the three color components into a correctly decoded image until the entire screen has been decoded using the slice data of the entire screen. In addition, when parallel processing is performed using separate processors for the data of the respective color components, the corresponding slice data elements start from the same position, so that parallelism becomes easier to manage.
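
A sketch of this grouping step (slices are modelled as (first_mb_in_slice, colour_id, payload) tuples, an illustrative simplification of the real slice syntax):

```python
from collections import defaultdict

def group_by_first_mb(slices):
    """Collect the colour-component slices sharing one first_mb_in_slice.

    Returns only the positions for which all three components are
    present, since only those regions can be reconstructed with
    correct colour.
    """
    groups = defaultdict(dict)
    for first_mb, colour_id, payload in slices:
        groups[first_mb][colour_id] = payload
    return {mb: comps for mb, comps in groups.items() if len(comps) == 3}
```

With the fourth-embodiment restriction in force, every position in the picture yields a complete group; without it, a region may stay incomplete until the whole screen is decoded.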

Fifth embodiment

An additional restriction is provided so that the same value is always set for the parameter slice_type in the slice headers of the respective color components, in addition to the restriction in accordance with the fourth embodiment. The parameter slice_type indicates, for example, which of intra-frame coding, predictive coding and bi-predictive coding is used for the slice data following the slice header. If the slice data is intra-frame coded, no inter-frame prediction processing is used, so decoding and display processing can be performed immediately.

Thus, for the slice data at the same position on the screen, the coding type is common to all the color components and the same encoding processing is performed. This allows the decoder to perform decoding and display at high speed during random-access playback, by subjecting only the intra-frame coded slices to decoding processing.

Sixth embodiment

By applying the structures described in the first to fifth embodiments, it becomes possible to switch, in an arbitrary unit, between the mode of independent encoding of the data of the three color components using the newly defined 4:0:0 format and the encoding mode using the 4:4:4 format.

For example, as shown in Fig. 5, the newly defined 4:0:0 format is set for seq_parameter_set_id = 1 in the SPS, and the 4:4:4 format is set for seq_parameter_set_id = 2. By defining PPSs that refer to the corresponding seq_parameter_set_id values, with different pic_parameter_set_id set for them, it becomes possible to switch between the 4:0:0 and 4:4:4 formats in units of pictures.
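
The parameter-set indirection can be sketched as follows. The table contents and the PPS IDs 10 and 20 are illustrative assumptions, chosen only to mirror the Fig. 5 example of seq_parameter_set_id 1 and 2.

```python
# Illustrative parameter-set tables (contents and PPS IDs assumed):
sps_table = {
    1: {"format": "4:0:0 (independent)"},
    2: {"format": "4:4:4"},
}
pps_table = {
    10: {"seq_parameter_set_id": 1},
    20: {"seq_parameter_set_id": 2},
}

def format_for_slice(pic_parameter_set_id: int) -> str:
    """Resolve the coding format for a slice: the slice header names a
    PPS, the PPS names an SPS, and the SPS fixes the format."""
    sps_id = pps_table[pic_parameter_set_id]["seq_parameter_set_id"]
    return sps_table[sps_id]["format"]
```

Switching the format for a picture then amounts to having its slices reference a different pic_parameter_set_id, with no change to the parameter sets themselves.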

Therefore, it becomes possible to select, for the encoding processing, the format giving the higher coding efficiency, or to choose the format that is more convenient depending on the application.

In the above description, switching is illustrated in units of pictures. However, in conformity with the AVC standard, it is also possible to switch the formats in units of slices by the same processing.

The present invention has been illustrated using AVC, which is an international standard coding system for moving images. However, it goes without saying that the same effects can be obtained using other coding systems.

Seventh embodiment

In the seventh embodiment of the present invention, the structure and operation of devices that perform encoding and decoding while switching, in units of one frame (or one field), between encoding the signals of the three color components under a common macroblock header and encoding the signals of the three color components under individual macroblock headers, are illustrated on the basis of specific drawings. In the following explanation, unless specified otherwise, the term "one picture" means a data unit corresponding to one frame or one field.

It is assumed that the macroblock header in accordance with the seventh embodiment comprises: coding and prediction mode information such as the macroblock type, the sub-macroblock type and the prediction mode; motion prediction information such as the reference image identification number and the motion vector; and macroblock overhead information other than the transform coefficient data, such as the quantization parameter for the transform coefficients, the flag indicating the transform block size, and the flag indicating the presence/absence of effective transform coefficients in units of 8x8 blocks.

In the following explanation, the processing of encoding the signals of the three color components of one frame with a common macroblock header is called "common encoding processing", and the processing of encoding the signals of the three color components of one frame with separate macroblock headers is called "independent encoding processing". Similarly, the processing of decoding the image data of a frame from a bit stream in which the signals of the three color components of one frame are encoded under a common macroblock header is called "common decoding processing", and the processing of decoding the image data of a frame from a bit stream in which the signals of the three color components of one frame are encoded using separate macroblock headers is called "independent decoding processing".

In the common encoding processing in accordance with the seventh embodiment, as shown in Fig. 6, the input video signal for one frame is divided into macroblocks that group together the three color components, consisting of the C0 component, the C1 component and the C2 component. On the other hand, in the independent encoding processing, as shown in Fig. 7, the input video signal for one frame is separated into the three color components, the C0, C1 and C2 components, and each color component is divided into macroblocks consisting of samples of that single color component; the resulting macroblocks are subjected to independent encoding processing for the corresponding C0, C1 and C2 components.

The macroblocks subjected to common encoding processing include samples of the three color components C0, C1 and C2, whereas the macroblocks subjected to independent encoding processing include samples of only one of the C0, C1 and C2 components.

Fig. 8 shows the reference relationship for motion prediction in the time direction among pictures in the image encoder and decoder in accordance with the seventh embodiment. In this example, the data unit indicated by a thick vertical line is a picture, and the relationship between the picture and the access unit is indicated by the surrounding dashed lines. In the case of common encoding and decoding processing, one picture is data representing the video signal for one frame in which the three color components are mixed. In the case of independent encoding and decoding processing, one picture is the video signal for one frame of any one color component.

The access unit is the minimum data unit by which a timestamp for synchronization with audio/sound information or the like can be attached to the video signal. In the case of common encoding and decoding processing, the data of one picture is included in one access unit.

On the other hand, in the case of independent encoding and decoding processing, three pictures are included in one access unit. This is because, in the case of independent encoding and decoding processing, a playback video signal for one frame is not obtained until the pictures at the same display time have been obtained for all three color components. The numbers above the respective pictures indicate the encoding/decoding processing order of the pictures in the time direction (frame_num in AVC).

In Fig. 8 the arrows between pictures show the reference direction in motion prediction. In the case of independent encoding and decoding processing, motion prediction reference among the pictures included in the same access unit, and motion prediction reference among different color components, are not used. The pictures of the respective color components C0, C1 and C2 are encoded and decoded with prediction that makes motion reference only to signals of the same color component.

With this structure, in the case of the independent encoding and decoding processing in accordance with the seventh embodiment, the encoding and decoding of the respective color components can be performed without depending at all on the encoding and decoding processing of the other color components. Thus, parallel processing is easy to perform.

In AVC, an IDR (Instantaneous Decoder Refresh) picture is defined, which performs intra coding by itself and resets the contents of the reference picture memory used for motion-compensated prediction. Since an IDR picture can be decoded without referring to any other picture, it is used as a random access point.

In the case of the common encoding processing, one access unit is one picture. However, in the case of the independent encoding processing, one access unit consists of plural pictures. Thus, when a picture of a certain color component is an IDR picture, the remaining color component pictures are also assumed to be IDR pictures, and an IDR access unit is defined in order to secure the random access function.

In the following explanation, the identification information indicating whether encoding is performed by the common encoding processing or by the independent encoding processing (information equivalent to identification of a prediction mode common to the pictures, or to the flag for identifying a common macroblock header) is referred to as a common encoding/independent encoding identification signal.

Fig. 9 shows the structure of a bit stream generated by the encoder according to the seventh embodiment and subjected to input and decoding processing by the decoder according to the seventh embodiment. The figure shows the bit stream structure from the sequence level to the frame level. First, the common encoding/independent encoding identification signal is multiplexed with an upper header at the sequence level (in the case of AVC, the SPS (sequence parameter set) or the like).

The respective frames are encoded in units of access units. AUD denotes the Access Unit Delimiter NAL unit, which is a unique NAL unit for identifying a break between access units in AVC. When the common encoding/independent encoding identification signal indicates "picture encoding by the common encoding processing", coded data of one picture is included in one access unit.

It is assumed that the picture in that case is, as described above, data representing a video signal of one frame in which the three color components are mixed. In this case, the coded data of the i-th access unit is formed as a set of slice data Slice(i, j), where "j" is an index of slice data within one picture.

On the other hand, when the common encoding/independent encoding identification signal indicates "picture encoding by the independent encoding processing", one picture is a video signal of one frame of any one of the color components. In this case, the coded data of the p-th access unit is formed as a set of slice data Slice(p, q, r) of the q-th picture in the access unit, where "r" is an index of slice data within one picture. In the case of a video signal composed of three color components such as RGB, "q" takes one of the values 0, 1 and 2.
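
The two slice-numbering schemes can be sketched with a small hypothetical helper (the function name and string format are invented here for illustration; the indices follow the definitions above):

```python
def slice_label(access_unit, picture, slice_index, independent):
    """Form a symbolic label for coded slice data.

    Common coding:      Slice(i, j)    - j-th slice of the i-th access unit.
    Independent coding: Slice(p, q, r) - r-th slice of the q-th color-component
                        picture (q = 0, 1, 2 for e.g. RGB) in the p-th access unit.
    Hypothetical helper for illustration only.
    """
    if independent:
        return "Slice(%d,%d,%d)" % (access_unit, picture, slice_index)
    # In common coding there is no per-component picture index.
    return "Slice(%d,%d)" % (access_unit, slice_index)
```

For example, `slice_label(0, 2, 1, independent=True)` names the second slice of the C2 picture in the first access unit.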

In a case where, for example, additional information such as alpha-blend transparency information is encoded and decoded in the same access unit in addition to the video signal composed of the three primary colors, or in a case where a video signal composed of four or more color components (for example, YMCK used in color printing) is encoded and decoded, "q" may take a value of 3 or more.

When the independent encoding processing is selected, the encoder and the decoder according to the seventh embodiment encode the respective color components constituting the video signal completely independently from each other. Thus, it becomes possible to freely change the number of color components without, in principle, changing the encoding and decoding processing. This produces an effect whereby, even if the signal format for representing a color video signal is changed in the future, it becomes possible to cope with such a change through the independent encoding processing according to the seventh embodiment.

To realize such a structure, in the seventh embodiment, the common encoding/independent encoding identification signal is represented in the form of "the number of pictures included in one access unit and encoded independently without mutual motion-prediction reference".

The common encoding/independent encoding identification signal is hereinafter referred to as num_pictures_in_au. In other words, num_pictures_in_au = 1 indicates the "common encoding processing" and num_pictures_in_au = 3 indicates the "independent encoding processing" in accordance with the seventh embodiment. When there are four or more color components, num_pictures_in_au only has to be set to a value larger than 3.

With this signal, if the decoder decodes and refers to num_pictures_in_au, the decoder can not only distinguish coded data obtained by the common encoding processing from coded data obtained by the independent encoding processing, but can also determine how many single-color-component pictures are present in one access unit. Thus, it becomes possible to treat the common encoding processing and the independent encoding processing seamlessly in a bit stream while coping with future extension of color video signal representations.
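
A decoder-side interpretation of this parameter can be sketched as follows (a minimal illustration, assuming the conventions defined above; the function name and error handling are invented here):

```python
def decode_mode(num_pictures_in_au):
    """Interpret num_pictures_in_au as defined in this embodiment.

    1  -> common encoding processing: one mixed-component picture per
          access unit.
    3+ -> independent encoding processing: the value itself tells the
          decoder how many single-color-component pictures one access
          unit contains (3 for RGB, more for e.g. YMCK).
    Sketch only; real syntax parsing is not shown.
    """
    if num_pictures_in_au == 1:
        return ("common", 1)
    if num_pictures_in_au >= 3:
        return ("independent", num_pictures_in_au)
    raise ValueError("unsupported num_pictures_in_au")
```

So a single multiplexed integer both selects the decoding procedure and sizes the access unit, which is what makes later extension to more components possible without a new signal.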

Fig. 10 shows the bit stream structure of slice data in the cases of the common encoding processing and the independent encoding processing. In a bit stream encoded by the independent encoding processing, in order to attain effects described later, a color component identification flag (color_channel_idc) is set in a header area at the top of the slice data received by the decoder, so that it becomes possible to identify to which color component picture in the access unit the slice data belongs.

color_channel_idc groups slices having the same value of color_channel_idc. In other words, among slices having different values of color_channel_idc, no encoding/decoding dependency whatsoever (for example, motion-prediction reference to a reference image, or context modeling and occurrence-probability learning in CABAC (Context-Adaptive Binary Arithmetic Coding), etc.) is allowed. color_channel_idc is the same as color_id according to the first embodiment shown in part (d) of Fig. 1, and carries information of the same semantics.
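
The independence guarantee means a decoder may sort incoming slices into per-component groups before (or while) decoding them. A minimal sketch, assuming parsed slice headers are available as `(color_channel_idc, payload)` pairs (a hypothetical in-memory representation, not the real bit stream syntax):

```python
def split_by_color_channel(slices):
    """Group received slice data by the color_channel_idc value from each
    slice header. Because no coding dependency (motion reference, CABAC
    context state, ...) crosses different color_channel_idc values, each
    group can be handed to an independent decoding process.
    """
    groups = {}
    for idc, payload in slices:
        groups.setdefault(idc, []).append(payload)
    return groups

# Slices of C0 and C1 arriving interleaved are separated cleanly.
print(split_by_color_channel([(0, "s0a"), (1, "s1a"), (0, "s0b")]))
# {0: ['s0a', 's0b'], 1: ['s1a']}
```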

With such a rule, independence of the respective pictures within an access unit is secured in the case of the independent encoding processing. For all the color component pictures in the same access unit, an identical value of frame_num (the encoding/decoding processing order of the picture to which the slice belongs) is set and multiplexed with the respective slice headers.

Fig. 11 shows a schematic structure of the encoder according to the seventh embodiment. In the figure, the common encoding processing is executed in a first picture encoding unit 102, and the independent encoding processing is executed in second picture encoding units 104 (prepared for the three color components). The video signal 1 is supplied either to the first picture encoding unit 102 or to a color component separating unit 103 and one of the second picture encoding units 104 for each color component by a switch (SW) 100. The switch 100 is driven by a common encoding/independent encoding identification signal 101 and passes the input video signal 1 along the designated path.

A case is described below in which the common encoding/independent encoding identification signal (num_pictures_in_au) 101 is a signal multiplexed with the sequence parameter set when the input video signal is a 4:4:4 format signal, and is used for selecting between the common encoding processing and the independent encoding processing in units of sequences.

When the common encoding processing is used, the common decoding processing has to be executed on the decoder side. When the independent encoding processing is used, the independent decoding processing has to be executed on the decoder side. Thus, it is necessary to multiplex the common encoding/independent encoding identification signal 101 with the bit stream as information designating the processing. Therefore, the common encoding/independent encoding identification signal 101 is input to a multiplexing unit 105. The multiplexing unit of the common encoding/independent encoding identification signal 101 may be any unit at a level higher than the picture level, such as a GOP (Group Of Pictures) unit composed of several picture groups in the sequence.

In order to execute the common encoding processing, the first picture encoding unit 102 divides the input video signal 1 into macroblocks each formed by grouping samples of the three color components, as shown in Fig. 6, and advances the encoding processing in that unit. The encoding processing in the first picture encoding unit 102 will be described later.

When the independent encoding processing is selected, the input video signal 1 is separated into data of one frame each for C0, C1 and C2 in the color component separating unit 103 and supplied to the corresponding second picture encoding units 104, respectively. The second picture encoding units 104 divide the signal of one frame, separated for each color component, into macroblocks of the format shown in Fig. 7 and advance the encoding processing in that unit. The encoding processing in the second picture encoding units will be described later.

A video signal of one picture composed of the three color components is input to the first picture encoding unit 102, and the coded data is output as a bit stream 133. A video signal of one picture composed of a single color component is input to each of the second picture encoding units 104, and the coded data is output as bit streams 233a to 233c.

These bit streams are multiplexed into the format of a bit stream 106 in the multiplexing unit 105 on the basis of the state of the common encoding/independent encoding identification signal 101, and output. In other words, the multiplexing unit 105 multiplexes, with the bit stream, coded data obtained by independently encoding the input image signal for each of the color components and a parameter indicating to which color component the coded data corresponds.

In multiplexing of the bit stream 106, in the case where the independent encoding processing is performed, it becomes possible to interleave the multiplexing and transmission order of slice data in the bit stream among the pictures (the respective color components) in the access unit.

Fig. 12 shows a case (a) in which slice interleaving within an access unit is impossible and a case (b) in which slice interleaving is possible. In case (a), where slice interleaving is impossible, picture data of the component C1 cannot be multiplexed with the bit stream until encoding of the component C0 is completed, and picture data of the component C2 cannot be multiplexed with the bit stream until encoding of the components C0 and C1 is completed. However, in case (b), where slice interleaving is possible, the component C1 can be multiplexed immediately once one slice of the component C0 is multiplexed with the bit stream, and the component C2 can be multiplexed immediately once one slice each of the components C0 and C1 is multiplexed with the bit stream.
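
The difference between the two transmission orders can be sketched as follows (an illustrative toy, not the actual multiplexer; the slice labels are invented, and equal slice counts per component are assumed for simplicity):

```python
def mux_order(slices_per_component, interleave):
    """Return the transmission order of slices of components C0..C2.

    Without interleaving (case (a)), all C0 slices are sent before any
    C1 slice, and all C1 slices before any C2 slice. With interleaving
    (case (b)), one slice of each component can be sent as soon as it
    has been coded, round-robin across components.
    """
    n = slices_per_component
    if not interleave:
        # Component-by-component: C0 fully, then C1, then C2.
        return ["C%d_s%d" % (c, s) for c in range(3) for s in range(n)]
    # Slice-by-slice round robin across the three components.
    return ["C%d_s%d" % (c, s) for s in range(n) for c in range(3)]

print(mux_order(2, interleave=True))
# ['C0_s0', 'C1_s0', 'C2_s0', 'C0_s1', 'C1_s1', 'C2_s1']
```

In the interleaved order the first C1 data leaves the encoder after a single C0 slice, rather than after the whole C0 picture, which is the source of the buffering and delay reduction discussed below.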

In this case, on the decoder side, it is necessary to determine to which color component picture in the access unit the received slice data belongs. Therefore, the color component identification flag multiplexed with the header area at the top of the slice data, shown in Fig. 10, is used. The concept of slice interleaving described here and shown in Fig. 12 is equivalent to the concept disclosed with reference to Fig. 3.

With such a structure, as in the encoder of Fig. 11, when the encoder encodes pictures of the three color components by parallel processing using three sets of the second picture encoding units 104 independently from each other, each unit can send out coded data as soon as slice data of its own picture is ready, without waiting for completion of coded data of the other color component pictures.

In AVC, it is possible to divide one picture into a plurality of slice data units and encode each of them. Thus, it is possible to flexibly change the slice data length and the number of macroblocks included in a slice in accordance with the encoding conditions.

Between slices adjacent to each other in the image space, since independence of the decoding processing of the slices is secured, neighboring contexts such as intra prediction and arithmetic coding cannot be used across the slice boundary. Thus, the larger the slice data length, the higher the coding efficiency.

On the other hand, when an error is mixed into the bit stream during transmission or recording, recovery from the error is earlier the smaller the slice data length, and thus quality degradation is easier to suppress. If the length and structure of slices, the order of color components and the like are fixed without multiplexing the color component identification flag, the conditions for generating a bit stream are fixed in the encoder, and it becomes impossible to flexibly cope with the various conditions required of encoding.

If the bit stream can be arranged as shown in Fig. 12, it becomes possible in the encoder to reduce the size of the transmission buffer required for transmission, that is, to reduce the processing delay on the encoder side.

The state of reduction of the processing delay is shown in Fig. 11. If multiplexing of slice data across pictures is not permitted until encoding of the picture of a certain color component is completed, the encoder has to buffer the coded data of the other pictures. This means that a delay occurs at the picture level.

On the other hand, as shown in the lower section of the figure, if interleaving is possible at the slice level, the picture encoding unit of a certain color component can output coded data to the multiplexing unit in units of slice data, and the delay can be suppressed.

Within one color component picture, the slice data included in the picture may be transmitted in the raster scan order of macroblocks, or may be arranged so that interleaved transmission is possible even within one picture.

The operations of the first picture encoding unit 102 and the second picture encoding units 104 are explained in detail below.

Outline of the operation of the first picture encoding unit 102

The internal structure of the first picture encoding unit 102 is shown in Fig. 13. In the figure, the input video signal 1 is input in the 4:4:4 format, in units of macroblocks each being a group of the three color components of the format of Fig. 6.

First, a prediction unit 110 selects a reference image from the motion-compensated prediction reference image data stored in a memory 111 and performs motion-compensated prediction processing in units of the macroblock. The memory 111 can store plural pieces of reference image data, each composed of the three color components, over plural time instants. The prediction unit 110 selects, in units of the macroblock, an optimal reference image for motion prediction from among these reference image data, and performs motion prediction.

As the arrangement of the reference image data in the memory 111, the reference image data may be stored separately for each color component in plane-sequential form, or samples of the respective color components may be stored in dot-sequential form. Seven block sizes are prepared for performing motion-compensated prediction. First, any one of the sizes 16×16, 16×8, 8×16 and 8×8 can be selected in macroblock units. Further, when 8×8 is selected, any one of the sizes 8×8, 8×4, 4×8 and 4×4 can be selected for each 8×8 block.
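
Why seven and not eight sizes can be made explicit with a short enumeration (a sketch for illustration; the function name is invented, the sizes are those listed above):

```python
def mc_partitions():
    """Enumerate the seven motion-compensation block sizes of this scheme:
    16x16, 16x8, 8x16 and 8x8 at the macroblock level, plus 8x8, 8x4, 4x8
    and 4x4 inside each 8x8 block. 8x8 appears at both levels, so the set
    of distinct sizes has seven members.
    """
    macroblock_level = [(16, 16), (16, 8), (8, 16), (8, 8)]
    sub_8x8_level = [(8, 8), (8, 4), (4, 8), (4, 4)]
    return sorted(set(macroblock_level + sub_8x8_level))

print(len(mc_partitions()))  # 7
```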

The prediction unit 110 executes, for each macroblock, motion-compensated prediction processing for all or a part of the block sizes 16×16, 16×8, 8×16 and 8×8 and the sub-block sizes 8×8, 8×4, 4×8 and 4×4, for motion vectors within a predetermined search range, and for one or more usable reference images. The prediction unit 110 obtains a prediction difference signal 114 for each block serving as a motion-compensated prediction unit, by means of the motion vectors and reference image identification information 112 used for the prediction and a subtracter 113.

The prediction efficiency of the prediction difference signal 114 is evaluated in an encoding mode judging unit 115. The encoding mode judging unit 115 outputs, from among the prediction processing executed in the prediction unit 110, a macroblock type/sub-macroblock type 116 and motion vector/reference image identification information 112 with which optimal prediction efficiency is obtained for the macroblock to be predicted.

All pieces of macroblock header information, such as the macroblock types, sub-macroblock types, reference image indexes and motion vectors, are determined as header information common to the three color components, used for encoding, and multiplexed with the bit stream.

In evaluating the optimality of prediction efficiency, in order to control the amount of arithmetic operations, the amount of prediction error for only a predetermined color component (for example, the G component of RGB or the Y component of YUV) may be evaluated. Alternatively, although the amount of arithmetic operations increases, the amounts of prediction error for all the color components may be comprehensively evaluated in order to obtain optimal prediction performance. In the final selection of the macroblock type/sub-macroblock type 116, a weight coefficient 118 for each type, determined by the judgment of an encoding control unit 117, may be taken into account.

Similarly, the prediction unit 110 also executes intra prediction. When intra prediction is executed, intra prediction mode information is output in the signal 112. In the following explanation, when intra prediction and motion-compensated prediction are not specifically distinguished, the output signal 112 — intra prediction mode information, motion vector information, and reference image identification number — is collectively referred to as prediction overhead information. Concerning intra prediction as well, the amount of prediction error for only a predetermined color component may be evaluated, or the amounts of prediction error for all the color components may be comprehensively evaluated. Finally, the prediction unit 110 selects intra prediction or inter prediction as the macroblock type by evaluating the macroblock type according to prediction efficiency or coding efficiency in the encoding mode judging unit 115.

The prediction unit 110 outputs the selected macroblock type/sub-macroblock type 116 and the prediction difference signal 114, obtained by intra prediction or motion-compensated prediction based on the prediction overhead information 112, to a transform unit 119. The transform unit 119 transforms the input prediction difference signal 114 and outputs it to a quantization unit 120 as a transform coefficient. In this case, the block size serving as a unit for the transform may be selected from 4×4 and 8×8. When the transform block size is selectable, the block size selected at the time of encoding is reflected in a transform block size designation flag 134, and this flag is multiplexed with the bit stream.

The quantization unit 120 quantizes the input transform coefficient on the basis of a quantization parameter 121 determined by the encoding control unit 117, and outputs the result to a variable-length encoding unit 123 as a quantized transform coefficient 122. The quantized transform coefficient 122 includes information for the three color components and is entropy-coded by means such as Huffman coding or arithmetic coding in the variable-length encoding unit 123.

The quantized transform coefficient 122 is restored to a local decoding prediction difference signal 126 through an inverse quantization unit 124 and an inverse transform unit 125. This restored signal is added, by an adder 128, to a predicted image 127 generated on the basis of the selected macroblock type/sub-macroblock type 116 and the prediction overhead information 112. As a result, a local decoded image 129 is generated. After being subjected to block distortion removal processing in a deblocking filter 130, the local decoded image 129 is stored in the memory 111 to be used in subsequent motion-compensated prediction processing.
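
The local decoding path of Fig. 13 can be sketched in a few lines. This is a deliberately simplified stand-in: a uniform scalar inverse quantizer and an identity in place of the real 4×4/8×8 inverse transform, with per-sample lists instead of the labeled signals — assumptions made purely for illustration:

```python
def local_decode(quantized_coeffs, predicted_block, qstep):
    """Sketch of the local decoding loop: the quantized transform
    coefficients pass through inverse quantization (and, in the real
    codec, an inverse transform) to restore a prediction difference
    signal, which the adder combines with the predicted image to give
    the local decoded image.
    """
    restored_diff = [c * qstep for c in quantized_coeffs]  # inverse quantization
    # Inverse transform omitted: an identity stands in for it here.
    return [p + d for p, d in zip(predicted_block, restored_diff)]

# Reconstruct a 4-sample block around a flat prediction of 100.
print(local_decode([1, 0, -1, 2], [100, 100, 100, 100], qstep=4))
# [104, 100, 96, 108]
```

The point of the sketch is that the encoder reconstructs exactly what the decoder will reconstruct, so the stored reference image in the memory matches the decoder side.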

A deblocking filter control flag 131, indicating whether the deblocking filter is applied to the macroblock, is also input to the variable-length encoding unit 123.

The quantized transform coefficient 122, the macroblock type/sub-macroblock type 116, the prediction overhead information 112 and the quantization parameter 121 input to the variable-length encoding unit 123 are arranged and shaped into a bit stream in accordance with a predetermined rule (syntax) and output to the transmission buffer as coded data 132 in NAL units, in units of slice data consisting of one macroblock or a group of plural macroblocks of the format shown in Fig. 6.

The transmission buffer 17 smoothes the bit stream in accordance with the band of the transmission line to which the encoder is connected and the reading speed of the recording medium, and outputs the bit stream as a video stream 133. The transmission buffer 17 also applies feedback to the encoding control unit 117 in accordance with the state of bit stream accumulation in the buffer, and controls the amount of codes generated in encoding of subsequent video frames.

The output of the first picture encoding unit 102 is a slice in units of the three components and is equivalent in code amount to units grouped into access units. Thus, the transmission buffer may be arranged in the multiplexing unit 105 as it is.

In the first picture encoding unit 102 according to the seventh embodiment, it is possible to determine, from the common encoding/independent encoding identification signal 101, that all slice data in the sequence are C0/C1/C2-mixed slices (i.e., slices in which pieces of information of the three color components are mixed). Thus, the color component identification flag is not multiplexed with the slice header.

Outline of the operation of the second picture encoding unit 104

The internal structure of the second picture encoding unit 104 is shown in Fig. 14. In the figure, it is assumed that an input video signal 1a is input in units of macroblocks consisting of samples of a single color component of the format shown in Fig. 7.

First, a prediction unit 210 selects a reference image from the motion-compensated prediction reference image data stored in a memory 211 and performs motion-compensated prediction processing in units of the macroblock. The memory 211 can store plural pieces of reference image data, each composed of a single color component, over plural time instants. The prediction unit 210 selects, in units of the macroblock, an optimal reference image for motion prediction from among these reference image data, and performs motion prediction.

The memory 211 may be shared with the memories 111 in units of groups of the three color components. Seven block sizes are prepared for performing motion-compensated prediction. First, any one of the sizes 16×16, 16×8, 8×16 and 8×8 can be selected in macroblock units. Further, when 8×8 is selected, any one of the sizes 8×8, 8×4, 4×8 and 4×4 can be selected for each 8×8 block.

The prediction unit 210 executes, for each macroblock, motion-compensated prediction processing for all or a part of the block sizes 16×16, 16×8, 8×16 and 8×8 and the sub-block sizes 8×8, 8×4, 4×8 and 4×4, for motion vectors within a predetermined search range, and for one or more usable reference images. The prediction unit 210 obtains a prediction difference signal 214 for each block serving as a motion-compensated prediction unit, by means of the motion vectors and a reference image index 212 used for the prediction and a subtracter 213.

The prediction efficiency of the prediction difference signal 214 is evaluated in an encoding mode judging unit 215. The encoding mode judging unit 215 outputs, from among the prediction processing executed in the prediction unit 210, a macroblock type/sub-macroblock type 216 and motion vector/reference image index 212 with which optimal prediction efficiency is obtained for the macroblock to be predicted. All pieces of macroblock header information, such as the macroblock types, sub-macroblock types, reference image indexes and motion vectors, are determined as header information with respect to the single color component of the input video signal 1a, used for encoding, and multiplexed with the bit stream.

In evaluating the optimality of prediction efficiency, only the amount of prediction error for the single color component subjected to the encoding processing is evaluated. In the final selection of the macroblock type/sub-macroblock type 216, a weight coefficient 218 for each type, determined by the judgment of an encoding control unit 217, may be taken into account.

Similarly, the prediction unit 210 also executes intra prediction; like the prediction unit 110, it is a block that performs both intra prediction and inter prediction. When intra prediction is executed, intra prediction mode information is output in the signal 212. In the following explanation, when intra prediction and motion-compensated prediction are not specifically distinguished, the signal 212 is referred to as prediction overhead information. Concerning intra prediction as well, only the amount of prediction error for the single color component subjected to the encoding processing is evaluated. Finally, the prediction unit 210 selects intra prediction or inter prediction as the macroblock type by evaluating the macroblock type according to prediction efficiency or coding efficiency in the encoding mode judging unit 215.

The prediction unit 210 outputs the selected macroblock type/sub-macroblock type 216 and the prediction difference signal 214, obtained from the prediction overhead information 212, to a transform unit 219. The transform unit 219 transforms the input prediction difference signal 214 of the single color component and outputs it to a quantization unit 220 as a transform coefficient. In this case, the block size serving as a unit for the transform may be selected from 4×4 and 8×8. When such a selection is possible, the block size selected at the time of encoding is reflected in a transform block size designation flag 234, and this flag is multiplexed with the bit stream.

The quantization unit 220 quantizes the input transform coefficient on the basis of a quantization parameter 221 determined by the encoding control unit 217, and outputs the result to a variable-length encoding unit 223 as a quantized transform coefficient 222. The quantized transform coefficient 222 includes information for the single color component and is entropy-coded by means such as Huffman coding or arithmetic coding in the variable-length encoding unit 223.

The quantized transform coefficient 222 is restored to a local decoding prediction difference signal 226 through an inverse quantization unit 224 and an inverse transform unit 225. This restored signal is added, by an adder 228, to a predicted image 227 generated on the basis of the selected macroblock type/sub-macroblock type 216 and the prediction overhead information 212. As a result, a local decoded image 229 is generated.

After being subjected to block distortion removal processing in a deblocking filter 230, the local decoded image 229 is stored in the memory 211 to be used in subsequent motion-compensated prediction processing. A deblocking filter control flag 231, indicating whether the deblocking filter is applied to the macroblock, is also input to the variable-length encoding unit 223.

The quantized transform coefficient 222, the macroblock type/sub-macroblock type 216, the prediction overhead information 212 and the quantization parameter 221 input to the variable-length encoding unit 223 are arranged and shaped into a bit stream in accordance with a predetermined rule (syntax) and output to a transmission buffer 232 as coded data in NAL units, in units of slice data consisting of one macroblock or a group of plural macroblocks of the format shown in Fig. 7.

The transmission buffer 232 smoothes the bit stream in accordance with the band of the transmission line to which the encoder is connected and the reading speed of the recording medium, and outputs the bit stream as a video stream 233. The transmission buffer 232 also applies feedback to the encoding control unit 217 in accordance with the state of bit stream accumulation in the transmission buffer 232, and controls the amount of codes generated in encoding of subsequent video frames.

The output of the second picture encoding unit 104 is a slice consisting only of data of a single color component. When it is desired to control the amount of codes in units grouped into access units, a common transmission buffer in units of multiplexed slices of all the color components may be provided in the multiplexing unit 105, so as to apply feedback to the encoding control units 217 of the respective color components on the basis of the occupancy of that buffer.

In this case, the encoding control may be performed using only the amount of information generated for all the color components, or may also be performed taking into account the state of the transmission buffer 232 of each color component. When the encoding control is performed using only the amount of information generated for all the color components, it is also possible to realize a function equivalent to the transmission buffers 232 with the common transmission buffer in the multiplexing unit 105, and to omit the transmission buffers 232.

In the second picture encoding unit 104 according to the seventh embodiment, it is possible to determine, from the common encoding/independent encoding identification signal 101, that all slice data in the sequence are single-color-component slices (i.e., C0 slices, C1 slices or C2 slices). Thus, the color component identification flag is always multiplexed with the slice header, which allows the decoder side to determine to which picture data in the access unit a slice corresponds.

Therefore, each second picture encoding unit 104 can send out the output from its transmission buffer 232 at the point when data for one slice is accumulated, without accumulating output for one whole picture.

The first picture encoding unit 102 and the second picture encoding unit 104 differ only in whether the macroblock header information is treated as information common to the three components or as information of a single color component, and in the bit stream structure of the slice data. Most of the basic processing blocks, such as the prediction units, the transform units and inverse transform units, the quantization units and inverse quantization units, and the deblocking filters shown in Figs. 13 and 14, can be realized by functional blocks common to the first picture encoding unit 102 and the second picture encoding units 104, the only difference being whether information of the three color components is processed together or only information of a single color component is processed.

Therefore, it is possible to realize not only the completely independent encoding processing module shown in Fig. 11, but also various encoders, by appropriately combining the basic components shown in Fig. 13 and 14. If the arrangement of the memory 111 in the first picture encoding module 102 is provided in plane-sequential order, the structure of the memory for storing reference pictures can be shared between the first picture encoding module 102 and the second picture encoding modules 104.

Although not shown in the drawing, in the encoder in accordance with this embodiment, assuming the existence of a virtual stream buffer (coded picture buffer) that buffers the video stream 106 conforming to the structures shown in Fig. 9 and 10, and a virtual frame memory (decoded picture buffer) that buffers the decoded pictures 313a and 313b, the video stream 106 is generated so as to prevent overflow or underflow of the coded picture buffer and failure of the decoded picture buffer. This control is mainly performed by the encoding control modules 117 and 217.

Therefore, when the video stream 106 is decoded in the decoder in accordance with the operations of the virtual buffer model (the coded picture buffer and the decoded picture buffer), it is guaranteed that no failure occurs in the decoder. The virtual buffer model is defined below.

The operation of the coded picture buffer is defined in units of access units. As described above, when the common decoding processing is performed, the encoded data of one picture is included in one access unit. When the independent decoding processing is performed, the encoded data of as many pictures as there are color components (three pictures in the case of three color components) are included in one access unit.

The operations defined for the coded picture buffer are the times at which the first bit and the last bit of an access unit are input to the coded picture buffer, and the time at which the bits of the access unit are read out from the coded picture buffer. It is defined that the readout from the coded picture buffer is performed instantaneously; that is, it is assumed that all bits of the access unit are read out from the coded picture buffer at the same time.

When the bits of an access unit are read out from the coded picture buffer, they are input to the upper header analyzing module. As described above, the bits are subjected to decoding processing in the first picture decoding module or the second picture decoding modules, and are output as a color video frame bundled in units of access units. By the definition of the virtual buffer model, the processing from reading the bits out of the coded picture buffer to outputting a picture as a color video frame in units of access units is performed instantaneously.
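The buffer constraint described above can be illustrated with the following sketch. It is a hypothetical, simplified model, not taken from the text: the function name, the assumption that access units arrive back-to-back at a constant channel rate starting at t=0, and the modeling of an access unit's bits as arriving at its arrival-completion time are all choices of this example. Whole access units are removed instantaneously at their read-out times, as the virtual buffer model requires.

```python
# Illustrative check of a coded picture buffer (CPB) against a stream
# described as a list of (num_bits, read_time) pairs in decoding order.

def check_cpb(access_units, bitrate, cpb_size):
    """Return True if the stream neither underflows (an access unit's bits
    are not all present by its read-out time) nor overflows (buffer
    fullness exceeds cpb_size)."""
    events = []  # (time, delta_bits); at equal times removals sort first
    t_arrival_end = 0.0
    for num_bits, read_time in access_units:
        t_arrival_end += num_bits / bitrate
        if t_arrival_end > read_time:
            return False                      # underflow
        events.append((t_arrival_end, num_bits))   # arrival (modeled at completion)
        events.append((read_time, -num_bits))      # instantaneous removal
    fullness = 0
    for _, delta in sorted(events):
        fullness += delta
        if fullness > cpb_size:
            return False                      # overflow
    return True
```

For example, two 1000-bit access units on a 4000 bit/s channel, read out at t=0.5 s and t=1.0 s, fit in a 2000-bit buffer; a 10000-bit access unit on the same channel cannot be fully delivered by a read-out time of 1.0 s.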

The color video frame composed in units of access units is input to the decoded picture buffer, and the output time of the color video frame from the decoded picture buffer is calculated. The output time from the decoded picture buffer is a value calculated by adding a predetermined delay time to the readout time from the coded picture buffer.

It is possible to multiplex this delay time into the bit stream to control the decoder. When the delay time is 0, that is, when the output time from the decoded picture buffer is equal to the readout time from the coded picture buffer, the color video frame is input to the decoded picture buffer and simultaneously output from the decoded picture buffer.

In other cases, i.e., when the output time from the decoded picture buffer is later than the readout time from the coded picture buffer, the color video frame is stored in the decoded picture buffer until its output time from the decoded picture buffer. As described above, the operation of the decoded picture buffer is defined in units of access units.

A schematic structure of a decoder in accordance with the seventh embodiment is shown in Fig. 15. In the drawing, the common decoding processing is performed in the first picture decoding module 302, and the independent decoding processing is performed in the color component determining module 303 and the second picture decoding modules 304 (prepared for the three color components).

The bit stream 106 is divided in units of NAL units by the header analyzing module 300. Header information such as the sequence parameter set and the picture parameter set is decoded as it is and stored in a predetermined memory area which the first picture decoding module 302, the color component determining module 303, and the second picture decoding modules 304 can access. The common encoding/independent encoding identification signal (num_pictures_in_au) multiplexed in units of sequences is decoded and held as a part of the header information.

The decoded num_pictures_in_au is supplied to the switch (SW) 301. If num_pictures_in_au=1, the switch 301 supplies the slice NAL unit of each picture to the first picture decoding module 302. If num_pictures_in_au=3, the switch 301 supplies the slice NAL unit to the color component determining module 303.

In other words, if num_pictures_in_au=1, the common decoding processing is performed by the first picture decoding module 302. If num_pictures_in_au=3, the independent decoding processing is performed using the three second picture decoding modules 304. Detailed operations of the first and second picture decoding modules will be described below.

The color component determining module 303 is a detecting means for detecting a parameter indicating which color component the decoded data corresponds to. The color component determining module 303 determines which color component picture in the current access unit a slice NAL unit corresponds to, in accordance with the value of the color component identification flag shown in Fig. 10, and distributes and supplies the slice NAL unit to the appropriate second picture decoding module 304.
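The routing performed by the switch 301 and the color component determining module 303 can be sketched as follows. This is an illustrative model only; the function shape and the dictionary fields (`color_channel_idc`, `payload`) are assumptions of this sketch, not bitstream syntax.

```python
# Route slice NAL units of one access unit: num_pictures_in_au selects the
# common or independent decoding path, and the color component identification
# flag distributes slices among the three independent decoding paths.

def dispatch_slices(num_pictures_in_au, slice_nal_units):
    """slice_nal_units: dicts with 'color_channel_idc' (0/1/2) and 'payload'."""
    if num_pictures_in_au == 1:
        # common decoding: every slice goes to the first picture decoding module
        return {"common": [nal["payload"] for nal in slice_nal_units]}
    if num_pictures_in_au == 3:
        # independent decoding: distribute by the color component flag
        groups = {0: [], 1: [], 2: []}
        for nal in slice_nal_units:
            groups[nal["color_channel_idc"]].append(nal["payload"])
        return {"independent": groups}
    raise ValueError("unsupported num_pictures_in_au")
```

Note that the grouping works even when C0, C1, and C2 slices arrive interleaved within an access unit, as in Fig. 12.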

With this structure, the decoder provides the effect that, even if it receives a bit stream obtained by interleaving and encoding slices within an access unit as shown in Fig. 12, it can easily determine which color component picture each slice belongs to and decode the bit stream accordingly.

Outline of operations of the first picture decoding module 302

The internal structure of the first picture decoding module 302 is shown in Fig. 16. The first picture decoding module 302 receives the bit stream 106, conforming to the structures shown in Fig. 9 and 10 and output from the encoder shown in Fig. 11, in units of mixed C0, C1, and C2 slices, performs decoding processing on macroblocks composed of the samples of the three color components shown in Fig. 6, and restores the output video frame.

The bit stream 106 is input to the variable length decoding module 310. The variable length decoding module 310 interprets the bit stream 106 in accordance with a predetermined rule (syntax) and extracts the quantized transform coefficients 122 for the three components and the macroblock header information (macroblock type/sub-macroblock type 116, prediction overhead information 112, transform block size designation flag 134, and quantization parameter 121) used in common for the three components. The quantized transform coefficients 122 are input, together with the quantization parameter 121, to the inverse quantization module 124, which performs the same processing as in the first picture encoding module 102, and inverse quantization processing is performed.

Then, the output of the inverse quantization module 124 is input to the inverse transform module 125, which performs the same processing as in the first picture encoding module 102, and is restored to the local decoding prediction error signal 126 (if the transform block size designation flag 134 is present in the bit stream 106, the flag 134 is referred to in the inverse quantization and inverse transform processing steps).

On the other hand, of the processing of the prediction module 110 in the first picture encoding module 102, only the processing of referring to the prediction overhead information 112 to generate the predicted picture 127 is included in the prediction module 311. The macroblock type/sub-macroblock type 116 and the prediction overhead information 112 are input to the prediction module 311 to obtain the predicted picture 127 for the three components.

When the macroblock type indicates intra-frame prediction, the predicted picture 127 for the three components is obtained from the prediction overhead information 112 in accordance with the intra prediction mode information. When the macroblock type indicates inter-frame prediction, the predicted picture 127 for the three components is obtained from the prediction overhead information 112 in accordance with the motion vector and the reference picture index.

The local decoding prediction error signal 126 and the predicted picture 127 are added by the adder 128 to obtain the provisional decoded picture 129 for the three components. Since the provisional decoded picture 129 is used for motion-compensated prediction of subsequent macroblocks, after deblocking filter processing is applied to the provisional decoded picture samples of the three components in the deblocking filter 130, which performs the same processing as in the first picture encoding module 102, the provisional decoded picture 129 is output as the decoded picture 313 and stored in the memory 312.
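The reconstruction step performed by the adder can be sketched minimally as follows. The clipping to an 8-bit sample range and the list-based signal representation are assumptions of this sketch, not taken from the text.

```python
# Sum the local decoding prediction error and the predicted samples, then
# clip to the valid sample range (8-bit assumed here) to obtain the
# provisional decoded samples.

def reconstruct(pred, residual, bit_depth=8):
    max_val = (1 << bit_depth) - 1
    return [min(max(p + r, 0), max_val) for p, r in zip(pred, residual)]
```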

In this case, the deblocking filter processing is applied to the provisional decoded picture 129 on the basis of the indication of the deblocking filter control flag 131 interpreted by the variable length decoding module 310. A plurality of pieces of reference picture data, each composed of the three color components over a plurality of times, are stored in the memory 312.

The prediction module 311 selects, from these, the reference picture indicated by the reference picture index extracted from the bit stream in units of macroblocks, and generates the predicted picture. As the arrangement of the reference picture data in the memory 312, the reference picture data may be stored separately for each color component in plane-sequential order, or the samples of the respective color components may be stored in point-sequential order. The decoded picture 313 includes the three color components and directly becomes a color video frame 313a constituting an access unit in the common decoding processing.

Outline of operations of the second picture decoding module 304

The internal structure of the second picture decoding module 304 is shown in Fig. 17. The second picture decoding module 304 receives the bit stream 450, conforming to the structures shown in Fig. 9 and 10 and output from the encoder shown in Fig. 11, in units of the C0, C1, or C2 slice NAL units allocated by the color component determining module 303, performs decoding processing in units of macroblocks composed of the samples of a single color component shown in Fig. 7, and restores the output video frame.

The bit stream 450 is input to the variable length decoding module 410. The variable length decoding module 410 interprets the bit stream 450 in accordance with a predetermined rule (syntax) and extracts the quantized transform coefficients 222 for the single color component and the macroblock header information (macroblock type/sub-macroblock type 216, prediction overhead information 212, transform block size designation flag 234, and quantization parameter 221) applied to the single color component.

The quantized transform coefficients 222 are input, together with the quantization parameter 221, to the inverse quantization module 224, which performs the same processing as in the second picture encoding module 104, and inverse quantization processing is performed. Then, the output of the inverse quantization module 224 is input to the inverse transform module 225, which performs the same processing as in the second picture encoding module 104, and is restored to the local decoding prediction error signal 226 (if the transform block size designation flag 234 is present in the bit stream 450, the flag 234 is referred to in the inverse quantization and inverse transform processing steps).

On the other hand, of the processing of the prediction module 210 in the second picture encoding module 104, only the processing of referring to the prediction overhead information 212 to generate the predicted picture 227 is included in the prediction module 411. The macroblock type/sub-macroblock type 216 and the prediction overhead information 212 are input to the prediction module 411 to obtain the predicted picture 227 for the single color component.

When the macroblock type indicates intra-frame prediction, the predicted picture 227 for the single color component is obtained from the prediction overhead information 212 in accordance with the intra prediction mode information. When the macroblock type indicates inter-frame prediction, the predicted picture 227 for the single color component is obtained from the prediction overhead information 212 in accordance with the motion vector and the reference picture index.

The local decoding prediction error signal 226 and the predicted picture 227 are added by the adder 228 to obtain the provisional decoded picture 229 for the macroblock of the single color component. Since the provisional decoded picture 229 is used for motion-compensated prediction of subsequent macroblocks, after deblocking filter processing is applied to the provisional decoded picture samples of the single color component in the deblocking filter 230, which performs the same processing as in the second picture encoding module 104, the provisional decoded picture 229 is output as the decoded picture 451 and stored in the memory 412.

In this case, the deblocking filter processing is applied to the provisional decoded picture 229 on the basis of the indication of the deblocking filter control flag 231 interpreted by the variable length decoding module 410. The decoded picture 451 includes only the samples of the single color component, and is composed into a color video frame 313b by bundling, into an access unit, the outputs of the other second picture decoding modules 304 operating in parallel as shown in Fig. 15.

From the above it follows that the first picture decoding module 302 and the second picture decoding modules 304 differ only in whether the macroblock header information is treated as information common to the three components or as information of a single color component, and in the bit stream structure of the slice data. Most of the basic decoding processing blocks, such as the motion-compensated prediction processing, the inverse transform, and the inverse quantization shown in Fig. 13 and 14, can be realized as functional blocks common to the first picture decoding module 302 and the second picture decoding modules 304.

Therefore, it is possible to realize not only the completely independent decoding processing module shown in Fig. 15, but also various decoders, by appropriately combining the basic components shown in Fig. 16 and 17. In addition, if the arrangement of the memory 312 in the first picture decoding module 302 is provided in plane-sequential order, the structures of the memories 312 and 412 can be shared between the first picture decoding module 302 and the second picture decoding modules 304.

Needless to say, the decoder shown in Fig. 15 can receive and decode a bit stream output from an encoder that is arranged so as to always fix the common encoding/independent encoding identification signal 3 to "independent encoding processing" and to independently encode all frames without using the first picture encoding module 102 at all, as another form of the encoder shown in Fig. 11.

As another form of the decoder shown in Fig. 15, in a form of use that assumes that the common encoding/independent encoding identification signal 3 is always fixed to "independent encoding processing", the decoder may be constituted as a decoder which does not include the switch 301 and the first picture decoding module 302 and performs only the independent decoding processing.

If the first picture decoding module 302 includes a function of decoding a bit stream conforming to the AVC high profile, in which the three components are encoded together in the conventional YUV (a signal format representing color using three kinds of signal information: a luminance signal (Y), a difference (U) between the luminance signal and the blue component, and a difference (V) between the luminance signal and the red component) 4:2:0 format as an object, and if the header analyzing module 300 determines in which format the bit stream is encoded with reference to a profile identifier decoded from the bit stream 106 and transmits the determination result to the switch 301 and the first picture decoding module 302 as a part of the signal line information of the common encoding/independent encoding identification signal 3, it is also possible to constitute a decoder which ensures compatibility with the conventional YUV 4:2:0 bit stream.

In the first picture encoding module 102 in the seventh embodiment of the present invention, the pieces of information of the three color components are mixed in the slice data, and exactly the same intra-frame/inter-frame prediction processing is applied to the three color components. Consequently, signal correlation among the color components may remain in the prediction error signal space.

As a means for removing the signal correlation, for example, color space transform processing may be applied to the prediction error signal. Examples of the first picture encoding module 102 having such a structure are shown in Fig. 18 and 19. The components shown in Fig. 18 and 19 are the same as those shown in Fig. 13, except for the color space transform module and the inverse color space transform module.

Fig. 18 shows an example in which the color space transform processing is performed at the pixel level before the transform processing. The color space transform module 150a is placed before the transform module, and the inverse color space transform module 151a is placed after the inverse transform module.

Fig. 19 shows an example in which the color space transform processing is performed while appropriately selecting the frequency components to be processed with respect to the coefficient data obtained after the transform processing. The color space transform module 150b is placed after the transform module, and the inverse color space transform module 151b is placed before the inverse transform module. Consequently, there is an effect that a high-frequency noise component included in a specific color component can be prevented from propagating to other color components that hardly include noise.
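The text does not fix a particular color space transform. As one well-known illustrative candidate for such a module, the reversible YCoCg-R lifting transform is sketched below; it removes much of the inter-component correlation of RGB-like signals and is exactly invertible in integer arithmetic. Its use here is an assumption of this sketch, not of the source.

```python
# YCoCg-R forward lifting steps (operating on integer sample or prediction
# error values; '>>' is an arithmetic right shift / floor division by 2).

def rgb_to_ycocg_r(r, g, b):
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg

# Inverse lifting steps: exactly undo the forward transform.

def ycocg_r_to_rgb(y, co, cg):
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b
```

Because the lifting steps are individually invertible, the round trip is lossless for any integer inputs, which makes such a transform usable inside a coding loop without drift.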

When the frequency components subjected to the color space transform processing are made adaptively selectable, the pieces of signaling information 152a and 152b for determining the selection made at encoding time are multiplexed into the bit stream for the decoding side.

In the color space transform processing, a plurality of transform systems may be switched in units of macroblocks and used in accordance with the characteristics of the image signal to be encoded, or the presence or absence of the transform may be determined in units of macroblocks. It is also possible to designate the selectable transform system types in advance at the sequence level, and to designate the transform system to be selected in units of a picture, slice, macroblock, or the like. It is also possible to select whether the color space transform processing is performed before the transform or after the transform.

When adaptive encoding processing of these types is performed, the coding efficiency can be evaluated for all selectable options by the encoding mode determining module 115 or 215 in order to select the option with the highest coding efficiency. When adaptive encoding processing of these types is performed, the pieces of signaling information 152a and 152b for determining the selection made at encoding time are multiplexed into the bit stream for the decoding side. Such signaling may be designated at a level different from macroblocks, such as a slice, picture, GOP, or sequence.
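The selection among options described above can be sketched as a Lagrangian rate-distortion decision. The lambda value, the option fields, and the candidate names are assumptions of this sketch; the text only states that the option with the highest coding efficiency is selected.

```python
# Evaluate each selectable option (e.g., no color space transform, transform
# before the integer transform, transform after it) with a rate-distortion
# cost and keep the cheapest, as the mode determining module 115/215 would.

def rd_cost(option, lam=0.85):
    # Lagrangian cost: distortion plus lambda-weighted rate (illustrative).
    return option["distortion"] + lam * option["bits"]

def select_coding_option(options, lam=0.85):
    return min(options, key=lambda o: rd_cost(o, lam))
```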

Decoders corresponding to the encoders of Fig. 18 and 19 are shown in Fig. 20 and 21. The components shown in Fig. 20 and 21 are the same as those shown in Fig. 16, except for the inverse color space transform module. Fig. 20 shows a decoder that decodes a bit stream encoded by the encoder shown in Fig. 18, which performs the color space transform before the transform processing.

The variable length decoding module decodes, from the bit stream, information on the presence or absence of the transform, for selecting whether the transform is performed in the inverse color space transform module 151a, and the information 152a for selecting the transform method executed in the inverse color space transform module, and supplies this information to the inverse color space transform module 151a. On the basis of these kinds of information, the decoder shown in Fig. 20 performs, in the inverse color space transform module 151a, the inverse color space transform processing on the prediction error signal after the inverse transform.

Fig. 21 shows a decoder that decodes a bit stream encoded by the encoder shown in Fig. 19, which selects, after the transform processing, the frequency components subjected to the color space transform. The variable length decoding module decodes, from the bit stream, the identification information 152b, which includes information on the presence or absence of the transform for selecting whether the transform is performed in the inverse color space transform module 151b, information for selecting the transform method executed in the inverse color space transform module, information designating the frequency components in which the color space transform is executed, and the like, and supplies this information to the inverse color space transform module 151b. On the basis of these kinds of information, the decoder shown in Fig. 21 performs, in the inverse color space transform module 151b, the inverse color space transform processing on the transform coefficient data after the inverse quantization.

In the decoders shown in Fig. 20 and 21, as in the decoder of Fig. 15, if the first picture decoding module 302 includes a function of decoding a bit stream conforming to the AVC high profile, in which the three components are encoded together in the conventional YUV 4:2:0 format as an object, and if the header analyzing module 300 determines in which format the bit stream is encoded with reference to a profile identifier decoded from the bit stream 106 and transmits the determination result to the switch 301 and the first picture decoding module 302 as a part of the signal line information of the common encoding/independent encoding identification signal 101, it is also possible to constitute a decoder which ensures compatibility with the conventional YUV 4:2:0 bit stream.

The structure of the encoded data of the macroblock header information included in the conventional YUV 4:2:0 bit stream is shown in Fig. 22. When the macroblock type is intra prediction, the encoded data include the intra chrominance prediction mode 500. When the macroblock type is inter prediction, the motion vector of the chrominance components is generated, using the reference picture identification number and the motion vector information included in the macroblock header information, by a method different from that used for the luminance component.

The operation of the decoder for ensuring compatibility with the conventional YUV 4:2:0 bit stream is explained below. As described above, the first picture decoding module 302 has a function of decoding the conventional YUV 4:2:0 bit stream. The internal structure of the first picture decoding module is the same as that shown in Fig. 16.

The operation of the variable length decoding module 310 of the first picture decoding module 302 having the function of decoding the conventional YUV 4:2:0 bit stream is explained below. When the video stream 106 is input to the variable length decoding module 310, the module 310 decodes a chrominance format designation flag. The chrominance format designation flag is a flag included in the sequence parameter header of the video stream 106 and indicates whether the input video format is 4:4:4, 4:2:2, 4:2:0, or 4:0:0.

The decoding processing of the macroblock header information of the video stream 106 is switched in accordance with the value of the chrominance format designation flag. When the macroblock type indicates intra prediction and the chrominance format designation flag indicates 4:2:0 or 4:2:2, the intra chrominance prediction mode is decoded from the bit stream. When the chrominance format designation flag indicates 4:4:4, decoding of the intra chrominance prediction mode is skipped. When the chrominance format designation flag indicates 4:0:0, since the input signal has a format (the 4:0:0 format) composed of the luminance signal alone, decoding of the intra chrominance prediction mode is skipped.
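The branching described above can be sketched as follows. The function shape and the callback for reading a syntax element are assumptions of this sketch, not bitstream syntax.

```python
# The intra chrominance prediction mode is decoded only for intra macroblocks
# in the 4:2:0 and 4:2:2 formats; it is skipped for 4:4:4 (all components
# use the luminance path) and for 4:0:0 (luminance only).

def decode_mb_header_chroma_part(chroma_format, mb_is_intra, read_syntax_element):
    header = {}
    if mb_is_intra and chroma_format in ("4:2:0", "4:2:2"):
        header["intra_chroma_pred_mode"] = read_syntax_element()
    return header
```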

The decoding processing of the macroblock header information other than the intra chrominance prediction mode is the same as that of the variable length decoding module 310 of the first picture decoding module 302 that does not include the function of decoding the conventional YUV 4:2:0 bit stream.

Thus, when the video stream 106 is input to the variable length decoding module 310, the variable length decoding module 310 extracts the chrominance format designation flag (not shown), the quantized transform coefficients for the three components, and the macroblock header information (macroblock type/sub-macroblock type, prediction overhead information, transform block size designation flag, and quantization parameter). The chrominance format designation flag (not shown) and the prediction overhead information are input to the prediction module 311 to obtain the predicted picture 127 for the three components.

The internal structure of the prediction module 311 of the first picture decoding module 302 that ensures compatibility with the conventional YUV 4:2:0 bit stream is shown in Fig. 23. The operations of the prediction module 311 are explained below.

The switching module 501 determines the macroblock type. When the macroblock type indicates intra prediction, the switching module 502 determines the value of the chrominance format designation flag. When the value of the chrominance format designation flag indicates 4:2:0 or 4:2:2, the prediction module 311 obtains the predicted picture 127 for the three components from the prediction overhead information in accordance with the intra prediction mode information and the intra chrominance prediction mode information. The predicted picture of the luminance signal, among the three components, is generated in the luminance signal intra prediction module in accordance with the intra prediction mode information.

The predicted picture of the chrominance signals of the two components is generated in the chrominance signal intra prediction module, which performs processing different from that for the luminance component, in accordance with the intra chrominance prediction mode information. When the value of the chrominance format designation flag indicates 4:4:4, the predicted pictures of all three components are generated in the luminance signal intra prediction module in accordance with the intra prediction mode information. When the value of the chrominance format designation flag indicates 4:0:0, since the 4:0:0 format consists of the luminance signal alone (one component), only the predicted picture of the luminance signal is generated in the luminance signal intra prediction module in accordance with the intra prediction mode information.

When the macroblock type indicates inter prediction in the switching module 501, the switching module 503 determines the value of the chrominance format designation flag. When the value of the chrominance format designation flag indicates 4:2:0 or 4:2:2, for the luminance signal, the predicted picture is generated from the prediction overhead information in the luminance signal inter prediction module in accordance with the motion vector and the reference picture index, and in accordance with the predicted picture generation method for the luminance signal defined by the AVC standard.

For the predicted picture of the chrominance signals of the two components, in the chrominance signal inter prediction module, the motion vector obtained from the prediction overhead information is scaled on the basis of the chrominance format to generate a chrominance motion vector, and the predicted picture is generated from the reference picture designated by the reference picture index obtained from the prediction overhead information, on the basis of the chrominance motion vector, in accordance with the method defined by the AVC standard. When the value of the chrominance format designation flag indicates 4:0:0, since the 4:0:0 format consists of the luminance signal alone (one component), only the predicted picture of the luminance signal is generated in the luminance signal inter prediction module in accordance with the motion vector and the reference picture index.
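The idea of scaling a motion vector to the chrominance sample grid can be illustrated as below. This is a deliberately simplified sketch of the statement above: the exact AVC chrominance motion vector derivation differs in detail (sub-sample units, interpolation precision), so the table and function here are assumptions of this example only.

```python
# Horizontal/vertical chrominance subsampling factors per format, and a
# simplified scaling of a luminance motion vector (in luminance sample
# units) onto the chrominance sample grid.

SUBSAMPLING = {"4:2:0": (2, 2), "4:2:2": (2, 1), "4:4:4": (1, 1)}

def chroma_mv(luma_mv, chroma_format):
    sx, sy = SUBSAMPLING[chroma_format]
    mvx, mvy = luma_mv
    return (mvx / sx, mvy / sy)
```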

As described above, the means for generating the predicted picture of the chrominance signals of the conventional YUV 4:2:0 format is provided in addition to the means for generating the predicted pictures of the three components, and these means are switched in accordance with the value of the chrominance format designation flag decoded from the bit stream. Thus, it becomes possible to constitute a decoder which ensures compatibility with the conventional YUV 4:2:0 bit stream.

If information indicating a bit stream that can be decoded even by a decoder that does not support the color space transform processing, such as the decoder shown in Fig. 15, is set in units of sequence parameters or the like in the bit stream 106 supplied to the decoders shown in Fig. 20 and 21, then in all of the decoders shown in Fig. 20, 21, and 15 it becomes possible to perform decoding of the bit stream in accordance with the decoding capability of each decoder.

Eighth embodiment

In the eighth embodiment of the present invention, another embodiment is described in which only the structure of the bit stream that is input and output differs from that of the encoder and decoder in accordance with the seventh embodiment shown in Fig. 11, 15, etc. The encoder in accordance with the eighth embodiment performs multiplexing of the encoded data with the bit stream structure shown in Fig. 24.

In the bit stream with the structure shown in Fig. 9, the AUD NAL unit includes the information primary_pic_type as its element. As shown in the table below, this indicates the picture coding type information at the time when the picture data in the access unit beginning with the AUD NAL unit are encoded.

Values of primary_pic_type (excerpted from the standard)

primary_pic_type    slice_type values that may be present in the primary coded picture
0                   I
1                   I, P
2                   I, P, B
3                   SI
4                   SI, SP
5                   I, SI
6                   I, SI, P, SP
7                   I, SI, P, SP, B

For example, when primary_pic_type = 0, it indicates that the image is entirely intra-coded. When primary_pic_type = 1, it indicates that slices that are intra-coded and slices for which motion-compensated prediction can be performed using only one reference image list may be mixed within the image. Since primary_pic_type represents information identifying the coding modes with which an image can be encoded, on the encoder side it becomes possible, by manipulating this information, to perform encoding suited to various conditions such as the characteristics of the input video signal and random access functionality.
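The table above can be expressed as a simple lookup. The dictionary reproduces the primary_pic_type table from the text; the checking function is an assumed helper for illustration, not part of the standard.

```python
# primary_pic_type -> set of slice_type values that may appear in the
# primary coded picture (per the table excerpted from the AVC standard).
ALLOWED_SLICE_TYPES = {
    0: {"I"},
    1: {"I", "P"},
    2: {"I", "P", "B"},
    3: {"SI"},
    4: {"SI", "SP"},
    5: {"I", "SI"},
    6: {"I", "SI", "P", "SP"},
    7: {"I", "SI", "P", "SP", "B"},
}

def slice_types_allowed(primary_pic_type, slice_types):
    """True if every slice type in the picture is permitted for this value."""
    return set(slice_types) <= ALLOWED_SLICE_TYPES[primary_pic_type]
```

For example, a picture containing both I and P slices is consistent with primary_pic_type = 1 but not with primary_pic_type = 0.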

In the seventh embodiment, since there is only one primary_pic_type per access module, when the independent encoding processing is performed, primary_pic_type is common to the images of the three color components in the access module. In the eighth embodiment, when each color component image is encoded independently, primary_pic_type values for the other two color component images are additionally inserted into the AUD NAL module shown in Fig. 9, in accordance with the value of num_pictures_in_au. Alternatively, as in the bit stream structure shown in Fig. 24, the encoded data of each color component image begin with a CCD NAL module (color channel delimiter), which denotes the beginning of a color component image, and the primary_pic_type information appropriate to that image is included in the CCD NAL module. The concept of the CCD NAL module in accordance with the eighth embodiment is equivalent to the concept disclosed with reference to Fig. 4.

In this structure, since the encoded data of each color component image are multiplexed together for one image, the flag identifying the color component (color_channel_idc), described in the seventh embodiment, is included in the CCD NAL module rather than in the header of each slice. Therefore, the color component identification flag information that would otherwise have to be multiplexed with each slice can be consolidated in units of images. Thus, the effect is obtained that the overhead information can be reduced.

Since the CCD NAL module is compiled as a byte string, it only needs to be detected in order to check color_channel_idc once per color component image, so the top of a color component image can be found quickly without performing variable-length decoding. Thus, on the decoder side, color_channel_idc in the slice header does not need to be checked every time in order to separate out the NAL modules to be decoded for each color component, and data can be fed smoothly to the second image decoding module.
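The byte-string search described above can be sketched as follows. The start code prefix, the NAL type value chosen for the CCD, and the position of color_channel_idc within the module are all assumptions made for illustration; the patent does not fix these values.

```python
# Illustrative sketch: because the CCD NAL module begins with a byte-aligned
# start code, the decoder can locate each colour component image with a plain
# byte search, without any variable-length decoding of slice headers.

START_CODE = b"\x00\x00\x01"   # byte-aligned start code prefix
CCD_NAL_TYPE = 0x1F            # hypothetical NAL type for the CCD module

def find_color_components(stream):
    """Yield (offset, color_channel_idc) for each CCD NAL module found."""
    pos = 0
    while True:
        pos = stream.find(START_CODE, pos)
        if pos < 0 or pos + 4 >= len(stream):
            return
        header = stream[pos + 3]
        if header & 0x1F == CCD_NAL_TYPE:
            # assume color_channel_idc is the byte after the NAL header
            yield pos, stream[pos + 4]
        pos += 3
```

A usage example: scanning a buffer containing two CCD modules yields their offsets and colour component indices with a single linear byte search.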

On the other hand, in this structure, the effect of reducing the buffer size and the processing delay of the encoder, described with reference to Fig. 12 in the seventh embodiment, is weakened. Thus, the flag identifying the color component may be designed to indicate, at a higher level (sequence or GOP), whether the encoded data are multiplexed in units of slices or in units of color component images. With such a bit stream structure, a flexible embodiment of the encoder can be adopted in accordance with the form in which the encoder is used.

Ninth Embodiment

In addition, as another embodiment, the multiplexing of the encoded data can be performed with the bit stream structure shown in Fig. 25. In the drawing, color_channel_idc and primary_pic_type, which are included in the CCD NAL module in Fig. 24, are instead included in the respective AUDs. In the bit stream structure in accordance with the ninth embodiment of the present invention, one (color component) image is included in one access module both in the case of the independent encoding processing and in the case of the common encoding processing. In other words, in Fig. 25 one image (of one color component) is defined as one access module.

With this structure, as in the structure described above, the effect of reducing the overhead information is obtained, since the color component identification flag information can be consolidated in units of images. Also, since only the AUD NAL module, compiled as a byte string, needs to be detected in order to check color_channel_idc once per color component image, the top of a color component image can be found quickly without performing variable-length decoding. Thus, on the decoder side, color_channel_idc in the slice header does not need to be checked every time in order to separate out the NAL modules to be decoded for each color component, and it becomes possible to smoothly feed data to the second image decoding module.

On the other hand, since the image of one frame or one field is composed of three access modules, it is necessary to designate the three access modules as image data of the identical time. Therefore, in the bit stream structure shown in Fig. 25, information on the order of the respective images (the encoding and decoding order in the time direction, and so on) can be conveyed in the AUD.
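The regrouping of three access modules into one frame by means of the order information in the AUD can be sketched as follows. The field names "order" and "color_channel_idc" are assumed representations of the AUD contents, not the patent's exact syntax.

```python
# Illustrative sketch: when one frame is carried as three access modules,
# the order information in each AUD lets the decoder regroup the access
# modules belonging to the same displayed frame without parsing slice data.
from collections import defaultdict

def group_into_frames(access_units):
    """access_units: iterable of dicts with 'order' (position in the time
    direction) and 'color_channel_idc', both taken from the AUD."""
    frames = defaultdict(dict)
    for au in access_units:
        frames[au["order"]][au["color_channel_idc"]] = au
    # a frame is complete once all three colour components are present
    return {t: comps for t, comps in frames.items() if len(comps) == 3}
```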

When such a structure is used, on the decoder side it is possible to check the decoding and display order of the respective images, the color component attributes, the IDR properties, etc., without decoding any slice data at all. Editing and special reproduction can thus be performed efficiently at the bit stream level.

In the bit stream structure shown in Fig. 9, 24 or 25, information indicating the number of slice NAL modules included in the image of one color component may be stored in certain areas of the AUD or CCD.

As for all of the embodiments, the transform processing and inverse transform processing may be transforms that guarantee orthogonality, such as the DCT (discrete cosine transform), or, instead of a strictly orthogonal transform such as the DCT, may be transforms such as those in AVC that are combined with the quantization and inverse quantization processing to approximate orthogonality. In addition, the prediction error signal may be encoded as pixel-level information without performing any transform at all.
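The orthogonality remark can be illustrated with the orthonormal DCT-II: the transform matrix multiplied by its transpose yields the identity, so a transform followed by its inverse restores the signal exactly. This is a generic mathematical illustration, not taken from the patent.

```python
# Sketch: build the orthonormal DCT-II matrix and verify its orthogonality.
import math

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n (rows are basis vectors)."""
    c = [[math.sqrt(2.0 / n) * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
          for i in range(n)] for k in range(n)]
    c[0] = [math.sqrt(1.0 / n)] * n      # DC row scaled for orthonormality
    return c

def gram(a):
    """Compute A @ A^T; for an orthogonal matrix this is the identity."""
    n = len(a)
    return [[sum(a[r][j] * a[c][j] for j in range(n)) for c in range(n)]
            for r in range(n)]
```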

1. An image encoder for generating a bit stream by compression-encoding a color image in the 4:4:4 format, the image encoder comprising: a multiplexing module for multiplexing identification information into the bit stream, the identification information indicating whether the signals of the respective color components in the 4:4:4 format are encoded independently in the 4:0:0 format; and an encoding module, wherein, if the identification information indicates that the signals of the respective color components are encoded independently in the 4:0:0 format, the encoding module independently encodes the respective color components in the 4:4:4 format as independent images in the 4:0:0 format, and, if the identification information indicates that the signals of the respective color components are not encoded independently in the 4:0:0 format, the encoding module encodes the respective color components in the 4:4:4 format as a common image in the 4:0:0 format.

2. An image encoding method for generating a bit stream by compression-encoding a color image in the 4:4:4 format, the image encoding method comprising: a multiplexing step of multiplexing identification information into the bit stream, the identification information indicating whether the signals of the respective color components in the 4:4:4 format are encoded independently in the 4:0:0 format; and an encoding step, wherein, if the identification information indicates that the signals of the respective color components are encoded independently in the 4:0:0 format, in the encoding step the respective color components in the 4:4:4 format are independently encoded as independent images in the 4:0:0 format, and, if the identification information indicates that the signals of the respective color components are not encoded independently in the 4:0:0 format, in the encoding step the respective color components in the 4:4:4 format are encoded as a common image in the 4:0:0 format.

3. An image decoder for decoding a color image on the basis of an input bit stream generated by compression-encoding a color image in the 4:4:4 format, the image decoder comprising: a decoding module for decoding identification information included in the bit stream, the identification information indicating whether or not the signals of the respective color components are encoded independently in the 4:0:0 format; wherein, if the identification information indicates that the signals of the respective color components are encoded independently in the 4:0:0 format, the decoding module recognizes the module that includes the encoded data of three independent images, which are encoded in the 4:0:0 format and belong to the identical frame or the identical field, as the access module, which is the module on which the decoding processing is performed.

4. An image decoding method for decoding a color image on the basis of an input bit stream generated by compression-encoding a color image in the 4:4:4 format, the image decoding method comprising: a decoding step of decoding identification information included in the bit stream, the identification information indicating whether or not the signals of the respective color components are encoded independently in the 4:0:0 format; wherein, if the identification information indicates that the signals of the respective color components are encoded independently in the 4:0:0 format, in the decoding step the module that includes the encoded data of three independent images, which are encoded in the 4:0:0 format and belong to the identical frame or the identical field, is recognized as the access module, which is the module on which the decoding processing is performed.

5. An image encoder for generating a bit stream by compression-encoding a color image in the 4:4:4 format, the image encoder comprising: a multiplexing module for multiplexing identification information into the bit stream, the identification information indicating whether the signals of the respective color components in the 4:4:4 format are encoded independently in the 4:0:0 format; and an encoding module, wherein, if the identification information indicates that the signals of the respective color components are encoded independently in the 4:0:0 format, the encoding module encodes each of the three color components belonging to the identical frame or the identical field in the 4:0:0 format, and associates the encoded data of the three color components with the access module, which is the module on which the encoding processing is performed.

6. An image encoding method for generating a bit stream by compression-encoding a color image in the 4:4:4 format, the image encoding method comprising: a multiplexing step of multiplexing identification information into the bit stream, the identification information indicating whether the signals of the respective color components in the 4:4:4 format are encoded independently in the 4:0:0 format; and an encoding step, wherein, if the identification information indicates that the signals of the respective color components are encoded independently in the 4:0:0 format, in the encoding step each of the three color components belonging to the identical frame or the identical field is encoded in the 4:0:0 format, and the encoded data of the three color components are associated with the access module, which is the module on which the encoding processing is performed.



 

Same patents:

FIELD: information technology.

SUBSTANCE: when controlling a diffused illumination element, the category of data displayed by the unit is identified. Diffused illumination data associated with the identified category are extracted and the extracted diffused illumination data are displayed according to the displayed data. The extracted diffused illumination data can be a diffused illumination script which can determine temporary parts of the diffused illumination data. Diffused illumination data can be associated with a category based on user input. A data subcategory can be identified and diffused illumination data can be modified with additional diffused illumination data associated with the subcategory. Association of the category with diffused illumination data can be edited by the user. Default association of the category with diffused illumination data can be provided.

EFFECT: eliminating the direct link between the diffused illumination effect and context of the displayed video.

18 cl, 3 dwg

FIELD: information technology.

SUBSTANCE: secondary video signal is generated, said signal being composed of signals having values derived via conversion of intermediate values into values lying inside an output range according to a predetermined conversion rule when intermediate brightness values (determined by formulas where Smin is the output value of the lower limit, Xr to Xb are values of RGB signals of the main video signal, k is a constant, and Lr to Lb are intermediate values of RGB brightness), include a value greater than the value of the upper output limit, otherwise a secondary video signal consisting of a signal having an intermediate brightness value is generated.

EFFECT: preventing gradation error when a given video signal shows colour in a region outside the colour range of the video display element, performing signal conversion processing with low arithmetic load.

16 cl, 7 dwg

FIELD: information technology.

SUBSTANCE: scalable video codec converts lower bit depth video data to higher bit depth video data using decoded lower bit depth video data for tone mapping and tone mapping derivation. The conversion can also be used for filtered lower bit depth video data for tone mapping and tone mapping derivation.

EFFECT: high encoding efficiency.

7 cl, 3 dwg

FIELD: physics.

SUBSTANCE: when controlling an ambient illumination element, a host event is detected, a light script associated with the detected event is retrieved and the retrieved light script is rendered in accordance with the detected event. A user may associate the light script with the event and/or an event type which corresponds to the event. A default association of events and/or event types may be provided, although these default associations can be modified by the user. An event type which corresponds to the event can be identified and a light script associated with the identified event type can be rendered in response to the detected event.

EFFECT: reduced viewer fatigue and improved realism and depth of experience.

20 cl, 3 dwg

FIELD: information technology.

SUBSTANCE: image encoder includes the following: a predicted-image generating unit that generates a predicted image in accordance with a plurality of prediction modes indicating the predicted-image generating method; a prediction-mode judging unit that evaluates prediction efficiency of a predicted image output from the predicted-image generating unit to judge a predetermined prediction mode; and an encoding unit that subjects the output signal of the prediction-mode judging unit to variable-length encoding. The prediction-mode judging unit judges, on the basis of a predetermined control signal, which one of a common prediction mode and a separate prediction mode is used for respective colour components forming the input image signal, and multiplexes information on the control signal on a bit stream.

EFFECT: high optimality of encoding the signal of a moving image.

4 cl, 86 dwg

FIELD: information technology.

SUBSTANCE: when encoding processing is applied to three colour components using the 4:0:0 format, the data for one image are included in one access unit, which makes it possible to establish the same time information or identically established encoding modes for the corresponding colour components. In an image encoding system for applying compression processing to an input image signal comprising multiple colour components, encoded data obtained by independently encoding the input image signal for each of the colour components, and a parameter indicating which colour component the encoded data correspond to, are multiplexed with the bit stream.

EFFECT: high encoding efficiency owing to use of a single encoding mode for corresponding colour components.

2 cl, 25 dwg

FIELD: information technology.

SUBSTANCE: invention relates to an image signal processing device which makes it possible to reproduce, by signal processing, the appearance of an image on a plasma display panel (PDP) using other display devices, such as a cathode-ray tube or liquid-crystal display (LCD). In an image processing module, an image signal is processed so that the image obtained when the image signal is displayed on a display device of a type other than a PDP looks like an image displayed on a PDP. At least one of the following is reproduced: the colour shift associated with a moving image, which arises because RGB light emission occurs in that order; the smoothing structure used in the spatial direction; the smoothing structure used in the temporal direction; the interval between pixels; and an array of strips. The invention can be used when, for example, an image which must look like an image displayed on a PDP is displayed on an LCD.

EFFECT: possibility of obtaining a type of image on a plasma panel display reproduced on another display different from the plasma panel display such as a liquid-crystal display, while processing signals.

6 cl, 20 dwg

FIELD: information technologies.

SUBSTANCE: it is suggested to perform encoding and decoding uniformly for multiple chroma formats. On the basis of a control signal indicating the chroma format type of the input moving image signal, if the chroma format is 4:2:0 or 4:2:2, the first intra prediction mode detection unit and the first intra prediction image generation unit are applied to the luminance component of the input moving image signal, and the second intra prediction mode detection unit and the second intra prediction image generation unit are applied to the chrominance components. If the chroma format is 4:4:4, the first intra prediction mode detection unit and the first intra prediction image generation unit are applied to all colour components to perform encoding, and a variable-length encoding unit multiplexes the control signal into the bit stream as encoded data to be applied to each unit of the moving image sequence.

EFFECT: improved mutual compatibility between coded video data of various colouration formats.

12 cl, 24 dwg

FIELD: information technology.

SUBSTANCE: invention relates to encoding and decoding digital images. A device is proposed for encoding/decoding a dynamic image, in which during compressed encoding through input of data signals of the dynamic image in 4:4:4 format, the first encoding process is used for encoding three signals of colour components of input signals of the dynamic image in general encoding mode and the second encoding process is used for encoding three signals of colour components input signals of the dynamic image in corresponding independent encoding modes. The encoding process is carried out by selecting any of the first and second encoding processes, and compressed data contain an identification signal for determining which process was selected.

EFFECT: more efficient encoding dynamic image signals, without distinction of the number of readings between colour components.

18 cl, 15 dwg

FIELD: information technologies.

SUBSTANCE: method is suggested for selecting and processing video content, which includes the following stages: quantisation of the video colour space; selection of a dominant colour using a mode, median, average or weighted average of pixel chromaticities; application of perception laws for further derivation of dominant chromaticities by means of the following steps: transformation of chromaticities; weighted averaging using a pixel weighting function influenced by scene content; and extended dominant colour selection, where the pixel weighting is reduced for majority pixels; and transformation of the selected dominant colour into the colour space of the surrounding light using tristimulus matrices. A colour of interest may additionally be analysed to produce the correct dominant colour, and previous video frames may control the selection of dominant colours in future frames.

EFFECT: creation of method to provide imitating surrounding lighting by means of dominating colour separation from selected video areas, with application of efficient data traffic, which codes averaged or characteristic values of colours.

20 cl, 43 dwg

FIELD: information technology.

SUBSTANCE: method for encoding at least one picture corresponding to at least one of at least two views of multi-view video content to form a resultant bitstream, wherein in the resultant bitstream at least one of coding order information and output order information for the at least one picture is decoupled from the at least one view to which the at least one picture corresponds.

EFFECT: possibility to manage the list of reference images for coding of multi-view sequences.

68 cl, 25 dwg

FIELD: information technology.

SUBSTANCE: disclosed is an encoding method involving: defining access units; and encoding each of the images included in the access unit, for each access unit. The defining involves: encoding unit determination, for determining whether to encode the images included in an access unit uniformly on a field basis or uniformly on a frame-by-frame basis; and field type determination, for determining whether to uniformly encode the images as top fields or bottom fields when it has been determined that the images included in the access unit must be encoded on a field basis. During encoding, each of the images is encoded for each access unit in the format defined when determining the encoding units and the field type.

EFFECT: defining a container or access unit when each of the images or different MVC component types are encoded differently using frame coding or field coding.

2 cl, 21 dwg

FIELD: information technologies.

SUBSTANCE: unit comprises a serially connected content classification facility and multimedia data processing facility. The multimedia data content classification facility is capable of identifying data that correspond to an inadmissible level of electromagnetic radiation. The multimedia data processing facility is capable of transposing the multimedia data into a secure form. The unit is structurally made as an insert into a signal cable of the indication panel.

EFFECT: reduced level of electromagnetic noise that they radiate and thus provision of higher safety of devices.

2 dwg

FIELD: information technologies.

SUBSTANCE: following stages are carried out: division (2100) of an upper layer macroblock into simplest units; calculation (2200) of an intermediate position for each simplest unit within a low resolution image from a simplest unit position depending on modes of upper layer macroblock coding and images of high and low resolution; identification (2300) of a basic layer macroblock called base_MB, containing a pixel arranged in the intermediate position; calculation (2400) of a final position within a low resolution image from an alleged position of the basic layer depending on coding modes of base_MB macroblock and upper layer macroblock and images of high and low resolution; identification (2500) of the basic layer macroblock called real_base_MB, containing a pixel arranged in the final position; and production (2600) of motion data for the upper layer macroblock from motion data of the identified real_base_MB.

EFFECT: improved efficiency of video coding.

11 cl, 5 dwg

FIELD: information technologies.

SUBSTANCE: the capability is provided of signalling several decoding time values for each sample at the file format level, which makes it possible to use different decoding time values for each sample or a subset of samples when decoding a full stream and a subset of this stream. An alternative decoding time unit is defined, designed to signal several decoding time values for each sample. Such a unit may contain a compact table version which makes it possible to index from the alternative decoding time to the number of samples, where the alternative decoding time is the decoding time used for a sample in the case when only a subset of an elementary stream stored on a track needs to be decoded. Each record in the table covers multiple consecutive samples with an identical time difference value, and a full "time-to-sample" chart may be built by adding the differences.

EFFECT: reduced complexity of calculations in decoding of scaled video data.

16 cl, 7 dwg

FIELD: information technology.

SUBSTANCE: in applications for local playback of files or in applications for single-address stream delivery with selection of an independently decoded track for a specific type of multimedia data, information on groups of alternative tracks is found first using a block of track interconnections and for the given type of multimedia data, one track is selected from the group of alternative tracks. If there is need to switch the stream, information on the group of the switched tracks is then found using the block of track interconnections. In multi-address applications with scalable streams or MDC streams, tracks in multilevel groups or MDC groups of tracks are found using the block of track interconnections and selected from all multilevel groups or MDC groups.

EFFECT: method of indicating information on multilevel groups of tracks and information on groups of tracks with multiple description coding, along with a mechanism for efficient indication of information on interconnections of tracks.

22 cl, 3 dwg

FIELD: information technology.

SUBSTANCE: apparatus includes an input selector (101) for selecting input video streams, a stream analyser (102) for acquiring encoded information, a decoding section (103) for executing decoding processes, an output data memory section (104) for recording decoded frame data, an output switching section (105) for selecting frame data to be output, a display output section (106) for outputting a display image on a display screen, a scheduling section (107) for assigning decoding processes, and an output data image controller (108) for constructing an output data display.

EFFECT: efficient performance of parallel decoding processes on video streams with limited resources and elimination of processing delay which would occur due to temporary concentration of time-varying processing volume when a plurality of video streams are subjected to reproducing operations in parallel.

9 cl, 14 dwg

FIELD: information technology.

SUBSTANCE: video coder has an encoder (100) for encoding a block in an image by choosing between time prediction and cross-view prediction in order to facilitate prediction for the block. The image is one of a set of images corresponding to multi-view video content and having different viewpoints with respect to the same or similar scene. The image is one of different viewpoints. High-level syntax is used to indicate application of cross-view prediction for the block.

EFFECT: high accuracy of video coding.

13 cl, 6 dwg, 4 tbl

FIELD: information technology.

SUBSTANCE: disclosed is a method of encoding information on multiple image types into an image signal (200), involving: adding to the image signal (200) a first image (220) of pixel values representing one or more objects (110, 112) captured by a first camera (101); adding to the image signal (200) a map (222) containing corresponding values for corresponding sets of pixels of the first image (220), said corresponding values representing a three-dimensional position in space of the region of one or more objects (110, 112) represented by the set of pixels; and adding to the image signal (200) a partial representation (223) of a second image (224) of pixel values representing one or more objects (110, 112) captured by a second camera (102), wherein the partial representation (223) contains at least information on the majority of pixels representing the regions of the one or more objects (110, 112) that are not visible to the first camera (101).

EFFECT: high accuracy of results when converting to different formats, such as a set of types with intermediate types which, in addition, do not contain a large amount of data.

15 cl, 4 dwg
