Multi-level structure of coded bitstream

FIELD: radio engineering, communication.

SUBSTANCE: a video encoder separates the header information of the video blocks of a layer (or other coded unit of data) from the residual information of the video blocks of the layer, and performs run-length coding on the header information of the video blocks to better exploit the correlation of header information among the blocks of the layer. After coding the header information of the blocks of the layer, the video encoder codes the residual information for each of the blocks of the layer, and sends the coded header information as a first sequence of bits and the coded residual information as a second sequence of bits.

EFFECT: provision of a layered structure of a coded bitstream that exploits the correlation in header information among the video blocks of a coded unit of data.

78 cl, 12 dwg

 

The present application claims priority to U.S. provisional patent application No. 60/979734, filed October 12, 2007, the contents of which are fully incorporated herein by reference.

The technical field to which the invention relates

The present disclosure relates to digital video coding and, more particularly, to coding the header information of a sequence of blocks.

Background art

The functionality of digital video can be used in a broad range of devices, including digital televisions, digital direct broadcast systems, wireless communication systems, wireless radio devices, personal digital assistants (PDA), laptop or desktop computers, digital cameras, digital recording devices, video gaming devices, video game consoles, cellular and satellite telephones, and other similar devices. To transmit and receive video data more efficiently, digital video devices use video compression techniques such as MPEG-2, MPEG-4 or H.264/MPEG-4, Part 10, commonly called Advanced Video Coding (AVC). Video compression techniques perform spatial and temporal prediction to reduce or eliminate the redundancy inherent in video sequences.

When video is encoded, the compression of the video data often involves spatial prediction, motion estimation and motion compensation. Intra-frame coding relies on spatial prediction to reduce or eliminate spatial redundancy between video blocks within a given video frame. Inter-frame coding relies on temporal prediction to reduce or eliminate temporal redundancy between video blocks of consecutive video frames of a video sequence. For inter-frame coding, the video encoder performs motion estimation to track the movement of matching video blocks between two or more adjacent frames. Motion estimation generates motion vectors, which indicate the displacement of video blocks relative to corresponding prediction video blocks in one or more reference frames. Motion compensation uses the motion vectors to locate and form the prediction video blocks from the reference frame. After motion compensation, a residual block is formed by subtracting the prediction video block from the original video block to be encoded. Thus, the residual information quantifies the differences between the prediction video block and the coded video block, so that, given the identification of the prediction video block and the residual information, the coded video block can be reconstructed at the decoder.

The video encoder may apply transform, quantization and entropy coding processes to further reduce the bit rate associated with transmitting the residual block data. Entropy coding typically involves the use of arithmetic codes or variable length codes (VLC) in order to further compress the residual coefficients produced by the transform and quantization operations. Examples include context adaptive binary arithmetic coding (CABAC) and context adaptive variable length coding (CAVLC), which may be used as alternative entropy coding modes in some encoders. The video decoder performs inverse operations to reconstruct the encoded video data, using the motion information and the residual information for each of the blocks.

Disclosure of invention

The present disclosure describes techniques for forming a layered structure of an encoded bitstream that exploits the correlation in header information among the video blocks of a coded unit of video data. A video encoder configured to operate in accordance with the techniques of the present disclosure separates the header information of the video blocks of a layer (or other coded unit of data) from the residual information of the video blocks of the layer. The header information of each block may include a set of header syntax elements, such as a block type syntax element, a prediction mode syntax element, a partition size syntax element, a motion vector syntax element, a coded block pattern syntax element, or another type of syntax element.

The video encoder may arrange the header syntax elements of the blocks into groups based on the type of syntax element. For example, the video encoder may group together the block type syntax elements of each of the blocks, group together the prediction mode syntax elements of each of the blocks, and so on. The video encoder performs run-length coding on the groups of syntax elements to better exploit the correlation of header information among the blocks of the layer. For example, when a number of blocks of the layer have the same block type syntax element, the video encoder may encode the block type for the blocks of the layer as a run of N elements, where N represents the number of consecutive blocks in the layer having the same block type.
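
As a rough illustration of the grouping and run-length coding described above, the following Python sketch groups one header field (the block type) across the blocks of a layer and encodes it as runs of the form (value, count). The field name and the block values are hypothetical examples, not values taken from any particular codec.

    # Minimal sketch of grouping a header field across blocks and run-length
    # coding it; the "block_type" values below are hypothetical examples.
    def run_length_encode(values):
        """Collapse consecutive equal values into (value, run_length) pairs."""
        runs = []
        for v in values:
            if runs and runs[-1][0] == v:
                runs[-1][1] += 1
            else:
                runs.append([v, 1])
        return [tuple(r) for r in runs]

    # Header information of the blocks of one layer, one dict per block.
    layer_blocks = [
        {"block_type": 0}, {"block_type": 0}, {"block_type": 0}, {"block_type": 1},
    ]

    # Group the same syntax element of every block together ...
    block_types = [b["block_type"] for b in layer_blocks]
    # ... and code the group as runs instead of one flag or value per block.
    print(run_length_encode(block_types))   # [(0, 3), (1, 1)]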

After encoding each of the groups of header syntax elements, the video encoder may encode the residual information for each of the blocks of the layer. The encoded header information for the blocks of the layer may be viewed as one "level" of the bitstream, and the residual information may be viewed as another "level" of the bitstream. In other words, the video encoder may encode a first sequence of bits that corresponds to the encoded header information of the blocks of the layer, and encode a second sequence of bits that corresponds to the encoded residual information of the blocks of the layer.

In one aspect, a method of encoding video data comprises encoding header information of a plurality of video blocks of a coded unit of video data into a first sequence of bits of an encoded bitstream, and encoding residual information of the plurality of video blocks into a second sequence of bits of the encoded bitstream.

In another aspect, an encoding device comprises a first encoding module that encodes header information of a plurality of video blocks of a coded unit of video data into a first sequence of bits of an encoded bitstream, and a second encoding module that encodes residual information of the plurality of video blocks into a second sequence of bits of the encoded bitstream.

In another aspect, a machine-readable storage medium comprises instructions for causing a processor to encode header information of a plurality of video blocks of a coded unit of video data into a first sequence of bits of an encoded bitstream, and to encode residual information of the plurality of video blocks into a second sequence of bits of the encoded bitstream.

In another aspect, a device for encoding comprises means for encoding header information of a plurality of video blocks of a coded unit of video data into a first sequence of bits of an encoded bitstream, and means for encoding residual information of the plurality of video blocks into a second sequence of bits of the encoded bitstream.

In another aspect, a method of decoding video data comprises decoding a first sequence of bits of an encoded bitstream to obtain header information of a plurality of video blocks of a coded unit of data, decoding a second sequence of bits of the encoded bitstream to obtain residual information of the plurality of video blocks, and associating the residual information of each of the plurality of video blocks with the corresponding header information.

In another aspect, a decoding device comprises at least one decoding module that decodes a first sequence of bits of an encoded bitstream to obtain header information of a plurality of video blocks of a coded unit of data and decodes a second sequence of bits of the encoded bitstream to obtain residual information of the plurality of video blocks, and a header association module that associates the residual information of each of the plurality of video blocks with the corresponding header information.

In another aspect, a machine-readable storage medium comprises instructions for causing at least one processor to decode a first sequence of bits of an encoded bitstream to obtain header information of a plurality of video blocks of a coded unit of data, to decode a second sequence of bits of the encoded bitstream to obtain residual information of the plurality of video blocks, and to associate the residual information of each of the plurality of video blocks with the corresponding header information.

In another aspect, a decoding device comprises means for decoding a first sequence of bits of an encoded bitstream to obtain header information of a plurality of video blocks of a coded unit of data, means for decoding a second sequence of bits of the encoded bitstream to obtain residual information of the plurality of video blocks, and means for associating the residual information of each of the plurality of video blocks with the corresponding header information.

The techniques described in this disclosure may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the software may be executed in one or more processors, such as a microprocessor, an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), or other equivalent integrated or discrete logic circuitry. Software containing instructions for performing the techniques may initially be stored on a computer-readable storage medium and loaded and executed by the processor.

Accordingly, the present disclosure also contemplates a machine-readable storage medium comprising instructions for causing a processor to perform any of the variety of techniques described in this disclosure. In some cases, the machine-readable storage medium may form part of a computer program product that may be sold to manufacturers and/or used in a device. The computer program product may include the machine-readable data carrier and, in some cases, may also include packaging materials.

The details of one or more aspects of the present disclosure are presented hereinafter in the accompanying drawings and description. Other characteristics, objects and advantages of the methods described in the present disclosure will become apparent from the description, drawings and claims.

Brief description of drawings

Fig. 1 is a block diagram illustrating a system for encoding and decoding video data.

Fig. 2 is a block diagram illustrating an example video encoder.

Fig. 3 is a block diagram of an entropy encoding unit configured to form a layered encoded bitstream in accordance with an aspect of the present disclosure.

Fig. 4 is a block diagram illustrating an example of a video decoder.

Fig. 5 is a block diagram illustrating an exemplary entropy decoding unit configured to decode a layered encoded bitstream in accordance with an aspect of the present disclosure.

Fig. 6 is a block diagram illustrating another exemplary entropy decoding unit, configured to decode an interleaved layered encoded bitstream in accordance with an aspect of the present disclosure.

Fig. 7A-7C illustrate an exemplary structure of the encoded bit stream.

Fig. 8 is a flow diagram illustrating an exemplary operation of an entropy encoding unit forming a layered encoded bitstream.

Fig. 9 is a flow diagram illustrating an exemplary operation of an entropy decoding unit decoding a layered encoded bitstream.

Fig. 10 is a flow diagram illustrating an exemplary operation of an entropy decoding unit decoding an interleaved layered encoded bitstream.

The implementation of the invention

The present disclosure describes techniques for coding video data. The video data may be a series of video frames of a video sequence. A video encoder may divide each video frame into a plurality of blocks of pixels or blocks of transform coefficients (referred to as "blocks") in order to encode the video data. The video encoder then encodes each of the blocks of the series of video frames and outputs an encoded bitstream. For example, for each block, the video encoder may encode header information of the block and residual information of the block. The header information of each block may include a number of syntax elements that identify particular characteristics of the block, such as a block type, a prediction mode, a partition size, a motion vector, a coded block pattern, a change of the quantization parameter relative to the previous block (delta QP), a transform size, and others. The header information is used by the decoder to generate a prediction block. The residual information of each block quantifies the differences between the block and one or more prediction blocks, so that, given the header information used to generate the prediction block and the residual information, the encoded video block can be reconstructed at the decoder.

A traditional video encoder encodes video blocks in a "block by block" manner. In other words, a conventional video encoder encodes the header information of a first block followed by the corresponding residual information of the first block, and then encodes the header information of a second block followed by the corresponding residual information of the second block. The traditional video encoder continues to encode video blocks in the "block by block" manner until all of the blocks of the coded unit of data (for example, a layer or a frame) have been coded. Therefore, the conventional video encoder can be viewed as a video encoder that forms a "block by block" structure of the encoded bitstream.

The header information of a number of blocks of a coded unit of data may be spatially correlated. In other words, a number of blocks of a layer (or other coded unit of data) may include the same block header information, that is, one or more identical header syntax elements. For example, a number of blocks of the layer may have the same block type, the same delta QP, and so on. By taking advantage of this correlation, an entropy encoder can achieve more efficient coding. If the entropy encoder uses arithmetic coding, this correlation is usually exploited by creating an arithmetic coding context based on the values of the same syntax elements of neighboring blocks. If the entropy encoder uses variable length coding (VLC), this correlation is typically exploited by predicting the value of the current syntax element from the values of the same syntax element of adjacent blocks. Since a traditional video encoder encodes the video blocks of the layer in the "block by block" manner, the traditional video encoder may not be able to fully exploit the correlation of header information among the blocks of the layer. This is particularly true when the entropy coder uses variable length coding. Using the block type header syntax element as an example, even if a number of consecutive blocks have the same block type, a conventional video encoder in VLC mode may send a 1-bit flag for each block to indicate that the block type is the same as that of the previous block. Therefore, a conventional video encoder using VLC may spend at least one bit per block per syntax element to exploit the correlation of header information among the blocks of the layer.

The present disclosure describes techniques for forming a layered structure of an encoded bitstream. A video encoder configured to form a layered structure of the encoded bitstream may be configured to group the header syntax elements of a number of blocks of a layer (or other coded unit of data) and to code the grouped header syntax elements together. The video encoder may perform run-length coding on the grouped syntax elements to better exploit the correlation of header information among the blocks of the layer, that is, across block boundaries. For example, when a number of blocks of the layer have the same block type, the video encoder may encode the block type for the blocks of the layer as a run of N elements, where N represents the number of consecutive blocks in the layer having the same block type, instead of encoding one bit per block as a conventional video encoder does.

The techniques of the present disclosure may lead to a reduction in the number of bits used to encode block header information for a plurality of blocks, for example compared with traditional VLC methods. In the example above, a video encoder using VLC may spend less than one bit per block to encode the block type, whereas a traditional video encoder using VLC would spend at least one bit per block to encode the same block type information. A video encoder that uses arithmetic coding may also use the layered structure of the encoded bitstream described in this disclosure. Using the layered encoded bitstream for both VLC and arithmetic coding may provide a more universal bitstream structure for VLC coders and arithmetic coders. Additionally, the layered bitstream structure enables unequal error protection for the header level and the residual level. For example, the header level, which carries the more important information, may be given better error protection than the residual level.

Fig. 1 is a block diagram illustrating a system 10 for encoding and decoding video data. As shown in Fig. 1, system 10 includes a source device 12 that transmits encoded video data to a receiving device 14 via a communication channel 16. Communication channel 16 may comprise any wireless or wired communication medium, such as a radio frequency (RF) spectrum or one or more physical transmission lines, or any combination of wired and wireless media. Communication channel 16 may form part of a packet-based data network, such as a local area network, a wide area network, or a global network such as the Internet. Communication channel 16 generally represents any suitable communication medium, or collection of different communication media, for transmitting encoded video data from source device 12 to receiving device 14.

Source device 12 generates coded video data for transmission to receiving device 14. Source device 12 may include a video source 18, a video encoder 20 and a transmitter 22. Video source 18 of source device 12 may include a video capture device, such as a video camera, a video archive containing previously captured video, or video data provided by a video content provider. As a further alternative, video source 18 may generate computer graphics-based data as the source video, or a combination of live video and computer-generated video. In some cases, if video source 18 is a video camera, source device 12 may form a so-called camera phone or video phone. In each case, the captured, pre-captured or computer-generated video may be encoded by video encoder 20 for transmission from source device 12 to receiving device 14 via transmitter 22 and communication channel 16.

Video encoder 20 receives video data from video source 18. The video received from video source 18 may be a series of video frames. Video encoder 20 operates on blocks of pixels (or blocks of transform coefficients) within individual video frames in order to encode the video data. The video blocks may have fixed or varying sizes, and may differ in size according to a specified coding standard. In some cases, each video frame is a coded unit of data, while in other cases each video frame may be divided into slices or layers that form the coded units of data. In other words, each layer may be a coded unit of data that includes only part of the video blocks of the frame. The video frame may be divided into layers in any of a variety of ways. For example, the video frame may be divided into layers based on the spatial placement of the blocks within the frame, with a first layer corresponding to the blocks in the upper third of the frame, a second layer corresponding to the blocks in the middle third of the frame, and a third layer corresponding to the blocks in the lower third of the frame. As another example, the video frame may be divided into two layers in which every other block belongs to the same layer. Such a grouping interleaves the parts of the data array. In another example, a layer may correspond to the blocks within a region of the video frame that is identified as an object within the video frame. Other ways of dividing the video frame into layers may also be used.
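
Two of the layer partitions described in this paragraph can be pictured with a short sketch. It assumes only that the blocks of a frame are held in a list in raster-scan order; the helper names are illustrative, not taken from the disclosure.

    # Hedged sketch of two of the layer partitions mentioned above, assuming the
    # blocks of a frame are listed in raster-scan order.
    def split_into_spatial_thirds(blocks):
        third = len(blocks) // 3
        return [blocks[:third], blocks[third:2 * third], blocks[2 * third:]]

    def split_alternating(blocks):
        # Every other block belongs to the same layer (two interleaved layers).
        return [blocks[0::2], blocks[1::2]]

    frame_blocks = list(range(12))                    # 12 block indices as a toy example
    print(split_into_spatial_thirds(frame_blocks))    # [[0..3], [4..7], [8..11]]
    print(split_alternating(frame_blocks))            # [[0, 2, ...], [1, 3, ...]]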

Each video block, often called a macroblock (MB), may be divided into sub-blocks. As an example, the International Telecommunication Union - Telecommunication Standardization Sector (ITU-T) H.264 standard supports intra prediction in various block sizes, such as 16x16, 8x8 and 4x4 for luma components and 8x8 for chroma components, and inter prediction in various block sizes, such as 16x16, 16x8, 8x16, 8x8, 8x4, 4x8 and 4x4 for luma components and correspondingly scaled sizes for chroma components. Video blocks of smaller sizes can provide better resolution and may be used for those locations of a video frame that include higher levels of detail. In general, macroblocks (MBs) and the various sub-blocks may be considered to be video blocks. Thus, an MB may be considered a video block, and, if partitioned or sub-partitioned, an MB may itself be considered to define sets of video blocks. In addition, a layer may be considered to be a set of video blocks, for example a set of MBs and/or sub-blocks. As mentioned above, each layer may be an independently decodable unit of the video frame. If a video frame is itself the coded unit of data (not a layer), the video frame may likewise be considered a set of video blocks, for example a set of MBs and/or sub-blocks.

Following intra- or inter-based prediction of the video blocks, video encoder 20 may perform a number of other operations on the video blocks. As described in further detail with reference to Fig. 2, these additional operations may include a transform operation (for example, a 4x4 or 8x8 integer transform as used in H.264/AVC, or a discrete cosine transform (DCT)), a quantization operation, and an entropy coding operation (for example, variable length coding (VLC), binary arithmetic coding, or another entropy coding methodology).

Video encoder 20 encodes a plurality of video blocks of a coded unit of data (for example, a layer or a frame) in accordance with the techniques described in this disclosure to form a layered structure of the encoded bitstream. Video encoder 20 separates the header information of the video blocks of the layer generated by video encoder 20 from the residual information (for example, residual coefficients) of the video blocks of the layer generated by video encoder 20. Video encoder 20 may arrange the header syntax elements of the blocks into groups. In some cases, each group includes a sequential arrangement of a particular syntax element of the blocks. For example, a group of block type syntax elements may include the block type syntax element of the first block of the layer, the block type syntax element of the second block of the layer, the block type syntax element of the third block of the layer, and so on, in that order. Video encoder 20 may form similar groups for other header syntax elements, such as the prediction mode, the partition size, the motion vector, the coded block pattern, the delta QP, the transform size, and others.

Video encoder 20 encodes each group of header syntax elements using run-length coding. Run-length coding of the groups of header syntax elements allows video encoder 20 to exploit the correlation of the header syntax elements among the blocks of the layer. For example, if the first three blocks of the layer have the same block type syntax element, video encoder 20 may encode a run of length three to represent the block type of those three blocks, instead of encoding the block type or a 1-bit flag separately in a separate header for each block. As a result, video encoder 20 may encode block header information for a plurality of blocks more efficiently. In some cases, video encoder 20 may use VLC to encode the header syntax elements and still achieve coding efficiency similar to that of binary arithmetic coding.

After encoding each of the groups of header syntax elements, video encoder 20 encodes the residual information for each of the blocks of the layer. The encoded header information for the blocks of the layer may be viewed as one "level" of the bitstream, and the residual information may be viewed as another "level" of the bitstream. In other words, video encoder 20 may encode a first sequence of bits that corresponds to the encoded header information of the blocks of the layer, referred to herein as the "header level", and encode a second sequence of bits that corresponds to the encoded residual information, referred to herein as the "residual level". Thus, as used herein, the terms "header level" and "residual level" refer to different sequences of bits within the layered encoded bitstream.

In some cases, video encoder 20 may encode and transmit the header level of the layer in the layered encoded bitstream structure before encoding and transmitting the corresponding residual level of the layer. In other cases, however, video encoder 20 may encode and transmit the header level of the layer in the layered encoded bitstream structure after encoding and transmitting the corresponding residual level of the layer. In either case, video encoder 20 may additionally encode an identifier that specifies the location at which the layered encoded bitstream structure transitions from the header level to the residual level, or from the residual level to the header level. Source device 12 transmits the encoded video data to the receiving device via transmitter 22.

Receiving device 14 may include a receiver 24, a video decoder 26 and a display device 28. Receiver 24 receives the layered encoded bitstream of video data from source device 12 via channel 16. Video decoder 26 decodes the layered bitstream of video data to obtain the header information for the blocks of the layer and the residual information for the blocks of the layer. Video decoder 26 may identify the header level and the residual level using an identifier found in the layered bitstream that indicates the location of the transition from the header level to the residual level. Video decoder 26 associates the header information (for example, the header syntax elements) with the residual information of each of the blocks of the layer. Video decoder 26 reconstructs the video blocks of the layer by forming a prediction block for each of the blocks using the header information and combining the prediction block with the corresponding residual information of the block. Receiving device 14 may display the reconstructed video blocks to a user via display device 28. Display device 28 may comprise any of a variety of display devices, such as a cathode ray tube (CRT), a liquid crystal display (LCD), a light emitting diode (LED) display, an organic LED display, or another type of display device.

In some cases, source device 12 and receiving device 14 may operate in a substantially symmetrical manner. For example, source device 12 and receiving device 14 may each include video encoding and decoding components. Hence, system 10 may support one-way or two-way video transmission between video devices 12 and 14, for example for video streaming, video broadcasting, or video telephony.

Video encoder 20 and video decoder 26 may operate in accordance with a video compression standard, such as Moving Picture Experts Group (MPEG)-2, MPEG-4, ITU-T H.263, or ITU-T H.264/MPEG-4, Part 10, Advanced Video Coding (AVC). Although not shown in Fig. 1, in some aspects video encoder 20 and video decoder 26 may each be integrated with an audio encoder and an audio decoder, respectively, and may include appropriate MUX-DEMUX units or other hardware and software to handle encoding of both audio and video in a common data stream or in separate data streams. In this way, source device 12 and receiving device 14 may operate on multimedia data. If applicable, the MUX-DEMUX units may conform to the ITU H.223 multiplexer protocol, or to other protocols such as the user datagram protocol (UDP).

The H.264/MPEG-4 AVC standard was formulated by the ITU-T Video Coding Experts Group (VCEG) together with the ISO/IEC MPEG as the product of a collective partnership known as the Joint Video Team (JVT). In some aspects, the techniques described in this disclosure may be applied to devices that generally conform to the H.264 standard. The H.264 standard is described by the ITU-T Study Group in ITU-T Recommendation H.264, Advanced video coding for generic audiovisual services, dated March 2005, which may be referred to herein as the H.264 standard or H.264 specification, or as the H.264/AVC standard or specification.

In some cases, video encoder 20 or video decoder 26 may be configured to support scalable video coding (SVC) for spatial, temporal and/or signal-to-noise ratio (SNR) scalability. Encoder 20 and decoder 26 may support various degrees of scalability by supporting encoding, transmission and decoding of a base layer and one or more scalable enhancement layers. For scalable video coding, a base layer carries video data with a baseline level of quality. One or more enhancement layers carry additional information to support higher spatial, temporal or SNR levels. The base layer may be transmitted in a manner that is more reliable than the transmission of the enhancement layers. The base layer and the enhancement layers are not separate sequences of bits within the same coded unit of data in the way the header level and the residual level are. Instead, the base layer and the enhancement layers are encoded using hierarchical modulation on the physical layer, so that the base layer and the enhancement layers can be transmitted on the same carrier or carriers but with different transmission characteristics, resulting in different packet error rates (PER).

In some aspects, for video broadcasting, the techniques described in this disclosure may be applied to enhanced H.264 video coding for delivering real-time video services in terrestrial mobile multimedia multicast (TM3) systems using the Forward Link Only (FLO) Air Interface Specification, "Forward Link Only (FLO) Air Interface Specification", published in July 2007 as Technical Standard TIA-1099 (the "FLO Specification"). That is, communication channel 16 may comprise a wireless information channel used to broadcast wireless video information according to the FLO Specification, or the like. The FLO Specification includes examples defining bitstream syntax and semantics and decoding processes suitable for the FLO air interface.

Alternatively, video may be broadcast according to other standards such as DVB-H (digital video broadcast - handheld), ISDB-T (integrated services digital broadcast - terrestrial), or DMB (digital multimedia broadcasting). Hence, source device 12 may be a mobile wireless terminal, a video streaming server, or a video broadcast server. However, the techniques described in this disclosure are not limited to any particular type of broadcast, multicast, or point-to-point system. In the case of broadcast, source device 12 may broadcast several channels of video data to multiple receiving devices, each of which may be similar to receiving device 14 of Fig. 1. Thus, although a single receiving device 14 is shown in Fig. 1, for video broadcasting source device 12 would typically broadcast the video content simultaneously to many receiving devices.

In other examples, transmitter 22, communication channel 16 and receiver 24 may be configured for communication according to any wired or wireless communication system, including one or more of an Ethernet, telephone (for example, POTS - plain old telephone service), cable, power-line, or fiber optic system, and/or a wireless system comprising one or more of a code division multiple access (CDMA or CDMA 2000) communication system, a frequency division multiple access (FDMA) system, an orthogonal frequency division multiplexing (OFDM) system, a time division multiple access (TDMA) system such as GSM (global system for mobile communication), GPRS (general packet radio service) or EDGE (enhanced data GSM environment), a TETRA (terrestrial trunked radio) mobile telephone system, a wideband code division multiple access (WCDMA) system, a high data rate 1xEV-DO (first generation evolution data only) or 1xEV-DO Gold Multicast system, an IEEE 802.18 system, a MediaFLO(TM) system, a DVB-H system, or another scheme for data communication between two or more devices.

Video encoder 20 and video decoder 26 each may be implemented as one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware, or any combinations thereof. Each of video encoder 20 and video decoder 26 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (CODEC) in a respective mobile device, subscriber device, broadcast device, server, or the like. Additionally, source device 12 and receiving device 14 each may include appropriate modulation, demodulation, frequency conversion, filtering and amplification components for transmission and reception of encoded video, as applicable, including radio frequency (RF) wireless components and antennas sufficient to support wireless communication. For ease of illustration, however, such components are summarized as being transmitter 22 of source device 12 and receiver 24 of receiving device 14 in Fig. 1.

Fig. 2 is a block diagram illustrating an example of video encoder 20, which may correspond to the video encoder of source device 12 of Fig. 1. Video encoder 20 may perform intra-frame and inter-frame coding of blocks within video frames. Intra-frame coding relies on spatial prediction to reduce or eliminate spatial redundancy in the video data within a given video frame. For intra-frame coding, video encoder 20 performs spatial prediction using already encoded blocks within the same frame. Inter-frame coding relies on temporal prediction to reduce or eliminate temporal redundancy in the video data of adjacent frames of a video sequence. For inter-frame coding, video encoder 20 performs motion estimation to track the movement of matching video blocks between two or more adjacent frames.

As shown in Fig. 2, video encoder 20 receives a current video block within a video frame to be encoded. In the example of Fig. 2, video encoder 20 includes a motion estimation module 32, a reference frame store 34, a motion compensation module 36, a block transform module 38, a quantization module 40, an inverse quantization module 42, an inverse transform module 44, and an entropy encoding module 46. An in-loop deblocking filter (not shown) may be applied to the reconstructed video blocks to remove blocking artifacts (image defects). Video encoder 20 also includes adders 48A and 48B ("adders 48"). Fig. 2 illustrates the temporal prediction components of video encoder 20 for inter-frame coding of video blocks. Although not shown in Fig. 2 for ease of illustration, video encoder 20 may also include components for intra-frame coding of some video blocks. The coding of video blocks according to the present disclosure may be applied to any video blocks, whether blocks subjected to intra-frame coding or blocks subjected to inter-frame coding.

To perform temporal prediction, motion estimation module 32 compares the current video block with blocks in one or more adjacent video frames in order to generate one or more motion vectors. The current video block refers to the video block currently being coded, and may comprise the input to video encoder 20. The adjacent frame or frames (which contain the video blocks against which the current video block is compared) may be retrieved from reference frame store 34. Reference frame store 34 may comprise any type of memory or data storage device for storing one or more previously encoded frames or blocks within previously encoded frames. Motion estimation module 32 identifies a block in an adjacent frame that provides the best prediction for the current video block, usually according to a certain rate-distortion criterion. Motion estimation may be performed for blocks of varying sizes, for example 16x16, 16x8, 8x16, 8x8 or smaller block sizes.

Motion estimation module 32 produces a motion vector (MV) (or multiple MVs in the case of bidirectional prediction) that indicates the magnitude and trajectory of the displacement between the current video block and the identified prediction block used to code the current video block. The motion vector may have half- or quarter-pixel precision, or even finer precision, allowing video encoder 20 to track motion with higher precision than integer pixel locations and obtain a better prediction block. Using the resulting motion vector, motion compensation module 36 forms a prediction video block by motion compensation. In the case of integer pixel precision, motion compensation module 36 simply selects the block at the location identified by the motion vector as the prediction block. In the case of fractional pixel precision, motion compensation module 36 may perform interpolation to form the prediction block.

Video encoder 20 forms the residual information (labeled "RESID INFO" in Fig. 2) by subtracting, in adder 48A, the prediction video block produced by motion compensation module 36 in the case of inter-frame coding from the current video block. As described above, the residual information quantifies the differences between the prediction video block and the current video block being coded. Block transform module 38 applies a transform, such as a DCT or a 4x4 or 8x8 integer transform, to the residual information to produce residual transform coefficients. Quantization module 40 quantizes the residual transform coefficients to further reduce the bit rate.
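
The data flow in this paragraph can be sketched as follows. The "transform" is omitted and the quantizer is a simple uniform rounding step, purely to show the flow of residual data; neither stands for the actual H.264/AVC transform or quantizer, and the sample values are hypothetical.

    # Data-flow sketch only: a uniform scalar quantizer stands in for the real
    # block transform and quantization described above.
    def form_residual(current_block, prediction_block):
        return [c - p for c, p in zip(current_block, prediction_block)]

    def quantize(coeffs, qstep=4):
        return [round(c / qstep) for c in coeffs]

    current = [10, 12, 9, 11]
    prediction = [8, 12, 10, 10]
    resid = form_residual(current, prediction)     # [2, 0, -1, 1]
    quantized = quantize(resid)                    # coarse residual coefficients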

Following quantization, inverse quantization module 42 and inverse transform module 44 apply inverse quantization and inverse transform, respectively, to reconstruct the residual information (labeled "RECON RESID" in Fig. 2). Adder 48B adds the reconstructed residual information to the prediction block produced by motion compensation module 36 to form a reconstructed video block for storage in reference frame store 34. The reconstructed video block may be used by motion estimation module 32 and motion compensation module 36 to encode a block in a subsequent video frame.

Entropy encoding module 46 receives the residual information in the form of quantized residual coefficients for the current video block from quantization module 40. In addition, entropy encoding module 46 receives block header information for the current video block. The header information may include, for example, a number of header syntax elements that identify particular characteristics of the current video block. One such header syntax element, for a block subjected to inter-frame coding, may be one or more motion vectors for the current video block, received from motion estimation module 32. Other header syntax elements of the current video block may include, for example, the block type (inter- or intra-coded), the prediction mode (a prediction direction for intra-coded blocks, or forward/bidirectional prediction for inter-coded blocks), the partition size (16x16, 8x8, and so on), the coded block pattern (CBP for luma and/or chroma), the delta QP, the transform size, and others. Other header syntax elements may be received from other components (not shown in Fig. 2) within video encoder 20.

In traditional video encoders, the header syntax elements of each video block and the corresponding residual information of each video block are encoded in a "block by block" manner. In other words, a conventional video encoder encodes the header information of a first block followed by the corresponding residual information of the first block, then encodes the header information of a second block followed by the corresponding residual information of the second block, and so on until all of the video blocks of the layer have been encoded and transmitted. Therefore, a conventional video encoder may be viewed as a video encoder that forms a "block by block" structure of the encoded bitstream, as described in more detail with reference to Fig. 6A.

However, the header syntax elements of the blocks of a layer may be spatially correlated. In other words, at least some of the blocks of the layer may have the same header information, that is, one or more identical header syntax elements. In accordance with the techniques of the present disclosure, entropy encoding module 46 encodes the header information of two or more blocks of the layer together, as will be described in detail below.

In particular, entropy encoding module 46 separates the header information of the video blocks of the layer from the residual information of the video blocks of the layer. Entropy encoding module 46 arranges the header information of the blocks into groups based on the type of header syntax element. For example, entropy encoding module 46 may group together the block type syntax elements of each of the blocks into a first group, group together the prediction mode syntax elements of each of the blocks into a second group, and so on for each type of syntax element. Thus, each group of syntax elements may include the syntax elements of consecutive blocks. In one case, the groups of syntax elements may be arranged sequentially, so that the first group is sent before the second group, the second group is sent before the third group, and so on. In another case, the groups of syntax elements may be arranged in an interleaved manner. Both arrangements of the header syntax elements are described in detail below.

After separating and arranging the header syntax elements, entropy encoding module 46 encodes the groups of header syntax elements using run-length coding to generate the header level. Thus, the header level contains the header information of more than one block. Encoding the header information of more than one block allows entropy encoding module 46 to reduce redundancy and better exploit the correlation of header information across multiple blocks of the layer, especially when VLC is used. Entropy encoding module 46 additionally encodes the residual information of the blocks of the layer separately from the header information. Thus, entropy encoding module 46 does not sequentially encode the header information of each block followed by the residual information of the corresponding block. Instead, entropy encoding module 46 encodes the blocks in a layered bitstream structure that includes a first sequence of bits, i.e. the header level, corresponding to the encoded header information of the plurality of video blocks of the layer, and a second sequence of bits, i.e. the residual level, corresponding to the encoded residual information of the plurality of video blocks of the layer.
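
To make the difference between the two orderings concrete, the following sketch arranges already-coded per-block items either block by block or as a header level followed by a residual level. The items 'h1', 'r1', and so on are placeholders for coded header and residual information, not real coded bits.

    # Placeholder items only: 'h1', 'r1', ... stand for the coded header and
    # residual information of blocks 1..3 of a layer.
    headers = ["h1", "h2", "h3"]
    residuals = ["r1", "r2", "r3"]

    # Traditional "block by block" ordering of the coded unit.
    block_by_block = [x for pair in zip(headers, residuals) for x in pair]
    # ['h1', 'r1', 'h2', 'r2', 'h3', 'r3']

    # Layered ordering: header level (first bit sequence) followed by the
    # residual level (second bit sequence).
    layered = headers + residuals
    # ['h1', 'h2', 'h3', 'r1', 'r2', 'r3']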

Entropy encoding module 46 may additionally encode an indicator that identifies the location in the layered bitstream at which the transition from the header level to the residual level occurs. Entropy encoding module 46 may encode any of a number of different types of indicators to identify the location of the level boundary in the encoded bitstream. For example, entropy encoding module 46 may encode a unique bit pattern of a certain length to identify the location of the boundary. As another example, entropy encoding module 46 may encode a syntax element in the header level that indicates the length of the header level, for example in bits.
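
One of the indicator options mentioned above, a syntax element giving the length of the header level, can be sketched as a simple length prefix. The 16-bit field width and byte alignment are assumptions made only for illustration; they are not values taken from the disclosure.

    # Hedged sketch: prefix the coded unit with the header-level length so a
    # decoder can locate the header/residual boundary.  The 16-bit length field
    # and byte alignment are illustrative assumptions.
    import struct

    def assemble_coded_unit(header_level: bytes, residual_level: bytes) -> bytes:
        length_indicator = struct.pack(">H", len(header_level))  # length in bytes
        return length_indicator + header_level + residual_level

    unit = assemble_coded_unit(b"\x12\x34\x56", b"\xab\xcd")
    # 2-byte length (0x0003), then the header level, then the residual level.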

Entropy encoding module 46 waits until the header information and the residual information for the blocks of the layer have been generated before performing entropy encoding of the layer. Therefore, video encoder 20 may experience a slight delay while the header information and the residual information for the blocks of the layer are being generated. This may be unsuitable for some applications, such as applications that require real-time encoding and/or when video encoder 20 has limited memory resources. For other applications, however, such as applications that do not require real-time encoding (for example, video streaming and video broadcasting), or when video encoder 20 has sufficient memory resources, the layered arrangement of the bitstream provides the advantages of a consistent bitstream structure and high coding efficiency.

Coding the blocks of the layer in a layered bitstream structure allows entropy encoding module 46 to improve coding efficiency by better exploiting the correlation of the header information of the blocks of the layer when it is configured to use VLC. Entropy encoding module 46 may also use the layered structure of the encoded bitstream described in the present disclosure when it is configured to use arithmetic coding. Using the layered encoded bitstream for both VLC and arithmetic coding may provide a more universal bitstream structure for VLC coders and arithmetic coders. Additionally, the layered bitstream structure enables unequal error protection for the header level and the residual level. For example, the header level, which carries the more important information, may be given better error protection than the residual level.

In some cases, entropy encoding module 46 may use the layered bitstream structure described in the present disclosure in combination with a "block by block" bitstream structure. In other words, entropy encoding module 46 may encode some coded units of data in the layered bitstream structure and encode other coded units of data in the "block by block" bitstream structure. For example, in a scalable video bitstream that includes one base layer bitstream and one or more enhancement layer bitstreams, the base layer bitstream may be encoded in the "block by block" bitstream structure, while the enhancement layers may be encoded in the layered bitstream structure.

This arrangement provides the benefit of backward compatibility for the base layer bitstream (for example, an existing H.264/AVC decoder may decode the base layer) and higher coding efficiency for the enhancement layers. In this case, entropy encoding module 46 may include a flag in the header of the coded unit of data (for example, in the layer header) or in a picture- or sequence-level header to indicate the type of bitstream structure, that is, whether the bitstream is arranged in the layered structure or in the "block by block" structure. Thus, entropy encoding module 46 may use both the layered encoded bitstream structure and the "block by block" encoded bitstream structure, for example by dynamically switching between the two structures.

Although the layered coding techniques described in this disclosure are described with reference to coding the blocks of a layer of a given frame, the techniques may be used to code other coded units of data in a layered bitstream structure. For example, video encoder 20 may encode coded units of data that are larger than a layer, for example more than one layer, a frame, or a sequence of frames, in a layered bitstream structure. Additionally, the techniques of the present disclosure may be used to code the coded units of data using either VLC or arithmetic coding.

Fig. 3 is a block diagram of entropy encoding module 46 configured to form a layered bitstream structure in accordance with an aspect of the present disclosure. Entropy encoding module 46 may reside within video encoder 20 of Figs. 1 and 2. Entropy encoding module 46 receives block data for a plurality of blocks and forms a layered bitstream for transmission to another device for decoding. In the example of Fig. 3, entropy encoding module 46 includes a block data separation module 50, a header information memory 52, a header grouping module 54, a residual information memory 56, a run-length coding (RLC) module 58 and a variable length coding (VLC) module 59.

In operation, block data separation module 50 receives block data for the video blocks of a layer. Block data separation module 50 may receive the block data for the video blocks of the layer as block data generated by other components of video encoder 20, for example quantization module 40, motion estimation module 32 and others. The block data received for each block may include residual information (for example, in the form of quantized residual transform coefficients) and header information (for example, in the form of one or more header syntax elements, such as one or more motion vectors, a block type, a prediction mode, a partition size, a coded block pattern, a delta QP, a transform size, and others).

Block data separation module 50 separates the residual information of each block from the header information of the block. For example, block data separation module 50 may store the residual information of each block in residual information memory 56, and store the header information of each block in header information memory 52. Block data separation module 50 continues to receive block data for the video blocks of the layer, separate the header information from the residual information, and store the separated information in the respective memories 52 and 56.

After the block data for each of the blocks of the layer has been received and separated, header grouping module 54 groups the header information of the blocks of the layer in order to exploit the correlation of header information among the blocks of the layer using run-length coding. Header grouping module 54 may group the same header syntax element of each of the blocks into one group. As an example, suppose the layer consists of five blocks whose header information includes a block type syntax element and a delta QP syntax element; the first block has a block type equal to zero and a delta QP equal to zero, the second block has a block type equal to zero and a delta QP equal to zero, the third block has a block type equal to one and a delta QP equal to zero, the fourth block has a block type equal to one and a delta QP equal to zero, and the fifth block has a block type equal to zero and a delta QP equal to one. In this example, header grouping module 54 groups the header syntax elements of these blocks into the following two groups: one group of block type syntax elements and one group of delta QP syntax elements.

RLC module 58 run-length encodes each group of header syntax elements to reduce redundancy and exploit the correlation of the header syntax elements among the blocks of the layer. In one case, RLC module 58 may encode each group of header syntax elements sequentially. Thus, RLC module 58 sequentially encodes the runs of the first header syntax element of the blocks before the runs of the second syntax element of the blocks, and so on until RLC module 58 has encoded the runs for the last syntax element of the blocks. In the example above, RLC module 58 sequentially encodes the runs of the block type syntax element of the five blocks before the runs of the delta QP syntax element of the five blocks. In particular, RLC module 58 sequentially encodes the first run {0, 2} of the block type, then the second run {1, 2} of the block type, then the third run {0, 1} of the block type, then the first run {0, 4} of the delta QP, then the second run {1, 1} of the delta QP.
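
The five-block example above can be checked with a short run-length helper; the numbers below simply reproduce the runs listed in the text, with the sequential arrangement placing all block type runs before all delta QP runs.

    # The five-block example from the text: block types 0,0,1,1,0 and
    # delta QP values 0,0,0,0,1.
    block_type = [0, 0, 1, 1, 0]
    delta_qp   = [0, 0, 0, 0, 1]

    def runs(values):
        out = []
        for v in values:
            if out and out[-1][0] == v:
                out[-1][1] += 1
            else:
                out.append([v, 1])
        return [tuple(r) for r in out]

    # Sequential arrangement: all block type runs, then all delta QP runs.
    header_level_runs = runs(block_type) + runs(delta_qp)
    # [(0, 2), (1, 2), (0, 1), (0, 4), (1, 1)]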

Alternatively, RLC module 58 encodes the header information of the blocks of the layer such that the runs of the header syntax elements are interleaved with one another. For example, RLC module 58 may encode at least one run for each syntax element before encoding further runs of any of the syntax elements. In this case, with reference to the example above of a layer consisting of five blocks with block type and delta QP syntax elements, RLC module 58 encodes the first run {0, 2} of the block type, then the first run {0, 4} of the delta QP, then the second run {1, 2} of the block type, then the third run {0, 1} of the block type, then the second run {1, 1} of the delta QP. In this way, RLC module 58 interleaves the coded runs of the syntax elements at the locations where the syntax elements are needed for decoding the current block, thereby reducing the complexity of associating the header syntax elements with the residual information at the decoder. For example, RLC module 58 encodes the second run of the block type before the second run of the delta QP because the first run of the block type is shorter than the first run of the delta QP. If the first run of the delta QP were shorter than the first run of the block type, the second run of the delta QP would be encoded before the second run of the block type. Thus, RLC module 58 encodes an additional run of a syntax element, if there are additional runs to encode, when the previous run of the same syntax element has been completely exhausted (i.e. completed). In some cases, entropy encoding module 46 may encode a flag that indicates whether the header level is arranged sequentially or in an interleaved manner.
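
The interleaving rule described above, emitting the next run of a syntax element only once its previous run has been exhausted, can be sketched as follows; for the same five blocks it reproduces the order given in the text. The function and element names are illustrative.

    # Hedged sketch of the interleaved arrangement: walk the blocks in order and,
    # whenever the current run of a syntax element no longer covers the current
    # block, emit that element's next run at this point in the stream.
    def interleave_runs(groups):
        """groups: dict mapping element name -> list of (value, run_length)."""
        order = []
        remaining = {name: list(rs) for name, rs in groups.items()}
        covered_until = {name: 0 for name in groups}     # last block index covered
        num_blocks = sum(n for _, n in next(iter(groups.values())))
        for block in range(1, num_blocks + 1):
            for name in groups:                          # element order: as declared
                if covered_until[name] < block:
                    value, length = remaining[name].pop(0)
                    order.append((name, value, length))
                    covered_until[name] += length
        return order

    groups = {
        "block_type": [(0, 2), (1, 2), (0, 1)],
        "delta_qp":   [(0, 4), (1, 1)],
    }
    print(interleave_runs(groups))
    # [('block_type', 0, 2), ('delta_qp', 0, 4), ('block_type', 1, 2),
    #  ('block_type', 0, 1), ('delta_qp', 1, 1)]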

After forming the runs of the syntax elements of the blocks of the layer, VLC module 59 encodes the runs of syntax elements to form the header level of the bitstream. VLC module 59 additionally encodes the residual information of each of the blocks separately from the header information to form the residual level of the bitstream. Thus, VLC module 59 encodes the layered bitstream to include a first sequence of bits corresponding to the header information of the plurality of video blocks and a second sequence of bits corresponding to the residual information of the plurality of video blocks. VLC module 59 may encode the runs of header syntax elements and the residual information using one or more variable length coding tables.
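
The particular variable length coding tables are not given here, so the sketch below uses unsigned Exp-Golomb codes purely as an illustrative stand-in for whatever VLC tables an implementation might use to turn the (value, run length) pairs into bits.

    # Unsigned Exp-Golomb coding used only as an illustrative stand-in for the
    # (unspecified) variable length coding tables mentioned above.
    def exp_golomb(n: int) -> str:
        code = bin(n + 1)[2:]                  # binary representation of n+1
        return "0" * (len(code) - 1) + code    # leading zeros + binary value

    def encode_runs(runs):
        bits = ""
        for value, length in runs:
            bits += exp_golomb(value) + exp_golomb(length - 1)   # run length >= 1
        return bits

    header_level_bits = encode_runs([(0, 2), (1, 2), (0, 1)])
    # '1' + '010' + '010' + '010' + '1' + '1'  ->  "101001001011"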

VLC module 59 may additionally generate an indicator that identifies the location of the transition from the header level to the residual level for a particular layer. In one case, VLC module 59 may encode a unique bit pattern that indicates the end of the header level. Alternatively, VLC module 59 may encode a syntax element at the beginning of the header level that indicates the length, for example in bits, of the header level. In addition, in cases where entropy encoding module 46 may form both the layered bitstream structure and the "block by block" bitstream structure, VLC module 59 may set a flag in the header of a layer to indicate the type of bitstream structure, that is, whether the bitstream is arranged in the layered structure or in the "block by block" structure.

Although entropy encoding module 46 shown in Fig. 3 is described as a module that performs VLC, the described techniques may also be used with arithmetic coding. For example, entropy encoding module 46 may include an arithmetic coding module instead of VLC module 59. The runs of syntax elements may be arithmetically coded to form the header level of the bit stream, and the residual information may be separately arithmetically coded to form the residual level of the bit stream. Alternatively, entropy encoding module 46 may use a combination of VLC and arithmetic coding to encode the header information and the residual information. For example, the runs of header syntax elements may be encoded using VLC, and the residual information may be encoded using arithmetic coding, or vice versa.

Fig. 4 is a block diagram illustrating an example video decoder 26, which may correspond to the video decoder shown in Fig. 1. Video decoder 26 can perform intraframe and interframe decoding of blocks within frames. In the example shown in Fig. 4, video decoder 26 includes an entropy decoding module 60, a motion compensation module 62, an inverse quantization module 64, an inverse transform module 66 and a reference frame store 68. Video decoder 26 also includes an adder 69, which combines the outputs of inverse transform module 66 and motion compensation module 62. Fig. 4 illustrates the temporal prediction components of video decoder 26 for interframe decoding of video blocks. Although not shown in Fig. 4, video decoder 26 also includes spatial prediction components, for example a spatial prediction module, for intraframe decoding of some video blocks.

Entropy decoding module 60 receives the encoded video bit stream and applies variable-length decoding techniques, for example, using one or more variable-length coding tables, to decode the bit stream. As described above, the encoded video bit stream may be arranged in a layered bit stream structure to make better use of the correlation of header information between the blocks of a layer. For example, the received bit stream may include a first sequence of bits, i.e., the header level, which corresponds to the header information of the plurality of blocks, and a second sequence of bits, i.e., the residual level, which corresponds to the residual information of the plurality of blocks. Entropy decoding module 60 performs decoding in the reverse order with respect to entropy encoding module 46 shown in Fig. 2 in order to extract the residual information and the header information of the blocks of the layer.

To determine which sequence of bits corresponds to the header information and which sequence of bits corresponds to the residual information, entropy decoding module 60 detects an indicator in the bit stream that identifies the location where the transition between the header level and the residual level occurs. For example, entropy decoding module 60 may detect a unique bit pattern that indicates that the encoded bit stream transitions from the header level to the residual level. As another example, entropy decoding module 60 may detect a syntax element in the header level that specifies the length, e.g., in bits, of the header level. However, entropy decoding module 60 may detect a number of other indicators to identify the transition from the header level to the residual level.

Entropy decoding module 60 decodes the header level and the residual level and stores the decoded header syntax elements and the residual information. Entropy decoding module 60 associates the residual information of each block with the corresponding header syntax elements of that block. Entropy decoding module 60 may rearrange the decoded header syntax elements to group together the syntax elements that belong to the same block. In this way, entropy decoding module 60 rearranges the block data of the layer to obtain a block-by-block order. For example, in the case of a sequential layered bit stream, entropy decoding module 60 may decode and store the residual information, collect the header information and the residual information of each block, and provide that information to the other decoding components block by block. However, in the case of an interleaved layered bit stream, entropy decoding module 60 may begin reconstructing some of the blocks of the layer before the header level and the residual level are completely decoded, as described in detail below.
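A minimal sketch of this regrouping step is shown below: the decoded runs of each header syntax element are expanded to per-block values and paired with per-block residual data, assuming runs are represented as (value, length) tuples. Function and field names are illustrative and not taken from the disclosure.

```python
# Sketch of regrouping the decoded header level into per-block records and
# associating each record with the block's residual information.
def expand_runs(runs):
    """(value, length) runs -> flat list of per-block values."""
    return [value for value, length in runs for _ in range(length)]

def to_block_records(runs_by_element, residuals):
    per_block = {name: expand_runs(runs) for name, runs in runs_by_element.items()}
    blocks = []
    for i, residual in enumerate(residuals):
        record = {name: values[i] for name, values in per_block.items()}
        record["residual"] = residual
        blocks.append(record)
    return blocks

runs_by_element = {"block_type": [(0, 2), (1, 2), (0, 1)],
                   "delta_qp":   [(0, 4), (1, 1)]}
residuals = ["res1", "res2", "res3", "res4", "res5"]   # placeholder residual data
for block in to_block_records(runs_by_element, residuals):
    print(block)
```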

Video decoder 26 reconstructs the frames block by block using the header syntax elements and the residual information. The header syntax elements may be used by video decoder 26 to configure one or more of its components. For example, entropy decoding module 60 may provide the motion vector and the partition size to motion compensation module 62, the QP values for use in inverse quantization to inverse quantization module 64, and the like. The components of video decoder 26 form the prediction block and the residual block and combine the residual block with the prediction block to reconstruct the video block.

For each block that is interframe coded, motion compensation module 62 receives one or more motion vectors and partition sizes from entropy decoding module 60 and one or more reconstructed reference frames from reference frame store 68, and generates the prediction block, that is, the motion-compensated block. Inverse quantization module 64 inverse quantizes, that is, dequantizes, the quantized residual coefficients in accordance with the QP syntax element. Inverse transform module 66 applies an inverse transform, such as an inverse DCT or an inverse 4×4 or 8×8 integer transform, to the dequantized residual coefficients to form the residual block. Adder 69 adds the prediction block generated by motion compensation module 62 to the residual block provided by inverse transform module 66 to generate a decoded block.
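The reconstruction path can be summarized by the following simplified sketch, which dequantizes coefficients with a coarse QP-dependent step, leaves the inverse transform as a placeholder, and clips the sum of prediction and residual to the 8-bit range. The QP-to-step mapping is an approximation for illustration only, not the exact H.264/AVC dequantization tables, and the function names are not from the disclosure.

```python
# Simplified reconstruction sketch for one block: dequantize, inverse-transform
# (abstracted here), add the motion-compensated prediction, clip to 8-bit range.
def dequantize(coeffs, qp):
    step = 2 ** (qp / 6.0)                 # coarse approximation: step roughly doubles every 6 QP
    return [c * step for c in coeffs]

def inverse_transform(coeffs):
    # Placeholder for the inverse DCT / inverse 4x4 or 8x8 integer transform.
    return coeffs

def reconstruct_block(prediction, quantized_residual, qp):
    residual = inverse_transform(dequantize(quantized_residual, qp))
    return [max(0, min(255, round(p + r))) for p, r in zip(prediction, residual)]

print(reconstruct_block(prediction=[100, 120, 130, 90],
                        quantized_residual=[2, -1, 0, 3],
                        qp=12))
```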

Block-based video coding can sometimes lead to visually perceptible blockiness at the block boundaries of coded video frames. In such cases, deblocking filtering can smooth the block edges to reduce or eliminate the visually perceptible blockiness. As such, a deblocking filter (not shown) may also be applied to filter the decoded blocks in order to remove blocking artifacts. After any such optional deblocking filtering, the reconstructed blocks are placed in reference frame store 68, which provides the reference frames for motion compensation and also provides the decoded video to drive a display device (such as device 28 shown in Fig. 1).

Fig. 5 is a block diagram illustrating an example of an entropy decoding module 60 configured to decode a layered encoded bit stream in accordance with an aspect of this disclosure. Entropy decoding module 60 may reside within video decoder 26 shown in Figs. 1 and 4. Entropy decoding module 60 receives a layered coded video bit stream of a particular layer and generates block data for the blocks of the layer for use in reconstructing the video data. In the example shown in Fig. 5, entropy decoding module 60 includes a VLC decoding module 72, an RLC decoding module 74, a header information memory 76, a residual information memory 78 and a header association module 79.

VLC decoding module 72 decodes the layered bit stream of the layer. As described above, video encoder 20 encodes the layer using the layered bit stream structure, which includes a first sequence of bits corresponding to the encoded header information of the blocks (i.e., the header level) and a second sequence of bits corresponding to the encoded residual information of the blocks (i.e., the residual level). The location of the transition from the header level to the residual level may be identified in the layered bit stream using one or more indicators.

VLC decoding module 72 applies variable-length decoding techniques, for example, using variable-length coding tables, to decode the header level and the residual level. For example, VLC decoding module 72 may use one set of coding tables to decode the header level and another set of coding tables to decode the residual level. After detecting the transition from the header level to the residual level, VLC decoding module 72 may switch to the other set of coding tables. The decoded header information and the decoded residual information may be stored in header information memory 76 and residual information memory 78, respectively.

After performing variable-length decoding of the header level, RLC decoding module 74 performs run-length decoding of the header level to obtain the header information of the blocks of the layer. The decoded header information includes a number of syntax elements, which are grouped based on the types of the syntax elements. In one example, the syntax elements may be arranged such that the syntax elements of a first type (e.g., block type) for all the blocks of the layer are grouped together, the syntax elements of a second type (e.g., prediction mode) for all the blocks of the layer are grouped together, and so on. In other words, the decoded header information is arranged sequentially, so that all syntax elements of the first type precede all syntax elements of the second type, all syntax elements of the second type precede all syntax elements of the third type, and so on.

Alternatively, the syntax elements of the header information may be interleaved with one another. For example, a first subset of the syntax elements of the first type may be followed by a first subset of the syntax elements of the second type, the first subset of the syntax elements of the second type may be followed by a first subset of the syntax elements of the third type, the first subset of the syntax elements of the third type may be followed by a second subset of the syntax elements of the first type, and so on. In this way, the runs of the header syntax elements are interleaved with one another. Entropy decoding module 60 may identify a flag that indicates whether the header syntax elements are arranged sequentially or with interleaving.

Header association module 79 associates the residual information of each block of the layer with the header syntax elements of that block. For example, header association module 79 may associate the residual information of the first block of the layer with the first value of each of the header syntax elements of the decoded header level, associate the residual information of the second block of the layer with the second value of each of the header syntax elements, and so on, until the residual information of each of the blocks of the layer has been associated with the corresponding header syntax elements.

Once header association module 79 has associated the residual information of a given block with the corresponding header syntax elements, header association module 79 outputs the block data of that block to the other components of the decoder for reconstruction of the block. In some cases, header association module 79 may output portions of the block data to different components of the decoder, as described above with reference to Fig. 4. In this way, header association module 79 rearranges the data of the blocks of the layer into a block-by-block structure for reconstruction of the video data.

Although entropy decoding module 60 shown in Fig. 5 is described as a module that performs VLC decoding, the above techniques may similarly be used with arithmetic coding. For example, entropy decoding module 60 may include an arithmetic decoding module instead of VLC decoding module 72. The header level of the bit stream may be arithmetically decoded to form the runs of header syntax elements, and the residual level of the bit stream may be separately arithmetically decoded to form the residual information. Alternatively, entropy decoding module 60 may use a combination of VLC decoding and arithmetic decoding to decode the header information and the residual information.

Fig. 6 is a block diagram illustrating another example of an entropy decoding module 80 configured to decode an interleaved layered bit stream in accordance with an aspect of this disclosure. Entropy decoding module 80 may reside within video decoder 26 shown in Figs. 1 and 4. Entropy decoding module 80 receives an interleaved layered coded video bit stream of a particular layer and generates block data for the blocks of the layer for use in reconstructing the blocks. In the example shown in Fig. 6, entropy decoding module 80 includes a partition module 82, a header decoding module 84, a residual decoding module 85, a header information memory 86, a residual information memory 88 and a header association module 89.

Partition module 82 receives the layered bit stream of the layer and divides the bit stream into the header level and the residual level. As described above, video encoder 20 may encode the layer with an indicator that identifies the location of the transition from the header level to the residual level, such as a unique bit sequence at the end of the header level or a syntax element that specifies the length of the header level. Partition module 82 identifies the location of the transition based on the indicator and separates the header level from the residual level. Partition module 82 provides the encoded header level to header decoding module 84 and provides the encoded residual level to residual decoding module 85.
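As one possible illustration of this partition step, the sketch below splits a bit string at a unique end-of-header pattern, which is one of the two indicator options named in the text. The marker value is hypothetical, and a real design would have to guarantee (or escape) that the pattern cannot occur inside the coded header level.

```python
# Sketch of splitting the layered bit stream at a unique end-of-header marker.
END_OF_HEADER_MARKER = "0000000000000001"   # hypothetical unique pattern

def split_on_marker(bitstream):
    pos = bitstream.find(END_OF_HEADER_MARKER)
    if pos < 0:
        raise ValueError("end-of-header indicator not found")
    header_level = bitstream[:pos]
    residual_level = bitstream[pos + len(END_OF_HEADER_MARKER):]
    return header_level, residual_level

header, residual = split_on_marker("010011" + END_OF_HEADER_MARKER + "110101")
print(header, residual)   # -> 010011 110101
```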

Entropy decoding module 80 can perform synchronized block-by-block decoding of the blocks of the layer. Header decoding module 84 decodes the header level to obtain the header information, for example, in the form of one or more syntax elements, and stores the header syntax elements in header information memory 86. Header decoding module 84 may use VLC decoding or arithmetic decoding to obtain the runs of header syntax elements and may run-length decode those runs to obtain the header syntax elements. The interleaved header level of the bit stream is composed of runs of different header syntax elements interleaved with one another. The next run of a syntax element appears where the previous run of the same syntax element is completed. In this way, the coded runs of syntax elements are placed at the locations where the syntax elements are needed for decoding the current block, thereby reducing the complexity of associating the header syntax elements with the residual information at the decoder. Thus, header decoding module 84 can decode runs to obtain the header syntax elements of the first block of the layer without decoding all runs of the header syntax elements.

Residual decoding module 85 decodes the residual level to obtain the residual information, for example, in the form of transform coefficients, and stores the residual coefficients in residual information memory 88. Residual decoding module 85 may decode the residual level using VLC or arithmetic coding to obtain the transform coefficients. Residual decoding module 85 and header decoding module 84 may decode the residual level and the header level simultaneously.

Header association module 89 associates the residual information of each block of the layer with the header syntax elements of that block. For example, header association module 89 may form the block data of the first block of the layer as soon as the residual information and the header information of the first block have been decoded. In particular, header association module 89 associates the residual information of the first block of the layer with the values of each of the first runs of header syntax elements of the decoded header level. In this way, the other components of video decoder 26 can begin reconstructing the first block of the layer while the remainder of the header information and the residual information is still being decoded. Header association module 89 continues to associate residual information with the corresponding header syntax elements as the information is decoded. Thus, the interleaved arrangement of the header level allows video decoder 26 to perform synchronized block-by-block decoding of the blocks of the layer with reduced latency and a reduced amount of memory required to store the header information and the residual information.
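To make the synchronized block-by-block behavior concrete, the following minimal Python sketch consumes runs in the interleaved stream order of the five-block example above and yields the header record of each block as soon as every syntax element has a run covering that block, without waiting for the whole header level. The function and variable names are illustrative, not taken from the disclosure.

```python
# Sketch of synchronized block-by-block consumption of an interleaved header level.
def block_headers(interleaved_runs, element_names, num_blocks):
    """interleaved_runs: iterable of (element_name, (value, length)) in stream order."""
    current = {}     # element -> (value, blocks_remaining_in_current_run)
    runs = iter(interleaved_runs)
    for _ in range(num_blocks):
        # Pull more runs until every element has an unexhausted run for this block.
        while any(current.get(n, (None, 0))[1] == 0 for n in element_names):
            name, (value, length) = next(runs)
            current[name] = (value, length)
        yield {n: current[n][0] for n in element_names}
        current = {n: (v, left - 1) for n, (v, left) in current.items()}

interleaved = [("block_type", (0, 2)), ("delta_qp", (0, 4)),
               ("block_type", (1, 2)), ("block_type", (0, 1)), ("delta_qp", (1, 1))]
for i, hdr in enumerate(block_headers(interleaved, ["block_type", "delta_qp"], 5), 1):
    print(i, hdr)
```

Because the interleaved stream places each new run at the block where the previous run of that element ends, the generator never has to read past the runs needed for the current block.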

Figs. 7A-7C illustrate example bit stream structures. Fig. 7A illustrates an example block-by-block bit stream structure 90, and Figs. 7B and 7C illustrate example layered bit stream structures 92A and 92B, respectively. Block-by-block bit stream structure 90 is arranged such that the header information and the corresponding residual information of each block are coded sequentially. In particular, the block-by-block bit stream structure is arranged such that the encoded header information 94A of the first block (indicated in Fig. 7A as "MB 1") is followed by the corresponding encoded residual information 96A of block MB 1, the encoded header information 94B of the second block (indicated in Fig. 7A as "MB 2") is followed by the corresponding encoded residual information 96B of the second block MB 2, and so on up to the last block (indicated in Fig. 7A as "MB n").

As further illustrated in Fig. 7A, header information 94A of block MB 1 includes header syntax elements 98A1-98K1 (collectively referred to as "header syntax elements 98"). Header syntax elements 98 may include the block type (interframe or intraframe), the prediction mode (prediction orientation for intraframe-coded blocks or forward/backward/bidirectional prediction for interframe-coded blocks), the partition size (16×16, 8×8, and so on), the motion vector, the coded block pattern (CBP), the delta QP, the transform size, and others. In other words, each of header syntax elements 98 may correspond to a different syntax element. As an example, syntax element 98A1 may correspond to the block type syntax element, syntax element 98B1 may correspond to the prediction mode syntax element, syntax element 98C1 may correspond to the partition size syntax element, and syntax element 98K1 may correspond to the CBP syntax element. However, header information 94A may contain fewer or more syntax elements 98. The encoded header information 94 of the other blocks of bit stream 90 may also contain header syntax elements. For example, encoded header information 94B of block MB 2 may include syntax elements 98A2-98K2 (not shown in Fig. 7A), and encoded header information 94N of block MB n may include syntax elements 98An-98Kn (not shown in Fig. 7A). Thus, in the block-by-block bit stream structure, the header syntax elements and the corresponding residual information are encoded sequentially for each block.

As described above, blocks MB 1 to MB n may have the same value for one or more of the same header syntax elements. For example, a first portion of the blocks may have the same value of the block type syntax element, and a second portion of the blocks may have the same value of the delta QP syntax element. Because block-by-block bit stream 90 is arranged on a block-by-block basis, i.e., the encoded header information 94 of a block is followed by the corresponding residual information 96 of the same block, block-by-block bit stream 90 may not be able to make full use of the correlation of header information between the blocks. Using the block type header syntax element as an example, even if a number of consecutive blocks have the same block type, if the entropy encoder uses VLC and predicts the current block type from the preceding block type, at least a 1-bit flag is included in the encoded header information 94 of each block to represent the block type syntax element. For example, a 1-bit flag set to 1 indicates that the current block type is the same as the previous block type, and a 1-bit flag set to 0 indicates that the current block type differs from the previous block type, in which case the current block type must also be coded. Thus, at least one bit per block per syntax element 98 is sent to exploit the correlation of header information between the blocks.
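The saving can be illustrated with a rough bit count: under the flag scheme, a run of N blocks with an unchanged block type still costs N one-bit flags, whereas a run-length code spends a single codeword on the whole run (the block type value itself is coded once in either scheme, so it is left out of the comparison). The exp-Golomb codeword length is used below only to make the comparison concrete; for short runs the per-block flag can still be cheaper, and the advantage of run-length coding grows with the run length.

```python
# Rough comparison, for illustration only: N per-block "same as previous" flag bits
# versus one run-length codeword (unsigned exp-Golomb length used as a concrete VLC).
def ue_bits(n):
    """Number of bits in the unsigned exp-Golomb codeword for n >= 0."""
    return 2 * (len(bin(n + 1)) - 3) + 1

def flag_scheme_bits(run_length):
    return run_length                 # one 1-bit flag per block in the run

def run_length_scheme_bits(run_length):
    return ue_bits(run_length)        # a single codeword for the whole run

for n in (4, 16, 64):
    print(f"{n} blocks: {flag_scheme_bits(n)} flag bits vs {run_length_scheme_bits(n)} run bits")
```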

Fig. 7B illustrates a sequential layered bit stream structure 92A, which includes a header level and a residual level in accordance with an aspect of this disclosure. The header level includes the encoded header information of all the blocks, coded together to reduce redundancy and make better use of the correlation of header information between the blocks. As illustrated in Fig. 7B, the header level includes coded syntax element runs 99A-99K (collectively referred to as "SE runs 99"), which are arranged sequentially. Coded header SE run 99A contains run-length encoded header syntax elements of the same type for all of the blocks. In particular, coded header SE run 99A includes one or more coded runs of header syntax element 98A1 of block MB 1, of syntax element 98A2 of block MB 2, and so on up to syntax element 98An of block MB n; coded header SE run 99B includes one or more coded runs of header syntax element 98B1 of block MB 1, of syntax element 98B2 of block MB 2, and so on up to syntax element 98Bn of block MB n; and so forth. Thus, sequential layered bit stream structure 92A includes the SE runs 99 of each type of header syntax element arranged sequentially. The residual level includes the encoded residual data of each block.

Fig. 7C illustrates an interleaved layered bit stream structure 92B, which includes a header level and a residual level. The header level of interleaved layered bit stream structure 92B includes encoded header information in which SE runs 99 of different header syntax elements are interleaved within the header level. This is referred to as an interleaved arrangement of the header level. As illustrated in Fig. 7C, the header level includes SE run 99A1, SE run 99B1, and so on up to SE run 99K1, followed by SE run 99X2, and so on. SE run 99A1 is the coded run for the first run of syntax elements 98A. Syntax element 98A refers to the group that includes syntax elements 98A1, 98A2 ... 98An. In other words, syntax elements 98A1, 98A2 ... 98An are collectively referred to as syntax element 98A. Similarly, SE runs 99B1 to 99K1 are the coded runs for the first runs of syntax elements 98B to 98K, respectively. Syntax elements 98B and 98K refer to the respective groups that include syntax elements 98B1, 98B2 ... 98Bn and 98K1, 98K2 ... 98Kn. SE run 99X2 is the coded run for the second run of the syntax element whose first run is the shortest. For example, if the first run 99B1 of syntax element 98B is the shortest of the first runs, SE run 99X2 is the second run of syntax element 98B. However, if the first run 99K1 of syntax element 98K is the shortest of the first runs, SE run 99X2 is the second run of syntax element 98K. Thus, the header level may include at least one coded SE run 99 for each syntax element before any second coded run of any other syntax element 98. This allows the decoder to begin reconstructing the blocks of the layer before the header level and the residual level are fully decoded, as described in detail below.

Thus, interleaved layered bit stream structure 92B may arrange the SE runs 99 of different header syntax elements interleaved within the header level so that, when a run of one of the syntax elements is exhausted (i.e., ends), the next run of the same syntax element (if any) is encoded. Essentially, interleaved layered bit stream structure 92B is built dynamically based on the values within each run, rather than being a fixed structure. Although interleaved layered bit stream structure 92B is shown in Fig. 7C as including only one second run, it may include second runs for all or any portion of the syntax elements. Additionally, the header syntax elements may include further runs (e.g., third runs, fourth runs, fifth runs, and so on) for all or any portion of the syntax elements.

The additional runs of header syntax elements are encoded, in the interleaved arrangement, at the locations where the previous run of that header syntax element ends. As such, a third run of one header syntax element may appear before the second run of another header syntax element, and so on, depending on the lengths of the runs of the header syntax elements.

Sequential layered bit stream structure 92A and interleaved layered bit stream structure 92B each also contain an indicator 97 that identifies the location where the transition from the header level to the residual level occurs. Although in the example shown in Figs. 7B and 7C indicator 97 is located at the transition, in other cases indicator 97 may be a header syntax element at the beginning of the header level that specifies the length of the header level.

Fig. 8 is a flow diagram illustrating operation of entropy encoding module 46 forming a layered encoded bit stream. Entropy encoding module 46 receives block data for the video blocks of a layer (100). Entropy encoding module 46 receives the block data from other components of video encoder 20, such as quantization module 40 or motion estimation module 32. The received block data may include residual information (e.g., in the form of quantized residual coefficients) and header information (e.g., in the form of one or more header syntax elements, such as one or more motion vectors, a block type, a prediction mode, a partition size, a coded block pattern, a delta QP, a transform size, and others).

Block data separation module 50 separates the header information of the block from the residual information of the block (102). Block data separation module 50 may store the header information and the residual information in one or more memory modules (104). In some cases, the memory modules may be separate memory modules. In other cases, the memory modules may be the same memory module.

Entropy encoding module 46 determines whether the block is the last block of the layer (106). If the block is not the last block of the layer, the entropy encoding module receives the block data of the next block, separates the header information of that next block from the residual information of that next block, and stores the separated block information.

If the block is the last block of the layer, header grouping module 54 groups the header information of the blocks of the layer so that the correlation of header information between the blocks of the layer can be exploited using variable-length coding (108). Header grouping module 54 may group the header syntax elements of each block into groups based on the type of header syntax element. For example, the header grouping module may group the block type syntax elements of the blocks into a group of block type syntax elements. Header grouping module 54 may also form similar groups for the other header syntax elements, such as the prediction mode, the partition size, the motion vector, the CBP, the delta QP, the transform size, and others.

Entropy encoding module 46 encodes the header information of the blocks of the layer into the header level (110). For example, RLC module 58 run-length encodes each of the groups of header syntax elements to reduce redundancy and exploit the correlation of header syntax elements between the blocks of the layer. In one case, RLC module 58 encodes the runs of the first header syntax element of the blocks, followed by the runs of the second header syntax element of the blocks, and so on, until RLC module 58 has encoded the runs of the last header syntax element of the blocks. Alternatively, RLC module 58 encodes the runs of header syntax elements so that runs of different header syntax elements are interleaved with one another. After the runs of the header syntax elements of the blocks of the layer have been formed, VLC module 59 encodes the runs of syntax elements to form the header level of the bit stream.

Entropy encoding module 46 may also encode an indicator that identifies the end of the header level (112). In one case, VLC module 59 encodes a unique bit pattern that indicates the end of the header level. Alternatively, VLC module 59 may encode a syntax element at the beginning of the header level that specifies the length, e.g., in bits, of the header level.

Entropy encoding module 46 also encodes the residual information of each of the blocks to form the residual level of the bit stream (114). Entropy encoding module 46 may encode the residual information using VLC or arithmetic coding. In this way, the entropy encoding module generates a layered encoded bit stream that includes a first sequence of bits corresponding to the header information of the blocks and a second sequence of bits corresponding to the residual information of the blocks. Entropy encoding module 46 transmits the layered encoded bit stream (116).

Fig. 9 is a flow diagram illustrating example operation of entropy decoding module 60 decoding a layered encoded bit stream. Entropy decoding module 60 receives a layered coded video bit stream of a particular layer (120). Entropy decoding module 60 decodes the header level of the bit stream to obtain the header syntax elements of the blocks of the layer (122). VLC decoding module 72 applies variable-length decoding techniques, for example, using one or more variable-length coding tables, to decode the header level. After performing variable-length decoding of the header level, RLC decoding module 74 performs run-length decoding of the header level to obtain the header information of the blocks of the layer.

The decoded header information includes a number of syntax elements, which are grouped based on the types of the syntax elements. In one example, the syntax elements may be arranged such that the syntax elements of a first type (e.g., block type) for all the blocks of the layer are grouped together, the syntax elements of a second type (e.g., prediction mode) for all the blocks of the layer are grouped together, and so on. Alternatively, the header syntax elements may be interleaved with one another. For example, at least one run for each syntax element may be encoded before any additional run of any of the syntax elements is encoded. An additional run of a syntax element is encoded where the previous run of the same syntax element is completed. In this way, the additional runs of syntax elements are encoded at the locations where the syntax elements are needed for decoding the current block, thereby reducing the complexity of associating the header syntax elements with the residual information at the decoder. Entropy decoding module 60 stores the header syntax elements (124).

Entropy decoding module 60 detects an indicator that identifies the transition from the header level to the residual level (126). After detecting the transition from the header level to the residual level, VLC decoding module 72 decodes the residual level of the bit stream (128). In some cases, VLC decoding module 72 may select a different set of coding tables for decoding the residual level. VLC decoding module 72 stores the residual information (130).

Header association module 79 associates the residual information of the first block of the layer with the header syntax elements of that block (132). For example, header association module 79 may associate the residual information of the first block of the layer with the first value of each of the header syntax elements of the decoded header level. Entropy decoding module 60 outputs the block data of that block to the other components of video decoder 26 for reconstruction of the block (134). In some cases, header association module 79 may output portions of the block data to different components of the decoder, as described above with reference to Fig. 4.

Entropy decoding module 60 determines whether the block is the last block of the layer (136). If the block is not the last block of the layer, entropy decoding module 60 associates the residual information of the next block of the layer with the header syntax elements of that next block. If the block is the last block of the layer, entropy decoding module 60 waits to receive another layered encoded bit stream.

Fig. 10 is a flow diagram illustrating example operation of entropy decoding module 80 decoding an interleaved layered bit stream. Entropy decoding module 80 receives an interleaved layered bit stream of a particular video layer (140). Entropy decoding module 80 detects an indicator in the interleaved layered bit stream that identifies the transition between the header level and the residual level (142). Entropy decoding module 80 divides the bit stream into the header level and the residual level (144). Entropy decoding module 80 provides the encoded header level to header decoding module 84 and provides the encoded residual level to residual decoding module 85 (146).

Entropy decoding module 80 decodes the header level of the bit stream to obtain the header syntax elements of the blocks of the layer (146). Header decoding module 84 of entropy decoding module 80 may use VLC decoding or arithmetic decoding techniques to obtain the runs of header syntax elements and performs run-length decoding of those runs to obtain the header syntax elements. The interleaved layered bit stream is arranged with runs of different header syntax elements interleaved with one another. The next run of a syntax element appears where the previous run of the same element is completed. In this way, the coded runs of syntax elements are placed at the locations where the syntax elements are needed for decoding the current block, thereby reducing the complexity of associating the header syntax elements with the residual information at the decoder. Entropy decoding module 80 stores the decoded header syntax elements of the bit stream in header information memory 86 (148).

Entropy decoding module 80 decodes the residual level to obtain the residual transform coefficients (150). Residual decoding module 85 of entropy decoding module 80 may decode the residual level using VLC or arithmetic coding to obtain the residual transform coefficients. Entropy decoding module 80 stores the residual transform coefficients in residual information memory 88 (152). Entropy decoding module 80 may decode the residual level and the header level simultaneously, thereby reducing decoding delay and the amount of memory required to store the decoded header information and residual information of a single coded unit of data (e.g., a layer).

Entropy decoding module 80 associates the residual information of the first block of the layer with the corresponding header syntax elements of that block to form the block data of the first block (154). Entropy decoding module 80 outputs the block data of the first block for reconstruction of the first block by the other components of video decoder 26 (156). For example, header association module 89 may form the block data of the first block of the layer as soon as the residual information and the header information of the first block have been decoded. In other words, header association module 89 may form the block data of the first block while entropy decoding module 80 is still decoding the remainder of the header information and the residual information.

Entropy decoding module 80 determines whether the block is the last block of the layer (158). If the block is not the last block of the layer, entropy decoding module 80 associates the residual information of the next block of the layer with the header syntax elements of that next block as soon as the residual information and the header syntax elements of that next block become available. Thus, the interleaved arrangement of the header level allows entropy decoding module 80 to perform synchronized block-by-block decoding of the layer with reduced latency and reduced memory requirements. If the block is the last block of the layer, entropy decoding module 80 waits to receive another layered encoded bit stream.

The techniques described in this disclosure may be implemented in hardware, software, firmware or any combination thereof. Any features described as modules or components may be implemented together in an integrated logic device or separately as discrete but interoperable logic devices. If implemented in software, the techniques may be realized at least in part by a computer-readable medium comprising instructions that, when executed, perform one or more of the methods described above. The computer-readable medium may form part of a computer program product, which may include packaging materials. The computer-readable medium may comprise random access memory (RAM), synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, magnetic or optical data storage media and the like. The techniques may additionally, or alternatively, be realized at least in part by a computer-readable communication medium that carries or communicates code in the form of instructions or data structures that can be accessed, read and/or executed by a computer.

The code may be executed by one or more processors, such as one or more digital signal processors (DSPs), general-purpose microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term "processor", as used herein, may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated software modules or hardware modules configured for encoding and decoding, or incorporated in a combined video encoder-decoder (CODEC).

Various embodiments of the invention have been described. These and other embodiments are within the scope of the following claims.

1. A method of encoding video data, comprising:
encoding header information of a plurality of video blocks of a coded unit of video data into a first sequence of bits of an encoded bit stream, wherein the header information includes a plurality of different types of header syntax elements for each of the plurality of video blocks;
grouping the header information of the plurality of video blocks into groups of header syntax elements, wherein each of the groups includes header syntax elements of one of the different types of header syntax elements for each of the plurality of video blocks; and
encoding residual information of the plurality of video blocks into a second sequence of bits of the encoded bit stream,
wherein encoding the header information of the plurality of video blocks into the first sequence of bits of the encoded bit stream comprises interleaving coded runs of portions of the groups of header syntax elements within the first sequence of bits.

2. The method according to claim 1, wherein encoding the header information of the plurality of video blocks comprises performing run-length encoding on the header information of the plurality of video blocks.

3. The method according to claim 1, wherein encoding the header information of the plurality of video blocks into the first sequence of bits of the encoded bit stream comprises sequentially encoding the runs of each of the groups of header syntax elements to form the first sequence of bits.

4. The method according to claim 1, wherein interleaving the coded runs of portions of the groups of header syntax elements within the first sequence of bits comprises encoding a first run for each of the groups of header syntax elements before encoding a second run of any of the groups of header syntax elements.

5. The method according to claim 4, further comprising encoding the second run for the one of the groups of header syntax elements that has the shortest first run before encoding the second run for any of the other groups of header syntax elements.

6. The method according to claim 1, wherein the header syntax elements include at least one of a block type, a prediction mode, a partition size, a coded block pattern, a motion vector, a change in quantization parameter relative to a previous block (delta QP) and a transform size.

7. The method according to claim 1, further comprising transmitting the first sequence of bits before the second sequence of bits.

8. The method according to claim 1, further comprising encoding an indicator that identifies a location in the encoded bit stream at which a transition from the first sequence of bits to the second sequence of bits occurs.

9. The method according to claim 8, wherein encoding the indicator comprises encoding one of a unique bit sequence at the location of the transition and a syntax element that specifies a length of the first sequence of bits.

10. The method according to claim 1, wherein the coded unit of data is a first coded unit of data that contains the encoded header information of the plurality of video blocks as the first sequence of bits and the encoded residual information of the plurality of video blocks as the second sequence of bits, the method further comprising encoding a second coded unit of data block by block such that the header information of each block of the second coded unit of data is followed by the residual information of the corresponding block.

11. The method according to claim 1, wherein the coded unit of data comprises one of a layer and a frame.

12. The method according to claim 1, wherein encoding at least one of the header information and the residual information comprises encoding at least one of the header information and the residual information using variable-length coding or arithmetic coding.

13. An encoding device, comprising:
a first encoding module, which encodes header information of a plurality of video blocks of a coded unit of video data into a first sequence of bits of an encoded bit stream, wherein the header information includes a plurality of different types of header syntax elements for each of the plurality of video blocks;
a header grouping module, which groups the header information of the plurality of video blocks into groups of header syntax elements, wherein each of the groups includes header syntax elements of one of the different types of header syntax elements for each of the plurality of video blocks; and
a second encoding module, which encodes residual information of the plurality of video blocks into a second sequence of bits of the encoded bit stream, wherein the first encoding module interleaves coded runs of portions of the groups of header syntax elements within the first sequence of bits.

14. The device according to claim 13, wherein the first encoding module comprises a run-length encoding module that performs run-length encoding on the header information of the plurality of video blocks.

15. The device according to claim 13, wherein the first encoding module sequentially encodes the runs of each of the groups of header syntax elements to form the first sequence of bits.

16. The device according to claim 13, wherein the first encoding module encodes a first run for each of the groups of header syntax elements before encoding a second run of any of the groups of header syntax elements.

17. The device according to claim 16, wherein the first encoding module encodes the second run for the one of the groups of header syntax elements that has the shortest first run before encoding the second run for any of the other groups of header syntax elements.

18. The device according to claim 13, wherein the header syntax elements include at least one of a block type, a prediction mode, a partition size, a coded block pattern, a motion vector, a change in quantization parameter relative to a previous block (delta QP) and a transform size.

19. The device according to claim 13, further comprising a transmitter that transmits the first sequence of bits before the second sequence of bits.

20. The device according to claim 13, wherein the first encoding module encodes an indicator that identifies a location in the encoded bit stream at which a transition from the first sequence of bits to the second sequence of bits occurs.

21. The device according to claim 20, wherein the first encoding module encodes one of a unique bit sequence at the location of the transition and a syntax element that specifies a length of the first sequence of bits.

22. The device according to claim 13, wherein
the coded unit of data is a first coded unit of data that contains the encoded header information of the plurality of video blocks as the first sequence of bits and the encoded residual information of the plurality of video blocks as the second sequence of bits, and
at least one of the first and second encoding modules encodes a second coded unit of data block by block such that the header information of each block of the second coded unit of data is followed by the residual information of the corresponding block.

23. The device according to claim 13, wherein the coded unit of data comprises one of a layer and a frame.

24. The device according to claim 13, wherein the device comprises a wireless communication device.

25. The device according to claim 13, wherein the encoding device encodes the data using one of variable-length coding and arithmetic coding.

26. A computer-readable medium containing instructions to cause a processor to:
encode header information of a plurality of video blocks of a coded unit of video data into a first sequence of bits of an encoded bit stream, wherein the header information includes a plurality of different types of header syntax elements for each of the plurality of video blocks;
group the header information of the plurality of video blocks into groups of header syntax elements, wherein each of the groups includes header syntax elements of one of the different types of header syntax elements for each of the plurality of video blocks; and
encode residual information of the plurality of video blocks into a second sequence of bits of the encoded bit stream, wherein the instructions that cause the processor to encode the header information of the plurality of video blocks into the first sequence of bits of the encoded bit stream comprise instructions that cause the processor to interleave coded runs of portions of the groups of header syntax elements within the first sequence of bits.

27. The computer-readable medium according to claim 26, wherein the instructions that cause the processor to encode the header information of the plurality of video blocks comprise instructions that cause the processor to perform run-length encoding on the header information of the plurality of video blocks.

28. The computer-readable medium according to claim 26, wherein the instructions that cause the processor to encode the header information of the plurality of video blocks into the first sequence of bits of the encoded bit stream comprise instructions that cause the processor to sequentially encode the runs of each of the groups of header syntax elements to form the first sequence of bits.

29. The computer-readable medium according to claim 26, wherein the instructions that cause the processor to interleave coded runs of portions of the groups of header syntax elements within the first sequence of bits comprise instructions that cause the processor to encode a first run for each of the groups of header syntax elements before encoding a second run of any of the groups of header syntax elements.

30. The computer-readable medium according to claim 29, further containing instructions that cause the processor to encode the second run for the one of the groups of header syntax elements that has the shortest first run before encoding the second run for any of the other groups of header syntax elements.

31. The computer-readable medium according to claim 26, wherein the header syntax elements include at least one of a block type, a prediction mode, a partition size, a coded block pattern, a motion vector, a change in quantization parameter relative to a previous block (delta QP) and a transform size.

32. The computer-readable medium according to claim 26, further containing instructions that cause the processor to transmit the first sequence of bits before the second sequence of bits.

33. The computer-readable medium according to claim 26, further containing instructions that cause the processor to encode an indicator that identifies a location in the encoded bit stream at which a transition from the first sequence of bits to the second sequence of bits occurs.

34. The computer-readable medium according to claim 33, wherein the instructions that cause the processor to encode the indicator comprise instructions that cause the processor to encode one of a unique bit sequence at the location of the transition and a syntax element that specifies a length of the first sequence of bits.

35. The computer-readable medium according to claim 26, wherein the coded unit of data is a first coded unit of data that contains the encoded header information of the plurality of video blocks as the first sequence of bits and the encoded residual information of the plurality of video blocks as the second sequence of bits, the computer-readable medium further containing instructions that cause the processor to encode a second coded unit of data block by block such that the header information of each block of the second coded unit of data is followed by the residual information of the corresponding block.

36. The computer-readable medium according to claim 26, wherein the coded unit of data comprises one of a layer and a frame.

37. The computer-readable medium according to claim 26, wherein the instructions that cause the processor to encode at least one of the header information and the residual information comprise instructions for encoding at least one of the header information and the residual information using variable-length coding or arithmetic coding.

38. An encoding device, comprising:
means for encoding header information of a plurality of video blocks of a coded unit of video data into a first sequence of bits of an encoded bit stream, wherein the header information includes a plurality of different types of header syntax elements for each of the plurality of video blocks;
means for grouping the header information of the plurality of video blocks into groups of header syntax elements, wherein each of the groups includes header syntax elements of one of the different types of header syntax elements for each of the plurality of video blocks; and
means for encoding residual information of the plurality of video blocks into a second sequence of bits of the encoded bit stream, wherein the means for encoding the header information interleaves coded runs of portions of the groups of header syntax elements within the first sequence of bits.

39. The device according to claim 38, wherein the means for encoding the header information performs run-length encoding on the header information of the plurality of video blocks.

40. The device according to claim 38, wherein the means for encoding the header information sequentially encodes the runs of each of the groups of header syntax elements to form the first sequence of bits.

41. The device according to claim 38, wherein the means for encoding the header information encodes a first run for each of the groups of header syntax elements before encoding a second run of any of the groups of header syntax elements.

42. The device according to claim 41, wherein the means for encoding the header information encodes the second run for the one of the groups of header syntax elements that has the shortest first run before encoding the second run for any of the other groups of header syntax elements.

43. The device according to claim 38, wherein the header syntax elements include at least one of a block type, a prediction mode, a partition size, a coded block pattern, a motion vector, a change in quantization parameter relative to a previous block (delta QP) and a transform size.

44. The device according to claim 38, further comprising means for transmitting the first sequence of bits before the second sequence of bits.

45. The device according to claim 38, wherein the means for encoding the header information encodes an indicator that identifies a location in the encoded bit stream at which a transition from the first sequence of bits to the second sequence of bits occurs.

46. The device according to claim 45, wherein the means for encoding the header information encodes one of a unique bit sequence at the location of the transition and a syntax element that specifies a length of the first sequence of bits.

47. The device according to claim 38, wherein the coded unit of data is a first coded unit of data that contains the encoded header information of the plurality of video blocks as the first sequence of bits and the encoded residual information of the plurality of video blocks as the second sequence of bits, the device further comprising means for encoding a second coded unit of data block by block such that the header information of each block of the second coded unit of data is followed by the residual information of the corresponding block.

48. The device according to claim 38, wherein the coded unit of data comprises one of a layer and a frame.

49. The device according to claim 38, wherein the encoding device encodes the data using one of variable-length coding and arithmetic coding.

50. A method of decoding video data, comprising:
decoding a first sequence of bits of an encoded bit stream to obtain header information of a plurality of video blocks of a coded unit of data, wherein the header information includes a plurality of different types of header syntax elements for each of the plurality of video blocks, and the header information of the plurality of video blocks is arranged in groups of header syntax elements, each of the groups including header syntax elements of one of the different types of header syntax elements for each of the plurality of video blocks;
decoding a second sequence of bits of the encoded bit stream to obtain residual information of the plurality of video blocks; and
associating the residual information of each of the plurality of video blocks with the corresponding header information,
wherein the header information of the plurality of video blocks in the first sequence of bits of the encoded bit stream has been subjected to interleaving of coded runs of portions of the groups of header syntax elements within the first sequence of bits.

51. The method according to claim 50, wherein associating the residual information of each of the plurality of video blocks with the corresponding header information comprises associating the residual information of each of the plurality of video blocks with a plurality of corresponding header syntax elements.

52. The method according to claim 50, further comprising detecting an indicator in the encoded bit stream that identifies a location at which the first sequence of bits ends and the second sequence of bits begins.

53. The method according to claim 50, further comprising reconstructing each of the video blocks of the coded unit of data using the residual information of the corresponding block and the corresponding header information.

54. The method according to claim 53, wherein reconstructing each of the video blocks comprises reconstructing each video block of the coded unit of data as soon as the corresponding header information and residual information have been decoded.

55. The method according to claim 53, in which restoring each of the video blocks comprises a stage at which each video block of the coded unit of data is restored simultaneously with the decoding of the remaining parts of the two sequences of bits of said coded unit of data.

56. The method according to claim 50, in which decoding the first sequence of bits to obtain the header information and decoding the second sequence of bits to obtain the residual information comprise a stage at which the first sequence of bits and the second sequence of bits are decoded simultaneously.
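
Because the header and residual information occupy separate bit sequences, claims 55 and 56 allow the two partitions to be decoded concurrently. The sketch below shows one hypothetical way to do so with two worker threads; decode_headers, decode_residuals and reconstruct_block stand in for the codec's real routines and are not defined by the patent.

```python
# A minimal concurrency sketch (hypothetical): the two partitions are entropy-decoded
# by independent workers and the blocks are reconstructed as paired results.
from concurrent.futures import ThreadPoolExecutor


def decode_concurrently(header_partition, residual_partition,
                        decode_headers, decode_residuals, reconstruct_block):
    """decode_headers / decode_residuals / reconstruct_block are placeholders for the
    codec's actual entropy-decoding and reconstruction routines."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        headers_future = pool.submit(decode_headers, header_partition)
        residuals_future = pool.submit(decode_residuals, residual_partition)
        headers, residuals = headers_future.result(), residuals_future.result()
    return [reconstruct_block(h, r) for h, r in zip(headers, residuals)]
```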

57. The decoding device, comprising:
at least one decoder module that decodes a first sequence of bits of an encoded bitstream to obtain header information of a plurality of video blocks of a coded unit of data, the header information including a plurality of different types of header syntax elements for each of the plurality of video blocks, and the header information of the plurality of video blocks being arranged in groups of header syntax elements, each group including header syntax elements of one of the different types of header syntax elements of every video block, and decodes a second sequence of bits of the encoded bitstream to obtain residual information of the plurality of video blocks; and
a header association module that relates the residual information of each of the plurality of video blocks to the corresponding header information,
wherein the header information of the plurality of video blocks in the first sequence of bits of the encoded bitstream has been subjected to interleaving of run-length coded parts of the groups of header syntax elements within the first sequence of bits.

58. The device according to claim 57, in which the header association module relates the residual information of each of the plurality of video blocks to the plurality of corresponding header syntax elements.

59. The device according to claim 57, in which the at least one decoder module detects an indicator in the coded bitstream that identifies the location at which the first sequence of bits ends and the second sequence of bits begins.

60. The device according to claim 57, further comprising means for restoring each of the video blocks of the coded unit of data using the residual information of the corresponding block and the corresponding header information.

61. The device according to claim 60, in which the means for restoring restores each of the video blocks of the coded unit of data as soon as the corresponding header information and the residual information have been decoded.

62. The device according to claim 60, in which the means for restoring restores a first video block of the coded unit of data at the same time as the decoder module decodes a portion of the first sequence of bits to obtain header information of a second video block of said coded unit of data.

63. The device according to claim 57, wherein the device is a wireless device.

64. The device according to claim 57, in which the at least one decoder module decodes the first sequence of bits and the second sequence of bits simultaneously.

65. Machine-readable data carrier containing commands that cause at least one processor to:
decode a first sequence of bits of an encoded bitstream to obtain header information of a plurality of video blocks of a coded unit of data, the header information including a plurality of different types of header syntax elements for each of the plurality of video blocks, and the header information of the plurality of video blocks being arranged in groups of header syntax elements, each group including header syntax elements of one of the different types of header syntax elements of every video block;
decode a second sequence of bits of the encoded bitstream to obtain residual information of the plurality of video blocks; and
relate the residual information of each of the plurality of video blocks to the corresponding header information,
wherein the header information of the plurality of video blocks in the first sequence of bits of the encoded bitstream has been subjected to interleaving of run-length coded parts of the groups of header syntax elements within the first sequence of bits.

66. The machine-readable data carrier according to claim 65, in which the commands that cause the at least one processor to relate the residual information of each of the plurality of video blocks to the corresponding header information include commands that cause the at least one processor to relate the residual information of each of the plurality of video blocks to the plurality of corresponding header syntax elements.

67. The machine-readable data carrier according to claim 65, additionally containing commands that cause the at least one processor to detect an indicator in the coded bitstream that identifies the location at which the first sequence of bits ends and the second sequence of bits begins.

68. The machine-readable data carrier according to claim 65, additionally containing commands that cause the at least one processor to restore each of the video blocks of the coded unit of data using the residual information of the corresponding block and the corresponding header information.

69. The machine-readable data carrier according to claim 68, in which the commands that cause the at least one processor to restore each of the video blocks include commands to restore each video block of the coded unit of data as soon as the corresponding header information and the residual information have been decoded.

70. The machine-readable data carrier according to claim 68, in which the commands that cause the at least one processor to restore each of the video blocks include commands to restore a first video block of the coded unit of data simultaneously with the decoding of the first sequence of bits to obtain header information of a second video block of said coded unit of data.

71. The machine-readable data carrier according to claim 65, additionally containing commands that cause the at least one processor to decode the first sequence of bits and the second sequence of bits simultaneously.

72. The decoding device, comprising:
means for decoding a first sequence of bits of an encoded bitstream to obtain header information of a plurality of video blocks of a coded unit of data, the header information including a plurality of different types of header syntax elements for each of the plurality of video blocks, and the header information of the plurality of video blocks being arranged in groups of header syntax elements, each group including header syntax elements of one of the different types of header syntax elements of every video block, and for decoding a second sequence of bits of the encoded bitstream to obtain residual information of the plurality of video blocks; and
means for relating the residual information of each of the plurality of video blocks to the corresponding header information,
wherein the header information of the plurality of video blocks in the first sequence of bits of the encoded bitstream has been subjected to interleaving of run-length coded parts of the groups of header syntax elements within the first sequence of bits.

73. The device according to claim 72, in which the means for relating relates the residual information of each of the plurality of video blocks to the plurality of corresponding header syntax elements.

74. The device according to claim 72, further comprising means for detecting an indicator in the coded bitstream that identifies the location at which the first sequence of bits ends and the second sequence of bits begins.

75. The device according to claim 72, further comprising means for restoring each of the video blocks of the coded unit of data using the residual information of the corresponding block and the corresponding header information.

76. The device according to claim 75, in which the means for restoring restores each of the video blocks of the coded unit of data as soon as the corresponding header information and the residual information have been decoded.

77. The device according to claim 75, in which the means for restoring restores a first video block of the coded unit of data at the same time as the means for decoding decodes a portion of the first sequence of bits to obtain header information of a second video block of said coded unit of data.

78. The device according to claim 72, in which the means for decoding decodes the first sequence of bits and the second sequence of bits simultaneously.

Same patents:

FIELD: radio engineering, communication.

SUBSTANCE: a method of coding a moving image codes a video stream containing a first dynamic image and a second dynamic image to be superimposed onto the first dynamic image. The method includes: stages (S5301-S5303) of detecting a continuous reproduction section, which is a group of partial sections subjected to continuous reproduction; a stage (S5304) of coding the first and second dynamic images in the partial sections making up the continuous reproduction section so as to satisfy the limitation that a threshold value used for transparency processing by means of a brightness key during superimposition does not change; and a stage (S5305) of generating control information containing flag information indicating that the threshold value is fixed in the continuous reproduction section.

EFFECT: provision of a recording medium and a method of coding a moving image which can suppress heterogeneity of reproduction without increasing the processing load during reproduction.

6 cl, 59 dwg

FIELD: information technology.

SUBSTANCE: each nonzero coefficient of the enhancing layer coefficient vector is encoded without knowing any other subsequent coefficients. Encoding the enhancing layer in a single pass can avoid the need to perform a first pass for analysing the coefficient vector and a second pass for encoding the coefficient vector based on the analysis.

EFFECT: providing statistical encoding of the bit stream of an enhancing layer in one encoding pass, thereby simplifying encoding, reducing encoding delay and memory requirements.

66 cl, 15 dwg

FIELD: information technology.

SUBSTANCE: pixel clusters are specified for use when compressing and decompressing an image. The image information used to specify clusters may include pixel values in a predetermined position relative to the pixel, or corresponding motion vectors, gradients, texture etc. When compressing an image, the image information for each pixel is analysed to determine the cluster to which the pixel belongs. For each cluster, a set of control parameters is calculated for a post-processing operation, e.g. filter coefficients for filtering or statistical data for local generation of texture. The set of control parameters is selected depending on image content. The compressed image and the sets of control parameters are transmitted to a decompressing device. After decompression, the image information, which is the decompressed image, is analysed to classify pixels into clusters, and the different sets of control parameters for the selected clusters are used to control post-processing at the pixel positions.

EFFECT: abating compression artefacts.

14 cl, 30 dwg

FIELD: information technology.

SUBSTANCE: video encoder may adaptively select a coding table for use in encoding a syntax element of a current video block based on corresponding syntax elements of one or more previously encoded blocks. In one aspect, the video encoder may adaptively select the coding table for use in encoding a block type of the current block based on block types of one or more video blocks adjacent to the current video block, i.e., neighbouring video blocks. The video encoder may also predict one or more other header syntax elements of the current block based on at least one of the previously encoded video blocks. If prediction is successful, the video encoder may encode a flag to indicate the success of prediction.

EFFECT: adaptive coding of video block header information based on one or more previously encoded video blocks is provided.

92 cl, 7 dwg

FIELD: information technology.

SUBSTANCE: in an image encoding system, compression processing is applied to an input image signal comprising multiple colour components; the encoded data obtained by independently encoding the input image signal for each of the colour components and a parameter indicating which colour component the encoded data corresponds to are multiplexed into the bit stream.

EFFECT: higher encoding efficiency and the possibility of including the data for one image in one access unit and of establishing identical time information and a single encoding mode for the corresponding colour components.

6 cl, 25 dwg

FIELD: information technology.

SUBSTANCE: method for encoding at least one picture corresponding to at least one of at least two views of multi-view video content to form a resultant bitstream, wherein in the resultant bitstream at least one of coding order information and output order information for the at least one picture is decoupled from the at least one view to which the at least one picture corresponds.

EFFECT: possibility to manage the list of reference images for coding of multi-view sequences.

68 cl, 25 dwg

FIELD: information technology.

SUBSTANCE: disclosed is an encoding method involving: defining access units; and encoding each of the images included in an access unit, for each access unit. The defining involves: an encoding-unit determination for deciding whether to uniformly encode the images included in an access unit on a field basis or on a frame basis; and a field-type determination for deciding whether to uniformly encode the images as top fields or as bottom fields when it has been determined that the images included in the access unit must be encoded on a field basis. During encoding, each of the images is encoded for each access unit in the format defined when determining the encoding units and the field type.

EFFECT: defining a container or access unit when each of the images or different MVC component types are encoded differently using frame coding or field coding.

2 cl, 21 dwg

FIELD: information technologies.

SUBSTANCE: the unit comprises a serially connected content classification facility and a multimedia data processing facility. The multimedia data content classification facility is configured to identify data that corresponds to an inadmissible level of electromagnetic radiation. The multimedia data processing facility is configured to transpose the multimedia data into a secure form. The unit is structurally made as an insert into a signal cable of the indication panel.

EFFECT: reduced level of radiated electromagnetic noise and thus higher safety of the devices.

2 dwg

FIELD: information technologies.

SUBSTANCE: the following stages are carried out: division (2100) of an upper-layer macroblock into elementary units; calculation (2200), for each elementary unit, of an intermediate position within the low-resolution image from the elementary unit position, depending on the coding modes of the upper-layer macroblock and on the high- and low-resolution images; identification (2300) of a base-layer macroblock, called base_MB, containing a pixel located at the intermediate position; calculation (2400) of a final position within the low-resolution image from a presumed base-layer position, depending on the coding modes of the base_MB macroblock and the upper-layer macroblock and on the high- and low-resolution images; identification (2500) of the base-layer macroblock, called real_base_MB, containing a pixel located at the final position; and derivation (2600) of motion data for the upper-layer macroblock from the motion data of the identified real_base_MB.

EFFECT: improved efficiency of video coding.

11 cl, 5 dwg

FIELD: information technologies.

SUBSTANCE: the capability is provided of signalling several decoding time values for each sample at the file format level, which makes it possible to use different decoding time values for each sample or subset of samples when decoding the full stream and a subset of that stream. An alternative decoding time unit is defined, designed to signal several decoding time values for each sample. Such a unit may contain a compact version of a table that allows indexing from the alternative decoding time to the number of samples; the alternative decoding time is the decoding time used for a sample when only a subset of an elementary stream stored in a track needs to be decoded. Each entry in the table contains a number of consecutive samples with the same time-delta value and the difference between these consecutive samples, and the complete time-to-sample chart may be built by adding the differences, as illustrated by the sketch after this entry.

EFFECT: reduced computational complexity in decoding of scalable video data.

16 cl, 7 dwg
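
A small illustrative sketch of the table described in the entry above, under the assumption that each entry is a (sample_count, delta) pair; it merely shows how the complete time-to-sample chart can be rebuilt by accumulating the differences and is not the patented file-format syntax.

```python
# Rebuild per-sample decoding times from compact (sample_count, delta) table entries.

def build_decoding_times(entries, start_time=0):
    """entries: list of (sample_count, delta) pairs, e.g. from a compact table."""
    times, t = [], start_time
    for sample_count, delta in entries:
        for _ in range(sample_count):
            times.append(t)   # decoding time of this sample
            t += delta        # accumulate the difference to reach the next sample
    return times

# e.g. build_decoding_times([(3, 40), (2, 80)]) -> [0, 40, 80, 120, 200]
```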

FIELD: video decoders; measurement engineering; TV communication.

SUBSTANCE: the motion vector values of blocks adjacent to the block whose motion vector is to be determined are found. On the basis of the determined motion vector values of the adjacent blocks, the motion vector search range for the specified block is determined. The complexity of the estimation can be reduced significantly without lowering compression efficiency.

EFFECT: reduced complexity of determination.

7 cl, 2 dwg

The invention relates to the encoding and decoding of stereo audio.

FIELD: physics, computer engineering.

SUBSTANCE: the invention relates to encoding and decoding. An encoding method includes obtaining codes corresponding to prediction residues produced by prediction analysis applied to the time-series signals within a predefined time interval of the input time-series signals, with the number of bits assigned to the codes of the prediction residues switched according to whether an index indicating the degree of periodicity and/or stationarity of the time-series signals in the predefined time interval, or in the interval preceding the predefined time interval of the input time-series signals, satisfies a condition indicating high periodicity and/or high stationarity or a condition indicating low periodicity and/or low stationarity.

EFFECT: higher compression efficiency.

30 cl, 8 dwg

FIELD: coding wavelet data by means of zero-trees.

SUBSTANCE: the proposed method includes generating wavelet coefficients representing an image. The bits of each wavelet coefficient are associated with different bit positions, so that each position is associated with one of the bits of each wavelet coefficient, and the associated bits are coded with respect to each bit position to indicate zero-tree roots. Each bit position is likewise associated with only one of the bits of each wavelet coefficient. Computer system 100 for coding wavelet coefficients by means of zero-trees has processor 112 and memory 118 storing a program that enables processor 112 to generate wavelet coefficients representing an image. Processor 112 operates to code the bits of each position to indicate the zero-tree roots associated with that bit position.

EFFECT: enhanced data compression speed.

18 cl, 7 dwg

FIELD: transcoding a received video sequence using motion data extrapolated from the video sequence.

SUBSTANCE: the proposed transcoding method involves receiving a first bit stream of compressed picture data having certain coding parameters. These parameters may relate to the GOP structure of the picture frames, the picture frame size, a parameter showing whether the frames presented in the input bit stream are picture fields or frames, and/or whether the picture frames presented in the bit stream form a progressive or interlaced sequence. First and second motion vectors are obtained from the input bit stream and used together with weighting coefficients to extrapolate a third motion vector for the output bit stream of compressed picture data. An output bit stream that differs from the input one in one or more parameters is produced as the transcoded output signal.

EFFECT: minimising or dispensing with motion estimation in the transcoding process.

22 cl, 4 dwg, 1 tbl

FIELD: protection of video information against unauthorized copying.

SUBSTANCE: the proposed method uses watermarks to protect video information against unauthorised copying that changes the scale of the picture in the course of copying, and includes introducing a watermark into the original video signal at different scales. The watermark is maintained at each scale for a preset time interval sufficient to enable the detector circuit in a digital-format video recorder to detect, extract and process the information contained in the watermark. The watermark scale is changed at the end of the preset time interval, preferably on a pseudorandom basis, so that each scale in a predetermined scale variation range appears as many times as determined in advance. In this way a definite scale capable of restoring the watermark to its initial position and size can be identified and used for watermark detection.

EFFECT: enhanced reliability, facilitated procedure.

24 cl, 7 dwg

FIELD: multimedia technologies.

SUBSTANCE: the method includes at least the following stages: determining whether the current value of the processed discrete cosine transform coefficient is equal to or less than the appropriate threshold value currently used for quantizing the discrete cosine transform coefficients of image blocks of the common intermediate format; if so, the value of the discrete cosine transform coefficient is set to zero and the currently used threshold value is increased for use as the threshold value in the next processing of a discrete cosine transform coefficient; otherwise the currently used threshold value is restored to the given original threshold value, which is used as the threshold value for the next processing of a discrete cosine transform coefficient; and determining whether the increased threshold value is greater than a given upper limit of the threshold value and, if so, replacing the increased threshold value with the given upper limit (see the sketch after this entry).

EFFECT: higher quality.

8 cl, 4 dwg
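
The following sketch only illustrates the thresholding loop described in the entry above; the function name, the fixed increment step and the argument layout are assumptions made for this example.

```python
# Illustrative adaptive thresholding of DCT coefficients (assumed names and parameters).

def threshold_coefficients(coeffs, base_threshold, step, upper_limit):
    out, threshold = [], base_threshold
    for c in coeffs:
        if abs(c) <= threshold:
            out.append(0)                                    # suppress the small coefficient
            threshold = min(threshold + step, upper_limit)   # raise threshold, clipped to the limit
        else:
            out.append(c)
            threshold = base_threshold                       # restore the original threshold
    return out
```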

FIELD: methods and devices for storing and processing information containing successive video images.

SUBSTANCE: from each image recorded prior to the current time, at least one image area is appropriately selected, and the aperture video information is recorded together with placement information. From this video information at least one mixed image is generated, taking into account the corresponding placement information. The mixed image is used for the display of frames in accordance with motion estimation, motion compensation or error concealment technology.

EFFECT: reduced memory requirements for storing multiple previously received images.

3 cl, 4 dwg

FIELD: devices for transforming a packet stream of information signals.

SUBSTANCE: the information signals represent information arranged in separate, serial packets of digital-format data. These are transformed into a stream of information signals with time stamps. After the time stamps, which relate to the arrival time of a data packet, have been set, the time stamps of several data packets are grouped into a time stamp packet, where, according to one embodiment, the size of the time stamp packet equals the size of a data block.

EFFECT: improved addition of time stamp data to fixed-size data packets.

6 cl, 29 dwg
