Tone mapping for bit-depth scalable video codec

FIELD: information technology.

SUBSTANCE: a scalable video codec converts lower bit depth video data to higher bit depth video data, using the decoded lower bit depth video data for tone mapping and tone mapping derivation. The conversion can also use filtered lower bit depth video data for tone mapping and tone mapping derivation.

EFFECT: high encoding efficiency.

7 cl, 3 dwg

 

Background

The present invention relates generally to scalable video codecs.

Scalable video codecs provide the ability to deliver different levels of image quality to different consumers, depending on the type of service they prefer. Lower-quality video services can be less expensive than higher-quality ones.

In a scalable video codec, the lower bit depth can be used as the baseline layer, and the higher bit depth can be called the enhancement layer. The greater the bit depth, the better the video quality.

In a scalable video codec, the encoder and decoder may be provided as a single unit. In some cases only an encoder may be provided, and in other cases only a decoder. A scalable video codec allows the system to work at least with the baseline layer. Thus, low-cost systems may use only the baseline layer, while more advanced, higher-cost systems may use the enhancement layer.

Preferably, the enhancement-layer output is derived from the baseline layer. For this purpose, inverse tone mapping can be used to increase the bit depth of the baseline layer to the bit depth of the enhancement layer. In some cases, for example, the baseline layer may be 8 bits per pixel, and the enhancement layer may use 10, 12 or more bits per pixel.

Brief description of drawings

Figure 1 schematically shows an encoder and decoder system in accordance with one embodiment of the present invention;

figure 2 shows an encoder and decoder system in accordance with another embodiment of the present invention; and

figure 3 shows a system in accordance with another embodiment of the present invention.

Detailed description of the invention

As shown in Figure 1, a scalable video codec includes an encoder 10 coupled through video transmission or storage 14 to a video decoder 12. Figure 1 shows the encoder of one codec and the decoder of another codec.

As an example, a computer may communicate over a network with another computer. Each computer may have a codec that includes both an encoder and a decoder, so that information can be encoded at one node, transmitted over the network to another node, which then decodes the encoded information.

The codec shown in Figure 1 is a scalable video codec (SVC). This means that it can encode and/or decode video information at different bit depths. Video sources 16 and 26 can be connected to the encoder 10. The video source 16 may provide N-bit video data, while the video source 26 may provide M-bit video data, where the bit depth M is greater than the bit depth N. In other embodiments, more than two sources with more than two bit depths may be provided.

In each case, the information from the video source is provided to an encoder. In the case of the video source 16 with the smaller bit depth, the information is provided to a baseline-layer encoder 18. In the case of the video source 26 with the greater bit depth, an enhancement-layer encoder 28 is used.

However, the decoded baseline-layer information at point B of the baseline encoder 18 is subjected to inverse tone mapping to increase its bit depth to M bits for use in enhancement-layer encoding. Thus, in one embodiment, the decoded N-bit video is provided to an inverse tone mapping module 20. The inverse tone mapping 20 increases the bit depth and generates M-bit output for the enhancement-layer encoder 28. The decoded stream is also provided to a tone mapping derivation module 24. The tone mapping derivation 24 also receives information from the M-bit video source 26. The output of the tone mapping derivation 24 is a tone mapping table used by the inverse tone mapping 20.

At the same time, the encoded output at point A of the encoder 18 is provided to the video transmission or storage 14.

By using the decoded stream for the tone mapping derivation 24, the residual coding in the enhancement-layer encoder 28 can be reduced, which in some cases improves coding efficiency due to better prediction in the encoder 28.

The codec of Figure 1 may conform, for example, to the H.264 compression standard (Advanced Video Coding (AVC) and MPEG-4 Part 10). The H.264 standard was prepared by the Joint Video Team (JVT), which includes ITU-T SG16 Q.6, also known as VCEG (the Video Coding Experts Group), and ISO-IEC JTC1/SC29/WG11 (2003), known as MPEG (the Moving Picture Experts Group). The H.264 standard was developed for use in digital television broadcasting, direct broadcast satellite video, digital video transmission over subscriber lines, interactive storage media, multimedia messaging, digital terrestrial television broadcasting and remote video surveillance, to name only a few examples.

Although one embodiment may conform to H.264 video coding, the present invention is not so limited. Instead, embodiments may be used in a variety of video compression systems, including MPEG-2 (ISO/IEC 13818-1 (2000), available from the International Organization for Standardization, Geneva, Switzerland) and VC1 (SMPTE M (2006), available from SMPTE, the Society of Motion Picture and Television Engineers, White Plains, New York 10601).

The encoder provides information to the video transmission or storage 14 for use by the decoder. The information that may be provided can include the baseline-layer (BL) video stream, the inverse tone mapping (ITM) information, the adaptive filter taps and the enhancement-layer (EL) video stream. Some of this information can be included in a packet header. For example, the inverse tone mapping (ITM) information and the filter tap information may be provided in an appropriate header in packetized data transmission.

Upon receipt of the appropriate information at the decoder 12, the baseline decoder 30 decodes the N-bit video for presentation on the display 32. However, if enhancement-layer equipment is provided instead, a display 40 with a greater bit depth may be provided. (Normally, both displays would not be included.) The output of the baseline decoder, which is N-bit video data, is converted to M-bit video using the inverse tone mapping module 34, which also receives the ITM information about the inverse tone mapping performed by the encoder 10.
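For illustration only, the following minimal sketch (in Python; the function and variable names are illustrative assumptions, not taken from the patent) shows how a decoder-side module such as the inverse tone mapping 34 might apply a received look-up table to convert decoded N-bit pixels into M-bit values:

    # Sketch: applying an inverse tone mapping look-up table (LUT) to decoded
    # N-bit baseline pixels to obtain M-bit values. Names are illustrative only.
    def apply_inverse_tone_map(decoded_frame, lut):
        # Map every decoded N-bit pixel value through the LUT.
        return [[lut[pixel] for pixel in row] for row in decoded_frame]

    # Example: a trivial 8-bit to 10-bit LUT that simply scales each value by 4.
    lut_8_to_10 = [min(4 * v, 1023) for v in range(256)]
    predicted_10bit = apply_inverse_tone_map([[0, 128, 255]], lut_8_to_10)  # [[0, 512, 1020]]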

The video decoder is self-deriving, since the information available to the decoder is the information that was used to encode. The decoder can derive the same information for decoding the encoded information without requesting that information from the encoder.

In general, any type of tone mapping may be used to increase the bit depth of the baseline-layer video data, including inverse block-based scaling and inverse piecewise linear mapping.
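As a hedged sketch of the piecewise linear alternative (the breakpoints and target values below are invented for the example and are not specified by the patent), an inverse piecewise linear mapping from 8-bit to 10-bit values might look like this:

    # Sketch: inverse piecewise linear mapping; the knots and targets are assumptions.
    def inverse_piecewise_linear(v, knots=(0, 64, 192, 255), targets=(0, 200, 800, 1023)):
        # Map v linearly within the segment of the input range that contains it.
        for k in range(len(knots) - 1):
            if v <= knots[k + 1]:
                t = (v - knots[k]) / (knots[k + 1] - knots[k])
                return round(targets[k] + t * (targets[k + 1] - targets[k]))
        return targets[-1]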

The tone mapping derivation module 24, shown in Figure 1, finds the relationship between the video data of the larger and smaller bit depths. Typically, the mapping relationship is derived at the encoder side from the statistical properties of the original higher bit depth video data and the original lower bit depth video data.

A look-up table (LUT) is built using a pixel x of the lower bit depth input data N and the co-located pixel of the higher bit depth input data M. The term "co-located" indicates a pixel at the same position in the two image sources 16 and 26.

For each pixel x_i in the lower bit depth input data and the co-located pixel y_i in the higher bit depth input data, let

sum_(x_i) = sum_(x_i) + y_i and num_(x_i) = num_(x_i) + 1;

then the j-th entry LUT[j] = sum_j/num_j.

If num_j = 0, then LUT[j] is the weighted average of LUT[j-] and LUT[j+], where j- and j+, if available, are the nearest entries to j with non-zero num_j.

Instead of using the lower bit depth input pixels of the source 16, the decoded output pixels of the baseline encoder 18 can be used together with the higher bit depth input data of the source 26 in order to develop the mapping LUT. A pixel z is a decoded output pixel of the lower bit depth N, and the co-located input pixel is a pixel of the higher bit depth input data M. For each pixel z_i in the lower bit depth decoded output and the co-located pixel y_i in the higher bit depth input data, let

sum_(z_i) = sum_(z_i) + y_i and num_(z_i) = num_(z_i) + 1;

then the j-th entry LUT[j] = sum_j/num_j.

If num_j = 0, then LUT[j] is the weighted average of LUT[j-] and LUT[j+], where j- and j+, if available, are the nearest entries to j with non-zero num_j.
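The derivation above can be summarized with the following sketch (Python; the names are illustrative assumptions). The same procedure applies whether the lower bit depth pixels are the input pixels x_i or the decoded pixels z_i; the weighting of LUT[j-] and LUT[j+] for empty entries is shown here as linear interpolation by distance, which is one plausible choice rather than the patent's stated formula:

    # Sketch: derive the tone mapping LUT from co-located lower/higher bit depth pixels.
    def derive_lut(low_pixels, high_pixels, n_bits=8):
        size = 1 << n_bits
        sums = [0.0] * size
        nums = [0] * size
        for x, y in zip(low_pixels, high_pixels):
            sums[x] += y      # sum_j accumulates co-located higher bit depth values
            nums[x] += 1      # num_j counts the pixels that fell into entry j
        lut = [sums[j] / nums[j] if nums[j] else 0.0 for j in range(size)]
        # Fill empty entries from the nearest non-empty neighbors j- and j+.
        for j in range(size):
            if nums[j] == 0:
                lo = next((k for k in range(j - 1, -1, -1) if nums[k]), None)
                hi = next((k for k in range(j + 1, size) if nums[k]), None)
                if lo is not None and hi is not None:
                    w = (j - lo) / (hi - lo)
                    lut[j] = (1 - w) * lut[lo] + w * lut[hi]
                elif lo is not None:
                    lut[j] = lut[lo]
                elif hi is not None:
                    lut[j] = lut[hi]
        return lut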

In Figure 2, a content-adaptive technique using content analysis and filtering 42 is used to obtain the tone mapping LUT. A pixel z is a decoded output pixel of the lower bit depth N, and the co-located input pixel has the greater bit depth M. If there is no edge pixel in the immediate neighborhood surrounding the target pixel z, then the target pixel z may be replaced by a filtered pixel f in deriving the tone mapping LUT.

For each pixel z_i in the lower bit depth decoded output and the co-located pixel y_i in the higher bit depth input data, if there is no edge pixel in the immediate neighborhood of z_i, let

sum_(f_i) = sum_(f_i) + y_i and num_(f_i) = num_(f_i) + 1, where f_i is the filtered value of z_i;

then the j-th entry LUT[j] = sum_j/num_j.

If num_j = 0, then LUT[j] is the weighted average of LUT[j-] and LUT[j+], where j- and j+, if available, are the nearest entries to j with non-zero num_j.

A Sobel edge operator is used for the content analysis and filtering 42 in one embodiment, applied to the target pixel z and its 3×3 neighborhood NH9(z).

The edge metric (EM) for the target pixel z is formulated as the convolution of the weighting kernels with its 3×3 neighborhood NH9(z), as follows:

EM(z) = |NH9(z)*E_h| + |NH9(z)*E_v| + |NH9(z)*E_P45| + |NH9(z)*E_N45|

Using only the two directions E_v and E_h may be sufficient for many applications. Adding the 45-degree detection further improves edge detection, but at higher computational complexity.
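A minimal sketch of this edge metric follows (Python). The text above does not reproduce the kernel coefficients, so the standard Sobel operators for the horizontal, vertical and two diagonal directions are assumed here, as is the thresholding step:

    # Sketch: edge metric EM(z) over the 3x3 neighborhood NH9(z).
    # The kernel coefficients are the standard Sobel operators (an assumption).
    E_H   = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    E_V   = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    E_P45 = [[0, 1, 2], [-1, 0, 1], [-2, -1, 0]]
    E_N45 = [[-2, -1, 0], [-1, 0, 1], [0, 1, 2]]

    def correlate3x3(nh9, kernel):
        return sum(nh9[r][c] * kernel[r][c] for r in range(3) for c in range(3))

    def edge_metric(nh9):
        # EM(z) = |NH9*E_h| + |NH9*E_v| + |NH9*E_P45| + |NH9*E_N45|
        return sum(abs(correlate3x3(nh9, k)) for k in (E_H, E_V, E_P45, E_N45))

    # A pixel may be treated as an edge pixel when edge_metric(NH9(z)) exceeds a
    # content-dependent threshold (the threshold choice is an assumption).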

Other content analysis methods, such as the Canny algorithm and derivative-based algorithms, can also be used to detect edges.

In Figure 2, the filter support for the target pixel comes from its neighboring pixels. A linear averaging filter or a median filter can be used together with the edge detector in some embodiments.
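As a small illustration of such a filter (Python; whether the averaging window excludes particular neighbors is not stated above, so a plain 3×3 window is assumed), the filtered replacement pixel f could be computed as a mean or a median of the neighborhood:

    import statistics

    def mean_filter(nh9):
        # Replace the target pixel with the average of its 3x3 neighborhood.
        values = [v for row in nh9 for v in row]
        return sum(values) / len(values)

    def median_filter(nh9):
        # A median filter is a more outlier-robust alternative.
        return statistics.median(v for row in nh9 for v in row)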

The definition of the neighborhood can naturally be aligned with the block sizes specified in popular video coding standards such as H.264. The block size can be 4×4, 8×4, 4×8 or 8×8, as examples. With this alignment, the tone mapping derivation 24 adapts to the content on a block basis. Although a 3×3 neighborhood can be used, other neighborhoods of pixels can also be used.

Tone mapping tables can be derived for the luma and chroma channels, respectively. The luma LUT can be used to map the luma pixels, and the chroma LUT can be used to map the chroma pixels. In some scenarios, one table is shared by the Cb and Cr chroma channels, or two separate tables can be used for Cb and Cr, respectively.

In some embodiments, the tone mapping relationship used to predict the higher bit depth pixels is derived from the decoded lower bit depth pixels and the co-located input higher bit depth pixels. By using the decoded lower bit depth pixels instead of the input lower bit depth pixels, the amount of residual coding is reduced and better coding efficiency is achieved in some embodiments.

The content-adaptive technique uses neighboring pixels to obtain a filtered pixel as a replacement for the unfiltered decoded pixel when deriving the tone mapping relationship. With the neighborhood analysis, pixels across an edge are excluded in order to obtain smoother pixels and better prediction of the higher bit depth pixels in some embodiments. Thus, greater coding efficiency is obtained in some cases. Because the derivation is performed by self-derivation on the decoder side, in some embodiments no additional overhead data need be transmitted from the video encoder to the decoder side.

As shown in Figure 3, the encoders and decoders depicted in Figures 1 and 2 may, in one embodiment, be part of a graphics processor 112. In some embodiments, the encoders and decoders shown in Figures 1 and 2 may be implemented in hardware, while in other embodiments they may be implemented in software or firmware. In the case of a software implementation, the relevant code may be stored in any suitable semiconductor, magnetic or optical memory, including the main memory 132. Thus, in one embodiment, source code 139 may be stored on a machine-readable medium, such as the main memory 132, for execution by a processor, such as the processor 100 or the graphics processor 112.

The computer system 130 may include a hard disk drive 134 and a removable medium 136, coupled by a bus 104 to a chipset core logic 110. The core logic may couple to the graphics processor 112 (via a bus 105) and to the main processor 100, in one embodiment. The graphics processor 112 may also be coupled by a bus 106 to a frame buffer 114. The frame buffer 114 may be coupled by a bus 107 to the display screen 118, which in turn is coupled by a bus 108 to conventional components such as a keyboard or mouse 120.

The blocks shown in Figures 1 and 2 may constitute hardware or software components. In the case of software components, the blocks in the figures may represent a sequence of instructions that can be stored on a machine-readable medium, such as a semiconductor integrated circuit memory, an optical storage device or a magnetic storage device. In that case, the instructions are executed by a computer or a processor-based system that retrieves the instructions from memory and executes them. In some cases, the instructions may be firmware stored on a suitable storage medium. One result of executing such instructions is improved quality of the image ultimately shown on a display screen.

References in this specification to "one embodiment" or "an embodiment" mean that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment encompassed within the present invention. Thus, appearances of the phrase "one embodiment" or "in an embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures or characteristics may be implemented in other suitable forms than the particular embodiments described, and all such forms may be encompassed within the claims of the present application.

While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of the present invention.

1. A video encoder comprising: a lower bit depth baseline-layer encoder having a decoded data output; an inverse tone mapping module coupled to said decoded data output; an enhancement-layer encoder coupled to said inverse tone mapping module; and a tone mapping derivation module coupled to said lower bit depth baseline-layer encoder, said tone mapping derivation module being coupled to said inverse tone mapping module and to an enhancement-layer video source.

2. The encoder of claim 1, wherein said tone mapping derivation module is to develop a tone mapping look-up table using pixels of the lower bit depth baseline-layer video data and the enhancement-layer video data.

3. The encoder of claim 1, wherein said tone mapping derivation module is to use co-located pixels in said lower bit depth baseline-layer video data and said enhancement-layer video data.

4. The encoder of claim 2, wherein said tone mapping derivation module is to develop the tone mapping look-up table using neighboring pixels in said lower bit depth baseline-layer video data and said enhancement-layer video data.

5. The encoder of claim 1, including a filter coupled to said decoded data output.

6. The encoder of claim 5, wherein said inverse tone mapping module is coupled to the output of said filter.

7. The encoder of claim 6, wherein said tone mapping derivation module is coupled to the output of said filter.



 
