Compressing video data without visible losses

FIELD: information technology.

SUBSTANCE: disclosed are an apparatus and a method for compressing YUV (or YCrCb) video data for storage in memory and for decompressing the blocks after they are retrieved from memory. Compression is performed using a quantiser so that the video data meet a required overall compression ratio R, even though the contributions of the luminance and colour data to the overall compression may differ for each sub-block, each contribution preferably being selected in accordance with a texture estimate. For each sub-block a selection is made between linear and nonlinear quantisation during compression. Compression is performed without using data from blocks outside the block being compressed, so that blocks of video data can be retrieved and reconstructed in any required order. As an alternative, an encoder fetches blocks directly from memory, which are then reconstructed and encoded.

EFFECT: reduced bus bandwidth and reduced storage and memory size required for video data streams.

20 cl, 17 dwg

 

Technical field to which the invention relates

The present invention relates generally to data processing and, more particularly, to compression and decompression of video data in video memory before encoding or output.

Background art

A typical video architecture pre-processes video data before saving them to the video frame storage device, which is usually accessed via an external bus. The video data are then fetched from the video memory during encoding and/or output. The amount of data sent over the data bus between the encoder and the memory can be very large, which calls for large amounts of memory and an enormous memory bus bandwidth, and therefore leads to high energy consumption.

Figure 1 shows the typical architecture of a camcorder or camera. The input video image is received through input devices, such as CCD (charge-coupled device) or CMOS (complementary metal-oxide-semiconductor) sensors, and is then saved to an external storage device after pre-processing. Pre-processing usually includes noise reduction and conversion of the video data from RGB (red, green, blue) to YUV (one luminance value Y for every two color-difference values U and V). Other processing devices, such as a video encoder or a display device, then read the video data from the video memory.

Since the amount of video data is typically large, the bus bandwidth required to transmit image data over the external bus is considerable. In particular, in HD (high-definition) video applications the required bus bandwidth is so high that the cost of bandwidth and the energy consumption become excessive, which complicates low-cost implementation of still/video image forming systems.

Another technical problem in video camera systems concerns the required memory size. Most such devices are embodied as a SoC (System on Chip). The amount of external memory (such as SDRAM, synchronous dynamic random-access memory) is, as a rule, larger than that of other memories in the device. Therefore, reducing the required memory size can reduce the total cost of the system.

Accordingly, there is a need for a way to reduce the required bus bandwidth and/or the size of the storage device and memory for video data streams, in particular before the process of encoding the video data. The present invention accomplishes these and other objectives and overcomes the limitations of prior-art solutions.

Summary of the invention

The described apparatus and method provide visually lossless video compression, using a method of compressing/decompressing YUV video data. For simplicity the term YUV video data is used here, although it should be understood that abbreviations such as YCrCb video data, or other similar encoding schemes, apply equally to the description and claims presented here. The input video data are compressed by dividing each frame into a set of compression blocks having a preset number of pixels for each component (for example, N pixels for Y and M pixels each for U and V). The compression blocks do not overlap other compression blocks and can therefore be processed independently of each other, without needing information from other compression blocks, so that the image data can be accessed randomly, which is especially well suited for use in a video encoder. As a result, storing the compression blocks in the storage device requires less memory and less bus bandwidth, owing to the small size of the compressed blocks. The compressed video data are then retrieved from memory in any desired order (for example, sequentially) and reconstructed, for example, by the encoder and/or, less preferably, for output to a display. Decompression recovers the data in their original format, so that a device or application using the recovered data does not need to know that the data were ever compressed.

Before compression, an assessment of complexity is preferably performed so that optimal compression levels and modes can be selected.

During compression, prediction is performed to predict the value of the current pixel on the basis of previous pixels; the sub-block size is then determined and the division into sub-blocks is carried out. A decision is then made regarding the type of quantization (for example, linear or nonlinear), preferably for each sub-block. For nonlinear quantization, one embodiment of the invention provides an estimation of the quantization parameter (QP) that does not require searching over all possible QP values. Quantization is then performed on the basis of the above decisions, and finally the sub-blocks of compressed data are packed into blocks. During compression, in one embodiment of the invention, the color data are compressed first, so that information from the color data compression becomes available and is used during compression of the luminance data. In some embodiments of the present invention, different compression levels are selected in response to the complexity of the video data and the number of available bits. The compressed luminance and chrominance data are combined in the final compressed block.

Because the types and levels of quantization are determined in accordance with the complexity of the data, the compressed video data after decompression (although they do not always exactly reproduce the original image data) will still appear to a human viewer as lossless data. Thus, the apparatus and method in accordance with the invention operate with reduced storage requirements and reduced bandwidth, while providing the required quality, to the extent that the data contain no visible loss.

The goal of reducing the bus bandwidth and/or memory size is given above as an example. Use of the compression method is not limited to applications having those specific advantages; it can also provide benefits in other embodiments and applications in which compression of video data is desired, for example when transmitting compressed video data over a network prior to decompression, or when storing the compressed video data in any other multimedia device prior to decompression, for example on a hard disk drive or another storage device.

The invention can be embodied in a variety of ways, including, but not limited to, the following:

One embodiment of the invention is a device comprising a video memory configured to exchange pixel video data with one or more modules, containing: (a) a video compression module, connected via a signal bus to the memory and configured for (a)(i) compression of the input YUV luminance and chrominance video data, using a quantizer, without the use of pixel data from pixels outside each block, into compressed video data having a reduced number of bits per pixel, and (a)(ii) storing the compressed video data in the video memory; and (b) a video decompression module configured to receive compressed blocks of video data in any order and to decompress the compressed video data contained in the memory, obtaining recovered video data that have the same format as, and closely approximate, the original image data that were received and compressed by the video compression module, wherein said video decompression module is configured to output the recovered video data.

The video compression is preferably performed in accordance with an overall compression ratio R, which operates over the compression block and which can be expressed as a ratio or as the number of bits contained in the resulting block. In at least one preferred embodiment, the same or different compression levels are chosen for the luminance data and the color data, while maintaining the overall compression ratio R. In at least one preferred mode, provided the color data do not have a high level of complexity, the device selects the compression ratios so as to minimize the use of bits for the color data while maximizing the use of bits for the luminance data.

In at least one embodiment, the device performs an assessment of the texture complexity of the compression blocks before compression. Compression space is allocated for the luminance data and the color data in each block, so that the number of bits allocated to the luminance data and the color data in the compressed block is determined in response to the texture complexity estimation.

In at least one embodiment, the luminance and chrominance information is combined into blocks of compressed video data, which are preferably padded with filler bits to maintain a fixed size of the compressed blocks.

In at least one device configuration, the luminance data of a given block are compressed using information obtained during compression of the color data of the same block. In at least one mode of the invention, the video data compression is performed using nonlinear quantization, for example within a combination of linear and nonlinear quantization. In a preferred embodiment, quantization step sizes of different precision are used in the nonlinear quantization.

In at least one embodiment of the invention, pixel prediction is performed during compression. The pixel prediction starts from an initial reference pixel selected in the middle of a block, from which right and left prediction directions are determined; these can be processed in parallel if required. It should be understood that the selected middle pixel (or a pixel close to the middle) used as the initial reference pixel remains the same for both the right and the left prediction directions. Pixel value prediction is performed by predicting the current pixel value on the basis of previous pixel values, and the prediction in these two directions is performed independently, thereby allowing the prediction process to be executed in parallel in the right and left directions and reducing the required processing time.

In at least one embodiment, division into sub-blocks is performed in accordance with a desired configuration. To determine the division into sub-blocks, a cost value is first computed for at least some of the possible sub-block configurations; sub-block configurations whose cost exceeds a specified threshold value, and/or whose number of bits exceeds the number of bits available in the compression block, are then discarded.

In at least one embodiment of the invention, the input video data for the device are received from an image sensor, which can be integrated in a camcorder, a camera or another imaging device. The input video data are formatted so that they include both luminance information and color information.

At least one objective of the invention is that compression and decompression are performed so as to reduce the bus bandwidth and/or memory requirements during preliminary processing of YUV video data, prior to encoding of blocks that are fetched from memory by the encoder, which typically requests them non-sequentially (for example, not in order of block number). Alternatively, or in addition, compression and decompression in accordance with the invention can be performed in practice when transmitting compressed video data over a network prior to decompression, or when saving the video data in a multimedia device prior to decompression.

One embodiment of the invention is directed to a video encoder device designed to encode YUV video data, comprising: (a) a video memory configured to exchange pixel video data with one or more modules; (b) a video compression module connected via a signal bus to the memory and configured for (b)(i) compression of the YUV luminance and chrominance video data, using a quantizer, to obtain compressed video data having a reduced number of bits per pixel, without requiring access to data from other blocks, and (b)(ii) storing the compressed video data in the video memory; (c) a video decompression module configured to fetch blocks from the video memory in any desired order and to decompress the compressed video data, obtaining recovered video data that have the same format as, and closely approximate, the original image data that were received and compressed by the video compression module; and (d) an encoding module, which selects blocks of video data from the memory non-sequentially and which receives and encodes the recovered video data.

In one embodiment, the invention is directed to a method of compressing and decompressing YUV video data, comprising: (a) compressing the input video data with a compression ratio R, using a quantizer, to obtain blocks of compressed video data having a reduced number of bits of luminance data and/or color data for each block of video data; (b) wherein the compression of the input video data is performed without using data from pixels outside the block being compressed; (c) storing the compressed video data in a video memory; and (d) decompressing the compressed video data for any of the blocks of video data, selected in order or without regard to order, to generate output recovered video data. In one embodiment, the compression and decompression are performed in combination with a video encoding process, in which blocks of video data arriving non-sequentially from the storage device are selected, recovered in accordance with the invention, and passed on for encoding.

In one embodiment, the invention is directed to a method of compressing and decompressing YUV video data, comprising: (a) compressing the input video data with a compression ratio R using a quantizer, obtaining blocks of compressed video data having a reduced number of bits of luminance and/or color data for each block of video data; (b) wherein the compression is performed for each block of video data without using data obtained outside each compressed block; (c) choosing either linear or nonlinear quantization for each sub-block within a given block being compressed; (d) storing the compressed video data in the video memory; and (e) decompressing the compressed video data for any of the blocks of video data received from the video memory, selected in any order, to generate output recovered video data.

In one embodiment, the invention is directed to a method of compressing and decompressing YUV video data, comprising: (a) compressing the input video data with a compression ratio R using a quantizer, obtaining blocks of compressed video data having a reduced number of bits of luminance and/or color data for each block of video data; (b) compressing the blocks of video data without using data from blocks outside the compressed block; (c) estimating the texture complexity of the luminance data and the texture complexity of the color data; (d) choosing the same or different compression levels for the luminance data and the color data, while maintaining the overall compression ratio R; (e) selecting either linear or nonlinear quantization for each sub-block within a given block being compressed, in response to characteristics detected in the block; (f) wherein, during compression, the luminance data compression process uses information from the color data compression process for that block; (g) storing the compressed video data in a video memory; (h) receiving blocks of video data from the video memory in any desired manner and at any time after said storing of the compressed video data; and (i) decompressing the compressed image data for the obtained blocks to generate recovered output video data.

The present invention is directed to a set of preferred embodiments that can be implemented either individually or in any combination, without departing from the present description.

One embodiment of the invention provides an apparatus and method for compressing and decompressing blocks of YUV video data.

In another embodiment of the invention, each compressed block does not overlap other compressed blocks, and each compressed block is compressed independently, without recourse to information from other blocks.

In another embodiment of the invention, if the number of bits of compressed data after compression is less than the target number of bits corresponding to the desired compression ratio (R), then filler bits are used to increase the total number of output bits to the desired amount so that it remains fixed. Since the number of bits generated for each compression block is fixed, a compressed block at any position can be accessed and decompressed without recourse to information from other blocks. Therefore, when using the proposed method, direct access to the image data and decompression of video data blocks from a random position in the video data are possible, for example when access to any area of a video frame is required for processing or for video encoding.
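Because every compressed block occupies the same fixed number of bits, the memory address of any block follows directly from its index, which is what makes this random access possible. The following short sketch, written in Python purely for illustration (the function name, the 32-byte block size and the base address are assumed values, not part of the disclosure), shows this addressing:

def block_address(base_address, block_index, compressed_block_bytes):
    # Every compressed block has the same fixed size, so any block can be
    # located, fetched and decompressed without touching other blocks.
    return base_address + block_index * compressed_block_bytes

# Example: 32-byte compressed blocks stored from address 0x1000.
print(hex(block_address(0x1000, block_index=5, compressed_block_bytes=32)))   # 0x10a0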

Another embodiment of the invention is directed to compressing the luminance data using information obtained as a result of compressing the color data. In this method, in accordance with the invention, the color data are compressed first. After the color compression, at least part of the information is passed to the luminance module to prepare for compressing the luminance data. The data bits can be retained for any desired period of time, or a certain number of bits can be retained after compression. This compression information is then used for compressing the luminance data. Finally, the compressed luminance and chrominance data are packed into a single compressed block. This ensures that the data size remains within the range defined by the target compression ratio.

Another embodiment of the invention is directed to performing, at the outset and before compression, an assessment of the texture complexity of the luminance and chrominance data of the compression block. Each compression module (for luminance and for chrominance data) can automatically provide different levels (or degrees) of compression (for example, weak, medium or strong). The compression ratio is determined on the basis of the texture complexity, in accordance with which a particular number of bits is then allocated to the luminance and color fields in the compressed block.

Another embodiment of the invention is directed to the use of an overall compression ratio, defined as the value R, which controls the overall degree to which the blocks of video data are compressed. The value R, as an example, is the ratio of the number of input bits to the number of output bits for a given block. A ratio R equal to two (2) indicates that the compressed blocks contain half the number of bits used for the source data. It should be understood, however, that the compression can be expressed in any desired format, for example in any form of ratio notation, or as the allocation of a specified number of bits per unit of output video data.

Another embodiment of the invention makes it possible to choose the degree to which the Y, U and V values are compressed while maintaining the overall compression ratio R. It should be noted that this does not mean that the compression ratio is fixed for Y, U and V within an individual compression block, since it is possible to allocate bits on the basis of different compression ratios between color and luminance.

Another embodiment of the invention is directed to a process of automatically determining the best ratio between luminance and color, while maintaining the overall compression ratio R. In general, the human eye is more sensitive to noise in the luminance signal than to noise in a color signal. Therefore, the preferred bit allocation strategy minimizes the use of bits for the color data and maximizes the use of bits for the luminance data. However, when it is determined that the color data have a certain degree of complexity, in particular when it is determined that they are very complex, at least one embodiment of the invention allows a sufficient number of bits to be used for the color data to prevent visual artifacts. This trade-off between minimizing and maximizing the use of bits is determined in accordance with the obtained information about the texture complexity of the luminance and color data.

In another embodiment of the invention, different compression levels are selected. For example, low-, medium- and high-quality compression modes can be defined. The low-quality mode is based on the use of a large compression ratio, while the high-quality mode uses a small compression ratio. In one embodiment of the invention, a determination of the final best compression mode is performed. It should be noted that the compression level can be defined both for the luminance data compression and for the color data compression, which determine the portion of the block used for luminance and the portion of the block used for color.

In another embodiment of the invention, the decision determining the best compression mode is taken on the basis of the obtained information about the texture complexity and the number of available bits.

In another embodiment of the invention, pixel prediction begins from the middle pixel of a row of pixels instead of the first pixel, with the other pixels predicted relative to this initial pixel. Supporting two prediction directions makes it possible to reduce the required processing time.

Another embodiment of the invention is directed to division into sub-blocks in accordance with quantization and with decisions about the sub-block configuration. The overall compression ratio R is determined in accordance with the number N of pixels used for the luminance data and the number M of pixels used for compression of each of the chrominance data (U and V, or Cr and Cb). For a given value (N, M) of luminance and chrominance data to be compressed, there are a number of different sub-block configurations. A sub-block sizing module or similar device determines the optimal sub-block configuration, using a specified set of input data and conditions.

Another embodiment of the invention is directed to calculating a cost for at least some of the different possible sub-block configurations (for example, for different combinations of sub-blocks) on the basis of specified information, such as Information_from_chroma, R, QP precision, and so on. As an example, the cost estimate can be made in terms of the number of generated output bits. If the number of output bits for a given sub-block configuration exceeds a threshold value, preferably determined by the number of available bits, then that sub-block configuration is discarded and the next possible sub-block configuration is checked. In at least one embodiment, if none of the possible configurations is acceptable, then the N pixels of the original compression block are used as a single sub-block.

Another embodiment of the invention is directed to the choice of either linear or nonlinear quantization during compression of the block.

Another embodiment of the invention is directed to estimation of the quantization parameter (QP) for use in nonlinear quantization, as described here.

Other embodiments of the invention will be presented in the following parts of the description, in which the detailed description serves to fully disclose the preferred embodiments of the invention without imposing limitations on it.

Brief description of drawings

The invention will be more fully understood by reference to the following drawings, which are presented for illustrative purposes only:

figure 1 shows a block diagram of a conventional camcorder or camera architecture, representing storage of the image data in video memory after pre-processing;

figure 2 shows a block diagram of the architecture of a video camera or camera in accordance with an embodiment of the present invention, representing compression of the image data before storage in the video memory;

figure 3 shows a block diagram of a block subjected to compression, in accordance with an embodiment of the present invention;

figure 4 shows a block diagram of block decompression in accordance with an embodiment of the present invention;

figure 5 shows a block diagram of the compression of color data and luminance data in accordance with an embodiment of the present invention, representing information from the color data compression being used in the luminance data compression process;

figure 6 shows a flow diagram of the overall compression in accordance with an embodiment of the present invention;

figure 7 shows a flow diagram of support for different compression modes in accordance with an embodiment of the present invention;

figure 8 shows a block diagram of the compression level decision in accordance with an embodiment of the present invention, representing texture complexity estimation and analysis in preparation for the compression level decision and execution of the compression method;

figure 9 shows a flow diagram of the compression method in accordance with an embodiment of the present invention, representing either linear or nonlinear quantization being performed for the block in response to the quantization decision;

figure 10 shows a diagram of pixels for pixel prediction in accordance with an embodiment of the present invention, representing selection of the reference pixel in the middle of the pixels;

figure 11 shows a diagram of the pixels for the right and left prediction directions in response to selection of the reference pixel as shown in figure 10;

figure 12 shows a block diagram of sub-block size selection in accordance with an embodiment of the present invention;

figure 13 shows a flow diagram of sub-block size determination in accordance with an embodiment of the present invention;

figure 14 shows a flow diagram of nonlinear quantization in response to estimation of the QP value and the residual data, in accordance with an embodiment of the present invention;

figure 15 shows a flow diagram of QP estimation in accordance with an embodiment of the present invention, shown as performed in response to determination of the residual data;

figure 16 shows a block diagram of the selection of linear or nonlinear quantization in accordance with an embodiment of the present invention;

figure 17 shows a flow diagram of the decision between linear and nonlinear quantization in accordance with an embodiment of the present invention.

Detailed description of the invention

Referring more specifically to the drawings, for illustrative purposes the present invention is embodied in the apparatus generally shown in figures 2 through 17. It should be understood that the configuration and details of the parts of the apparatus may vary, and that the specific steps of the method and the sequence of their execution may vary, without departing from the basic concepts disclosed here.

The method of compression/decompression of YUV video data

Figure 2 illustrates an embodiment 10 of the present invention for performing compression and decompression of YUV (or YCrCb) video data. The compression can be used to obtain many advantages, for example to reduce the bus bandwidth and video memory requirements.

The apparatus and method in accordance with the present invention receive input signals from a video device 12, for example via a bus 18, process 14 the data, and then compress 16 the video data before they are stored in the external memory 20. The video encoder 24 or the display device 28 receives compressed video data from the video memory and decompresses them 22, 26 before using the video data. Because the video data are compressed before being stored in memory, the required bus bandwidth is much smaller than originally, while the necessary amount of video memory (for example, SDRAM) is similarly reduced.

Figure 3 illustrates an example embodiment 30 of a YUV or similar block compressed in accordance with the present invention. The format of the input video data of the compression block 32 for the compression method in accordance with the invention is YUV (or YCrCb). During the compression process the frame (which in this example consists of Y, Cr and Cb video data) is divided into a set of compression blocks. A compression block is defined as N pixels of luminance (Y) data 34 and M pixels for each of the chrominance (Cr, Cb) data 36, 38, as shown in the drawing. Therefore, the total number of pixels in each compression block is (N+2M). If B bits are used to represent a pixel, then the total number of bits will be (N+2M)×B bits.

The compression block does not overlap other compression blocks, and each compression block is compressed independently, without recourse to information in other blocks.

Since the compression ratio R is taken as input data, one embodiment of the compression method 40 generates compressed blocks 42 of a fixed size. The number of generated bits is calculated according to the formula:

The total number of output bits = (N+2M)*B/R.

For example, if R equals 2, the total generated number of output bits will be half the original number of YUV bits.

If the number of bits of compressed data after compression is less than the target number of bits defined by R, then filler bits are preferably inserted to maintain a fixed total number of output bits. Since the number of bits generated for each compression block is fixed, a compressed block at any position can be retrieved and recovered without recourse to information from other blocks. Therefore, using the method in accordance with the invention, it becomes possible to access video data at a random position (at compression block granularity). This is necessary in applications in which an arbitrary region of the frame needs to be subjected to further video processing.
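As a concrete illustration of the padding rule, the sketch below computes the fixed target size from the formula above and appends filler bits. It is a minimal Python sketch under the assumption that the coded data are held as a bit string; the helper name pad_to_target is not from the disclosure:

def pad_to_target(coded_bits, n, m, b, r):
    # Fixed target size of a compressed block: (N + 2M) * B / R bits.
    target_bits = (n + 2 * m) * b // r
    if len(coded_bits) > target_bits:
        raise ValueError("coded data exceed the bit budget of the block")
    # Filler bits keep every compressed block at the same fixed size.
    return coded_bits + "0" * (target_bits - len(coded_bits))

# Example: N=32, M=16, B=8, R=2 gives a 256-bit block; 200 coded bits are padded to 256.
print(len(pad_to_target("1011" * 50, 32, 16, 8, 2)))   # 256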

Fig. 4 shows an embodiment 50 of the recovery method. The recovery method takes a compression block 52 and restores 54 it to a block 56 that is returned to the original YUV resolution, for example N pixels for Y 58, M pixels for U 60 and M pixels for V. The restored YUV data can then be used in other video processing devices. Since the size of the video data after recovery is the same as that of the original data, other devices will not be able to recognize, on the basis of the video, any difference introduced by the compression algorithm.

Fig. 5 illustrates an embodiment 70 of compression of a block and the relationship between the luminance and chrominance compression modules. In at least one embodiment, the luminance data compression is performed in response to information obtained during compression of the color data of the same block. As can be seen, the chrominance (UV) data 72 first undergo color data compression 74; information from the chrominance compression, Information_from_chroma, is transferred and used in combination with the luminance (Y) data 76 to perform luminance data compression 78. Thus, it is preferable that the color compression be performed first in accordance with the method of this invention. It should be understood that the information obtained for the color data can be retained for any specified time, or as a certain number of bits after compression, for example for use during the luminance data compression. Finally, the compressed luminance and color data are packed, as represented by block 80, into one compressed block 82. In accordance with at least one preferred embodiment, this guarantees that the data size will be within the range defined by the target compression ratio.

Figure 6 illustrates a general embodiment 90 of the compression method in accordance with the invention. Shown are the receipt of a block 92 of color data for compression and a block 94 of luminance data for compression. Before compression, an estimation 96 of the texture complexity of the color data and an estimation 100 of the texture complexity of the luminance data are performed (no particular order is implied). Compression 98 of the color data and compression 102 of the luminance data are performed on the data after both complexity estimates for the luminance data and the color data, while the luminance compression additionally receives information from the color data compression. The compressed luminance and chrominance data are then taken and packed 104, generating a compressed block 106.

Returning to the compression process, it should be understood that each compression module (luminance and color) in the present invention can provide different levels (degrees) of compression (for example, weak, medium, strong). In accordance with at least one embodiment of the invention, the compression ratio is preferably chosen in response to the texture complexity level.

The compression ratio, represented by the variable R, controls the compression ratio of the Y, U and V compression blocks. It should be noted that this does not mean that Y, U and V are subjected to a fixed compression ratio R, since Y, U and V (or Y, Cr and Cb) can each be compressed individually with any desired ratio, provided the block resulting from the compression corresponds to the overall compression ratio R. Therefore, it becomes possible to allocate bits on the basis of different compression ratios between color and luminance.

In one embodiment of the invention, the method attempts to optimize (that is, to find the best compression ratio within the limits of the technology and available information) the ratio between luminance and color, while maintaining the overall compression ratio equal to R. This embodiment of the invention takes into account the fact that the human eye is generally more sensitive to noise in the luminance data than to noise in the color data. Therefore, in the preferred bit allocation strategy, the minimum number of bits is used for the color data, so that the maximum number of bits can be used for the luminance data. However, in some cases, for example when it is determined that the color data are highly textured, this mode of the invention allows a sufficient number of bits to be used for the color data to prevent visual artifacts.

In at least one embodiment, the ratio of allocated bits is chosen on the basis of determining the texture complexity of the luminance data and the color data. The texture complexity, as shown in Fig. 6, can be obtained, for example, by calculating the average of the residual values of the compression block.

Fig. 7 illustrates an embodiment 110 of the method of the invention supporting different compression quality modes, depending on the compression level. For example, a compression block 112 can be subjected to compression modes with low 114, medium 116 and high 118 quality, in accordance with the invention. The low-quality mode arises when a high compression ratio is used, while the high-quality video mode is achieved when a small compression ratio is used. In one preferred embodiment, a selection 120 of the final best compression mode is performed, from which the final compressed block 122 is obtained. It should be understood that the compression level can be defined both in the luminance data compression mode and in the color data compression mode.

One example of implementing the different compression levels (modes) is the precision of the output quantization bits. For modes with a low compression ratio a high degree of precision can be used, while low precision can be used for modes with high compression. Depending on the desired compression ratio, the quantization precision is defined at different levels. As shown in Fig. 7, determination of the best compression mode is generated by the system in response to information about the texture complexity and the available bits.

Fig. 8 illustrates an exemplary embodiment 130 of the compression level decision in accordance with the present invention. The luminance (Y) data 132 and the chrominance (UV) data 134 go, respectively, to estimation 136 of the texture complexity of the luminance data and estimation 138 of the texture complexity of the color data. These estimates are described in the following sections. The luminance and chrominance texture estimates are then used by a texture analyzer 140, whose output is used when making the compression level decision in block 142; the compression method is then performed in accordance with block 144, obtaining at the output the compressed luminance data 146 and the compressed color data 148. The compression method is described with reference to Fig. 5, while the texture analysis and compression level decision are described below.

Estimation of the texture complexity of the luminance data

The following pseudocode illustrates, by way of example and not limitation, the estimation of the texture complexity of the luminance data. For each sub-block, the luminance texture complexity is analyzed and the complexity is estimated as follows.

index x: pixel position index

index bp: sub-block position index

Luma_Texture_Complexity (LTC)

This value indicates the texture complexity of the luminance data of the current compression block. Three different complexity levels can be defined: low, medium and high.

Detection method:

For each sub-block, calculate Residual[x] = Current_pixel[x] - Predicted_pixel[x];

For each sub-block, find the maximum Residual[x] within the sub-block, i.e. max_Residual[bp];

Calculate average_residual as the average of max_Residual[bp] over the entire compression block;

If average_residual < Threshold_1:

the entire luma_block is considered "low complexity".

If average_residual ≥ Threshold_2:

calculate the number of sub-blocks for which max_Residual[bp] > Threshold_3; if the calculated number < Threshold_4,

then the entire luma_block is considered a "medium complexity" block; otherwise, the entire luminance block is considered a "high complexity" block.
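For illustration only, the pseudocode above can be rendered as the following runnable Python sketch; the threshold constants are placeholders (the description does not give their values), and the handling of averages falling between Threshold_1 and Threshold_2 is an assumption made here:

THRESHOLD_1, THRESHOLD_2, THRESHOLD_3, THRESHOLD_4 = 2, 8, 16, 4   # placeholder values

def luma_texture_complexity(current, predicted, subblock_size):
    # Residual[x] = Current_pixel[x] - Predicted_pixel[x], per pixel.
    residual = [abs(c - p) for c, p in zip(current, predicted)]
    subblocks = [residual[i:i + subblock_size] for i in range(0, len(residual), subblock_size)]
    max_residual = [max(sb) for sb in subblocks]                   # max_Residual[bp]
    average_residual = sum(max_residual) / len(max_residual)
    if average_residual < THRESHOLD_1:
        return "low"                                               # weakly complex block
    if average_residual >= THRESHOLD_2:
        busy = sum(1 for m in max_residual if m > THRESHOLD_3)
        return "medium" if busy < THRESHOLD_4 else "high"
    return "medium"                                                # assumed for intermediate averages

print(luma_texture_complexity([10, 12, 200, 11] * 8, [10] * 32, subblock_size=4))   # high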

Estimation of the texture complexity of the color data

The following pseudocode illustrates, by way of example and not limitation, the estimation of the texture complexity of the color data. For each sub-block, the texture complexity of the color data is analyzed and the complexity is estimated as follows.

Chroma_Texture_Complexity (CTC)

Detection method:

For each sub-block, calculate Residual[x] = Current_pixel[x] - Predicted_pixel[x];

If any Residual[x] < Threshold_1:

if the residual value at the border of the sub-block is greater than Threshold_2, then the sub-block is considered "low_complex";

otherwise, the 4x1 color data sub-block is considered "high_complex".

The decision about the compression level

As an example, and not limitation, the decision about the level of compression can be embodied in accordance with the following pseudo-code.

If (CTC == high)
    If (LTC == low)
        Chroma_Compression_Level = LOW_COMPRESSION_MODE
        Luma_Compression_Level = HIGH_COMPRESSION_MODE
    Else if (LTC == medium)
        Chroma_Compression_Level = HIGH_COMPRESSION_MODE
        Luma_Compression_Level = MIDDLE_COMPRESSION_MODE
    Else (LTC == high)
        Chroma_Compression_Level = LOW_COMPRESSION_MODE
        Luma_Compression_Level = HIGH_COMPRESSION_MODE
Else if (CTC == low)
    If (LTC == low)
        Chroma_Compression_Level = LOW_COMPRESSION_MODE
        Luma_Compression_Level = HIGH_COMPRESSION_MODE
    Else if (LTC == medium)
        Chroma_Compression_Level = MIDDLE_COMPRESSION_MODE
        Luma_Compression_Level = MIDDLE_COMPRESSION_MODE
    Else (LTC == high)
        Chroma_Compression_Level = HIGH_COMPRESSION_MODE
        Luma_Compression_Level = MIDDLE_COMPRESSION_MODE
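The decision table above can be transcribed directly into a small lookup function; the following Python rendering is given only as a restatement of the pseudocode, with no behaviour added beyond it:

def compression_levels(ctc, ltc):
    # Returns (Chroma_Compression_Level, Luma_Compression_Level).
    table = {
        ("high", "low"):    ("LOW_COMPRESSION_MODE",    "HIGH_COMPRESSION_MODE"),
        ("high", "medium"): ("HIGH_COMPRESSION_MODE",   "MIDDLE_COMPRESSION_MODE"),
        ("high", "high"):   ("LOW_COMPRESSION_MODE",    "HIGH_COMPRESSION_MODE"),
        ("low",  "low"):    ("LOW_COMPRESSION_MODE",    "HIGH_COMPRESSION_MODE"),
        ("low",  "medium"): ("MIDDLE_COMPRESSION_MODE", "MIDDLE_COMPRESSION_MODE"),
        ("low",  "high"):   ("HIGH_COMPRESSION_MODE",   "MIDDLE_COMPRESSION_MODE"),
    }
    return table[(ctc, ltc)]

print(compression_levels("low", "high"))   # ('HIGH_COMPRESSION_MODE', 'MIDDLE_COMPRESSION_MODE')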

The compression method at each level

Figure 9 illustrates an exemplary embodiment 150 of the compression method in accordance with the invention. The compression block 152 enters a prediction process 154, which performs prediction of the current pixel value on the basis of previous pixel values. Then, once the prediction value is obtained, the residual value (the difference between the current pixel and the predicted pixel) is calculated in the prediction process 154. A sub-block size decision process 156 determines the optimal sub-block size on the basis of the texture complexity and the available bit budget. Once the sub-block size decision has been made in 156, the residual data set is divided in block 158 into arrays of smaller size, called sub-blocks. A quantization process 160 is applied to each sub-block of residual data to reduce the number of output bits. Quantization can therefore be performed in each sub-block with its own QP (quantization parameter) value. The drawing shows the decision either to perform linear quantization 164, or to perform quantization parameter (QP) estimation 162 and then nonlinear quantization 166. In either case, the sub-block results are packed 168 to obtain the final compressed block 170.

Prediction implies that the current pixel is predicted from previous pixels. The predicted pixel is used to calculate the "residual" between the actual current pixel value and the predicted pixel value. In the method in accordance with the invention, the residual is quantized and coded at a later stage.

Predicted_value x[n] = F(x[n-1], x[n-2], ...)

Residual x[n] = abs(x[n] - Predicted_value x[n])

Figures 10 and 11 illustrate the mechanisms of the initial pixel prediction 190 and of support for two prediction directions 210, respectively. It should be understood that the set of pixel data can be of any size; a sample data set is shown, for simplicity, by way of example and not limitation. For simplicity, the explanation describes the data set in one dimension, although these mechanisms can also be applied to two-dimensional data. In these drawings, the numbers in the boxes show the positions of the pixels in the compression block.

It should be noted in figure 10 that the prediction 190 begins from the middle pixel 194 of the data 192 instead of the first pixel. The pixel in approximately the middle position (for example, at position 16) 194 is set as the initial reference pixel, and all other pixels are predicted starting from this pixel. As shown in figure 11, the mechanism 210 supports two prediction directions, designated as the right direction 212 and the left direction 214. Since the reference pixel (that is, the pixel at position 16) does not change, prediction in each of these two directions is independent. Therefore, it becomes possible to process the prediction in the right and left directions in parallel, as a result of which the required processing time can be significantly reduced.
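As an illustrative sketch of this bidirectional prediction, the Python fragment below predicts each pixel from its neighbour on the side of the fixed middle reference pixel; the use of the immediately preceding pixel as the predictor F is an assumption made here for concreteness, since the description leaves the exact predictor open:

def predict_from_middle(pixels):
    # The initial reference pixel is taken in the middle of the block (e.g. position 16).
    ref = len(pixels) // 2
    # Right direction: each pixel is predicted from the pixel to its left.
    right_residuals = [abs(pixels[i] - pixels[i - 1]) for i in range(ref + 1, len(pixels))]
    # Left direction: each pixel is predicted from the pixel to its right.
    left_residuals = [abs(pixels[i] - pixels[i + 1]) for i in range(ref - 1, -1, -1)]
    # The two directions share only the fixed reference pixel, so they can run in parallel.
    return right_residuals, left_residuals

right, left = predict_from_middle(list(range(32)))
print(right[:3], left[:3])   # [1, 1, 1] [1, 1, 1]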

The compression block is divided into sub-blocks. The purpose of the division into sub-blocks is to provide an independent QP for each sub-block. Because a smaller QP value can be used to minimize the noise associated with quantization, the overall quality can be improved by dividing into many smaller sub-blocks. However, if the number of sub-blocks is large, the amount of overhead data for signaling the QP values also increases.

Division into sub-blocks for quantization and decision on the sub-block configuration

For given (N, M) pixels there are many sub-block configurations that can be used. The "sub-block size decision" determines the best sub-block configuration in response to the received parameters.

One example bases the division into sub-blocks on information obtained after the color compression. In one embodiment of the present invention, the chrominance data compression is carried out first, and information from that process is made available for use during the luminance compression. For example, the total number of bits used for the color data compression and the texture complexity of the color data can be regarded as Information_from_chroma.

Fig. 12 illustrates an embodiment 230 of a scheme for deciding the sub-block size in a sub-block size decision module 232, in response to receiving Information_from_chroma, R and QP_precision. The output of module 232 gives the best sub-block size, for example 2×2, 4×4, 8×8, etc.

Fig. 13 illustrates one possible embodiment 250 of the "subblock_size_decision" module. The decision module takes 252 information_from_chroma, R and QP_precision for each possible sub-block configuration. Cost calculations are made for each configuration (254, 260, 266) on the basis of the specified information. The cost can be expressed as the estimated number of generated output bits. A multi-stage decision scheme (256, 262) is presented, in which, if the number of output bits is more than is available, that particular configuration is discarded and the next possible configuration is checked. Alternative configurations are presented here, by way of example, as a 2×2 sub-block 258, a 4×4 sub-block 264, and so on, up to the case 266 without division (a sub-block of N pixels). It can be seen that when none of the possible configurations is acceptable, the N pixels of the original compression block are used 266 as a single sub-block without division 268.
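A simplified Python sketch of this multi-stage decision is given below; the bit-cost estimate (QP signalling bits plus enough bits for the largest residual of each sub-block) and the rule of taking the smallest configuration that fits the budget are illustrative assumptions standing in for the cost calculation of Fig. 13:

def estimate_bits(residuals, subblock_size, qp_bits=3):
    # Assumed cost model: per sub-block, QP signalling plus bits for its largest residual.
    cost = 0
    for i in range(0, len(residuals), subblock_size):
        sub = residuals[i:i + subblock_size]
        cost += qp_bits + len(sub) * max(max(sub), 1).bit_length()
    return cost

def decide_subblock_size(residuals, candidate_sizes, available_bits):
    for size in sorted(candidate_sizes):                 # e.g. 2, then 4, then 8 ...
        if estimate_bits(residuals, size) <= available_bits:
            return size
    return len(residuals)                                # fall back: the whole block as one sub-block

residuals = [3, 1, 40, 2] * 8                            # 32 residual values
print(decide_subblock_size(residuals, [2, 4, 8], available_bits=200))   # 2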

It should be understood that the method according to the invention is arranged to support two quantization processes, linear and nonlinear quantization.

1. Estimating the quantization parameter in nonlinear quantization

Given a set of possible QP values, the system could check all of these QP values to find the best match. However, the computational complexity of an exhaustive QP search reduces efficiency and has a high implementation cost.

Fig. 14 illustrates an embodiment 270 of a reduced-complexity method for estimating the QP value without searching all possible QP values. The residual data 272 are shown being taken by the QP estimation block 274. The QP value, once estimated, and the residual data are passed to the nonlinear quantization process 276.

Fig. 15 illustrates an embodiment 290 of the QP estimation, shown in detail. The QP estimation block in accordance with the invention uses the original pixel values together with the residual data 292 of the sub-blocks, which are used to find the maximum residual value 294 in the sub-block. The maximum value is then used to determine the decision level 296 for quantization. In other words, from the maximum residual value the maximum decision level can be found, from which the best QP value can be determined 298. Table 1 shows one example of the mapping between the maximum decision level and the QP values.

For making the decision regarding the quantization depth in nonlinear quantization, the maximum residual value within the 4x1 block, obtained using the original data, is used (QP estimation). Table 1 shows the mapping between the estimated QP values and the max_residual data within the sub-block.
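Rendered as executable Python, the QP estimate reduces to a single table lookup keyed on the largest residual in the sub-block; the ranges below follow the example mapping of Table 1 (reproduced near the end of the description) and are an example, not fixed values of the invention:

# Upper bound of the maximum-residual range for each decision level, and the QP it yields.
QP_TABLE = [(4, 0), (8, 1), (16, 2), (32, 3), (48, 4), (64, 5), (128, 6), (256, 7)]

def estimate_qp(subblock_residuals):
    # The largest residual selects the decision level, avoiding an exhaustive QP search.
    max_residual = max(subblock_residuals)
    for upper_bound, qp in QP_TABLE:
        if max_residual < upper_bound:
            return qp
    return QP_TABLE[-1][1]                     # clamp to the largest defined level

print(estimate_qp([3, 20, 7, 1]))              # maximum residual 20 falls in 16..31, giving QP 3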

2. The decision on linear/non-linear quantization

In this embodiment of the present invention, each sub-block can be quantized in response to a selection of linear or nonlinear quantization, in accordance with the method according to the invention. For example, in response to large residual values in a sub-block, the QP value also tends to increase; this introduces significant quantization noise, resulting in the formation of undesirable visual artifacts.

Fig. 16 illustrates an embodiment 310 consisting in the choice of nonlinear or linear quantization in response to the video content, in order to prevent the growth of visual distortions such as noise. The figure shows the estimated parameters used in the selection, presented as the residual values 312 of the sub-blocks and the original pixel data 314. However, any other parameters available at this stage can also be used when making the quantization decision 316 to obtain the final decision 318 on the quantization method.

Fig. 17 illustrates an embodiment 330 of the "nonlinear/linear quantization decision" module. Data such as the residual values 332 of the sub-blocks and the original pixel values 334 of the sub-block are taken into an edge detection module 336. It should be understood that the edge detection 336 may be implemented in any number of alternative ways, using the original pixels, the residual values or similar values. For example, if the maximum of the residual values in the sub-block is greater than a threshold value, this can be regarded as the presence of a sharp edge. If a sharp edge is not detected at step 338, then the decision is made to use nonlinear quantization 346. Otherwise, a cost estimation 340 for using linear quantization is performed, which is then compared at step 342 with the available number of bits to arrive at the decision to use linear quantization 344. If the cost is larger than the available number of bits, nonlinear quantization 346 is chosen.
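The decision of Fig. 17 can be summarized in the following Python sketch; the max-residual edge test is the example given in the text, while the threshold value and the cost estimate for linear quantization (full-precision residual bits plus a small header) are assumptions used only to make the sketch runnable:

EDGE_THRESHOLD = 32                                    # placeholder value

def choose_quantization(subblock_residuals, available_bits):
    # Edge detection example from the text: a large maximum residual suggests a sharp edge.
    sharp_edge = max(subblock_residuals) > EDGE_THRESHOLD
    if not sharp_edge:
        return "nonlinear"                             # no edge: nonlinear quantization
    # With a sharp edge, estimate the cost of linear quantization and compare it to the budget.
    linear_cost = sum(max(r, 1).bit_length() for r in subblock_residuals) + 4
    return "linear" if linear_cost <= available_bits else "nonlinear"

print(choose_quantization([2, 3, 120, 1], available_bits=24))   # linear
print(choose_quantization([2, 3, 5, 1], available_bits=24))     # nonlinear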

Although the above description contains many details, these should not be regarded as limiting the scope of the invention, but merely as illustrating some of the presently preferred embodiments of this invention. Therefore, it should be understood that the scope of the present invention fully encompasses other embodiments that may become obvious to a person skilled in the art, and the scope of the present invention is accordingly to be limited by nothing other than the appended claims, in which a reference to an element in the singular is not intended to mean "one and only one" unless explicitly so stated, but rather "one or more". All structural and functional equivalents of the above-described preferred embodiments that are known to those skilled in the art are expressly incorporated herein by reference and are intended to be encompassed by the present invention. Moreover, it is not necessary for a device or method to address each and every problem sought to be solved by the present invention in order to be encompassed by the present invention. Furthermore, no element, component or method step in the present disclosure is intended to be dedicated to the public, regardless of whether the element, component or method step is explicitly recited in the claims. No element of the following claims is to be construed under the provisions of 35 U.S.C. 112, sixth paragraph, unless the element is expressly recited using the phrase "means for".

Table 1. Example mapping between the maximum decision level and the QP values

Decision level (DL):           D0     D1      D2       D3        D4        D5        D6         D7
Maximum residual range (MRR):  r<4    4≤r<8   8≤r<16   16≤r<32   32≤r<48   48≤r<64   64≤r<128   128≤r<256
Estimated QP:                  0      1       2        3         4         5         6          7

DL - decision level; MRR - maximum range of residual values; Estimated QP - estimated quantization parameter.

1. A device for compression and decompression of YUV video data, comprising: a video memory configured to exchange pixel video data with one or more modules; a video compression module, connected via a signal bus to said video memory and configured to compress blocks of the input YUV luminance and chrominance video data, using a quantizer, to obtain compressed video data having a reduced number of bits per pixel, wherein each block is defined as N pixels of luminance data and M pixels for each of the color data, the total number of pixels in each compression block is N+2M, the compression is performed in accordance with an overall compression ratio R, which controls the compression of the block, and the compression of the block is performed without the use of pixel data obtained outside each block, and to store said compressed video data in said video memory; and a video decompression module configured to receive compressed blocks of video data in any order and to decompress the compressed video data stored in said memory, obtaining recovered video data that have the same format as, and closely approximate, the original video data that were compressed by said video compression module, the video decompression module being configured to output said recovered video data; wherein pixel prediction is performed during compression, said pixel prediction starting from an initial reference pixel selected in the middle of a block, from which right and left prediction directions are determined, which are processed in parallel, the reference pixel being the same for the right and left prediction directions; and wherein different compression levels are chosen for the luminance data and the color data while maintaining the overall compression ratio R, such that the compression ratio for the color data is higher than the compression ratio for the luminance data.

2. The device according to claim 1, wherein said overall compression ratio R can be expressed as a ratio or as the number of bits contained in the resulting block.

3. The device according to claim 1, wherein, provided the color data do not have a high level of complexity, said compression ratios are chosen so as to minimize the number of bits used for the color data and to maximize the number of bits used for the luminance data.

4. The device according to claim 1, further comprising estimation of the texture complexity of said compression block before compression.

5. The device according to claim 4, wherein the number of bits allocated to the luminance data and the color data in the compressed block is determined in response to the texture complexity estimation.

6. The device according to claim 1, further comprising padding of the compressed video data with filler bits to maintain a fixed size of the compressed blocks.

7. The device according to claim 1, wherein, during compression of the luminance data of a given block, information from the compression of the color data of that block is used.

8. The device according to claim 1, wherein, in said pixel prediction, the predicted value of the current pixel is obtained on the basis of the previous pixel values.

9. The device according to claim 1, further comprising division into sub-blocks in accordance with a desired configuration.

10. The device according to claim 8, further comprising calculation of a cost value for at least some of the possible sub-block configurations, and discarding of sub-block configurations whose cost exceeds a threshold value or the available number of bits.

11. The device according to claim 1, wherein said input video data are received from an image sensor.

12. The device according to claim 1, wherein said device is integrated in a camcorder or a camera.

13. The device according to claim 1, wherein said input video data have a format containing YUV luminance and chrominance information in the form of Y, Cr and Cb.

14. The device according to claim 1, wherein said device is configured to compress and decompress the video data before the video data are encoded, before compressed video is transmitted over a network prior to decompression, or before the video is stored in a multimedia device prior to decompression.

15. The device according to claim 1, wherein said video data compression is performed using nonlinear quantisation, with either high or low precision of the quantisation step used in said nonlinear quantisation.
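A hedged sketch of claim 15: a simple non-linear (companding-style) quantiser for residuals with a selectable high- or low-precision quantisation step. The square-root companding curve is an assumption of this sketch, not the patented mapping.

import math

def nonlinear_quantise(residual, step):
    s = 1 if residual >= 0 else -1
    return s * round(math.sqrt(abs(residual)) / step)

def nonlinear_dequantise(index, step):
    s = 1 if index >= 0 else -1
    return s * round((abs(index) * step) ** 2)

high_precision_step = 0.25      # finer steps: more levels, more bits per residual
low_precision_step = 1.0        # coarser steps: fewer levels, fewer bits per residual

q = nonlinear_quantise(-37, low_precision_step)      # -> -6
print(nonlinear_dequantise(q, low_precision_step))   # -> -36, close to the original -37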

16. A video encoder device for encoding YUV video data, comprising: a video memory having a signal bus and configured to exchange pixel video data with one or more video data modules; a video compression module connected via said signal bus to said video memory and configured to compress input YUV luminance and chrominance video data using a quantiser, without using pixel data obtained from pixels located outside each block, into compressed video data having a reduced number of bits per pixel, wherein each block is defined as N pixels of luminance data and M pixels for each of the chrominance components, the total number of pixels in each compressed block being N+2M, compression is performed in accordance with an overall compression ratio R that controls the compression module, and said compressed video data are stored in said video memory; a video decompression module configured to retrieve and decompress the compressed video data stored in said video memory, obtaining recovered video data having the same format and structure as the original image data that were received and compressed by said video compression module; and an encoding module configured to select blocks of video data non-sequentially for retrieval by said video decompression module and to encode the recovered video data output by said video decompression module; wherein pixel prediction is performed during compression, said pixel prediction starting from an initial reference pixel selected in the middle of a block, right and left prediction directions are determined and processed in parallel, the reference pixel being the same for both the right and the left prediction directions; and different compression levels are chosen for the luminance data and the chrominance data while maintaining the overall compression ratio R, such that the compression ratio for the chrominance data is higher than the compression ratio for the luminance data.

17. A method of compressing and decompressing YUV video data, comprising the following steps: compressing input video data with a compression ratio R using a quantiser, obtaining blocks of compressed video data having a reduced number of bits in the luminance data and/or the chrominance data for each block of video data, wherein the compression of the input video data is performed without using data obtained from pixels located outside the block being compressed, each block is defined as N pixels of luminance data and M pixels for each of the chrominance components, the total number of pixels in each compressed block is N+2M, and compression is performed in accordance with the overall compression ratio R that controls the compression; storing said compressed video data in a video memory; and decompressing said compressed video data for any blocks of video data retrieved in any desired order from said video memory, to generate recovered output video data; wherein pixel prediction is performed during compression, said pixel prediction starting from an initial reference pixel selected in the middle of a block, right and left prediction directions are determined and processed in parallel, the reference pixel being the same for both the right and the left prediction directions; and different compression levels are chosen for the luminance data and the chrominance data while maintaining the overall compression ratio R, such that the compression ratio for the chrominance data is higher than the compression ratio for the luminance data.

18. The method according to claim 17, further comprising a step of encoding said video data in response to non-sequential selection of blocks of video data from said video memory and obtaining said recovered video data.

19. A method of compressing and decompressing YUV video data, comprising the following steps: compressing input video data with a compression ratio R using a quantiser to obtain blocks of compressed video data having a reduced number of bits in the luminance and/or chrominance data for each block of video data, wherein each block is defined as N pixels of luminance data and M pixels for each of the chrominance components, the total number of pixels in each compressed block is N+2M, compression is performed in accordance with the overall compression ratio R that controls the compression, and said compression is performed for the video data without using data obtained from pixels located outside the block being compressed; choosing either linear or nonlinear quantisation for each sub-block within a given block to which said compression is applied; storing said compressed video data in a video memory; and decompressing said compressed video data for any blocks of video data retrieved in any desired order from said video memory, to generate recovered output video data; wherein pixel prediction is performed during compression, said pixel prediction starting from an initial reference pixel selected in the middle of a block, right and left prediction directions are determined and processed in parallel, the reference pixel being the same for both the right and the left prediction directions; and different compression levels are chosen for the luminance data and the chrominance data while maintaining the overall compression ratio R, such that the compression ratio for the chrominance data is higher than the compression ratio for the luminance data.
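A minimal sketch of the per-sub-block choice between linear and nonlinear quantisation named in claims 19-20, assuming a dynamic-range threshold as the selection rule (the actual selection criterion is not specified here): low-activity sub-blocks keep a linear quantiser, while sub-blocks with a large residual range switch to the nonlinear quantiser sketched after claim 15.

def linear_quantise(residual, step):
    return round(residual / step)

def choose_quantiser(residuals, range_threshold=32):
    # assumed rule: wide residual range -> nonlinear quantisation, otherwise linear
    dynamic_range = (max(residuals) - min(residuals)) if residuals else 0
    return "nonlinear" if dynamic_range > range_threshold else "linear"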

20. A method of compressing and decompressing YUV video data, comprising the following steps: compressing input video data with a compression ratio R using a quantiser, obtaining blocks of compressed video data having a reduced number of bits in the luminance and/or chrominance data for each block of video data, wherein each block is defined as N pixels of luminance data and M pixels for each of the chrominance components, the total number of pixels in each compressed block is N+2M, compression is performed in accordance with the overall compression ratio R that controls the compression, and said compression is performed for blocks of video data without using pixel data obtained outside the block being compressed; assessing the texture complexity of the luminance data and the texture complexity of the chrominance data; choosing individual compression levels for the luminance data and the chrominance data while maintaining the overall compression ratio R; choosing either linear or nonlinear quantisation for each sub-block in a given block to which said compression is applied, in response to the characteristics of that block; wherein, during said compression, the luminance data compression process uses information from the chrominance data compression process for the same block; storing said compressed video data in a video memory; retrieving blocks of said video data from said video memory in any desired order and at any time following said storing of the compressed video data; and decompressing said compressed video data for the retrieved blocks, to generate recovered output video data; wherein pixel prediction is performed during compression, said pixel prediction starting from an initial reference pixel selected in the middle of a block, right and left prediction directions are determined and processed in parallel, the reference pixel being the same for both the right and the left prediction directions; and different compression levels are chosen for the luminance data and the chrominance data while maintaining the overall compression ratio R, such that the compression ratio for the chrominance data is higher than the compression ratio for the luminance data.
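One possible reading of the step in claim 20 where the luminance compression uses information from the chrominance compression of the same block is sketched below: the bits actually spent on chrominance feed back into the luminance bit budget. The compress_chroma and compress_luma helpers are hypothetical placeholders, and this feedback mechanism is an assumption of the sketch, not the claimed method.

def compress_block(luma, chroma, budget_bits, compress_chroma, compress_luma):
    # compress_chroma / compress_luma are hypothetical callables:
    # they return (payload_bits, bits_used) and (payload_bits, bits_used) respectively.
    chroma_payload, chroma_bits_used = compress_chroma(chroma)
    # information from the chrominance stage (its bit usage) steers the luminance stage
    luma_payload, _ = compress_luma(luma, budget_bits - chroma_bits_used)
    return chroma_payload + luma_payload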



 
