Image encoding apparatus, image decoding apparatus, image encoding method and image decoding method

FIELD: physics.

SUBSTANCE: the loop filter 6 includes a region classification unit 12 for extracting an evaluation value of each of the regions which make up a local decoded image, in order to classify each region into the class to which it belongs according to the evaluation value, and a filter forming and processing unit 13 which, for each class to which one or more of the regions making up the local decoded image belong, forms a Wiener filter minimising the error arising between the input image and the local decoded image in each of the one or more regions belonging to that class, in order to compensate for the distortion in those one or more regions by using the Wiener filter.

EFFECT: high image quality.

3 cl, 18 dwg

 

The technical field to which the invention relates

The present invention relates to an image encoding apparatus and an image encoding method for compressing, encoding and transmitting an image, and to an image decoding apparatus and an image decoding method for decoding the encoded data transmitted by the image encoding apparatus so as to restore the image.

Background art

Traditionally, in accordance with video coding methods based on international standards, for example MPEG and ITU-T H.26x, after an input video frame is divided into macroblocks, each of which is a pixel block of 16×16, and prediction with motion compensation is performed for each macroblock, data compression is carried out by applying orthogonal transformation and quantization to the prediction error signal in units of blocks.

However, a problem is that as the compression ratio becomes high, the compression efficiency is reduced as a result of deterioration in the quality of the reference image for prediction, which is used when performing prediction with motion compensation.

To resolve this problem, in accordance with an encoding method such as MPEG-4 AVC/H.264 (see non-patent reference 1), an attempt is made to remove the blockiness distortion, which occurs in the reference image for prediction as a result of quantization of the orthogonal transformation coefficients, by performing a filtering process by means of a deblocking filter.

Fig. 17 is a block diagram showing the image encoding apparatus disclosed in non-patent reference 1.

In this image encoding apparatus, when receiving an image signal which is the target to be encoded, a block division module 101 divides the image signal into macroblocks, and outputs the image signal in units of macroblocks to a prediction module 102 as a divided image signal.

When receiving the divided image signal from the block division module 101, the prediction module 102 calculates a prediction signal by predicting the image signal of each color component in each macroblock, either within a frame or between frames.

In particular, when performing prediction with motion compensation between frames, the prediction module searches for a motion vector in units of either the macroblock itself or each of the sub-blocks into which each macroblock is further divided.

The prediction module then performs prediction with motion compensation on the reference image signal stored in the memory 107 by using the motion vector to generate a motion-compensated prediction image, and determines the difference between the prediction signal, which shows the motion-compensated prediction image, and the divided image signal, so as to calculate a prediction error signal.

The prediction module 102 also outputs the parameters for prediction signal generation, which the prediction module determines when generating the prediction signal, to a variable-length encoding module 108.

For example, the parameters for prediction signal generation include pieces of information such as an intra prediction mode, which shows how spatial prediction is performed within each frame, and a motion vector, which shows the magnitude of motion between frames.

When receiving the prediction error signal from the prediction module 102, a compression module 103 quantizes the prediction error signal to obtain compressed data after completing a DCT (discrete cosine transform) process on the prediction error signal so as to remove the signal correlation from the prediction error signal.

When receiving the compressed data from the compression module 103, a local decoding module 104 inverse-quantizes the compressed data and then performs an inverse DCT process on the compressed data inverse-quantized in this way, so as to calculate a prediction error signal corresponding to the prediction error signal output from the prediction module 102.

When receiving the prediction error signal from the local decoding module 104, an adder 105 sums the prediction error signal and the prediction signal output from the prediction module 102 to generate a local decoded image.

A loop filter 106 removes the blockiness distortion superimposed on the local decoded image signal, which shows the local decoded image generated by the adder 105, and stores the local decoded image signal from which the distortion has been removed in the memory 107 as a reference image signal.

When receiving the compressed data from the compression module 103, the variable-length encoding module 108 entropy-encodes the compressed data to output a bit stream which is the encoded result.

When outputting the bit stream, the variable-length encoding module 108 multiplexes the parameters for prediction signal generation, output from the prediction module 102, into the bit stream and outputs the bit stream.

In accordance with the method disclosed in non-patent reference 1, the loop filter 106 adjusts the amount of smoothing according to information including the quantization resolution, the encoding mode, the degree of variation of the motion vector, etc., for pixels near DCT block boundaries, thereby reducing the distortion occurring at the block boundaries.

As a result, the quality of the reference image signal can be enhanced, and the efficiency of prediction with motion compensation in a subsequent process of encoding can be improved.

On the other hand, a problem with the method disclosed in non-patent reference 1 is that higher-frequency components of the signal are lost as the compression ratio at which the signal is encoded increases, and therefore the entire screen is smoothed excessively and the encoded video becomes blurred.

To resolve this problem, non-patent reference 2 discloses a technology of applying a Wiener filter as the loop filter 106 and forming this loop filter 106 in such a way that the distortion, in terms of the squared error between the image signal which is to be encoded, i.e. the original image signal, and the reference image signal corresponding to this image signal, is minimized.

Fig. 18 is an explanatory drawing showing the principle of improving the quality of the reference image signal by using the Wiener filter in the image encoding apparatus disclosed in non-patent reference 2.

In Fig. 18, the signal s corresponds to the image signal which is to be encoded, which is input to the block division module 101 shown in Fig. 17, and the signal s' corresponds either to the local decoded image signal output from the adder 105 shown in Fig. 17, or to the local decoded image signal in which the distortion occurring at block boundaries has been reduced by the loop filter 106, as disclosed in non-patent reference 1.

More specifically, the signal s' is a signal in which the distortion (noise) e occurring at the time of encoding is superimposed on the signal s.

The Wiener filter is defined as the filter which is applied to the signal s' in such a way as to minimize the distortion (noise) e occurring at the time of encoding, using the squared error as the distortion criterion. Typically, the filter coefficients w can be determined from the autocorrelation matrix Rs's' of the signal s' and the cross-correlation matrix Rss' between the signals s and s' by using the following equation (1). The sizes of the matrices Rs's' and Rss' correspond to the number of taps of the filter.

w = Rs's'^(-1) · Rss'     (1)

By applying the Wiener filter having the filter coefficients w, a signal s "hat", the quality of which is improved, is obtained as the signal corresponding to the reference image signal (the "^" attached to a letter of the alphabet is referred to as "hat", since this application is an electronic patent application in Japan). The image encoding apparatus disclosed in non-patent reference 2 determines the filter coefficients w for each of two or more different numbers of taps for each full frame of the image which is the target to be encoded. Once it has selected the filter having the number of taps that optimizes the amount of code for the filter coefficients w and the distortion (e' = s "hat" - s) calculated after the filtering process, using the rate-distortion criterion, it additionally divides the signal s' into many blocks having a certain size, chooses whether or not to apply the Wiener filter having the optimum number of taps determined above to each block, and transmits information on the activation/deactivation of the filter for each block.
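As an illustration only, equation (1) can be sketched in Python with NumPy for a 1-D signal; the function names, the 1-D simplification, and the empirical estimation of the correlation matrices from sample windows are assumptions of this sketch, not details taken from non-patent reference 2:

```python
import numpy as np

def wiener_coefficients(s, s_prime, taps=5):
    """Estimate Wiener filter taps w = Rs's'^(-1) Rss' (equation (1)).

    s       -- original signal samples (1-D array)
    s_prime -- decoded (distorted) signal samples, same length
    taps    -- number of filter taps (sets the matrix sizes)
    """
    s = np.asarray(s, dtype=float)
    sp = np.asarray(s_prime, dtype=float)
    n = len(sp)
    pad = taps // 2
    spp = np.pad(sp, pad, mode="edge")
    # One row per sample: the window of s' around that sample.
    X = np.stack([spp[i:i + n] for i in range(taps)], axis=1)
    R_spsp = X.T @ X / n   # autocorrelation matrix of s'
    R_ssp = X.T @ s / n    # cross-correlation between s and s'
    return np.linalg.solve(R_spsp, R_ssp)

def apply_filter(s_prime, w):
    """Filter s' with the taps w to obtain the restored signal s 'hat'."""
    sp = np.asarray(s_prime, dtype=float)
    n = len(sp)
    pad = len(w) // 2
    spp = np.pad(sp, pad, mode="edge")
    X = np.stack([spp[i:i + n] for i in range(len(w))], axis=1)
    return X @ w
```

Because the least-squares fit includes the identity filter among its candidates, the filtered signal never has a larger mean squared error than s' on the data used for fitting.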

As a result, the additional amount of code required to perform the Wiener filtering process can be reduced, and the quality of the prediction image can be improved.

The prior art documents

Non-patent reference

Non-patent reference 1: MPEG-4 AVC (ISO/IEC 14496-10)/ITU-T H.264

Non-patent reference 2: T. Chujoh, G. Yasuda, N. Wada, T. Watanabe, T. Yamakage, "Block-based Adaptive Loop Filter", VCEG-AI18, ITU-T SG16/Q.6 Meeting, July 2008

Disclosure of the invention

Since the conventional image encoding apparatus is constructed as described above, one Wiener filter is calculated for the frame which is the target to be encoded, and information indicating whether or not to apply the Wiener filtering process is applied to each of the blocks which make up the frame. However, a problem is that, since the identical Wiener filter is applied to every block of each frame, there are cases in which this Wiener filter is not necessarily the optimal filter for each block, and the image quality cannot be improved sufficiently.

The present invention is made to solve the above problem, and it is therefore an object of the present invention to provide an image encoding apparatus, an image decoding apparatus, an image encoding method and an image decoding method capable of improving the accuracy of image quality improvement.

In accordance with the present invention, there is provided an image encoding apparatus in which the filtering module includes a region classification module for extracting an evaluation value of each of the regions which make up a local decoded image obtained by local decoding, so as to classify each of the regions into the class to which the region belongs according to the evaluation value, and a filter design and processing module for forming, for each class to which one or more regions, among the regions which make up the local decoded image, belong, a filter which minimizes the error occurring between the input image and the local decoded image in each of the one or more regions belonging to the class, so as to compensate for the distortion superimposed on the one or more regions by using the filter.

Because the filtering module in accordance with the present invention includes a region classification module for extracting an evaluation value of each of the regions which make up the local decoded image obtained by local decoding, so as to classify each of the regions into the class to which the region belongs according to the evaluation value, and a filter design and processing module for forming, for each class to which one or more regions, among the regions which make up the local decoded image, belong, a filter which minimizes the error occurring between the input image and the local decoded image in each of the one or more regions belonging to the class, so as to compensate for the distortion superimposed on the one or more regions by using the filter, there is provided the advantage of being able to improve the accuracy of image quality improvement.

Brief description of the drawings

Fig. 1 is a block diagram showing an image encoding apparatus in accordance with Embodiment 1 of the present invention;

Fig. 2 is a block diagram showing the loop filter 6 of the image encoding apparatus in accordance with Embodiment 1 of the present invention;

Fig. 3 is a flow chart showing the process performed by the loop filter 6 of the image encoding apparatus in accordance with Embodiment 1 of the present invention;

Fig. 4 is an explanatory drawing showing an example of the classes into which four regions (region A, region B, region C and region D) which constitute a local decoded image are classified;

Fig. 5 is an explanatory drawing showing 16 blocks (K) which constitute a local decoded image;

Fig. 6 is an explanatory drawing showing an example of a bit stream generated by the variable-length encoding node 8;

Fig. 7 is a block diagram showing an image decoding apparatus in accordance with Embodiment 1 of the present invention;

Fig. 8 is a block diagram showing the loop filter 25 of the image decoding apparatus in accordance with Embodiment 1 of the present invention;

Fig. 9 is a block diagram showing the loop filter 25 of the image decoding apparatus in accordance with Embodiment 1 of the present invention;

Fig. 10 is a flow chart showing the process performed by the loop filter 25 of the image decoding apparatus in accordance with Embodiment 1 of the present invention;

Fig. 11 is a flow chart showing the process performed by the loop filter 6 of the image encoding apparatus in accordance with Embodiment 2 of the present invention;

Fig. 12 is an explanatory drawing showing an example of the selection of a Wiener filter for each of the blocks (K) which constitute a local decoded image;

Fig. 13 is a flow chart showing the process performed by the loop filter 25 of the image decoding apparatus in accordance with Embodiment 2 of the present invention;

Fig. 14 is a flow chart showing the process performed by the loop filter 6 of the image encoding apparatus in accordance with Embodiment 3 of the present invention;

Fig. 15 is a flow chart showing the process performed by the loop filter 6 on the first frame;

Fig. 16 is a flow chart showing the process performed by the loop filter 6 on the second or subsequent frames;

Fig. 17 is a block diagram showing the image encoding apparatus disclosed in non-patent reference 1; and

Fig. 18 is an explanatory drawing showing the principle of improving the quality of a reference image signal by using a Wiener filter.

Embodiments of the invention

Hereafter, in order to explain this invention in greater detail, the preferred embodiments of the present invention are described with reference to the accompanying drawings.

Embodiment 1

Fig. 1 is a block diagram showing an image encoding apparatus in accordance with Embodiment 1 of the present invention. In Fig. 1, a block division module 1 performs a process of dividing an image signal, which is an input image and is the target to be encoded, into macroblocks, and outputs the image signal in units of macroblocks to a prediction module 2 as a divided image signal.

When receiving the divided image signal from the block division module 1, the prediction module 2 performs a prediction process on the divided image signal, within a frame or between frames, to form a prediction signal.

In particular, when performing prediction with motion compensation between frames, the prediction module detects a motion vector in units of either the macroblock itself or each of the sub-blocks into which the macroblock is further divided, from the divided image signal and a reference image signal showing a reference image stored in a storage device 7, so as to form a prediction signal showing a prediction image from the motion vector and the reference image signal.

After forming the prediction signal, the prediction module performs a process of calculating a prediction error signal which is the difference between the divided image signal and the prediction signal.

In addition, when forming the prediction signal, the prediction module 2 determines the parameters for prediction signal generation, and outputs the parameters for prediction signal generation to a variable-length encoding node 8.

For example, the parameters for prediction signal generation include pieces of information such as an intra prediction mode, which shows how spatial prediction is performed within a frame, and a motion vector, which shows the magnitude of motion between frames.

A prediction processing module consists of the block division module 1 and the prediction module 2.

A compression module 3 performs a process of applying a DCT (discrete cosine transform) to the prediction error signal calculated by the prediction module 2 to calculate DCT coefficients, quantizing the DCT coefficients, and outputting the compressed data, which are the DCT coefficients quantized in this way, to a local decoding node 4 and the variable-length encoding node 8. The compression module 3 is a differential image compression module.

The local decoding node 4 performs a process of inverse-quantizing the compressed data output from the compression module 3, and performing an inverse DCT process on the compressed data inverse-quantized in this way, so as to calculate a prediction error signal corresponding to the prediction error signal output from the prediction module 2.

An adder 5 performs a process of summing the prediction error signal calculated by the local decoding node 4 and the prediction signal generated by the prediction module 2 to form a local decoded image signal showing a local decoded image.

A local decoding module consists of the local decoding node 4 and the adder 5.

A loop filter 6 performs a filtering process for compensating for the distortion superimposed on the local decoded image signal generated by the adder 5, outputs the local decoded image signal filtered in this way to the storage device 7 as the reference image signal, and outputs filter information, which the loop filter uses when performing the filtering process, to the variable-length encoding node 8. The loop filter 6 is a filtering module.

The storage device 7 is a recording medium for storing the reference image signal output from the loop filter 6.

The variable-length encoding node 8 performs a process of entropy-encoding the compressed data output from the compression module 3, the filter information output from the loop filter 6, and the parameters for prediction signal generation output from the prediction module 2, to form a bit stream showing these encoded results. The variable-length encoding node 8 is a variable-length encoding module.

Fig. 2 is a block diagram showing the loop filter 6 of the image encoding apparatus in accordance with Embodiment 1 of the present invention.

In Fig. 2, a frame memory 11 is a storage medium for storing one frame of the local decoded image signal generated by the adder 5.

A region classification module 12 performs a process of extracting an evaluation value of each of the regions which make up the local decoded image shown by the one frame of the local decoded image signal stored in the frame memory 11, so as to classify each of the regions into the class to which the region belongs according to the evaluation value.

A filter design and processing module 13 performs a process of forming, for each class to which one or more regions, among the regions which make up the local decoded image, belong, a Wiener filter which minimizes the error occurring between the image signal which is the target to be encoded and the local decoded image signal in each of the one or more regions belonging to the class, and using the Wiener filter to compensate for the distortion superimposed on the regions.

The filter design and processing module 13 also performs a process of outputting the filter information of the Wiener filter to the variable-length encoding node 8.

Next, the operation of the image encoding apparatus is explained.

When receiving the image signal which is the target to be encoded, the block division module 1 divides the image signal into macroblocks, and outputs the image signal in units of macroblocks to the prediction module 2 as a divided image signal.

When receiving the divided image signal from the block division module 1, the prediction module 2 detects the parameters for prediction signal generation, which the prediction module uses to carry out the prediction process on the divided image signal, within a frame or between frames. The prediction module then generates a prediction signal showing a prediction image by using the parameters for prediction signal generation.

In particular, the prediction module detects a motion vector, which is a parameter for prediction signal generation used to carry out the prediction process between frames, from the divided image signal and the reference image signal stored in the storage device 7.

After obtaining the motion vector, the prediction module 2 then generates the prediction signal by performing prediction with motion compensation on the reference image signal by using the motion vector.

After forming the prediction signal showing the prediction image, the prediction module 2 calculates the prediction error signal, which is the difference between the prediction signal and the divided image signal, and outputs the prediction error signal to the compression module 3.

When forming the prediction signal, the prediction module 2 also determines the parameters for prediction signal generation, and outputs the parameters for prediction signal generation to the variable-length encoding node 8.

For example, the parameters for prediction signal generation include pieces of information such as an intra prediction mode, which shows how spatial prediction is performed within a frame, and a motion vector, which shows the magnitude of motion between frames.

When receiving the prediction error signal from the prediction module 2, the compression module 3 calculates the DCT coefficients by performing a DCT (discrete cosine transform) process on the prediction error signal, and then quantizes the DCT coefficients.

The compression module 3 then outputs the compressed data, which are the DCT coefficients quantized in this way, to the local decoding node 4 and the variable-length encoding node 8.

When receiving the compressed data from the compression module 3, the local decoding node 4 inverse-quantizes the compressed data and then performs the inverse DCT process on the compressed data inverse-quantized in this way, so as to calculate a prediction error signal corresponding to the prediction error signal output from the prediction module 2.
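As a minimal sketch of the round trip through the compression module 3 and the local decoding node 4 for a single block, assuming an orthonormal 2-D DCT and a single uniform quantization step (the helper names and the scalar quantizer are hypothetical simplifications, not the quantization scheme of the standard):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (n x n)."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    C = np.cos(np.pi * (2 * x + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)  # DC row scaling for orthonormality
    return C

def compress_block(err_block, qstep):
    """2-D DCT of one prediction-error block, then uniform quantization."""
    C = dct_matrix(err_block.shape[0])
    coeffs = C @ err_block @ C.T
    return np.round(coeffs / qstep).astype(int)

def local_decode_block(qcoeffs, qstep):
    """Inverse quantization followed by the inverse 2-D DCT."""
    C = dct_matrix(qcoeffs.shape[0])
    return C.T @ (qcoeffs * qstep) @ C
```

With an orthonormal transform, the reconstruction error is bounded by the quantization error of the coefficients, so a small quantization step gives a near-lossless round trip.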

After the local decoding node 4 calculates the prediction error signal, the adder 5 adds the prediction error signal and the prediction signal generated by the prediction module 2 to form the local decoded image signal showing the local decoded image.

After the adder 5 generates the local decoded image signal, the loop filter 6 performs the filtering process for compensating for the distortion superimposed on the local decoded image signal, and stores the local decoded image signal filtered in this way in the storage device 7 as the reference image signal.

The loop filter 6 also outputs the filter information, which the loop filter uses when performing the filtering process, to the variable-length encoding node 8.

The variable-length encoding node 8 performs a process of entropy-encoding the compressed data output from the compression module 3, the filter information output from the loop filter 6, and the parameters for prediction signal generation output from the prediction module 2, to form a bit stream showing these encoded results.

At this time, although the variable-length encoding module entropy-encodes the parameters for prediction signal generation, the image encoding apparatus can alternatively multiplex the parameters for prediction signal generation into the bit stream which the image encoding apparatus forms, and output the bit stream without entropy-encoding the parameters for prediction signal generation.

Hereafter, the process performed by the loop filter 6 is explained concretely.

Fig. 3 is a flow chart showing the process performed by the loop filter 6 of the image encoding apparatus in accordance with Embodiment 1 of the present invention.

First, the frame memory 11 of the loop filter 6 stores one frame of the local decoded image signal generated by the adder 5.

The region classification module 12 extracts an evaluation value of each of the regions which make up the local decoded image shown by the one frame of the local decoded image signal stored in the frame memory 11, and classifies each region into the class to which the region belongs according to the evaluation value (step ST1).

For example, for each region (each block having an arbitrary size of M×M pixels), the region classification module extracts the variance of the local decoded image signal, the DCT coefficients, the motion vector, the quantization parameter of the DCT coefficients, etc. in the region as an evaluation value, and performs the class classification on the basis of these pieces of information. In this case, M is an integer equal to or greater than 1.

For example, when the variance of the local decoded image signal in a region is used as the evaluation value, in the case where each of the regions is classified into one of classes 1 to N (N is an integer equal to or greater than 1), (N-1) threshold values are prepared in advance, the variance of the local decoded image signal is compared with each of the (N-1) threshold values (th1<th2<...<thN-1), and the class to which the region belongs is identified.

For example, when the variance of the local decoded image signal is equal to or greater than thN-3 and less than thN-2, the region is classified into class N-2. Similarly, when the variance of the local decoded image signal is equal to or greater than th2 and less than th3, the region is classified into class 3.
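The threshold comparison described above can be sketched as follows; the helper name `classify_region` and the use of the sample variance over the M×M block are assumptions of this sketch. A region whose variance is equal to or greater than th(k-1) and less than th(k) receives class k:

```python
import bisect
import numpy as np

def classify_region(region, thresholds):
    """Classify a region into class 1..N by comparing its variance
    with (N-1) ascending thresholds th1 < th2 < ... < thN-1.

    region     -- M x M pixel block of the local decoded image
    thresholds -- sorted list of (N-1) threshold values
    """
    variance = float(np.var(region))
    # bisect_right counts the thresholds the variance equals or exceeds,
    # so a variance in [th(k-1), th(k)) yields class k.
    return 1 + bisect.bisect_right(thresholds, variance)
```

For example, with thresholds (1.0, 4.0, 9.0) defining N=4 classes, a flat region (variance 0) falls into class 1, and a region whose variance is exactly 4.0 falls into class 3, matching the "equal to or greater than th2 and less than th3" rule above.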

In this case, although an example in which the (N-1) threshold values are prepared in advance is shown, these threshold values can be changed dynamically for each sequence or for each frame.

For example, when using the motion vectors in a region as the evaluation value, the region classification module calculates a mean vector, which is the average of the motion vectors, or a median vector, which is the median of the motion vectors, and identifies the class to which the region belongs according to the magnitude or the direction of the vector.

In this case, the mean vector has components (x and y components), each of which is the average of the respective components of the motion vectors.

In contrast, the median vector has components (x and y components), each of which is the median of the respective components of the motion vectors.
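A sketch of the two candidate vectors, assuming motion vectors given as (x, y) pairs; the function names are hypothetical:

```python
import numpy as np

def mean_vector(mvs):
    """Component-wise average (x, y) of the region's motion vectors."""
    return np.mean(np.asarray(mvs, dtype=float), axis=0)

def median_vector(mvs):
    """Component-wise median (x, y) of the region's motion vectors."""
    return np.median(np.asarray(mvs, dtype=float), axis=0)

def magnitude_and_direction(v):
    """Magnitude and direction (radians) used to identify the class."""
    return float(np.hypot(v[0], v[1])), float(np.arctan2(v[1], v[0]))
```

The median vector is less sensitive to a single outlying motion vector than the mean vector, which is one motivation for offering both.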

When the region classification module 12 classifies each of the regions into one of classes 1 to N, the filter design and processing module 13 forms, for each class to which one or more regions, among the regions which make up the local decoded image, belong, a Wiener filter which minimizes the error occurring between the image signal which is the target to be encoded and the local decoded image signal in each of the one or more regions belonging to the class (steps ST2-ST8).

For example, if the local decoded image consists of four regions (region A, region B, region C and region D) as shown in Fig. 4, and the regions A and C are classified into class 3, the region B is classified into class 5, and the region D is classified into class 6, then the filter design and processing module forms a Wiener filter which minimizes the error occurring between the image signal which is the target to be encoded and the local decoded image signal in each of the regions A and C belonging to class 3.

The filter design and processing module further forms a Wiener filter which minimizes the error occurring between the image signal which is the target to be encoded and the local decoded image signal in the region B belonging to class 5, and also forms a Wiener filter which minimizes the error occurring between the image signal which is the target to be encoded and the local decoded image signal in the region D belonging to class 6.

For example, in the case of designing a filter with a variable number of taps, when forming the Wiener filter that minimizes the error, the filter design and processing module 13 calculates the cost shown below for each different number of taps, and then determines the number of taps and the filter coefficient values that minimize the cost.

Cost = D + λ·R     (2)

where D is the sum of squared errors between the image signal which is the target to be encoded, in the region to which the filter is applied, and the filtered local decoded image signal, λ is a constant, and R is the amount of code generated in the loop filter 6.

Although in this case the cost is defined by equation (2), this case is only an example. For example, only the sum D of squared errors can be specified as the cost.

In addition, another evaluation value, for example the sum of the absolute values of the errors, can be used instead of the sum D of squared errors.
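The comparison of equation (2) over candidate tap counts can be sketched as follows; the candidate list format (taps, D, R) and the function names are assumptions of this sketch:

```python
def rate_distortion_cost(sse, rate_bits, lam):
    """Cost = D + lambda * R (equation (2)).

    sse       -- D: sum of squared errors after filtering
    rate_bits -- R: amount of code generated in the loop filter
    lam       -- the Lagrange constant lambda
    """
    return sse + lam * rate_bits

def best_tap_count(candidates, lam):
    """Pick the (taps, D, R) candidate with the minimum cost."""
    return min(candidates, key=lambda c: rate_distortion_cost(c[1], c[2], lam))
```

Setting lam to 0 reduces the criterion to the pure-distortion variant mentioned above, in which only the sum D of squared errors is compared.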

After forming the Wiener filter for each class to which one or more regions belong, the filter design and processing module 13 determines whether or not each of the blocks which make up the local decoded image (for example, each of local regions smaller than each of the regions A to D which make up the local decoded image) is a block on which the filter design and processing module should perform the filtering process (steps ST9-ST16).

More specifically, for each of the blocks which make up the local decoded image, the filter design and processing module 13 compares the errors occurring between the image signal which is the target to be encoded and the local decoded image signal in the block, before and after the filtering process.

For example, if the local decoded image consists of 16 blocks (K) (K=1, 2, ..., 16), as shown in Fig. 5, the filter design and processing module compares the sum of squared errors occurring between the image signal which is the target to be encoded and the local decoded image signal in each block (K), before and after the filtering process.

Block 1, block 2, block 5 and block 6 shown in Fig. 5 correspond to the region A shown in Fig. 4, block 3, block 4, block 7 and block 8 shown in Fig. 5 correspond to the region B shown in Fig. 4, block 9, block 10, block 13 and block 14 shown in Fig. 5 correspond to the region C shown in Fig. 4, and block 11, block 12, block 15 and block 16 shown in Fig. 5 correspond to the region D shown in Fig. 4.

Although the filter compilation and processing module compares the sum of squared errors before and after the filtering process, the filter compilation and processing module can alternatively compare the cost (D+λ·R) shown in equation (2), or the sum of absolute error values, before and after the filtering process.

When the sum of squared errors found after the filtering process is less than the sum of squared errors found before the filtering process, the filter compilation and processing module 13 determines that the block (K) is a target block for filtering.

Conversely, when the sum of squared errors found after the filtering process equals or exceeds the sum of squared errors found before the filtering process, the filter compilation and processing module determines that the block (K) is not a target block for filtering.
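The per-block decision just described — mark block (K) as a filtering target only when filtering strictly reduces the error — might be sketched as follows; this is a simplified illustration, and the block contents are hypothetical.

```python
def block_sse(target, decoded):
    # sum of squared errors in one block
    return sum((t - d) ** 2 for t, d in zip(target, decoded))

def select_filter_targets(targets, before, after):
    # For each block K, compare the sum of squared errors between the
    # target image signal and the local decoded image signal before
    # and after the filtering process; block K is a filtering target
    # only when filtering strictly reduces the error.
    flags = []
    for tgt, pre, post in zip(targets, before, after):
        flags.append(block_sse(tgt, post) < block_sse(tgt, pre))
    return flags

targets = [[5, 5], [7, 7]]
before  = [[4, 6], [7, 6]]   # unfiltered local decoded blocks
after   = [[5, 5], [6, 8]]   # the same blocks after the Wiener filter
print(select_filter_targets(targets, before, after))  # [True, False]
```

Block 1 improves (error 2 → 0) and becomes a filtering target; block 2 worsens (error 1 → 2) and is left unfiltered.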

The filter compilation and processing module 13 then compares the cost of performing the filtering process that yields the lowest cost found in steps ST1-ST16 with the cost of not performing the filtering process on the entire frame currently being processed, in order to determine whether or not to perform the filtering process on the entire frame currently being processed (steps ST17-ST18).

When determining in step ST18 to perform the filtering process on the entire frame, the filter compilation and processing module sets the flag (frame_filter_on_off_flag) to 1 (activated), then performs the filtering process that yields the lowest cost found in steps ST1-ST16, and outputs the local decoded image, on which the filter compilation and processing module has performed the filtering process, to the storage device 7 as the reference image signal (steps ST19-ST20).

For example, when the region including the block (K) is the area B, and the class to which the area B belongs is class 5, the filter compilation and processing module performs the filtering process on the block (K) by using Wiener filter 5, and outputs the local decoded image, on which the filter compilation and processing module has performed the filtering process, to the storage device 7 as the reference image signal.

At this time, when it is determined in steps ST1-ST16 that the cost is minimized when the process of choosing whether or not to perform the filtering process for each block is performed (i.e., when the flag (block_filter_on_off_flag)=1 (activated)), the filter compilation and processing module outputs the unfiltered local decoded image signal in each block (K) for which the filter compilation and processing module determines not to perform the filtering process to the storage device 7 as the reference image signal as is, without performing the filtering process on the block (K). Conversely, when it is determined in steps ST1-ST16 that the cost is minimized when the process of choosing whether or not to perform the filtering process for each block is not performed (i.e., when the flag (block_filter_on_off_flag)=0 (deactivated)), the filter compilation and processing module performs the filtering process on each of all the local decoded image signals in the frame by using the Wiener filter of the class into which the region to which the local decoded image signal belongs is classified, and outputs the local decoded image, on which the filter compilation and processing module has performed the filtering process, to the storage device 7 as the reference image signal.

Conversely, when determining in step ST18 not to perform the filtering process on the entire frame, the filter compilation and processing module sets the flag (frame_filter_on_off_flag) to 0 (deactivated) and outputs the unfiltered local decoded image signal to the storage device 7 as the reference image signal as is (steps ST21-ST22).

In steps ST2-ST22 in the flowchart, "min_cost" is a variable for holding the minimum cost, "i" is the index of the number of filter taps tap[i] and a loop counter, and "j" is the index of the block size bl_size[j] and a loop counter.

In addition, "min_tap_idx" is the index (i) of the number of filter taps at the time the cost is minimized, "min_bl_size_idx" is the index (j) of the block size at the time the cost is minimized, and "MAX" is the initial value of the minimum cost (a sufficiently large value).

- tap[i] (i=0, ..., N1-1)

A sequence in which N1 (N1>=1) different numbers of filter taps, which are determined in advance and each of which can be selected, are stored.

- bl_size[j] (j=0, ..., N2-1)

A sequence in which N2 (N2>=1) different block sizes (bl_size[j] × bl_size[j] pixels), which are determined in advance and each of which can be selected, are stored.

- block_filter_on_off_flag

A flag indicating whether or not to perform the process of choosing whether or not to perform the filtering process for each block in the frame currently being processed.

- frame_filter_on_off_flag

A flag indicating whether or not to perform the filtering process for the frame currently being processed.

Step ST2 is a step of setting initial values, and steps ST3-ST8 form a loop for performing the process of selecting the number of filter taps.

In addition, step ST9 is a step of setting initial values, and steps ST10-ST16 form a loop for performing the process of selecting the block size and the process of determining whether or not to perform the filtering process for each block having the selected block size.

In addition, steps ST17-ST18 are the steps of determining whether or not to perform the filtering process on the frame currently being processed, steps ST19-ST20 are the steps of performing the optimum filtering process determined in steps ST1-ST16 when frame_filter_on_off_flag=1 (activated), and steps ST21-ST22 are the steps of setting frame_filter_on_off_flag to 0 (deactivated) and not performing the filtering process for the frame currently being processed.
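The search carried out by the loops of steps ST2-ST16 can be sketched as follows: initialize min_cost to MAX, loop over the candidate tap counts tap[i] and block sizes bl_size[j], and keep the indices that minimize the cost. This is a hedged illustration; the cost callback here is a stand-in, not the patent's exact evaluation procedure.

```python
MAX = float("inf")  # initial value of the minimum cost (sufficiently large)

def search_filter_config(taps, bl_sizes, eval_cost):
    # taps:     tap[i], i = 0..N1-1, candidate numbers of filter taps
    # bl_sizes: bl_size[j], j = 0..N2-1, candidate block sizes
    # eval_cost(tap, bl): hypothetical callback returning the cost of
    # filtering the frame with this configuration
    min_cost = MAX
    min_tap_idx = min_bl_size_idx = -1
    for i, tap in enumerate(taps):          # loop of steps ST3-ST8
        for j, bl in enumerate(bl_sizes):   # loop of steps ST10-ST16
            c = eval_cost(tap, bl)
            if c < min_cost:
                min_cost = c
                min_tap_idx, min_bl_size_idx = i, j
    return min_cost, min_tap_idx, min_bl_size_idx

# toy cost surface that happens to prefer 5 taps and 8x8 blocks
toy_cost = lambda tap, bl: abs(tap - 5) + abs(bl - 8)
print(search_filter_config([3, 5, 7], [4, 8, 16], toy_cost))  # (0, 1, 1)
```

The returned indices correspond to min_tap_idx and min_bl_size_idx in the flowchart of Fig. 3.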

After forming the Wiener filters and then performing the filtering process by the above method, the filter compilation and processing module 13 outputs the filter information of the Wiener filters to the variable-length encoding node 8.

The filter information includes a flag (frame_filter_on_off_flag) indicating whether or not to perform the filtering process for the frame currently being processed.

When this flag is activated (indicating that the filtering process is performed), the information shown below is included in the filter information.

(1) The number of Wiener filters (the number of classes to each of which one or more regions belong)

- The number of Wiener filters may be different for each frame.

(2) Information (index) on the number of taps of each Wiener filter

- When the number of taps is common to all filters in the frame, the common number of taps is included.

- When the number of taps differs for each filter, the number of taps of each filter is included.

(3) Information on the coefficients of each actually used Wiener filter (the Wiener filter of each class to which one or more regions belong)

- Even if a Wiener filter is formed, the Wiener filter is not included when the Wiener filter is not actually used.

(4) Activation/deactivation information and block-size information of the filters for each block

- A flag (block_filter_on_off_flag) indicating whether or not to perform the activation/deactivation operation (whether or not to perform the filtering process) for each block in the frame currently being processed.

- Only when block_filter_on_off_flag is activated, the block-size information (index) and the activation/deactivation information of the filtering process for each block are included.

In this embodiment, an example in which the pieces of information (1) to (4) are included in the filter information is shown. Alternatively, the number of Wiener filters, the number of taps of each Wiener filter and the block size for activation/deactivation can be stored both by the image encoding device and by the image decoding device as information identified jointly by the image encoding device and the image decoding device, instead of being encoded and transmitted between them.
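The pieces of filter information (1)-(4) above could be grouped, purely as an illustrative data structure (the field names are assumptions and do not represent the actual bitstream syntax), as follows:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FilterInfo:
    frame_filter_on_off_flag: bool             # filtering on/off for the frame
    num_wiener_filters: int = 0                # (1) number of classes used
    taps_per_filter: List[int] = field(default_factory=list)       # (2)
    coefficients: List[List[float]] = field(default_factory=list)  # (3): only
                                               # actually used filters are sent
    block_filter_on_off_flag: bool = False     # (4) per-block on/off enabled
    block_size_idx: Optional[int] = None       # present only when (4) is on
    block_on_off: List[bool] = field(default_factory=list)  # per-block flags

info = FilterInfo(frame_filter_on_off_flag=True,
                  num_wiener_filters=2,
                  taps_per_filter=[3, 5],                 # 3x3 and 5x5 filters
                  coefficients=[[0.1] * 9, [0.04] * 25],  # hypothetical values
                  block_filter_on_off_flag=True,
                  block_size_idx=1,
                  block_on_off=[True, False, True, True])
print(info.num_wiener_filters)  # 2
```

When block_filter_on_off_flag is False, block_size_idx and block_on_off would simply be absent, matching the conditional presence described in (4).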

In addition, although in the above explanation Fig. 3 is illustrated as a specific example of the process carried out by the loop filter 6, steps ST9-ST16 can be omitted, and a process without the activation/deactivation operation of the filtering process for each block (in which (4) is not included in the filter information) may be carried out by the loop filter 6.

As mentioned above, the filter information output from the filter compilation and processing module 13 is entropy-coded by the variable-length encoding node 8 and is transmitted to the image decoding device.

Fig. 6 is an explanatory drawing showing an example of a bit stream generated by the variable-length encoding node 8.

Fig. 7 is a block diagram showing the image decoding device in accordance with embodiment 1 of the present invention.

Referring to Fig. 7, when receiving the bit stream from the image encoding device, the variable-length decoding node 21 performs variable-length decoding of the compressed data, the filter information and the parameters for prediction signal generation, which are included in the bit stream. The variable-length decoding node 21 is a variable-length decoding module.

The prediction module 22 performs a process of forming a prediction signal showing a prediction image by using the parameters for prediction signal generation, on which the variable-length decoding node 21 has performed variable-length decoding. In particular, if a motion vector is used as a parameter for prediction signal generation, the prediction module performs a process of forming the prediction signal from the motion vector and the reference image signal stored in the storage device 26.

The prediction module 22 is a prediction image forming module.

The prediction error decoding module 23 performs a process of performing inverse quantization on the compressed data on which the variable-length decoding node 21 has performed variable-length decoding, and then performs an inverse DCT process on the inverse-quantized compressed data so as to calculate a prediction error signal corresponding to the prediction error signal output from the prediction module 2 shown in Fig. 1.

The adder 24 performs a process of summing the prediction error signal calculated by the prediction error decoding module 23 and the prediction signal generated by the prediction module 22, to calculate a decoded image signal corresponding to the local decoded image output from the adder 5 shown in Fig. 1.

A decoding module consists of the prediction error decoding module 23 and the adder 24.

The loop filter 25 performs a filtering process to compensate for the distortion superimposed onto the decoded image signal output from the adder 24, and then performs a process of outputting the decoded image signal filtered in this way to outside the image decoding device and to the storage device 26 as a filtered decoded image signal. The loop filter 25 is a filtering module.

The storage device 26 is a storage medium for storing the filtered decoded image signal output from the loop filter 25 as a reference image signal.

Fig. 8 is a block diagram showing the loop filter 25 of the image decoding device in accordance with embodiment 1 of the present invention.

Referring to Fig. 8, the frame storage device 31 is a storage medium for storing only one frame of the decoded image signal output from the adder 24.

The region classification module 32 performs a process of extracting an evaluation value of each of the regions that constitute the decoded image shown by the one frame of the decoded image signal stored in the frame storage device 31, to classify each of the regions into a class to which the region belongs according to the evaluation value, similarly to the region classification module 12 shown in Fig. 2.

The filter processing module 33 performs a process of forming a Wiener filter that is applied to the class to which each of the regions classified by the region classification module 32 belongs, by referring to the filter information on which the variable-length decoding node 21 has performed variable-length decoding, to compensate for the distortion superimposed on the region by using the Wiener filter.

Although the example of Fig. 8 shows the loop filter 25 in which the frame storage device 31 is placed as its first stage, in the case of performing a closed filtering process for each macroblock the loop filter can have such a structure that the frame storage device 31 placed as its first stage is removed, as shown in Fig. 9, and the region classification module 32 extracts an evaluation value of each of the regions that constitute the decoded image of a macroblock.

In this case, the image encoding device also has to perform the filtering process for each macroblock independently.

Next, the operation of the image decoding device is explained.

When receiving the bit stream from the image encoding device, the variable-length decoding node 21 performs variable-length decoding of the compressed data, the filter information and the parameters for prediction signal generation, which are included in the bit stream.

When receiving the parameters for prediction signal generation, the prediction module 22 generates a prediction signal from the parameters for prediction signal generation. In particular, when receiving a motion vector as a parameter for prediction signal generation, the prediction module generates the prediction signal from the motion vector and the reference image signal stored in the storage device 26.

When receiving the compressed data from the variable-length decoding node 21, the prediction error decoding module 23 performs inverse quantization on the compressed data and then performs an inverse DCT process on the inverse-quantized compressed data so as to calculate a prediction error signal corresponding to the prediction error signal output from the prediction module 2 shown in Fig. 1.

After the prediction error decoding module 23 calculates the prediction error signal, the adder 24 adds the prediction error signal and the prediction signal generated by the prediction module 22, to calculate a decoded image signal corresponding to the local decoded image output from the adder 5 shown in Fig. 1.

When receiving the decoded image signal from the adder 24, the loop filter 25 performs a filtering process to compensate for the distortion superimposed onto the decoded image signal, and outputs the decoded image filtered in this way to outside the image decoding device as a filtered decoded image signal, while storing the filtered decoded image signal in the storage device 26 as a reference image signal.

Hereinafter, the process performed by the loop filter 25 is explained specifically.

Fig. 10 is a flowchart showing the process performed by the loop filter 25 of the image decoding device in accordance with embodiment 1 of the present invention.

First, the frame storage device 31 of the loop filter 25 stores only one frame of the decoded image signal output from the adder 24.

When the flag (frame_filter_on_off_flag) included in the filter information is activated (indicating that the filtering process is performed) (step ST31), the region classification module 32 extracts an evaluation value of each of the regions that constitute the decoded image shown by the one frame of the decoded image signal stored in the frame storage device 31, and classifies each region into a class to which the region belongs according to the evaluation value, similarly to the region classification module 12 shown in Fig. 2 (step ST32).

When receiving the filter information from the variable-length decoding node 21, the filter processing module 33 generates a Wiener filter that is applied to the class to which each of the regions classified by the region classification module 32 belongs, by referring to the filter information (step ST33).

For example, when the number of Wiener filters (the number of classes to each of which one or more regions belong) is expressed as N, the number of taps of each Wiener filter is expressed as L×L, and the values of the coefficients of each Wiener filter are expressed as $w_{i11}, w_{i12}, \ldots, w_{i1L}, \ldots, w_{iL1}, w_{iL2}, \ldots, w_{iLL}$, the N Wiener filters $W_i$ (i=1, 2, ..., N) are given as follows.

$$W_i=\begin{pmatrix}w_{i11}&w_{i12}&\cdots&w_{i1L}\\ w_{i21}&w_{i22}&\cdots&w_{i2L}\\ \vdots&\vdots&\ddots&\vdots\\ w_{iL1}&w_{iL2}&\cdots&w_{iLL}\end{pmatrix}\qquad(3)$$

After forming the N Wiener filters $W_i$, the filter processing module 33 compensates for the distortion superimposed onto the one frame of the decoded image signal by using these Wiener filters, and outputs the distortion-compensated decoded image to outside the image decoding device and to the storage device 26 as a filtered decoded image signal (step ST34).

The signal $\hat{s}$ of the filtered decoded image is expressed by the following equation (4).

$$\hat{s}=S\,W_{id(s)}\qquad(4)$$

The matrix S is a group of reference signals of L×L pixels including the signal s of the decoded image that is the target for filtering, and id(s) is the number (filter number) of the class which is determined by the region classification module 32 and to which the region including the signal s belongs.
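Equation (4) says: take the L×L reference neighborhood S around each decoded pixel s and combine it with the Wiener filter whose number id(s) is given by the class of the region containing s. A minimal pure-Python sketch follows; the clamped border handling and the tiny 3×3 filters are assumptions for illustration, not the patent's specification.

```python
def filter_pixel(image, y, x, w, L):
    # equation (4): s_hat = S * W_id(s) -- inner product of the LxL
    # reference group S around (y, x) with the selected Wiener filter w
    h = L // 2
    acc = 0.0
    for dy in range(-h, h + 1):
        for dx in range(-h, h + 1):
            yy = min(max(y + dy, 0), len(image) - 1)      # clamp at borders
            xx = min(max(x + dx, 0), len(image[0]) - 1)   # (an assumption)
            acc += w[dy + h][dx + h] * image[yy][xx]
    return acc

# two hypothetical 3x3 Wiener filters, one per class
W = {1: [[1 / 9] * 3 for _ in range(3)],      # class 1: mean-like filter
     2: [[0, 0, 0], [0, 1, 0], [0, 0, 0]]}    # class 2: identity filter

image    = [[8, 8, 8], [8, 8, 8], [8, 8, 8]]
class_id = [[1, 1, 2], [1, 1, 2], [2, 2, 2]]  # id(s) per pixel (hypothetical)

out = [[filter_pixel(image, y, x, W[class_id[y][x]], 3)
        for x in range(3)] for y in range(3)]
print(out[0][0], out[2][2])  # both approximately 8
```

On a constant image both filters reproduce the input, which is a quick sanity check that the per-class lookup and the inner product are wired correctly.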

In the aforementioned filtering process, the filter processing module 33 refers to the flag (block_filter_on_off_flag) included in the filter information, and, when the flag (block_filter_on_off_flag) is set to 1 (activated), refers to the block-size information included in the filter information, then identifies each of the blocks (K) that constitute the decoded image, and then performs the filtering process by referring to the information, included in the filter information, indicating whether or not to perform the filtering process for each block (K).

More specifically, when the flag (block_filter_on_off_flag) is set to 1 (activated), the filter processing module 33 performs the filtering process on the decoded image signal in each block (K), among the blocks constituting the decoded image, on which the filter processing module should perform the filtering process, by using the Wiener filter of the class to which the region including the block (K) belongs, while outputting the unfiltered decoded image signal in each block (K) on which the filter processing module should not perform the filtering process, to outside the image decoding device and to the storage device 26 as a filtered decoded image signal.

Conversely, when the flag (block_filter_on_off_flag) is set to 0 (deactivated), the filter processing module performs the filtering process on each of all the decoded image signals in the frame currently being processed, by using the Wiener filter corresponding to the class to which each of the regions classified by the region classification module 32 belongs.

When the flag (frame_filter_on_off_flag) included in the filter information is deactivated (the filtering process is not performed) (step ST31), the filter processing module 33 does not perform the filtering process on the frame currently being processed, and outputs each decoded image signal output from the adder 24 to outside the image decoding device and to the storage device 26 as a filtered decoded image signal as is (step ST35).

As can be seen from the above description, in the image encoding device in accordance with this embodiment 1, the loop filter 6 includes the region classification module 12 for extracting an evaluation value of each of the regions that make up the local decoded image shown by the local decoded image signal output by the adder 5, to classify each of the regions into a class to which the region belongs according to the evaluation value, and the filter compilation and processing module 13 for forming, for each class to which one or more of the regions that make up the local decoded image belong, a Wiener filter that minimizes the error occurring between the image signal to be encoded and the local decoded image in each of the one or more regions belonging to the class, to compensate for the distortion superimposed onto the one or more regions by using the Wiener filter. Therefore, the image encoding device implements a filtering process according to the local properties of the image, thereby making it possible to improve the accuracy of the image quality enhancement.

In addition, in the image decoding device in accordance with this embodiment 1, the loop filter 25 includes the region classification module 32 for extracting an evaluation value of each of the regions that constitute the decoded image shown by the decoded image signal output from the adder 24, to classify each of the regions into a class to which the region belongs according to the evaluation value, and the filter processing module 33 for referring to the filter information on which the variable-length decoding node 21 has performed variable-length decoding, to generate a Wiener filter which is applied to the class to which each region classified by the region classification module 32 belongs, and to compensate for the distortion superimposed on the region by using the Wiener filter. Consequently, the image decoding device implements a filtering process according to the local properties of the image, thereby making it possible to improve the accuracy of the image quality enhancement.

Embodiment 2

In the above embodiment 1, the loop filter is shown in which the filter compilation and processing module 13 forms a Wiener filter for each class to which one or more regions belong, and performs the filtering process on each of the blocks (K) that constitute the local decoded image by using the Wiener filter of the class to which the region including the block (K) belongs. Alternatively, for each of the blocks, the loop filter may select, from the Wiener filters that the loop filter forms for each class to which one or more regions belong, the Wiener filter that minimizes the sum of squared errors occurring between the image signal to be encoded and the local decoded image signal in the block (K), and can compensate for the distortion superimposed onto the block (K) by using the Wiener filter selected in this way.

Specifically, the loop filter of this embodiment operates as follows.

Fig. 11 is a flowchart showing the process performed by the loop filter 6 of the image encoding device in accordance with embodiment 2 of the present invention.

The filter compilation and processing module 13 forms a Wiener filter for each class to which one or more regions belong, similarly to the filter compilation and processing module in accordance with the above-mentioned embodiment 1 (steps ST2-ST8).

In accordance with embodiment 2, the filter compilation and processing module does not use the flag (block_filter_on_off_flag) indicating whether or not to perform the process of choosing whether or not to perform the filtering process for each block in the frame currently being processed, but instead uses a flag (block_filter_selection_flag) indicating whether or not to choose the filter that should be used for each block in the frame currently being processed. In addition, the flag (block_filter_selection_flag) is initially deactivated in step ST40 and is activated only when step ST46 is performed.

As mentioned below, only when the flag (block_filter_selection_flag) is activated, the block-size information and the filter selection information for each block are included in the filter information.

After forming a Wiener filter for each class to which one or more regions belong, the filter compilation and processing module 13 chooses, for each of the blocks (K) that constitute the local decoded image, the optimal process (e.g., the process that minimizes the sum of squared errors occurring between the image signal to be encoded and the local decoded image signal in the block (K)) from among the processes of performing the filtering process by selecting one of the Wiener filters that the filter compilation and processing module forms for each class to which one or more regions belong, and the process of not performing the filtering process on the block (steps ST9 and ST41-ST47).

More specifically, in the case of forming four Wiener filters $W_1$, $W_2$, $W_3$ and $W_4$ and performing the filtering process using each of these four Wiener filters, the filter compilation and processing module selects the Wiener filter $W_3$, which minimizes the sum E of squared errors for the block (K), if the sum E of squared errors in the block (K) satisfies the following inequality among the four filters.

$$E_{W_3}<E_{W_2}<E_{W_4}<E_{W_0}<E_{W_1}$$

where $E_{W_0}$ denotes the sum E of squared errors when the filtering process is not performed.

Fig. 12 is an explanatory drawing showing an example of the choice of a Wiener filter for each of the blocks (K) that constitute the local decoded image. For example, the Wiener filter $W_2$ is selected for the block (1) and the Wiener filter $W_3$ is selected for the block (2).
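The selection rule of embodiment 2 — for each block (K), evaluate the sum E of squared errors with no filtering (E_W0) and with each candidate Wiener filter, then keep the smallest — can be sketched as follows. The candidate "filters" here are hypothetical stand-ins applied as simple per-block operations, not actual 2-D Wiener filters.

```python
def sse(a, b):
    # sum of squared errors between two equal-length signals
    return sum((x - y) ** 2 for x, y in zip(a, b))

def select_block_filter(target, block, filters):
    # Returns 0 if not filtering minimizes E (the E_W0 case), otherwise
    # the 1-based number i of the Wiener filter W_i minimizing E_Wi.
    best_id, best_e = 0, sse(target, block)        # E_W0: unfiltered block
    for i, f in enumerate(filters, start=1):
        e = sse(target, f(block))
        if e < best_e:
            best_id, best_e = i, e
    return best_id

target = [10, 10, 10, 10]
block  = [8, 12, 8, 12]                            # noisy decoded block
filters = [lambda b: [x + 1 for x in b],           # hypothetical W_1
           lambda b: [sum(b) / len(b)] * len(b)]   # hypothetical W_2
print(select_block_filter(target, block, filters))  # 2
```

Here W_2 flattens the block to its mean, which happens to match the target exactly, so filter number 2 is selected; returning 0 would correspond to leaving the block unfiltered.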

When determining to perform the filtering process on the frame currently being processed by using the Wiener filters selected in this way, the filter compilation and processing module 13 sets the flag (frame_filter_on_off_flag) to 1 (activated), performs the filtering process that minimizes the cost found in steps ST1-ST9 and ST40-ST47, and outputs the local decoded image filtered in this way to the storage device 7 as the reference image signal (steps ST17-ST20).

In contrast, when determining not to perform the filtering process on the entire frame currently being processed (steps ST17-ST18), the filter compilation and processing module sets the flag (frame_filter_on_off_flag) to 0 (deactivated) and outputs the unfiltered local decoded image signal to the storage device 7 as the reference image signal (steps ST21-ST22).

After forming the Wiener filters and then performing the filtering process by the above method, the filter compilation and processing module 13 outputs the filter information of the Wiener filters to the variable-length encoding node 8.

A flag (frame_filter_on_off_flag) indicating whether or not to perform the filtering process on the frame currently being processed is included in the filter information.

When this flag is activated (indicating that the filtering process is performed), the information shown below is included in the filter information.

(1) The number of Wiener filters (the number of classes to each of which one or more regions belong)

- The number of Wiener filters may be different for each frame.

(2) Information (index) on the number of taps of each Wiener filter

- When the number of taps is common to all filters in the frame, the common number of taps is included.

- When the number of taps differs for each filter, the number of taps of each filter is included.

(3) Information on the coefficients of each actually used Wiener filter (the Wiener filter of each class to which one or more regions belong)

- Even if a Wiener filter is formed, the Wiener filter is not included when the Wiener filter is not actually used.

(4) Filter selection information for each block and block-size information

- A flag (block_filter_selection_flag) indicating whether or not to choose the filter for each block in units of frames.

- Only when block_filter_selection_flag is activated, the block-size information (index) and the selection information for each block are included.

In this embodiment, an example in which the pieces of information (1) to (4) are included in the filter information is shown. Alternatively, the number of Wiener filters, the number of taps of each Wiener filter and the block size can be stored both by the image encoding device and by the image decoding device as information identified jointly by the image encoding device and the image decoding device, instead of being encoded and transmitted between them.

The loop filter 25 in the image decoding device performs the following process.

Fig. 13 is a flowchart showing the process performed by the loop filter 25 of the image decoding device in accordance with embodiment 2 of the present invention.

First, the frame storage device 31 of the loop filter 25 stores only one frame of the decoded image signal output from the adder 24.

When the flag (frame_filter_on_off_flag) included in the filter information is activated (indicating that the filtering process is performed) (step ST31), and the flag (block_filter_selection_flag) included in the filter information is deactivated (step ST51), the region classification module 32 extracts an evaluation value of each of the regions that constitute the decoded image shown by the one frame of the decoded image signal stored in the frame storage device 31, and classifies each region into a class to which the region belongs according to the evaluation value (step ST32), similarly to the region classification module in accordance with the above embodiment 1.

Conversely, when the flag (frame_filter_on_off_flag) included in the filter information is activated (indicating that the filtering process is performed) (step ST31), and the flag (block_filter_selection_flag) included in the filter information is activated (step ST51), the region classification module refers to the information on the size of each block, which is the unit for selection, and the filter selection information for each block, among the pieces of information included in the filter information, and performs classification into a class for each block (step ST52).

After the region classification module 32 classifies each region (each block) into a class to which the region belongs, the filter processing module 33 refers to the filter information output from the variable-length decoding node 21 and generates a Wiener filter that is applied to the class to which each region (each block) classified by the region classification module 32 belongs (step ST33), similarly to the filter processing module in accordance with the above embodiment 1.

After forming the Wiener filter applied to each class, when the flag (block_filter_selection_flag) is deactivated, the filter processing module 33 performs the filtering process on each of all the decoded image signals in the frame currently being processed by using the formed Wiener filters, and outputs each decoded image signal filtered in this way to outside the image decoding device and to the storage device 26 as a filtered decoded image signal (step ST53), as in the case where the flag (block_filter_on_off_flag) is deactivated in the above embodiment 1.

Conversely, when the flag (block_filter_selection_flag) is activated, the filter processing module 33, after forming the Wiener filter applied to each class, compensates for the distortion superimposed onto the decoded image signal in each block by using the Wiener filter chosen for the block, and outputs the decoded image filtered in this way to outside the image decoding device and to the storage device 26 as a filtered decoded image signal (step ST53).

The signal $\hat{s}$ of the filtered decoded image at this time is expressed by the following equation (5).

$$\hat{s}=S\,W_{id\_2(bl)}\qquad(5)$$

- The matrix S is a group of reference signals of L×L pixels including the signal s of the decoded image that is the target for filtering.

- id_2(bl) is the filter selection information for the block bl in which the signal s of the decoded image is included, i.e., the class number (filter number) of the block bl.

- id_2(bl)=0 indicates a block for which it is determined that, as a whole, no filtering process is to be performed. Therefore, the filtering process is not performed on such a block.
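As an illustration only, the per-block filtering of equation (5) can be sketched as follows. This is a minimal sketch, not the patented implementation: the filter size L, the block representation and the helper names are assumptions, and the filter taps are taken as an already-computed L×L coefficient vector.

```python
import numpy as np

def filter_block(decoded, filters, id_2, block, L=5):
    """Sketch of equation (5) for one block: s_hat = S * w_id_2(bl).

    decoded -- 2-D array holding the decoded image signal s
    filters -- list of Wiener filters, each an L*L coefficient vector
    id_2    -- filter selection information (class/filter number) of the block
    block   -- (y0, y1, x0, x1) bounds of the block bl (assumed layout)
    """
    y0, y1, x0, x1 = block
    out = decoded[y0:y1, x0:x1].astype(float).copy()
    if id_2 == 0:
        # id_2(bl) == 0: the filtering process is skipped for this block
        return out
    w = np.asarray(filters[id_2 - 1], dtype=float).reshape(L, L)
    pad = L // 2
    padded = np.pad(decoded.astype(float), pad, mode="edge")
    for y in range(y0, y1):
        for x in range(x0, x1):
            # S: the L*L group of reference signals around the target pixel s
            S = padded[y:y + L, x:x + L]
            out[y - y0, x - x0] = np.sum(S * w)
    return out
```

With an identity filter (a single 1 at the centre tap) the block is returned unchanged, which is a convenient sanity check of the indexing.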

As can be seen from the above description, the image decoding device in accordance with Embodiment 2 is constructed in such a way that, for each of the blocks (K) which constitute the decoded image, the loop filter selects the Wiener filter that minimizes the sum of squared errors occurring between the image signal which is the target to be encoded and the decoded image signal of the block (K), from among the Wiener filters which the loop filter generates for each class to which one or more regions belong, and compensates for the distortion superimposed onto the block (K) by using the Wiener filter selected in this way. Therefore, there is provided an additional advantage of improving the accuracy of the image quality improvement as compared with Embodiment 1 described above.

Embodiment 3

In Embodiment 2 described above, there is shown the method of selecting, for each of the blocks (K) which constitute the decoded image, the process that minimizes the sum of squared errors occurring between the image signal which is the target to be encoded and the local decoded image signal of the block (K), from between the process of performing the filtering process by using one of the Wiener filters generated for each class to which one or more regions in the frame currently being processed belong, and the process of not performing the filtering process on the block. As an alternative, one or more Wiener filters can be prepared in advance, and the loop filter can select, for each block, the process that minimizes the sum of squared errors occurring between the image signal which is the target to be encoded and the local decoded image signal of the block (K), from among the process of using one of the one or more Wiener filters prepared in advance, the process of using one of the Wiener filters generated for each class to which one or more regions in the frame currently being processed belong, and the process of not performing the filtering process on the block.
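The selection rule described above can be sketched as follows: for each block (K) the encoder evaluates every candidate (each per-class Wiener filter, any filters prepared in advance, and the option of not filtering) and keeps the one with the smallest sum of squared errors against the target image. This is a hedged illustration; the calling convention, in which each candidate is passed as the block already filtered with that candidate, is an assumption.

```python
import numpy as np

def select_filter_for_block(target_block, decoded_block, filtered_candidates):
    """Choose, per block (K), the candidate minimizing the sum of squared
    errors between the target image signal and the decoded signal.

    target_block        -- block of the image signal to be encoded
    decoded_block       -- the unfiltered (local) decoded block; choosing it
                           corresponds to id_2 = 0, i.e. no filtering
    filtered_candidates -- the same block filtered with each candidate
                           Wiener filter (per-class filters and, in
                           Embodiment 3, filters prepared in advance)
    Returns (id_2, best_block): id_2 == 0 means the filtering is skipped.
    """
    best_id = 0
    best_block = decoded_block
    best_sse = float(np.sum((target_block - decoded_block) ** 2))
    for i, cand in enumerate(filtered_candidates, start=1):
        sse = float(np.sum((target_block - cand) ** 2))
        if sse < best_sse:
            best_sse, best_id, best_block = sse, i, cand
    return best_id, best_block
```

Because the unfiltered block is itself a candidate, the rule can never make a block worse in the sum-of-squared-errors sense, which is why widening the candidate set (as in this embodiment) can only help.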

Fig. 14 is a flow chart showing the processing performed by the loop filter 6 of the image encoding device in accordance with Embodiment 3 of the present invention.

Since this Embodiment 3 provides a wider choice of Wiener filters as compared with the selection in Embodiment 2 described above, the probability that the optimal Wiener filter is selected is increased as compared with Embodiment 2 described above.

Since the method of selecting the Wiener filter is identical to that shown in Embodiment 2 described above, a further explanation of the method is omitted.

Since the processing performed by the image decoding device is identical to that in accordance with Embodiment 2 described above, a further explanation of the processing is omitted.

Embodiment 4

In Embodiment 2 described above, there is shown the method of selecting, for each of the blocks (K) which constitute the decoded image, the process that minimizes the sum of squared errors occurring between the image signal which is the target to be encoded and the local decoded image signal of the block (K), from between the process of performing the filtering process by using one of the Wiener filters generated for each class to which one or more regions in the frame currently being processed belong, and the process of not performing the filtering process on the block. As an alternative, the loop filter can select, for each block, the process that minimizes the sum of squared errors occurring between the image signal which is the target to be encoded and the local decoded image signal of the block (K), from among the process of using one of the Wiener filters generated for each class to which one or more regions in the frame currently being processed belong, the process of using one of the Wiener filters used for an already encoded frame, and the process of not performing the filtering process on the block.

Fig. 15 is a flow chart showing the processing for the first frame which is performed by the loop filter 6 of the image encoding device, and is identical to the flow chart shown in Fig. 11 in Embodiment 2 described above.

Fig. 16 is a flow chart showing the processing for the second and subsequent frames which is performed by the loop filter 6.

As a method of referring to a Wiener filter used for an already encoded frame, for example, reference methods as shown below can be provided.

Method (1): refer to the Wiener filter used for the block at the position pointed to by the motion vector computed for the block which is the target for the filtering process.

Method (2): refer to the Wiener filter used for the block which, in the frame closest in time, is located at the position identical to that of the block which is the target for the filtering process.

Method (3): refer to the Wiener filter used for the block having the largest cross-correlation with the target block among the blocks in an already encoded frame.

In the case of method (3), an identical block search process must be performed by both the image encoding device and the image decoding device.
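The three reference methods can be sketched as follows. This is a simplified illustration: the data structures (per-block filter-number maps, stored previous-frame blocks) and function names are assumptions, and method (3) is shown as a plain zero-mean cross-correlation search, which both the encoder and the decoder would have to run identically, as noted above.

```python
import numpy as np

def ref_filter_method1(filter_map, block_pos, motion_vector):
    """Method (1): filter used for the block at the position pointed to by
    the motion vector computed for the block being filtered."""
    y, x = block_pos
    dy, dx = motion_vector
    return filter_map[y + dy][x + dx]

def ref_filter_method2(filter_map_prev, block_pos):
    """Method (2): filter used for the co-located block in the temporally
    closest already encoded frame."""
    y, x = block_pos
    return filter_map_prev[y][x]

def ref_filter_method3(prev_blocks, prev_filter_ids, target_block):
    """Method (3): filter of the previously encoded block having the
    largest cross-correlation with the target block."""
    t = target_block - target_block.mean()
    best_id, best_corr = None, -np.inf
    for blk, fid in zip(prev_blocks, prev_filter_ids):
        b = blk - blk.mean()
        corr = float(np.sum(t * b))
        if corr > best_corr:
            best_corr, best_id = corr, fid
    return best_id
```

Methods (1) and (2) are simple table look-ups driven by data the decoder already has, whereas method (3) trades extra computation on both sides for a potentially better-matched filter.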

Since this Embodiment 4 provides a wider choice of Wiener filters as compared with the selection in Embodiment 2 described above, the probability that the optimal Wiener filter is selected is increased as compared with Embodiment 2 described above.

Since the method of selecting the Wiener filter is identical to that shown in Embodiment 2 described above, a further explanation of the method is omitted.

Since the processing performed by the image decoding device is identical to that in accordance with Embodiment 2 described above, a further explanation of the processing is omitted.

Industrial applicability

The image encoding device, the image decoding device, the image encoding method and the image decoding method in accordance with the present invention can improve the accuracy of the image quality improvement. The image encoding device and the image encoding method are suitable for use as an image encoding device and an image encoding method for encoding an image with compression and transmitting the image, respectively, and the image decoding device and the image decoding method are suitable for use as an image decoding device and an image decoding method for decoding the encoded data transmitted by the image encoding device to reconstruct the image, respectively.

1. An image decoding method comprising:
- a variable length decoding processing step of performing, by a variable length decoding unit, variable length decoding on an inputted coded bit stream to obtain a parameter for generating a prediction signal, a compressed differential image and filters; and
- a filtering processing step of performing, by a filtering unit, a filtering process on a decoded image obtained by adding together a prediction image and a differential image, wherein said prediction image is generated by using said parameter for generating a prediction signal and said differential image is obtained by decoding said compressed differential image;
wherein, in said variable length decoding processing step, said variable length decoding unit performs variable length decoding on information for identifying the class of each of the blocks which constitute said decoded image; and
wherein, in said filtering processing step, said filtering unit refers to the information for identifying the class of each block to determine the class of each block, and performs the filtering process on said decoded image on the basis of the determined classes and the filters.

2. An image encoding device comprising:
- a filter unit (6) for performing a filtering process on a local decoded image obtained by adding together a prediction image and a differential image, wherein said prediction image is generated by using a parameter for generating a prediction signal and said differential image is obtained by compressing and then decoding a differential image which is the difference between an inputted image and said prediction image; and
- a variable length encoding unit (8) for performing variable length encoding on said parameter for generating a prediction signal, said compressed differential image and the filter for each of the classes used in performing the filtering process;
wherein said filter unit (6) determines the class of each of the pixels which constitute said local decoded image in accordance with an evaluation value of each of the pixels, and performs the filtering process on the local decoded image on the basis of each filter corresponding to each of the determined classes.

3. An image decoding device comprising:
- a variable length decoding unit for performing variable length decoding on a coded bit stream to obtain a parameter for generating a prediction signal, a compressed differential image and a filter for each of the classes; and
- a filtering unit for performing a filtering process on a decoded image obtained by adding together a prediction image and a differential image, wherein said prediction image is generated by using said parameter for generating a prediction signal and said differential image is obtained by decoding said compressed differential image;
wherein said filtering unit determines a class for each of the pixels which constitute said decoded image, and performs the filtering process on the decoded image on the basis of each filter corresponding to each of the determined classes.



 
