Video encoding/decoding methods, video encoding/decoding devices and programmes therefor

FIELD: physics, video.

SUBSTANCE: invention relates to video encoding/decoding techniques which employ a loop filter that reduces block noise. The technical result is achieved in that a video encoding/decoding device, which encodes or decodes video using a loop filter, includes a deviation calculating unit that calculates the deviation between a target noise-cancellation pixel and neighbouring pixels of the target pixel using a decoded image. A template shape establishing unit limits the template shape such that the smaller the deviation relative to the maximum deviation in the decoded image, the smaller the template shape. When removing the noise of the target pixel using a weighting coefficient in accordance with the degree of similarity between the template of the target pixel and the template of each search point within the search shape, and a weighted sum of pixel values at the search points, the loop filter compares templates using the limited template shape and removes the noise of the target pixel.

EFFECT: reduced computational complexity of the noise cancellation filter, thereby preventing deterioration of encoding efficiency.

5 cl, 19 dwg

 

TECHNICAL FIELD TO WHICH THE INVENTION RELATES

The present invention relates to techniques for encoding/decoding video using a loop filter that reduces noise, blockiness, etc.

Priority is claimed on Japanese Patent Application No. 2011-051444, filed March 9, 2011, the contents of which are incorporated into this description by reference.

BACKGROUND ART

The terms used in this description are defined as follows.

- "Search shape": the set of search points around the target pixel at which template matching is performed, or the shape formed by this set.

- "Template shape": the group of pixels used to calculate the degree of similarity between the target pixel and each search point when template matching is performed, or the shape formed by this group of pixels. The same shape is used for the group of pixels around the target pixel and for the pixels around each search point, and the pixel values at positions with the same relative arrangement are compared with each other.

In image processing technologies, various noise reduction filters have been proposed as methods for reducing noise in an image and restoring a damaged image. Among them, noise reduction filters based on the non-local means method (see Non-patent Document 1) are known to demonstrate a strong noise-cancelling effect. Hereinafter, noise reduction filters based on the non-local means method are referred to as NLM filters.

Fig. 18 is a diagram describing the NLM filter. In Fig. 18, one square cell is a search point, and the set of search points is the search shape. P0 is the target pixel for noise reduction, and Ps is a pixel at a search point within the search target. T0 and Ts are template shapes, and the shape T0 of the template of the comparison source is exactly the same as the shape Ts of the template of the search target.

In the NLM filter, the corresponding pixels in the template shape T0 of the comparison source and the template shape Ts of the search target are compared with each other, and the degree of similarity between the templates is calculated. In general, the sum of squared differences (SSD) or the sum of absolute differences (SAD) is used to compute the degree of similarity between templates.
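
As an illustration, the SSD and SAD template comparisons can be sketched in Python as follows (the function names and the offset-list representation of the template shape are assumptions for illustration, not part of the described method):

```python
import numpy as np

def template_ssd(img, p0, ps, offsets):
    """Sum of squared differences (SSD) between the template around the
    target pixel p0 and the template around the search point ps.
    `offsets` lists the (dy, dx) positions forming the template shape."""
    ssd = 0.0
    for dy, dx in offsets:
        d = float(img[p0[0] + dy, p0[1] + dx]) - float(img[ps[0] + dy, ps[1] + dx])
        ssd += d * d
    return ssd

def template_sad(img, p0, ps, offsets):
    """Sum of absolute differences (SAD) over the same template shape."""
    return sum(abs(float(img[p0[0] + dy, p0[1] + dx]) -
                   float(img[ps[0] + dy, ps[1] + dx]))
               for dy, dx in offsets)
```

Pixel values at positions with the same relative arrangement in the two templates are compared, as in the definition of the template shape above.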

Fig. 19 is a diagram illustrating the inputs and output of an NLM filter execution module. Basically, the NLM filter execution module 1000 receives four pieces of information: the target image for denoising, the search shape, the template shape, and the noise reduction coefficient, and generates a resulting image with suppressed noise. As the noise reduction coefficient, the variance of the noise is given as a representative value when the original image to which the noise was applied is available, and an appropriate value is set by the user when the original image is not available.

The NLM filter execution module 1000 calculates a denoised pixel value for each pixel as follows. An example using SSD to calculate the degree of similarity between templates will be described below.

(1) The initial value of the variable SW (the sum of weighting coefficients) is set to 0, and the initial value of the variable SP (the sum of weighted pixel values) is set to 0.

(2) The following processes are repeated for all search points within the search shape.

(2-1) The SSD is calculated as the degree of similarity between templates.

(2-2) Weighting coefficient W = exp(-SSD / noise reduction coefficient)

(2-3) Sum of weighting coefficients SW = SW + W

(2-4) Sum of pixel values SP = SP + W × (pixel value at the search point)

(3) Upon completion of processes (2) for all search points within the search shape, the denoised pixel value of the target pixel for noise reduction is obtained using the following equation:

(denoised pixel value) = (sum of pixel values SP) / (sum of weights SW)
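
Steps (1) to (3) can be sketched in Python as follows (a minimal illustrative sketch; the names and the offset-list representations of the search shape and template shape are assumptions):

```python
import numpy as np

def nlm_pixel(img, y, x, search_offsets, template_offsets, h):
    """Denoised value for pixel (y, x) following steps (1)-(3):
    SW and SP start at 0; each search point contributes a weight
    W = exp(-SSD / h); the result is SP / SW."""
    sw = 0.0  # step (1): sum of weighting coefficients
    sp = 0.0  # step (1): sum of weighted pixel values
    for sy, sx in search_offsets:              # step (2)
        ssd = 0.0
        for ty, tx in template_offsets:        # step (2-1)
            d = float(img[y + sy + ty, x + sx + tx]) - float(img[y + ty, x + tx])
            ssd += d * d
        w = np.exp(-ssd / h)                   # step (2-2)
        sw += w                                # step (2-3)
        sp += w * float(img[y + sy, x + sx])   # step (2-4)
    return sp / sw                             # step (3)
```

For a uniform image every SSD is 0 and every weight is 1, so the result equals the original pixel value.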

The NLM filter execution module 1000 performs the noise reduction process using a single value and a single shape for all pixels of the target image for denoising when the same value is set for each of the inputs (the noise reduction coefficient, the search shape, and the template shape), and performs the noise reduction process while switching values and shapes for each relevant point when a group of data items corresponding to each pixel is supplied.

Moreover, in order to remove coding distortion, a deblocking filter is adopted as a loop filter in "HM", which is the test model of the next-generation High Efficiency Video Coding standard, for which the Moving Picture Experts Group (MPEG) and the Video Coding Experts Group (VCEG) are currently pursuing international standardization (see Non-patent Document 2).

PRIOR ART DOCUMENTS

NON-PATENT DOCUMENTS

Non-patent Document 1: A. Buades, B. Coll, and J. M. Morel, "A non-local algorithm for image denoising", Proc. IEEE Int. Conf. on Computer Vision and Pattern Recognition, vol. 2, pp. 60-65, June 2005.

Non-patent Document 2: Thomas Wiegand, Woo-Jin Han, Benjamin Bross, Jens-Rainer Ohm and Gary J. Sullivan, "WD1: Working Draft 1 of High-Efficiency Video Coding", ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 3rd Meeting: Guangzhou, CN, 7-15 October 2010.

SUMMARY OF THE INVENTION

PROBLEMS TO BE SOLVED BY THE INVENTION

As described above, in order to remove coding distortion, "HM" of the next-generation High Efficiency Video Coding standard adopts a deblocking filter as its loop noise reduction filter; however, it can be expected that the encoding efficiency can be improved over conventional deblocking filters if the above NLM filter is introduced into HM.

However, the computational complexity of the NLM filter is enormous, so there is a possibility that the computation time required for decoding will greatly increase if the decoding device computes the NLM filter for all pixels.

As described above, template matching is performed between each target pixel for noise reduction and each search point within an arbitrary search shape using a template shape, in order to calculate the degree of similarity between templates. As a result, if the template shape is, for example, an N×N block and the search shape is M×M, then computational complexity of order N²×M² is required to perform the noise reduction calculation for a single pixel. Consequently, technology to reduce the computational complexity is required in order to use the NLM filter in a decoding device, etc.
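
The order-of-magnitude estimate above can be checked with a trivial count (an illustrative sketch only, not part of the invention):

```python
def nlm_ops_per_pixel(n, m):
    """Number of per-pixel difference operations for an N×N template
    searched over an M×M search shape: each of the M*M search points
    needs an N*N template comparison."""
    return (n * n) * (m * m)

# e.g. a 5×5 template with a 7×7 search shape:
# nlm_ops_per_pixel(5, 7) -> 1225 comparisons per pixel
```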

The purpose of the present invention is to solve the above problems and to provide technology that reduces the computational complexity of the noise reduction filter while suppressing the decrease in coding efficiency. It should be noted that, as the coding efficiency, for example, the BD-rate is used, which is one of the international evaluation measures and is calculated from the image quality and the number of bits.

MEANS FOR SOLVING PROBLEMS

To solve the above problems, the present invention performs the following filtering processes during video encoding or decoding using a loop filter, which performs template matching between a template that is the comparison source for the target pixel of noise reduction in the decoded image and a template for each of the search points that are the search targets within a search shape in the decoded image, and removes the noise of the target pixel using weighting coefficients in accordance with the degrees of similarity between the templates and a weighted sum of the pixel values at the search points.

(1) The degree of deviation between the target pixel and surrounding pixels of the target pixel is computed using the decoded image.

(2) The degree of deviation is used as an index for constraining the template shape, and the template-shape constraint process is executed in such a way that the lower the degree of deviation relative to the maximum degree of deviation within the decoded image, the smaller the template shape.

Moreover, the following process (3) may additionally be added.

(3) An unnecessarily assigned area in the template shape constrained by process (2) is detected, and re-setting of the template shape is performed to further constrain the template shape.

As described above, although a single template shape is usually set for the entire frame as a fixed value, it is possible to reduce the computational complexity of template matching by introducing the template-shape constraint process for each pixel using the above processes (1) and (2). The computational complexity can be further reduced by additionally executing process (3).
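
A minimal sketch of processes (1) and (2), assuming a square full template shape that is shrunk to a smaller concentric square as the deviation band decreases (the number of levels and the band-to-radius rule are illustrative assumptions, not part of the invention):

```python
def constrain_template(deviation, max_deviation, full_offsets, levels=4):
    """Shrink the template shape for pixels whose deviation from the
    surrounding pixels is small relative to the frame maximum.
    `full_offsets` is the unconstrained (dy, dx) template shape."""
    if max_deviation <= 0:
        return [(0, 0)]
    # classify the deviation into one of `levels` bands (low band -> small shape)
    band = min(int(levels * deviation / max_deviation), levels - 1)
    # keep only offsets within a Chebyshev radius that grows with the band
    max_r = max(abs(dy) for dy, dx in full_offsets)  # radius of the full shape
    radius = (band + 1) * max_r / levels
    return [(dy, dx) for dy, dx in full_offsets
            if max(abs(dy), abs(dx)) <= radius]
```

A pixel whose deviation is near the frame maximum keeps the full shape; a pixel in a flat region is compared with the smallest shape, which is what reduces the template-matching cost.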

Accordingly, it is possible to reduce the computational complexity of the loop filter while suppressing a decrease in the coding efficiency of video encoding and decoding.

BENEFICIAL EFFECTS OF THE INVENTION

In accordance with the present invention, it is possible to reduce the computational complexity while suppressing a decrease in encoding efficiency, by introducing a template-shape constraint process that reduces the number of pixels in the template between which comparison is performed in the loop-filter process during video encoding/decoding.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 is a diagram illustrating an example configuration of a video encoding device to which an embodiment of the present invention is applied.

Fig. 2 is a flowchart of processing in the video encoding device.

Fig. 3 is a flowchart of processing in the video encoding device.

Fig. 4 is a diagram illustrating an example configuration of a video decoding device to which an embodiment of the present invention is applied.

Fig. 5 is a flowchart of processing in the device for video decoding.

Fig. 6 is a flowchart of processing in the device for video decoding.

Fig. 7 is a diagram illustrating a first example configuration of the noise reduction processing module.

Fig. 8 is a flowchart of the processing of the noise reduction processing module.

Fig. 9 is a diagram describing an example of template-shape constraint by the template shape setting module.

Fig. 10 is a diagram illustrating a second example configuration of the noise reduction processing module.

Fig. 11 is a flowchart of the processing of the noise reduction processing module.

Fig. 12A is a diagram describing an example of a method of detecting the degree of deviation.

Fig. 12B is a diagram describing an example of a method of detecting the degree of deviation.

Fig. 13 is a diagram illustrating a histogram of the degrees of deviation, as well as the relationship between threshold values and setting areas.

Fig. 14A is a diagram describing an example of setting the number of samples in accordance with the ratio of integration.

Fig. 14B is a diagram describing an example of setting the number of samples in accordance with the ratio of integration.

Fig. 15A is a diagram describing the Sobel operator in an example of a method of detecting the edge direction.

Fig. 15B is a diagram describing indices in accordance with directions in an example of a method of detecting the edge direction.

Fig. 15C is a diagram describing a method for assigning the indices in an example of a method of detecting the edge direction.

Fig. 16A is a diagram describing the target area of calculation in an example of a method of re-setting the template shape.

Fig. 16B is a diagram describing an indication of block noise in an example of a method of re-setting the template shape.

Fig. 16C is a diagram describing the setting of the four corners in an example of a method of re-setting the template shape.

Fig. 17 is a diagram illustrating an example configuration of a system when the embodiment of the present invention is implemented using software.

Fig. 18 is a diagram describing the NLM filter.

Fig. 19 is a diagram illustrating the inputs and the output of the execution module of the NLM filter.

EMBODIMENTS OF THE INVENTION

Hereinafter, embodiments of the present invention will be described with reference to the drawings. First, examples of a video encoding device and a video decoding device, which are embodiments of the present invention, will be described. It should be noted that the video encoding device and the video decoding device described below are examples of devices to which embodiments of the present invention are applied, and embodiments of the present invention are not necessarily limited to the configurations of these devices.

[EXAMPLE OF APPLICATION TO THE VIDEO ENCODING DEVICE]

Fig. 1 is a diagram illustrating an example configuration of a video encoding device to which an embodiment of the present invention is applied. In the device shown in Fig. 1, the original image storage module 101 is a storage module for all images of the input sequence that is the target of encoding, and it sequentially outputs the target frames for encoding.

The video encoding device in Fig. 1 illustrates an example in which the noise reduction filter processing module 113 stores the reference search shape and the reference template shape as fixed values. Moreover, it illustrates an example in which the noise reduction coefficient that provides optimal encoding efficiency with respect to the original image is determined by a determination module and is encoded into the bitstream as noise reduction coefficient overhead. When the search shape and/or the template shape are supplied from outside, for example via a user setting, these shapes must be sent to the decoding device; however, such external setting can be realized by encoding these shapes as service data similarly to the noise reduction coefficient, and therefore its description is omitted in this application example of the encoding device and in the application example of the decoding device.

The block size determination module 102 determines the block size into which the predetermined Coding Unit is divided for encoding, and outputs the target block and the block size. The prediction size determination module 103 determines the size of the prediction block that predicts the pixel values of the target block, and outputs the target block and the prediction size. The prediction method determination module 104 determines the method that provides the highest encoding efficiency among prediction methods, including intra-frame prediction and inter-frame prediction, for predicting the pixel values of the target block, and outputs the prediction block and the prediction information of that method. A difference block is formed by calculating the difference between the target block, which is output by the prediction size determination module 103, and the prediction block.

The transform size determination module 105 determines the transform size into which the difference block is divided, and outputs the difference blocks divided to the transform size, as well as the transform size. The discrete cosine transform module 106 applies a discrete cosine transform (DCT) to the difference blocks and outputs DCT coefficients. The quantization module 107 quantizes the DCT coefficients and outputs the quantized DCT coefficients.

The inverse quantization module 108 performs inverse quantization on the quantized DCT coefficients to restore the DCT coefficients. The inverse discrete cosine transform module 109 applies the inverse discrete cosine transform to the DCT coefficients and outputs a decoded difference block. The decoded difference block is added to the prediction block to form a partial decoded image. The decoded image storage module 110 is a storage module for storing the partial decoded images and images that can also be referred to. The intra-frame prediction module 111 refers to the partial decoded images stored in the decoded image storage module 110, and outputs a prediction block and prediction information.

The noise reduction coefficient determination module 112 determines the noise reduction coefficient that provides optimal encoding efficiency by referring to the decoded image and the original image, and outputs the noise reduction coefficient.

The noise reduction filter processing module 113 is a filter that removes noise of the decoded image using an image processing method which performs template matching between a template that is the comparison source for the target pixel of noise reduction in the decoded image and a template for each of the search points that are the search targets, with reference to the decoded image as the target image of noise reduction, and removes the noise of the target pixel using weighting coefficients in accordance with the degrees of similarity between the templates and a weighted sum of the pixel values at the search points. This filtering process generates a filtered decoded image in which coding distortion is reduced. This noise reduction filter processing module 113, in particular, differs from the prior art; a detailed embodiment thereof is described below.

The adaptive loop filter (ALF) processing module 114 performs a filtering process on the filtered decoded image so that it becomes closer to the original image, and outputs the decoded image after ALF and the ALF coefficients. The frame buffer 115 is a storage module for storing the decoded image after ALF. The inter-frame prediction module 116 refers to the frame buffer 115 and outputs a prediction block and prediction information.

The sequence encoding module 117 encodes information unique to the input sequence, such as the numbers of pixels in the vertical and horizontal directions of the video, and then outputs the sequence information service data to the bitstream storage module 125. The block size encoding module 118 takes the block size from the block size determination module 102, performs encoding, and then outputs the block size service data. The prediction size encoding module 119 takes the prediction size from the prediction size determination module 103, performs encoding, and then outputs the prediction size service data. The prediction information encoding module 120 receives the prediction information from the prediction method determination module 104, performs encoding, and then outputs the prediction information service data.

The transform size encoding module 121 takes the transform size from the transform size determination module 105, performs encoding, and then outputs the transform size service data. The quantized DCT coefficient encoding module 122 takes the quantized DCT coefficients from the quantization module 107, performs encoding, and then outputs the DCT service data. The noise reduction coefficient encoding module 123 takes the noise reduction coefficient determined by the noise reduction coefficient determination module 112, performs encoding, and then outputs the noise reduction coefficient service data. The ALF coefficient encoding module 124 takes the ALF coefficients, performs encoding, and then outputs the ALF service data. The bitstream storage module 125 is a storage module for storing each piece of service data, and it outputs the bitstream as the result of encoding after the whole sequence has been encoded.

The encoding information storage module 126 is a storage module for storing encoding information that can also be referred to by the decoding device. The encoding information stored in the encoding information storage module 126 is referenced as appropriate and is used by the noise reduction filter processing module 113 and other modules.

[FLOW OF PROCESSING OF THE VIDEO ENCODING DEVICE]

Fig. 2 and Fig. 3 illustrate flowcharts of the processing of the video encoding device shown in Fig. 1. The video encoding device performs the following processes.

First, in step S101, the input sequence is stored in the original image storage module 101.

Next, in step S102, the information sequence is encoded and stored in the module 125 to store the bitstream.

Next, in step S103, loop processing is performed over all target frames for encoding, up to step S118.

Next, in step S104, loop processing is performed over all Coding Units (CUs) of the target image for encoding, up to step S114.

Next, in step S105, the block size is determined, coded and stored in the module 125 to store the bitstream.

Next, in step S106, the prediction size is determined, encoded, and stored in the bitstream storage module 125.

Next, in step S107, a difference is calculated between the target block and the prediction block of the method that provides the highest encoding efficiency among intra-frame prediction and inter-frame prediction.

Next, in step S108, the prediction information is stored in the module 125 to store the bitstream.

Next, in step S109, the size of the transform is determined, coded and stored in the module 125 to store the bitstream.

Next, in step S110, the discrete cosine transform (DCT) is performed.

Next, in step S111, the quantization is performed, and the quantized DCT coefficients are encoded and stored in the module 125 to store the bitstream.

Next, in step S112, performs inverse quantization and inverse discrete cosine transform.

Next, in step S113, the prediction block applied in step S107 is added to the decoded difference block after the inverse transform.

Next, in step S114, the partial decoded image obtained by this addition is stored in the decoded image storage module 110.

- Upon completion of the loop for all CUs of the target image for encoding, in step S115, the noise reduction coefficient that provides optimal encoding efficiency is calculated using the decoded image and the original image.

Next, in step S116, the noise-reduction filtering process of the present method is performed on the decoded image using the calculated noise reduction coefficient, and the noise reduction coefficient is encoded and stored in the bitstream storage module 125.

Next, in step S117, the adaptive loop filter (ALF) process is performed, and the ALF coefficients are encoded and stored in the bitstream storage module 125.

Next, in step S118, the decoded image after ALF is stored in the buffer 115 of the frame.

- Upon completion of the loop processing for all target frames for encoding, in step S119, the bitstream is output, and processing terminates.

[EXAMPLE OF APPLICATION TO THE VIDEO DECODING DEVICE]

Fig. 4 is a diagram illustrating an example configuration of a video decoding device to which an embodiment of the present invention is applied. Hereinafter, the video decoding device shown in Fig. 4 will be described. The bitstream storage module 201 is a storage module for the input bitstream and outputs each piece of service information as needed. The sequence information decoding module 202 receives the sequence information service data and decodes information unique to the sequence, such as the numbers of pixels in the vertical and horizontal directions of the video.

The block size decoding module 203 receives the block size service data and decodes the information specifying the block size into which the predetermined Coding Unit is divided for encoding. The prediction size decoding module 204 receives the prediction size service data and outputs the prediction size that was used in the video encoding device.

The prediction information decoding module 205 receives the prediction information service data and outputs the prediction information. The transform size decoding module 206 receives the transform size service data and outputs the transform size that was applied in the video encoding device. The quantized DCT coefficient decoding module 207 takes the transform size and the DCT service data, and outputs the quantized DCT coefficients. The noise reduction coefficient decoding module 208 receives the noise reduction coefficient service data and outputs the noise reduction coefficient. The ALF coefficient decoding module 209 takes the ALF service data and outputs the ALF coefficients.

The inverse quantization module 210 performs inverse quantization on the quantized DCT coefficients to restore the DCT coefficients. The inverse discrete cosine transform module 211 applies the inverse discrete cosine transform to the restored DCT coefficients and outputs the decoded differential signal. The prediction block formation module 212 receives the prediction information, the partial decoded image, and the reference frames, and generates a prediction block. The partial decoded image formation module 213 adds the prediction block to the decoded differential signal to form a partial decoded image. The decoded image storage module 214 is a storage module for storing the partial decoded images.

The noise reduction filter processing module 215 is a filter that removes noise of the decoded image using an image processing method which performs template matching between a template that is the comparison source for the target pixel of noise reduction in the decoded image and a template for each of the search points that are the search targets, with reference to the decoded image as the target image of noise reduction, and removes the noise of the target pixel using weighting coefficients in accordance with the degrees of similarity between the templates and a weighted sum of the pixel values at the search points; this filtering process generates a filtered decoded image in which the distortion at block edges is reduced. This noise reduction filter processing module 215, in particular, differs from the prior art; a detailed embodiment thereof is described below.

The ALF processing module 216 takes the ALF coefficients, performs filter processing on the filtered decoded image so that it becomes closer to the original image, and outputs the decoded image after ALF. It should be noted that the decoded image after ALF becomes the final decoded image of the decoding. The frame buffer 217 is a storage module for storing the decoded image after ALF. The encoding information storage module 218 is a storage module for storing encoding information; the encoding information stored in the encoding information storage module 218 is referenced as appropriate and is used by the noise reduction filter processing module 215 and other modules.

[FLOW OF PROCESSING OF THE VIDEO DECODING DEVICE]

Fig. 5 and Fig. 6 illustrate flowcharts of the processing of the video decoding device shown in Fig. 4. The video decoding device performs the following processes.

First, in step S201, the input bit stream is stored in the module 201 storing the bitstream.

Next, in step S202, the sequence information is decoded.

Next, in step S203, loop processing is performed over all target frames for decoding, up to step S214.

Next, in step S204, loop processing is performed over all Coding Units (CUs) of the target image for decoding, up to step S211.

Next, in step S205, the block size is decoded.

Next, in step S206, the prediction size is decoded.

Next, in step S207, the prediction information is decoded and a prediction block is formed.

Next, in step S208, the transform size is decoded.

Next, in step S209, the quantized DCT coefficients are decoded.

Next, in step S210, inverse quantization and the inverse discrete cosine transform are performed.

Next, in step S211, a decoded block is formed using the prediction block from step S207 and the result of step S210, and is stored in the decoded image storage module 214.

- Upon completion of the loop over all CUs of the target image for decoding, in step S212, the noise reduction coefficient is decoded, and the noise-reduction filtering process of the present method is performed on the partial decoded image.

Next, in step S213, the ALF coefficients are decoded, and the ALF process is performed.

Next, in step S214, the decoded image after ALF is stored in the buffer 217 of the frame.

- Upon completion of the loop processing for all target frames for decoding, in step S215, the frames from the frame buffer 217 are output in frame-number order to form the output sequence, and processing terminates.

[EXAMPLE 1 OF THE NOISE REDUCTION PROCESSING MODULE]

Fig. 7 is a diagram illustrating a first example configuration of the noise reduction processing module. The noise reduction filter processing module 30 shown in Fig. 7 is a loop filter that is used as the above-described noise reduction filter processing module 113 in the video encoding device shown in Fig. 1. In addition, the noise reduction filter processing module 30 is also used as the above-described noise reduction filter processing module 215 in the video decoding device shown in Fig. 4.

The noise reduction filter processing module 30 comprises a noise reduction coefficient setting module 301, an NLM filter execution module 302, a search shape storage module 303, a template shape storage module 304, a deviation degree detection module 305, and a template shape setting module 306.

The noise reduction coefficient setting module 301 generates noise reduction coefficients corresponding to the pixel positions of the target image for denoising, using a predetermined reference noise reduction coefficient and the encoding information. In general, in the range from medium to low bit rates, block noise becomes noticeable at the edges of the Prediction Unit processing blocks and the Transform Unit processing blocks; therefore, using the encoding information, the noise reduction coefficients at pixel positions near these edges are set higher than those used inside the blocks.

The search shape storage module 303 and the template shape storage module 304 are storage modules that store each shape as a fixed value. The deviation degree detection module 305 and the template shape setting module 306 are provided to introduce the template-shape constraint process on a per-pixel basis. In conventional NLM filters, the template shape is usually set as a fixed value for the entire frame. In contrast, in the present embodiment, as pre-processing for the NLM filtering by the NLM filter execution module 302, the deviation degree detection module 305 detects the degree of deviation from surrounding pixels for each pixel of the decoded image, which is the target image for denoising, and the template shape setting module 306 categorizes the degrees of deviation detected by the deviation degree detection module 305 into multiple levels and sets the template shape (from large to small) for each pixel in accordance with the degree of deviation (from high to low).
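
As an illustration of this pre-processing, the following sketch computes a per-pixel degree of deviation as the mean absolute difference from the eight surrounding pixels and finds the frame maximum used for normalization; the concrete deviation measure is an assumption for illustration, since this description leaves it to the examples of Figs. 12A and 12B:

```python
import numpy as np

def deviation_degree(img, y, x):
    """Illustrative deviation measure (an assumption): the mean absolute
    difference between the target pixel and its 8 surrounding pixels."""
    center = float(img[y, x])
    diffs = [abs(float(img[y + dy, x + dx]) - center)
             for dy in (-1, 0, 1) for dx in (-1, 0, 1)
             if (dy, dx) != (0, 0)]
    return sum(diffs) / len(diffs)

def deviation_map(img):
    """Per-pixel deviation degrees for the image interior, plus the
    frame maximum used as the normalization reference for the levels."""
    h, w = img.shape
    dev = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            dev[y, x] = deviation_degree(img, y, x)
    return dev, dev.max()
```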

The NLM filter execution module 302 receives the noise reduction target image, the per-pixel noise reduction coefficients set by the noise reduction coefficient setting module 301, the template shape for each pixel determined by the template shape setting module 306, and the search shape stored in the search shape storage module 303, and outputs a denoised image. The processing performed by the NLM filter execution module 302 is similar to that of the conventional NLM filter disclosed in Non-Patent Document 1.
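The specific processing of Non-Patent Document 1 is not reproduced in this description. As a minimal sketch only, the following Python fragment shows the general kind of weighting step an NLM filter performs: a template SSD is computed between the target pixel and each search point, converted to a weight (a Gaussian fall-off exp(-SSD/h²) is assumed here), and the pixel values at the search points are averaged with these weights. The function names `ssd` and `nlm_pixel` and the decay parameter `h` are hypothetical, not taken from the patent.

```python
import math

def ssd(img, p, q, template):
    # Sum of squared differences between the template of pixels around
    # position p and the template of pixels around position q; offsets
    # in `template` are (dy, dx) pairs relative to each center
    return sum((img[p[0] + dy][p[1] + dx] - img[q[0] + dy][q[1] + dx]) ** 2
               for dy, dx in template)

def nlm_pixel(img, p, search, template, h):
    # Weighted sum over the search points around p; the weight of each
    # search point falls off with its template SSD from the target pixel
    num = den = 0.0
    for dy, dx in search:
        q = (p[0] + dy, p[1] + dx)
        w = math.exp(-ssd(img, p, q, template) / (h * h))
        num += w * img[q[0]][q[1]]
        den += w
    return num / den
```

On a flat region every weight is 1 and the filter returns the common pixel value, which illustrates why applying it there costs computation without changing the signal.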

As for the reference noise reduction coefficient, the reference search shape, and the reference template shape, information set by the user may be introduced into the video encoding device or the video decoding device, or fixed values may be stored in advance and those values used.

This embodiment illustrates an example in which the reference noise reduction coefficient is introduced and the search shape and the template shape are stored in the storage modules 303 and 304 in advance as fixed values. When the search shape and the template shape are arbitrarily set by a user, the video encoding device encodes these pieces of information in the same manner as the noise reduction coefficient, and the video decoding device decodes these pieces of information, thereby realizing the user's setting.

Fig. 8 is a flowchart of the processing of the noise reduction filter processing module 30 shown in Fig. 7. First, in step S301, the noise reduction filter processing module 30 receives the reference noise reduction coefficient, the noise reduction target image, and the encoding information, which are introduced from the outside. The noise reduction target image is introduced from the decoded image storage module 110 (in the case of encoding) or the decoded image storage module 214 (in the case of decoding), and the encoding information is introduced from the encoding information storage module 126 (in the case of encoding) or the encoding information storage module 218 (in the case of decoding).

In step S302, the noise reduction coefficient setting module 301 sets the noise reduction coefficients that correspond to the pixel positions of the noise reduction target image, using the input reference noise reduction coefficient and the input encoding information.

Next, in step S303, the deviation degree detection module 305 receives the noise reduction target image, calculates the degree of deviation between each target pixel and its surrounding pixels, and outputs a group of deviation degrees corresponding to each pixel. In step S304, the template shape setting module 306 sets and outputs a restricted template shape corresponding to each pixel, using the group of deviation degrees and the predetermined template shape stored in the template shape storage module 304. When the template shape is restricted, the input template shape is regarded as the maximum shape, and the restriction is applied so that the lower the degree of deviation, the smaller the template shape.

Finally, in step S305, the NLM filter execution module 302 executes the NLM filter in accordance with the set information and outputs the resulting filtered image.

Fig. 9 is a diagram describing an example of restriction of the template shape by the template shape setting module 306. For example, assume that the input reference template shape is a 5×5 block, as shown in Fig. 9(A). This template shape is restricted in accordance with the degrees of deviation, as shown, for example, in Fig. 9(B). When the SSD is used as the degree of similarity between templates, the initial template shape requires 24 additions, 25 subtractions, and 25 multiplications; in contrast, the restricted template shape shown in Fig. 9(B) requires only 12 additions, 13 subtractions, and 13 multiplications, and therefore the computational complexity is reduced by almost 50%.
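The operation counts above follow directly from the template element count: an SSD over n template elements needs one subtraction and one multiplication per element and n − 1 additions to accumulate the sum. A small sketch (the helper name `ssd_op_count` is hypothetical; 13 elements is assumed for the restricted shape of Fig. 9(B), matching the counts stated above):

```python
def ssd_op_count(n_elements):
    # SSD over an n-element template: one subtract and one multiply per
    # element, plus n - 1 additions to accumulate the squared differences
    return {"add": n_elements - 1, "sub": n_elements, "mul": n_elements}

full = ssd_op_count(25)        # 5x5 block template of Fig. 9(A)
restricted = ssd_op_count(13)  # restricted shape of Fig. 9(B), assumed 13 elements
```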

The processing by the noise reduction filter processing module 30 aims at removing noise due to encoding; it does not aim at removing, as in conventional image processing, noise that is uniformly applied to the entire frame. Noise due to encoding (encoding distortion) can be roughly classified into the following types:

1. Disappearance of the pattern due to smoothing

2. Mosquito noise around edges due to the DCT

3. Blocking noise formed at the edges of the Prediction Block and the Transform Block

Among these types, this embodiment is not directed at restoring a pattern lost by smoothing, since this is difficult to do. If the NLM filter is applied to such a smoothed region, in which there is no variation in pixel values, computational complexity is still required even though there is almost no difference between the pixel signal before the calculation and the pixel signal after the calculation. This embodiment computes the degree of deviation from the surrounding pixels in order to reduce the computational complexity that would otherwise also be spent on such smoothed regions.

[EXAMPLE 2 OF THE NOISE REDUCTION FILTER PROCESSING MODULE]

Fig. 10 is a diagram illustrating a second example configuration of the noise reduction filter processing module. The noise reduction filter processing module 31 shown in Fig. 10 is a loop filter, which is used as the above-described noise reduction filter processing module 113 in the video encoding device shown in Fig. 1. Moreover, the noise reduction filter processing module 31 is also used as the noise reduction filter processing module 215 in the video decoding device shown in Fig. 4.

The noise reduction filter processing module 31 comprises a noise reduction coefficient setting module 311, an NLM filter execution module 312, a search shape storage module 313, a template shape storage module 314, a deviation degree detection module 315, a template shape setting module 316, an edge direction detection module 317, and a template shape re-setting module 318.

The noise reduction filter processing module 31 differs from the above-described noise reduction filter processing module 30 shown in Fig. 7 in that the edge direction detection module 317 and the template shape re-setting module 318 are provided. The other modules have the same functions as those provided in the noise reduction filter processing module 30.

The edge direction detection module 317 detects the edge directions of the noise reduction target image and outputs a group of edge directions corresponding to the pixels in the noise reduction target image. As an example of the edge directions, indicators numbered in accordance with the directions, or a group of angles, may be output.

The template shape re-setting module 318 re-sets, with an additional restriction, the template shapes that were excessively allocated by the template shape setting module 316, with reference to: the group of edge directions detected by the edge direction detection module 317; the group of deviation degrees detected by the deviation degree detection module 315; and the encoding information. In particular, it further restricts template shapes that were excessively allocated to the surroundings of blocking noise, the formation of which is peculiar to the edges of the Prediction Block and the Transform Block.

Fig. 11 is a flowchart of the processing of the noise reduction filter processing module 31 shown in Fig. 10. First, in step S311, the noise reduction filter processing module 31 receives the reference noise reduction coefficient, the noise reduction target image, and the encoding information, which are introduced from the outside.

In step S312, the noise reduction coefficient setting module 311 sets the noise reduction coefficients that correspond to the pixel positions of the noise reduction target image, using the input reference noise reduction coefficient and the input encoding information. Next, in step S313, the deviation degree detection module 315 receives the noise reduction target image, calculates the degree of deviation between each target pixel and its surrounding pixels, and outputs a group of deviation degrees corresponding to each pixel. In step S314, the template shape setting module 316 sets and outputs a restricted template shape corresponding to each pixel, using the group of deviation degrees and the predetermined template shape stored in the template shape storage module 314. When the template shape is restricted, the input template shape is regarded as the maximum shape, and the restriction is applied so that the lower the degree of deviation, the smaller the template shape. The above processes of steps S311 through S314 are exactly the same as those of the noise reduction filter processing module 30 described with reference to Fig. 8.

In step S315, the edge direction detection module 317 detects the edge direction at each point of the noise reduction target image and outputs a group of edge directions. In step S316, the template shape re-setting module 318 re-sets the template shapes using the encoding information, the group of edge directions detected by the edge direction detection module 317, and the group of deviation degrees detected by the deviation degree detection module 315, in order to reduce the excessive allocation of template shapes set in step S314. Finally, in step S317, the NLM filter execution module 312 executes the NLM filter in accordance with the set information and outputs the resulting filtered image.

[EXAMPLE OF A METHOD OF DETECTING THE DEGREE OF DEVIATION]

An example of a method of detecting the degree of deviation in the deviation degree detection modules 305 and 315 will now be described. Fig. 12A and Fig. 12B are diagrams describing an example of a method of detecting the degree of deviation. A description will be given for the case where the pixel z shown in Fig. 12A is set as the noise reduction target pixel, and the pixels c_a to c_x are its surrounding pixels. Here, a weighted sum obtained by multiplying the absolute differences between the pixel z and the surrounding pixels (c_a to c_x) by coefficients that take attenuation depending on distance into account is used as an index of the degree of deviation (DiffIntensity). That is, DiffIntensity is calculated using the following equation.

The coefficients c_A to c_E in the equation are defined in advance in accordance with the distances between the target pixel and the surrounding pixels, and they can be set arbitrarily. For example, a Gaussian distribution depending on distance or the like can be applied to set the coefficients c_A to c_E. It should be noted that not all of the coefficients c_A to c_E are mandatory targets, and it is possible to reduce the computational complexity of the filter by, for example, setting c_D to 0.
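The equation itself has not survived reproduction here, but the text defines DiffIntensity as a distance-weighted sum of absolute differences. A sketch under that reading follows; the helper name `diff_intensity` and the offset-to-coefficient mapping `coeffs` are illustrative assumptions, not the patent's notation.

```python
def diff_intensity(img, p, coeffs):
    # coeffs maps a relative offset (dy, dx) to its distance-dependent
    # coefficient; the degree of deviation is the weighted sum of the
    # absolute differences between the target pixel z and its neighbours
    z = img[p[0]][p[1]]
    return sum(c * abs(img[p[0] + dy][p[1] + dx] - z)
               for (dy, dx), c in coeffs.items())
```

Setting some coefficients to 0 (as the text suggests for c_D) simply drops the corresponding offsets from `coeffs`, reducing the per-pixel cost.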

Upon completion of the calculation of DiffIntensity for all the pixels, the deviation degree detection modules 305 and 315 output the group of deviation degrees to the downstream template shape setting modules 306 and 316.

Alternatively, it is considered that the absolute value of a Laplacian filter, the sum of squared differences (SSD), the variance, etc. can be applied to calculate the degree of deviation.

To further reduce the computational complexity, a method is also considered in which several sampling points are set as calculation targets for each N×N block of arbitrary size in the decoded image, and the deviation degrees are used as typical values at the pixel positions.

Fig. 12B illustrates an example of thinning out the number of samples. In this example, the image is divided into 2×2 blocks, and the calculation targets are the upper-left pixel and the lower-right pixel of each block. When the degree of deviation in this example is denoted as DiffIntensity2×2, DiffIntensity2×2 is calculated by the following equation.

The above DiffIntensity2×2 is used as a typical value when the degree of deviation at the pixel positions c_A to c_D is referenced. In this example, the computational complexity required to calculate the degree of deviation is reduced almost by half.
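The thinning-out can be sketched as follows. The deviation is actually computed only at the upper-left and lower-right pixel of each 2×2 block, and the remaining two positions reuse a computed neighbour as their typical value; which of the two values the skipped positions borrow is not specified in the text, so the assignment below is an assumption, as are the names `diff_intensity_2x2` and `diff_fn`. Even image dimensions are assumed.

```python
def diff_intensity_2x2(img, diff_fn):
    # Evaluate diff_fn only at the top-left and bottom-right pixel of
    # each 2x2 block; the other two positions borrow a computed value
    # as their typical value (borrowing pattern assumed for illustration)
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for by in range(0, h, 2):
        for bx in range(0, w, 2):
            tl = diff_fn(img, (by, bx))          # computed
            br = diff_fn(img, (by + 1, bx + 1))  # computed
            out[by][bx] = tl
            out[by][bx + 1] = tl                 # borrowed typical value
            out[by + 1][bx] = br                 # borrowed typical value
            out[by + 1][bx + 1] = br
    return out
```

Only half of the pixels invoke `diff_fn`, which is where the roughly 50% reduction in complexity comes from.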

[EXAMPLE 1 OF A METHOD OF SETTING THE TEMPLATE SHAPE]

As an example of a method of setting the template shape in the template shape setting modules 306 and 316, an example will be given of reducing the template shape, for all the pixels, from a reference template shape in the form of a 3×3 block set from the device, using the group of deviation degrees calculated by the deviation degree detection modules 305 and 315.

Fig. 13 is a diagram illustrating a histogram of the degrees of deviation and the relationship between thresholds and setting regions. When the target is a natural image, the histogram of the group of deviation degrees is shifted toward lower values, and the higher the degree of deviation, the lower the frequency of occurrence. Moreover, the histogram has the property that it shifts toward lower degrees of deviation as the value of the quantization parameter increases.

The template shape setting modules 306 and 316 create a histogram of the input group of deviation degrees, divide the histogram into four parts so that their areas are equal to each other, and set the template shapes TmpShape for the regions α to δ as follows. The threshold between the regions α and β is denoted by Thα, the threshold between the regions β and γ by Thβ, and the threshold between the regions γ and δ by Thγ.

(1) If the degree of deviation is less than the threshold value Thα, TmpShape = Nothing (the number of elements is 0).

When TmpShape is Nothing, template matching is not performed.

(2) If the degree of deviation is greater than or equal to the threshold value Thα and less than Thβ, TmpShape = Point (the number of elements is 1).

When TmpShape is Point, the SSD is calculated only between the target pixel and the pixel at the search point.

(3) If the degree of deviation is greater than or equal to the threshold value Thβ and less than Thγ, TmpShape = Cross (the number of elements is 5).

When TmpShape is Cross, the comparison is performed using a template shape of five pixels: the target pixel and the four adjacent (top, bottom, left, and right) pixels.

(4) If the degree of deviation is greater than or equal to Thγ, TmpShape = Block3×3 (the number of elements is 9).

When TmpShape is Block3×3, the comparison is performed for all the points of the 3×3-block template shape.
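The four-way selection above can be sketched directly; the function name `select_template_shape` and the string labels for the shapes are illustrative, with the thresholds Thα, Thβ, Thγ passed in as parameters.

```python
def select_template_shape(diff, th_a, th_b, th_c):
    # Map a degree of deviation to a template shape, from Nothing
    # (no template matching at all) up to the full 3x3 block
    if diff < th_a:
        return "Nothing"   # 0 elements: skip template matching
    if diff < th_b:
        return "Point"     # 1 element: compare the target pixels only
    if diff < th_c:
        return "Cross"     # 5 elements: target plus 4-neighbours
    return "Block3x3"      # 9 elements: full reference shape
```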

In comparison with a method that performs the comparison for all the points of the 3×3-block template shape and for all the search points, introducing this method yields the following computational complexities.

Region α: computational complexity 0

Region β: computational complexity 1/9

Region γ: computational complexity 5/9

Region δ: computational complexity 1

Because each region occupies 1/4 of the entire frame, the total computational complexity is (0 + 1/9 + 5/9 + 1)/4 = 5/12; that is, the theoretical computational complexity can be reduced to a little less than 1/2.

The reason why the template shape is determined in accordance with the magnitude of the degree of deviation in this way is as follows. A signal with a high degree of deviation, as a rule, occurs in the vicinity of strong edges, while faint edges, mosquito noise due to the DCT, noise at the time of image capture, etc. manifest themselves as signals with low to medium degrees of deviation. Because a property of the NLM filter is that it is effective in particular in the vicinity of edges, this embodiment allocates a large template shape to the region (δ) where the degree of deviation is high, thereby suppressing deterioration of the noise reduction effect.

[EXAMPLE 2 OF A METHOD OF SETTING THE TEMPLATE SHAPE]

As another example of a method of setting the template shape in the template shape setting modules 306 and 316, an example will be given of reducing the template shape, for all the pixels, from a reference template shape in the form of an N×N block set from the device, using the group of deviation degrees calculated by the deviation degree detection modules 305 and 315.

Fig. 14A and Fig. 14B are diagrams describing an example of setting in accordance with an integration ratio. The template shape setting modules 306 and 316 create a histogram of the group of deviation degrees and set the template shape using the ratio between the integrated values of the function f(DiffIntensity) (Fig. 14B) representing the distribution of deviation degrees, where the degree of deviation of the noise reduction target pixel in the histogram is denoted as ThIntensity.

That is, when the reference template shape is set as an N×N block, as shown in Fig. 14A, the reduced template shape is set in the form of a circle whose diameter is defined as follows.

Accordingly, it is possible to perform effective template matching by using a small template shape when the degree of deviation is low and a large template shape when the degree of deviation is high.

[EXAMPLE 1 OF EDGE DIRECTION DETECTION]

As one example of a method of detecting the edge direction in the edge direction detection module 317, an example will be given in which a Sobel filter is applied to the decoded image to detect the edge direction, and an indicator assigned in accordance with the direction is output.

Figs. 15A through 15C are diagrams describing an example of edge direction detection (the Sobel filter and the indicators in accordance with directions). As shown in Fig. 15A, the Sobel operator is applied to the surrounding pixels, and an indicator VecIndex (0 to 10) is assigned in accordance with the detected edge direction, as shown in Fig. 15B. In this case, one threshold value Th is set, and 0 is assigned as VecIndex if the sum of the absolute values of the components along the x-axis and the y-axis (dx and dy) is less than the threshold value Th, since it is considered that no strong edge is present at the target pixel; the indicators are output as a group of edge directions.

Fig. 15C illustrates the algorithm used to assign the indicator.

- If |dx| + |dy| < Th, VecIndex = 0.

- If the above condition is not met and dy = 0, VecIndex = 6.

- If the above conditions are not met and dx/dy < -8.0, VecIndex = 6.

- If the above conditions are not met and dx/dy < -2.0, VecIndex = 7.

- If the above conditions are not met and dx/dy < -1.0, VecIndex = 8.

- If the above conditions are not met and dx/dy < -0.5, VecIndex = 9.

- If the above conditions are not met and dx/dy < -0.125, VecIndex = 10.

- If the above conditions are not met and dx/dy < 0.125, VecIndex = 1.

- If the above conditions are not met and dx/dy < 0.5, VecIndex = 2.

- If the above conditions are not met and dx/dy < 1.0, VecIndex = 3.

- If the above conditions are not met and dx/dy < 2.0, VecIndex = 4.

- If the above conditions are not met and dx/dy < 8.0, VecIndex = 5.

- If none of the above conditions is met, VecIndex = 6.
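As a sketch, the indicator assignment can be written as a single threshold chain. Note one assumption: in the reproduced listing the first group of dx/dy comparisons had apparently lost its minus signs (each threshold appeared twice, mapped to two different indices), so the negative thresholds below are a reconstruction under which the chain is monotone and covers all ratios; the function name `vec_index` is likewise illustrative.

```python
def vec_index(dx, dy, th):
    # Edge-direction indicator from Sobel components dx, dy.
    # The negative thresholds are an assumed reconstruction of the
    # corrupted listing (minus signs restored to make the chain monotone).
    if abs(dx) + abs(dy) < th:
        return 0            # flat region: no strong edge at this pixel
    if dy == 0:
        return 6            # dx/dy -> infinity: vertical-edge index
    r = dx / dy
    if r < -8.0:   return 6
    if r < -2.0:   return 7
    if r < -1.0:   return 8
    if r < -0.5:   return 9
    if r < -0.125: return 10
    if r < 0.125:  return 1
    if r < 0.5:    return 2
    if r < 1.0:    return 3
    if r < 2.0:    return 4
    if r < 8.0:    return 5
    return 6
```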

To reduce the influence of mosquito noise due to the discrete cosine transform (DCT) and of noise at the time of image capture, and to reduce the computational complexity, it is also effective to apply the Sobel filter to an image obtained by scaling the input noise reduction target image down to 1/N in the vertical direction and the horizontal direction, rather than to the input noise reduction target image itself.

When the Sobel filter is applied to the image scaled down to 1/N, the output calculation result of the Sobel filter is regarded as a typical value for the group of pixels used in the scaling down.

[EXAMPLE 2 OF EDGE DIRECTION DETECTION]

As another example of a method of detecting the edge direction in the edge direction detection module 317, an example will be given in which a Sobel filter is applied to the noise reduction target image to detect the edge direction, and an angle in radians (Radian) is output. As described above, the Sobel operator shown in Fig. 15A is applied to the surrounding pixels, and the angle in radians is calculated from the components along the x-axis and the y-axis (dx and dy) using the arc tangent (-π/2 to π/2). In this case, when the sum of the absolute values of dx and dy is less than the threshold value Th, a value outside the output range of the arc tangent (-π/2 to π/2) is set (EXatan, for example, 100).

That is, the angle in radians (Radian) is defined as follows.

- If |dx| + |dy| < Th, Radian = EXatan.

- If the above condition is not met and dy = 0, Radian = π/2.

- Otherwise, Radian = arctan(dx/dy).
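The three cases above translate directly into code; a minimal sketch follows, with the sentinel value 100 taken from the text's example and the function name `edge_radian` assumed for illustration.

```python
import math

EX_ATAN = 100.0  # sentinel outside [-pi/2, pi/2], marking a flat region

def edge_radian(dx, dy, th):
    # Edge angle in radians from Sobel components dx, dy
    if abs(dx) + abs(dy) < th:
        return EX_ATAN       # no strong edge at this pixel
    if dy == 0:
        return math.pi / 2   # dx/dy undefined: treat as pi/2
    return math.atan(dx / dy)
```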

[EXAMPLE OF A METHOD OF RE-SETTING THE TEMPLATE SHAPE]

As one example of a method of re-setting the template shape in the template shape re-setting module 318, an example will be described in which the template shapes set by the template shape setting module 316, shown in Fig. 9, are re-set using the group of deviation degrees output by the deviation degree detection module 315 shown in Fig. 12B, the group of edge directions shown in Fig. 15B and Fig. 15C, and the encoding information, in order to further reduce the size of the template shapes.

Figs. 16A to 16C are diagrams describing a method of re-setting the template shape. First, the size of the Transform Block, which is the unit of transform processing such as the DCT, is derived from the encoding information, and N pixels near the outer boundaries of each block are set as the target region, as shown in Fig. 16A. For the other regions, the setting result of the above-described template shape setting module 316 is used without modification.

When the group of edge directions is used, as shown in Fig. 16B, the following tendencies appear: VecIndex tends to take the value 1 at the upper and lower parts of the outer boundary of a block where blocking noise is significant; VecIndex tends to take the value 6 at the left and right parts of the outer boundary of the block; and a high degree of deviation is found in these parts of the region. Moreover, in regions with a high quantization parameter, where blocking noise is noticeable, the inside of the Transform Block tends to be a relatively flat region.

That is, in the range from medium bit rate to low bit rate, where the quantization parameter QP is large, if the sign shown in Fig. 16B is observed when the target region of Fig. 16A is set, there is a high probability that blocking noise is present. For a pixel position at a block edge where VecIndex takes the value 1 or 6, the calculation result using the template shape is almost the same as when a small template shape is used. However, a large template shape is set if a sharp edge is detected at the block edge, which leads to inefficiency. As a consequence, the template shape re-setting module 318 corrects the template shape to resolve this inefficiency.

As a specific method of resolution, for example, the following method is possible. The following setting is performed at each pixel position, using predetermined threshold values Th1QP and Th2QP (where Th1QP < Th2QP) for the quantization parameter QP and predetermined threshold values Th1Diff and Th2Diff (where Th1Diff < Th2Diff) for the degree of deviation DiffIntensity. In this way, the computational complexity can be reduced while suppressing deterioration of the noise reduction effect. It should be noted that Th1Diff and Th2Diff are assumed to change in accordance with the quantization parameter QP.

If either of the following conditions holds, the template shape TmpShape is re-set in accordance with the branching algorithm below.

[Condition 1]: VecIndex at the target pixel position, when the target region is set as the upper or lower part of the outer boundary of the block, is 1 (horizontal edge) or 0 (flat region), and VecIndex at the adjacent pixel position inside the block is 0 (flat region).

[Condition 2]: VecIndex at the target pixel position, when the target region is set as the left or right part of the outer boundary of the block, is 6 (vertical edge) or 0 (flat region), and VecIndex at the adjacent pixel position inside the block is 0 (flat region).

[EXAMPLE OF THE RE-SETTING BRANCHING ALGORITHM]

(1) If the quantization parameter QP > the threshold Th2QP, the template shape TmpShape is Block3×3 or Cross, and the degree of deviation DiffIntensity > the threshold Th1Diff, then TmpShape is re-set as Point (the number of elements is 1).

(2) If the above condition is not met, QP > Th1QP, TmpShape is Block3×3, and DiffIntensity > Th2Diff, then TmpShape is re-set as Point (the number of elements is 1).

(3) If the above conditions are not met, QP > Th1QP, TmpShape is Block3×3, and DiffIntensity > Th1Diff, then TmpShape is re-set as Cross (the number of elements is 5).

(4) If the above conditions are not met, QP > Th1QP, TmpShape is Cross, and DiffIntensity > Th1Diff, then TmpShape is re-set as Point (the number of elements is 1).

(5) If none of conditions (1) through (4) above is met, the process is completed without re-setting TmpShape.
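The five branches can be sketched as a single function; the string labels for the shapes and the function name `reinstall_shape` are illustrative, with the thresholds passed in as parameters.

```python
def reinstall_shape(shape, qp, diff, th1_qp, th2_qp, th1_diff, th2_diff):
    # Further restrict an over-allocated template shape at block edges
    # where blocking noise, rather than a real edge, is the likely cause
    if qp > th2_qp and shape in ("Block3x3", "Cross") and diff > th1_diff:
        return "Point"   # branch (1)
    if qp > th1_qp and shape == "Block3x3" and diff > th2_diff:
        return "Point"   # branch (2)
    if qp > th1_qp and shape == "Block3x3" and diff > th1_diff:
        return "Cross"   # branch (3)
    if qp > th1_qp and shape == "Cross" and diff > th1_diff:
        return "Point"   # branch (4)
    return shape         # branch (5): keep the original shape
```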

Because the VecIndex caused by blocking noise cannot be predicted for the regions E, F, G, and H at the four corners in Fig. 16C, the template shape is re-set for all the pixel positions within these regions, using branches (1) through (5) of the re-setting algorithm, if the following conditions are true.

- Region E: a pixel that satisfies the above conditions exists in the group of pixels of region A, which adjoins this region on the right, or in the group of pixels of region C, which adjoins this region below.

- Region F: a pixel that satisfies the above conditions exists in the group of pixels of region A, which adjoins this region on the left, or in the group of pixels of region D, which adjoins this region below.

- Region G: a pixel that satisfies the above conditions exists in the group of pixels of region B, which adjoins this region on the right, or in the group of pixels of region C, which adjoins this region above.

- Region H: a pixel that satisfies the above conditions exists in the group of pixels of region B, which adjoins this region on the left, or in the group of pixels of region D, which adjoins this region above.

This embodiment demonstrates an example in which the computational complexity is reduced for the group of pixels at block edges; conversely, an implementation is also possible in which TmpShape is set to Nothing for the entire group of pixels inside the block, and the noise reduction filter is applied only to the block edges, thereby significantly reducing the computational complexity.

[EFFECT OF REDUCING THE COMPUTATIONAL COMPLEXITY]

It was confirmed that, when the noise reduction filter in accordance with the present technique is applied to standard video coding, the required computational complexity can be significantly reduced relative to the filter described above, while suppressing deterioration of the peak signal-to-noise ratio (PSNR), in comparison with a method that includes the conventional NLM filter (a noise reduction filter in accordance with the non-local means method), in next-generation video coding standards.

[EXAMPLE CONFIGURATION WHEN USING A SOFTWARE PROGRAM]

The processes of the above image processing, video encoding, and video decoding can also be implemented by a computer and a software program; the program can be recorded on a computer-readable recording medium, and the program may be provided over a network.

Fig. 17 illustrates an example configuration of a system when an embodiment of the present invention is implemented by means of a computer and a software program.

The present system is configured so that the following are connected to each other via a bus: a central processing unit (CPU) 50, which executes the program; a memory 51, such as random-access memory (RAM), which stores the program and the data accessed by the CPU 50; a video signal storage module 52, which stores the encoding target video signal or the video signal of decoded images; a program storage module 53, which stores a program for directing the CPU 50 to execute the processes described in the embodiments of the present invention; and an encoded stream storage module 54, which stores the bit stream resulting from encoding or the decoding target bit stream.

The program storage module 53 stores one of the following: a video encoding program 531 for encoding a video signal using an embodiment of the present invention, and a video decoding program 532 for decoding an encoded bit stream using an embodiment of the present invention. The program storage module 53 does not necessarily store both of these programs.

Moreover, when the present system is used as a video encoding device, the video encoding program 531 is loaded into the memory 51; the CPU 50 sequentially fetches and executes instructions of the video encoding program 531 loaded in the memory 51, encodes the video signal stored in the video signal storage module 52 using the technique described in the embodiment of the present invention, and stores the bit stream resulting from the encoding in the encoded stream storage module 54. Alternatively, the bit stream may be output to an external device through an interface such as a network adapter.

Moreover, when the present system is used as a video decoding device, the video decoding program 532 is loaded into the memory 51; the CPU 50 sequentially fetches and executes instructions of the video decoding program 532 loaded in the memory 51, decodes the bit stream stored in the encoded stream storage module 54 using the technique described in the embodiment of the present invention, and stores the video signal resulting from the decoding in the video signal storage module 52. Alternatively, the video signal resulting from the decoding may be output to an external playback device.

Although embodiments of the present invention have been described in detail above with reference to the drawings, the specific configuration is not limited to these embodiments; designs and the like (additions, omissions, replacements, and other modifications of the configuration) that do not depart from the essence of the present invention are also included. The present invention is not limited by the above description, and is limited only by the appended claims.

INDUSTRIAL APPLICABILITY

The present invention can be used, for example, in a loop filter that reduces blocking noise and the like for video encoding/decoding. The present invention can reduce the computational complexity of the noise reduction filter while suppressing a decrease in encoding efficiency.

DESCRIPTION OF REFERENCE SIGNS

30, 31, 113, 215: noise reduction filter processing module

301, 311: noise reduction coefficient setting module

302, 312: NLM filter execution module

305, 315: deviation degree detection module

306, 316: template shape setting module

317: edge direction detection module

318: template shape re-setting module

1. A video encoding/decoding method that encodes or decodes video using a loop filter,
wherein the loop filter is a filter that removes noise of a decoded image by an image processing method which performs template matching between a template serving as the comparison source for a noise reduction target pixel in the decoded image and a template for each of search points that are search targets within a search shape in the decoded image, and removes the noise of the target pixel by using weight coefficients in accordance with the degrees of similarity between the templates and a weighted sum of the pixel values at the search points,
the method executing:
a step of calculating a degree of deviation between the target pixel and a surrounding pixel of the target pixel, using the decoded image; and
a step of restricting the template shape, using the degree of deviation as an index for restricting the template shape of the aforementioned templates, so that the lower the degree of deviation relative to the maximum degree of deviation in the decoded image, the smaller the template shape.

2. The video encoding/decoding method according to claim 1, the method further executing:
a step of calculating an edge direction using the decoded image which is the target of noise removal; and
a step of further limiting the template shape, within the limited template shape, to one or more pixels on the outer boundary of the encoding target block or decoding target block, based on the calculated edge direction, a quantization parameter of the encoding target block or decoding target block, and the degree of deviation.

3. A video encoding/decoding device that encodes or decodes video using a loop filter,
wherein the loop filter is a filter that removes noise of a decoded image using an image processing technique which performs template matching between a template serving as a comparison source for a noise removal target pixel in the decoded image and a template for each of search points which are search targets within a search shape in the decoded image, and removes the noise of the target pixel using a weight coefficient in accordance with the degree of similarity between the templates and a weighted sum of pixel values at the search points,
the device comprising:
a deviation degree calculation unit which calculates a degree of deviation between the target pixel and a surrounding pixel of the target pixel using the decoded image; and
a template shape setting unit which limits the template shape, using the degree of deviation as an index for limiting the shape of the templates, such that the smaller the degree of deviation relative to the maximum value of the degree of deviation in the decoded image, the smaller the template shape.

4. The video encoding/decoding device according to claim 3, the device further comprising:
an edge direction detection unit which calculates an edge direction using the decoded image which is the target of noise removal; and
a template shape resetting unit which further limits the template shape, within the limited template shape, to one or more pixels on the outer boundary of the encoding target block or decoding target block, based on the calculated edge direction, a quantization parameter of the encoding target block or decoding target block, and the degree of deviation.

5. A computer-readable recording medium storing a video encoding/decoding program which causes a computer to execute the video encoding/decoding method according to claim 1 or 2.
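The filtering described in claims 1 and 3 can be illustrated with a minimal sketch: a non-local means (NLM) filter whose template shape shrinks as the local degree of deviation falls relative to the frame maximum. The 3x3-mean deviation measure, the linear deviation-to-radius mapping, and the Gaussian weight kernel used here are illustrative assumptions, not details taken from the patent text.

```python
import numpy as np

def deviation_map(img):
    """Degree of deviation: |pixel - mean of its 3x3 neighbourhood|
    (one plausible measure; the exact metric is left open here)."""
    H, W = img.shape
    p = np.pad(img.astype(np.float64), 1, mode="edge")
    mean = sum(p[dy:dy + H, dx:dx + W]
               for dy in range(3) for dx in range(3)) / 9.0
    return np.abs(img - mean)

def nlm_adaptive(img, search_r=2, max_tmpl_r=2, h=10.0):
    """NLM denoising in which the template radius shrinks as the local
    deviation falls relative to the frame maximum (hypothetical mapping)."""
    img = img.astype(np.float64)
    dev = deviation_map(img)
    dev_max = dev.max() if dev.max() > 0 else 1.0
    # Smaller deviation -> smaller template radius, in {0, ..., max_tmpl_r}.
    radius = np.minimum((dev / dev_max * (max_tmpl_r + 1)).astype(int),
                        max_tmpl_r)

    pad = search_r + max_tmpl_r
    p = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            r = radius[y, x]            # limited template shape for this pixel
            cy, cx = y + pad, x + pad
            tmpl = p[cy - r:cy + r + 1, cx - r:cx + r + 1]
            wsum = vsum = 0.0
            # Search shape: square window of radius search_r.
            for dy in range(-search_r, search_r + 1):
                for dx in range(-search_r, search_r + 1):
                    cand = p[cy + dy - r:cy + dy + r + 1,
                             cx + dx - r:cx + dx + r + 1]
                    ssd = np.sum((tmpl - cand) ** 2)  # template dissimilarity
                    w = np.exp(-ssd / (h * h))        # weight coefficient
                    wsum += w
                    vsum += w * p[cy + dy, cx + dx]
            out[y, x] = vsum / wsum                   # weighted sum of pixels
    return out
```

The per-pixel radius choice is where the complexity reduction comes from: a smaller template means fewer squared-difference terms per search point, and in flat regions (low deviation) the radius collapses toward zero.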



 
