Deblocking method, deblocking apparatus, deblocking program, and computer-readable recording medium on which said program is recorded

FIELD: radio engineering, communication.

SUBSTANCE: disclosed is a deblocking method which includes: a step of detecting an edge direction, which indicates the direction of change in pixel value in each block; a step of determining the direction of a deblocking filter to be applied to a block boundary in accordance with the edge directions detected for a target block containing the block boundary to be deblocked and for a block in contact with the target block; and a step of applying the deblocking filter to the block boundary in accordance with the determined direction.

EFFECT: providing deblocking in which texture in inclined directions that must be retained in the image is preserved while block noise is efficiently reduced, and the coding efficiency of the whole video is improved.

9 cl, 44 dwg

 

TECHNICAL FIELD TO WHICH THE INVENTION RELATES

The present invention relates to a deblocking method used in a video encoding device and a video decoding device that implement block-based coding, a deblocking apparatus therefor, a deblocking program used to implement the above-mentioned deblocking method, and a computer-readable recording medium on which the above-mentioned program is recorded.

The application claims priority based on Japanese patent application No. 2008-271496, filed October 22, 2008.

PRIOR ART

In video encoding, inter coding with prediction (motion compensation), which performs prediction between different frames, uses a decoded image as a reference image. Therefore, when encoding is performed at a low bit rate, block distortion arises and degrades the decoded image, and accordingly there is a problem of increasing deterioration of image quality caused by references to the degraded image.

Therefore, in-loop filters for reducing block distortion have been proposed and introduced into video coding standards. In addition, within MPEG (Moving Picture Experts Group) of ISO (International Organization for Standardization) and VCEG (Video Coding Experts Group) of ITU-T (Telecommunication Standardization Sector of the International Telecommunication Union), many proposals have been made regarding filters, including prefilters, postfilters and in-loop filters, which are now widely discussed. Unlike prefilters and postfilters, an in-loop filter not only improves the quality of the filtered image itself, but also benefits subsequent frames that refer to that image, so that the quality of the whole video can be improved (improving coding efficiency). Much improvement is therefore expected from in-loop filters.

In current video coding standards such as MPEG-1, MPEG-2, MPEG-4, H.261, H.262 and H.264, the image to be encoded is divided into M×N blocks (M and N are multiples of 2, for example, 4, 8 or 16) and then encoded. After the division, the difference between the block to be processed and a reference block (a decoded block within a frame or between frames) is obtained, the differential signal is subjected to orthogonal transformation, quantization is performed, entropy encoding is applied, and the resulting signal is output as binary data.

Exploiting the fact that human vision is insensitive to high frequencies, the high-frequency components of the image are removed when quantization is performed. At this stage, since the high-frequency components are removed block by block, block boundaries appear in the decoded image and block noise is superimposed on it. In particular, if the amount of code allocated to the video signal is small (i.e., at a low bit rate), the level of block noise is high.

In inter coding with motion-compensated prediction, in order to reduce redundancy in the time domain, the differential signal between a previous or subsequent reference frame and the frame to be processed is transmitted together with the motion vector (amount of movement). In this inter coding with motion-compensated prediction, when the referenced image includes block noise and is consequently degraded, the differential signal obtained by calculating the difference between the reference frame and the frame to be processed increases, which results in deterioration of the coding efficiency.

Accordingly, in H.264/AVC, when a reference frame, that is, a decoded image, is stored in the frame memory, a filter is used to reduce the block noise generated during encoding. This filter is called a deblocking filter.

It should be noted that the above description is disclosed in Non-Patent Document 1: Sakae Okubo, Shinya Kadono, Yoshihiro Kikuchi, Teruhiko Suzuki, "Revised edition, H.264/AVC textbook", Impress, 2006, pp. 140-144.

Fig. 24A shows the position of the deblocking filter in the encoding process, and Fig. 24B depicts the position of the deblocking filter in the decoding process.

The deblocking filter process is applied to the boundary of each of the 16 blocks (4×4) obtained by dividing one macroblock (hereinafter abbreviated in this document as MB). In addition, in the case of MB boundaries, if an adjacent MB is present, the pixels required for filtering can be obtained, and therefore the same process is applied.

Figs. 25A and 25B show the specific items to be processed. Here, Fig. 25A shows the position of the filter relative to block boundaries in the vertical direction, and Fig. 25B shows the position of the filter relative to block boundaries in the horizontal direction. Note that essentially only the parts indicated by solid lines are processed if the MB to be processed is treated as 8×8 blocks of the luminance signal, while both the parts indicated by solid lines and those indicated by dashed lines are processed if the MB to be processed is treated as 4×4 blocks of the luminance signal and the chrominance signal.

Depending on the characteristics of the image, there are positions where block distortion is formed easily and positions where it is not formed easily, and accordingly the deblocking filter process is applied adaptively. In particular, the process is changed in accordance with the following conditions:

The boundary strength (Bs value)

The absolute value of the difference between pixels at the boundary

The boundary strength is defined as shown in the following Table 1.

Table 1
At least one of the two blocks is intra-coded (intra mode) and the boundary is an MB boundary: Bs=4
Either of the two blocks is in intra mode, but the boundary is not an MB boundary: Bs=3
Neither block is in intra mode, and either block has orthogonal transform coefficients: Bs=2
Neither block is in intra mode, neither has transform coefficients, and the reference frames or the motion vector values differ: Bs=1
Neither block is in intra mode, neither has transform coefficients, and the reference frames and motion vector values are identical: Bs=0
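The cascade of Table 1 can be sketched as a simple decision function. The following is an illustrative Python reconstruction, not code from the standard; in particular, H.264/AVC actually compares motion-vector component differences against a threshold, which is simplified here to a plain inequality.

```python
def boundary_strength(p_intra, q_intra, on_mb_boundary,
                      p_has_coeffs, q_has_coeffs,
                      p_ref, q_ref, p_mv, q_mv):
    """Derive the boundary strength (Bs) for the boundary between two
    blocks P and Q, following the cascade of Table 1 (sketch)."""
    if (p_intra or q_intra) and on_mb_boundary:
        return 4          # intra-coded block on an MB boundary
    if p_intra or q_intra:
        return 3          # intra-coded block, inner block boundary
    if p_has_coeffs or q_has_coeffs:
        return 2          # non-zero orthogonal transform coefficients
    if p_ref != q_ref or p_mv != q_mv:
        return 1          # reference frames or motion vectors differ
    return 0              # identical references and motion: no filtering
```

For example, an intra block lying on an MB boundary yields Bs=4 regardless of the other arguments.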

As shown in Figs. 25A and 25B, assuming that the pixel values of one block are pm (0≤m<4; the closer the position to the block boundary, the smaller the subscript) and the pixel values of the other block are qn (0≤n<4; the closer the position to the block boundary, the smaller the subscript), the deblocking filter is activated if the following two conditions are satisfied.

1. Bs>0

2. |p0-q0|<α && |p1-p0|<β && |q1-q0|<β

Here, α and β are determined uniquely depending on the quantization parameter (QP) set for encoding. In addition, using the two parameters slice_alpha_c0_offset_div2 and slice_beta_offset_div2 included in the slice header, the user can also adjust α and β.
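The two activation conditions above translate directly into code. A minimal sketch (the α and β tables indexed by QP are defined in the standard and are passed in here as plain numbers):

```python
def filter_active(bs, p, q, alpha, beta):
    """Decide whether the deblocking filter is activated for one line
    of samples across a block boundary.  p and q hold the samples of
    the two blocks, index 0 being closest to the boundary:
    ... p1 p0 | q0 q1 ..."""
    return (bs > 0
            and abs(p[0] - q[0]) < alpha
            and abs(p[1] - p[0]) < beta
            and abs(q[1] - q[0]) < beta)
```

A large step across the boundary (|p0-q0| ≥ α) is taken to be a real image edge rather than block noise, so the filter stays off in that case.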

Furthermore, in addition to the above two parameters, the deblocking filter can be controlled at the three levels shown below by means of the two parameters deblocking_filter_control_present_flag, in the picture parameter set, and disable_deblocking_filter_idc, in the slice header:

1. The deblocking filter is applied to block boundaries and MB boundaries.

2. The deblocking filter is applied only to MB boundaries.

3. The deblocking filter is not applied.

It should be noted that, although they are not deblocking filters, various schemes have been proposed to improve intra prediction. The inventors of the present invention have also proposed a scheme that improves intra prediction by assigning weights in accordance with the texture in the image when performing prediction (see Shohei Matsuo, Seishi Takamura, Kazuto Kamikura, Yoshiyuki Yashima: "A Study on weighted intra prediction", Picture Coding Symposium of Japan, PCSJ2007, Non-Patent Document 3).

THE INVENTION

PROBLEMS TO BE SOLVED BY THE INVENTION

The standard deblocking filter always uses pixels in the direction (90°) perpendicular to the block edge, and the filter is applied adaptively in accordance with two conditions, namely (i) the boundary strength and (ii) the absolute value of the difference between the selected pixels.

However, since the standard technologies process only pixels in the perpendicular direction, if the image to be encoded contains texture in an inclined direction (e.g., an edge, an inclined pattern or a line), there is a probability that this texture is blurred.

In other words, although the standard deblocking filter process is applied adaptively, it contains no mechanism that takes into account the direction of the texture originally included in the image. Therefore, in the conventional technology the filter is applied even to texture that should essentially be preserved, and accordingly there is a probability that the oblique components of the texture are smoothed, degrading the subjective quality of the image.

The present invention has been made in view of such circumstances, and its objective is to provide a new deblocking technology which preserves textures in inclined directions that should be kept in the image while effectively reducing block noise. A further objective is not only to improve the subjective quality of the image itself, but also to improve the performance of inter coding with prediction by referring to the image with improved quality, so that the coding efficiency of the whole video can be improved.

MEANS FOR SOLVING THE ABOVE-MENTIONED PROBLEMS

To solve the above-mentioned problems, a deblocking method in accordance with the present invention is proposed, which is a deblocking method for reducing block distortion in a video encoding scheme that performs block-based predictive coding and in a video decoding scheme for decoding a video signal encoded by the above video encoding scheme, the method including: a detection step of detecting, for each block, an edge which indicates the direction of change in pixel value in each block; a determination step of determining the direction in which the deblocking filter should be applied to the block boundary, on the basis of the edge direction detected for the block to be processed, which contains the block boundary subjected to deblocking, and the edge direction detected for a block in contact with the block to be processed; and a filtering step of applying the deblocking filter to the block boundary in accordance with the determined direction.

In the deblocking method in accordance with the present invention, in the detection step, for each block, a horizontal component of the change in pixel value in the block and a vertical component of the change in pixel value in the block may be detected, and the edge direction may be detected on the basis of the detected horizontal and vertical components.

In the deblocking method in accordance with the present invention, in the detection step, for each block, information on the prediction mode used when the block is intra-coded may be obtained, and the edge direction may be detected on the basis of the obtained prediction-mode information.

The deblocking method in accordance with the present invention may also include a calculation step of calculating, for each block, the edge strength on the basis of the detected horizontal and vertical components; in the determination step, the edge strength calculated for the block to be processed may be compared with a predefined threshold value, and when the edge strength is less than or equal to the predefined threshold value, the direction in which the deblocking filter should be applied, determined on the basis of the edge directions, may be changed to the direction orthogonal to the block boundary.

In the deblocking method in accordance with the present invention, when the prediction-mode information for the block to be processed indicates a prediction mode that uses the average value of the pixels as the prediction signal, in the determination step the direction in which the deblocking filter should be applied, determined on the basis of the edge directions, may be changed to the direction orthogonal to the block boundary.

In the deblocking method in accordance with the present invention, the determination step may determine the direction in which the deblocking filter should be applied according to data stored in a storage means that stores information describing the correspondence between the edge direction of the block to be processed, the edge direction of the block in contact with the block to be processed, and the direction in which the deblocking filter should be applied, using as a key the edge direction detected for the block to be processed and the edge direction detected for the block in contact with the block to be processed.

A deblocking apparatus in accordance with the present invention is a deblocking apparatus for reducing block distortion occurring in a video encoding scheme that performs block-based predictive coding and in a video decoding scheme for decoding a video signal encoded by the above video encoding scheme, the apparatus including: a detection means for detecting, for each block, an edge which indicates the direction of change in pixel value in each block; a determination means for determining the direction in which the deblocking filter should be applied to the block boundary, on the basis of the edge direction detected for the block to be processed, which contains the block boundary subjected to deblocking, and the edge direction detected for a block in contact with the block to be processed; and a filter means for applying the deblocking filter to the block boundary in accordance with the determined direction.

The deblocking apparatus in accordance with the present invention may also include a storage means for storing information describing the correspondence between the edge direction of the block to be processed, the edge direction of the block in contact with the block to be processed, and the direction in which the deblocking filter should be applied, and the determination means determines the direction in which the deblocking filter should be applied according to the data stored in the storage means, using as a key the edge direction detected for the block to be processed and the edge direction detected for the block in contact with the block to be processed.

A deblocking program in accordance with the present invention is a program for executing the above-mentioned deblocking method on a computer.

A computer-readable recording medium in accordance with the present invention is a computer-readable recording medium on which the deblocking program for executing the above-mentioned deblocking method on a computer is recorded.

As described above, for an image containing many edges inclined in directions that are not preserved by the standard deblocking filter, which are considered to cause deterioration of image quality, the present invention can reduce the block noise present at block boundaries while preserving the textures in the inclined directions that should be preserved. Accordingly, the subjective quality of the image can be improved.

In addition, in accordance with the present invention, the decoded image has high image quality, and accordingly the present invention can reduce the differential signal in inter coding with prediction that refers to this image; as a result, the coding efficiency can be improved.

BRIEF DESCRIPTION OF THE DRAWINGS

In the drawings:

Fig. 1 depicts a diagram illustrating the basic principle of the present invention;

Fig. 2 depicts a diagram defining an edge;

Fig. 3 depicts a block diagram showing a deblocking apparatus in accordance with the first embodiment of the present invention;

Figs. 4 to 9 depict flowcharts of the sequences of operations performed by the deblocking apparatus in accordance with the first embodiment;

Figs. 10 and 11 depict diagrams of the edge detection process;

Fig. 12 depicts a diagram of the edge types into which edges are classified;

Fig. 13, Figs. 14A to 14E, Figs. 15A to 15G, Fig. 16 and Figs. 17A to 17G depict diagrams illustrating the process of determining the filtered pixels;

Fig. 18 depicts an explanatory diagram of the information stored in the storage node for information determining the filtered pixels;

Figs. 19A and 19B depict diagrams of the results of an experiment carried out to test the effectiveness of the first embodiment of the present invention;

Fig. 20 depicts a block diagram of a deblocking apparatus in accordance with the second embodiment of the present invention;

Figs. 21 and 22 depict flowcharts of the sequences of operations performed by the deblocking apparatus in accordance with the second embodiment;

Fig. 23 depicts a flowchart of the sequence of operations performed by the deblocking apparatus in accordance with the third embodiment;

Fig. 24A depicts a diagram showing the position at which the deblocking filter is implemented in the encoding process, and Fig. 24B depicts a diagram showing the position at which the deblocking filter is implemented in the decoding process;

Fig. 25A depicts a diagram showing the position of the deblocking filter relative to block boundaries in the vertical direction, and Fig. 25B depicts a diagram showing the position of the deblocking filter relative to block boundaries in the horizontal direction.

EMBODIMENTS OF THE INVENTION

First, prior to the description of embodiments of the present invention, the fundamental principle of the deblocking method, deblocking apparatus and deblocking program of the present invention is described. In the deblocking method, deblocking apparatus and deblocking program of the present invention, pixels are filtered while the direction of the filter is adaptively changed in accordance with the direction of the texture included in the image, in addition to the direction perpendicular to the block boundary. As a result, block noise can be effectively reduced while the texture inherent in the image is preserved, so that a deblocking filter that improves the subjective quality of the image can be realized.

As shown in part (a) of Fig. 1, it is assumed that in the block to be encoded there is texture in an inclined direction.

In this case, if the standard deblocking filter is used, as shown in part (b) of Fig. 1, the block noise at the block boundary can be reduced, but the inclined texture may be smoothed, depending on the circumstances, so that the texture can be damaged.

Accordingly, the filtering process is performed in the inclined direction in accordance with the texture direction, as shown in part (c) of Fig. 1, so that the block noise is reduced while the texture is maintained. Consequently, the subjective quality of the image can be improved, and a reduction of the differential signal when referring to the processed image can be realized, so that the coding efficiency can also be improved.
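As a hypothetical sketch of how filtering along an inclined direction differs from the standard perpendicular case, the fragment below pairs samples across a vertical block boundary according to the chosen direction and replaces them with a simple average. The pairing offsets, the 2-tap average and the sign convention for 45°/135° are illustrative assumptions only; the actual filter taps of the embodiment are not reproduced here.

```python
def deblock_pair(img, y, xb, direction):
    """Smooth one pair of samples across the vertical block boundary
    lying between columns xb-1 and xb of image img (a list of rows).
    For 90 degrees the partner sample lies on the same row (the
    standard perpendicular case); for 45/135 degrees it is taken one
    row up or down on the other side of the boundary (assumed sign
    convention).  A 2-tap average with rounding stands in for the
    real filter."""
    dy = {45: -1, 90: 0, 135: 1}[direction]
    p = img[y][xb - 1]          # last sample of the left block
    q = img[y + dy][xb]         # partner sample of the right block
    avg = (p + q + 1) // 2
    img[y][xb - 1] = avg
    img[y + dy][xb] = avg
```

With a 45-degree direction, a sample is smoothed together with its diagonal neighbour across the boundary, so a 45-degree stripe survives the filtering instead of being averaged against samples that do not belong to it.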

Next, the configuration of the deblocking apparatus for implementing the above is described.

To reduce block distortion occurring in a video encoding scheme that performs block-based predictive coding and in a video decoding scheme for decoding a video signal encoded in accordance with that video encoding scheme, the deblocking apparatus includes: (1) a detection means for detecting, for each block, an edge which indicates the direction of change in pixel value of each block; (2) a determination means for determining the direction in which the deblocking filter is applied to the block boundary, on the basis of the edge direction detected for the block to be processed, which contains the block boundary subjected to deblocking, and the edge direction detected for a block in contact with the block to be processed (a block in contact above, below, to the right, to the left and/or diagonally); and (3) a filter means for applying the deblocking filter to the block boundary subjected to deblocking in accordance with the direction determined by the determination means.

A storage means may also be provided for storing information describing the correspondence between the edge direction of the block to be processed, the edge direction of the block in contact with the block to be processed, and the direction in which the deblocking filter is applied. In this case, the determination means accesses the data stored in the storage means, using as a key the edge directions detected for the block to be processed and for the block in contact with it, and determines the direction in which the deblocking filter should be applied.
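Such a storage means can be sketched as a lookup table keyed by the pair of edge directions. The entries shown are assumptions for illustration; the actual correspondence is part of the stored information of the embodiment and is not given here.

```python
# Illustrative correspondence table: (edge direction of the block to
# be processed, edge direction of the neighbouring block) -> direction
# in which the deblocking filter is applied.  All entries are assumed.
FILTER_DIRECTION = {
    (45, 45): 45,      # both blocks carry a 45-degree edge
    (135, 135): 135,   # both blocks carry a 135-degree edge
}

def determine_direction(target_dir, neighbour_dir, orthogonal=90):
    """Determination means: look up the filtering direction with the
    two detected edge directions as the key; apply the direction
    orthogonal to the block boundary when no entry is stored."""
    return FILTER_DIRECTION.get((target_dir, neighbour_dir), orthogonal)
```

Requiring both blocks of the pair to agree on an inclined direction before filtering obliquely is one plausible reading of why the key combines the two detected directions.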

In this configuration, the detection means may detect, for each block, the horizontal component of the change in pixel value of the block and the vertical component of the change in pixel value of the block, and may detect the edge direction on the basis of the detected horizontal and vertical components.

In this case, a calculation means may be provided for calculating, for each block, the edge strength on the basis of the horizontal and vertical components detected by the detection means. When the calculation means is provided, the determination means may compare the edge strength calculated by the calculation means for the block to be processed with a predefined threshold value, and if the edge strength is less than or equal to the predefined threshold value, the direction in which the deblocking filter should be applied, determined from the edge direction detected by the detection means, may be changed to the direction orthogonal to the block boundary subjected to deblocking.
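A minimal sketch of this calculation means, assuming the edge strength is the gradient magnitude formed from the two detected components (the actual strength measure of the embodiment may differ):

```python
import math

def edge_strength(dx, dy):
    """Edge strength from the horizontal (dx) and vertical (dy)
    components of the change in pixel value: here simply the
    gradient magnitude (an assumed measure)."""
    return math.hypot(dx, dy)

def effective_direction(edge_dir, dx, dy, threshold, orthogonal=90):
    """If the edge strength does not exceed the threshold, the edge
    is considered too weak to be reliable, and the filtering
    direction falls back to the direction orthogonal to the block
    boundary."""
    if edge_strength(dx, dy) <= threshold:
        return orthogonal
    return edge_dir
```

The fallback keeps the scheme safe: in flat or noisy blocks with no pronounced edge, the filter behaves exactly like the standard perpendicular deblocking filter.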

In addition, in this configuration, the detection means may obtain, for each block, information on the prediction mode used when intra coding is performed for the block, and may detect the edge direction on the basis of the obtained prediction-mode information.

In this case, if the prediction-mode information for the block to be processed indicates a prediction mode that uses the average value of the pixels as the prediction signal, the determination means may change the direction in which the deblocking filter is applied, determined from the edge direction detected by the detection means, to the direction orthogonal to the block boundary subjected to deblocking.
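The prediction-mode variant can be sketched as follows. The mode numbering follows H.264 4×4 intra prediction (0 vertical, 1 horizontal, 2 DC, 3 diagonal down-left, 4 diagonal down-right); the edge directions assigned to the directional modes are illustrative assumptions, not values from the embodiment.

```python
# Assumed mapping from H.264 4x4 intra prediction modes to edge
# directions (degrees).  The mode numbers are standard; the assigned
# edge directions are assumptions for this sketch.
MODE_TO_EDGE = {0: 90, 1: 0, 3: 135, 4: 45}

DC_MODE = 2  # DC prediction: the average value of the reference pixels

def edge_from_mode(mode, orthogonal=90):
    """Detect the edge direction from the intra prediction mode.
    DC prediction carries no directional information, so for it the
    direction falls back to the one orthogonal to the block boundary;
    unknown modes are treated the same way in this sketch."""
    if mode == DC_MODE:
        return orthogonal
    return MODE_TO_EDGE.get(mode, orthogonal)
```

This variant avoids a separate gradient computation on the decoder side, since the prediction mode is already available in the bitstream.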

The deblocking method performing the above processing operations can also be implemented by a computer program. This computer program may be recorded on a suitable computer-readable recording medium or provided over a network, so that, when the deblocking method is implemented, the program is installed on a computer and controlled by a control means such as a CPU (central processing unit) to implement the deblocking method.

In this configuration, the deblocking filter can be applied even in an inclined direction relative to the block boundary. The block noise present at the block boundary can be reduced while the texture in the inclined direction that should be preserved is maintained, so that the subjective quality of the image can be improved.

In addition, since the quality of the decoded image is high, the differential signal in inter coding with prediction that refers to this image can be reduced; as a result, an improvement in coding efficiency can also be realized.

For comparison, the standard deblocking filter can change only pixels in the direction perpendicular to the block boundary. Accordingly, when the block noise present at the block boundary is smoothed, textures in inclined directions originally included in the image become unclear, which may cause deterioration of the subjective quality of the image.

Below, the present invention is described in detail in accordance with its embodiments.

Hereinafter in this document, "edge" means the direction in which the luminance signal changes; this direction is perpendicular to the texture direction, as shown in Fig. 2.

FIRST EMBODIMENT

The deblocking apparatus 1 in accordance with the first embodiment of the present invention is described below.

Fig. 3 shows the configuration of the deblocking apparatus 1 in accordance with the first embodiment of the present invention.

The deblocking apparatus 1 is implemented in a video encoding device and in a video decoding device, as shown in Figs. 24A and 24B. As shown in Fig. 3, the deblocking apparatus 1 in accordance with the present embodiment includes: an edge detection node 10 for extracting the edge components in a block, used by a pixel determination node 40, and for detecting the edge direction in the block; a storage node 20 for edge direction information, for storing the detection result of the edge detection node 10; a storage node 30 for information determining the filtered pixels, for storing the information used to determine the pixels to be processed by the deblocking filter; a pixel determination node 40 for determining the pixels that should actually be filtered, based on the edge direction detected by the edge detection node 10, according to the information stored in the storage node 20 for edge direction information and in the storage node 30 for information determining the filtered pixels; a filter application decision node 50 for deciding whether or not to perform filtering using the pixels determined by the pixel determination node 40; a filter node 60 for filtering the pixels determined by the pixel determination node 40; and a process completion decision node 70 for deciding whether or not to complete the process, by determining whether the final block boundary of the MB has been reached.

It should be noted that the information used to determine the filtered pixels, stored in the storage node 30 for information determining the filtered pixels, is described in detail with reference to Fig. 18.

As shown in Fig. 3, the edge detection node 10 includes: a node 11 for extracting the edge component in the x-axis direction, which extracts the horizontal edge component of each block in the MB; a node 12 for extracting the edge component in the y-axis direction, which extracts the vertical edge component of each block in the MB; and a node 13 for determining the edge direction, which determines the edge direction of each block using the edge components extracted by the nodes 11 and 12 and stores the edge directions in the storage node 20 for edge direction information.

In addition, as shown in Fig. 3, the pixel determination node 40 includes: a node 41 for checking the edges of adjacent blocks, which checks the edge directions belonging to each block boundary in the MB according to the information stored in the storage node 20 for edge direction information; and a node 42 for determining the filtered pixels, which determines the pixels to be processed by the deblocking filter, based on the edge directions checked by the node 41, according to the information stored in the storage node 30 for information determining the filtered pixels.

Figs. 4 to 9 show an example of the flowcharts executed when the device 1 for deblocking according to this embodiment processes the boundaries of the 4×4 blocks in one MB as described above.

Next, the process performed by the device 1 for deblocking according to this embodiment is described in detail in accordance with these flowcharts.

It should be noted that hereinafter in this description, the block size is 4×4, and the target of processing is the luminance signal, unless otherwise noted. In addition, the edge directions are assumed to be four directions: horizontal (0°), vertical (90°), and oblique (45° and 135°).

1. Flowchart executed in the present embodiment

1-1 General flowchart

Fig. 4 shows a general flowchart of the sequence of steps of the method performed by the device 1 for deblocking according to this embodiment.

Here, the following process is performed in units of macroblocks (MB), being applied to the macroblocks included in the image one after another.

As shown in the flowchart in Fig. 4, at step S101 the device 1 for deblocking according to this embodiment first detects the directions of all the edges of the sixteen 4×4 blocks in the MB, and stores that information in the storage node 20 for information about the edge direction. The method for detecting the edge directions is described below in steps S201 to S203 (see the flowchart in Fig. 5).

Next, at step S102, the filtering direction is chosen based on the edge directions obtained at step S101. After the filtering direction has been chosen, the 8 pixels required for the deblocking filter process are determined. The way of choosing the filtering direction and determining the pixels is described below in steps S601 and S602 (see the flowchart in Fig. 9).

Next, at step S103, one unprocessed block is selected, and it is decided whether the deblocking filter is to be applied to the selected block. For this decision, the standard decision conditions defined in H.264/AVC are used. If the filter is to be applied, the process proceeds to step S104. If the filter is not to be applied, the process proceeds to step S105 in order to decide whether the block to be processed is the final block.

Next, at step S104, the filtering process is actually executed using the 8 pixels determined at step S102.

Next, at step S105, it is decided whether there is a next block to be processed. If the processed block is not the final block, the next block is to be processed, and therefore the process returns to step S103. If the processed block is the final block, the process ends.
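The per-macroblock flow of steps S101 to S105 can be sketched as follows. This is a minimal illustration with hypothetical function names; the edge detection, pixel determination, H.264/AVC decision conditions and the actual filter are abstracted as callables and are not the patent's actual interfaces.

```python
# Minimal sketch of the per-MB flow (steps S101-S105); all callables are
# hypothetical stand-ins for the processes described in the text.
def deblock_macroblock(blocks, detect_edge_direction, choose_filtered_pixels,
                       use_filter, apply_filter):
    # S101: detect the edge direction of every 4x4 block in the MB
    edge_dirs = {pos: detect_edge_direction(blk) for pos, blk in blocks.items()}
    # S102: choose the filtering direction and the 8 filtered pixels
    pixels = {pos: choose_filtered_pixels(pos, edge_dirs) for pos in blocks}
    filtered = []
    for pos in blocks:                 # S103 + S105: loop up to the final block
        if use_filter(pos):            # H.264/AVC decision conditions
            apply_filter(pixels[pos])  # S104: actual filtering
            filtered.append(pos)
    return filtered
```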

1-2 Details of the process at step S101

1-2-1 Overall process of step S101

The following describes the details of the process performed at step S101, in accordance with the flowchart in Fig. 5.

After entering the process of step S101 shown in the flowchart in Fig. 4, at step S201 the device 1 for deblocking according to this embodiment first extracts the edge component in the direction of the x-axis (the horizontal direction), as shown in the flowchart in Fig. 5. Next, at step S202, the edge component in the direction of the y-axis (the vertical direction) is extracted. Next, at step S203, the edge direction included in the block is determined on the basis of the edge components corresponding to the directions obtained at steps S201 and S202.

Next, the details of the processes at steps S201, S202, and S203 are described in order.

1-2-2 Details of the process at step S201

The details of the process performed at step S201 are described in accordance with the flowchart in Fig. 6.

After entering the process of step S201 shown in the flowchart in Fig. 5, at step S301 the device 1 for deblocking according to this embodiment first applies the filter fx={-1, +1} (see Fig. 10) to the target block in the horizontal direction, and the resulting matrix is defined as the edge matrix x (EMx) (see Fig. 10), as shown in the flowchart in Fig. 6.

Next, at step S302, the sum of the components of EMx obtained at step S301 is calculated. This value is denoted Sx. Sx represents the edge component in the horizontal direction; a positive value means that the brightness tends to increase from the left side to the right side, and a negative value means that the brightness tends to decrease from the left side to the right side.

Fig. 10 depicts a diagram of the principle of the process performed at step S201. In Fig. 10, each element of the edge matrix x (EMx) of the 4×4 block is denoted EMx(i, j), where i is an integer greater than or equal to 1 representing the position in the direction along the x-axis, and j is an integer greater than or equal to 1 representing the position in the direction along the y-axis. As shown in Fig. 10, for example, the EMx component 30-28=2 is obtained by applying the filter fx={-1, +1} to two adjacent pixels having the pixel values 28 and 30. Also, Sx=31 is obtained by computing the sum of the 12 components of EMx.

1-2-3 Details of the process at step S202

The details of the process performed at step S202 are described in accordance with the flowchart in Fig. 7.

After entering the process of step S202 shown in the flowchart in Fig. 5, at step S401 the device 1 for deblocking according to this embodiment first applies the filter fy={-1, +1} (see Fig. 11) to the target block in the vertical direction, and the resulting matrix is defined as the edge matrix y (EMy) (see Fig. 11), as shown in the flowchart in Fig. 7.

Next, at step S402, the sum of the components of EMy obtained at step S401 is calculated. This value is denoted Sy. Sy represents the edge component in the vertical direction; a positive value means that the brightness tends to increase from top to bottom, while a negative value means that the brightness tends to decrease from top to bottom.

Fig. 11 depicts a diagram of the principle of the process performed at step S202. In Fig. 11, each element of the edge matrix y (EMy) of the 4×4 block is denoted EMy(i, j), where i is an integer greater than or equal to 1 representing the position in the direction along the x-axis, and j is an integer greater than or equal to 1 representing the position in the direction along the y-axis. As shown in Fig. 11, for example, the EMy component 33-30=3 is obtained by applying the filter fy={-1, +1} to two adjacent pixels having the pixel values 30 and 33. Also, Sy=28 is obtained by computing the sum of the 12 components of EMy.
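Steps S301-S302 and S401-S402 can be sketched as below. Applying f={-1, +1} to horizontally adjacent pixels gives the 4×3 matrix EMx, and to vertically adjacent pixels the 3×4 matrix EMy; Sx and Sy are their sums. The block used in the example is illustrative, not the block of Figs. 10 and 11.

```python
def edge_components(block):
    """Sums Sx, Sy of the horizontal/vertical difference filters f={-1,+1}
    over an n x n block; block[j][i] is the pixel in row j, column i."""
    n = len(block)
    # EMx: difference of horizontally adjacent pixels (n rows x n-1 columns)
    emx = [[block[j][i + 1] - block[j][i] for i in range(n - 1)]
           for j in range(n)]
    # EMy: difference of vertically adjacent pixels (n-1 rows x n columns)
    emy = [[block[j + 1][i] - block[j][i] for i in range(n)]
           for j in range(n - 1)]
    sx = sum(v for row in emx for v in row)  # 12 components for a 4x4 block
    sy = sum(v for row in emy for v in row)
    return sx, sy
```

As in the text, Sx > 0 indicates brightness increasing from left to right, and Sy > 0 indicates brightness increasing from top to bottom.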

1-2-4 Details of the process at step S203

The details of the process performed at step S203 are described in accordance with the flowchart in Fig. 8.

After entering the process of step S203 shown in the flowchart in Fig. 5, at step S501 the device 1 for deblocking according to this embodiment uses Sx, obtained at step S302, and Sy, obtained at step S402, in the following equation to obtain the value D indicating the angle of the edge included in the block, as shown in the flowchart in Fig. 8.

D=Sy/Sx

Next, at step S502, the edge direction is determined based on the value D obtained at step S501. For example, when there are four edge directions, the edge direction (edge type) is determined in accordance with the classification in Table 2.

Table 2
Range of D               Edge type
D ≤ -2.414               3
-2.414 < D ≤ -0.414      4
-0.414 < D ≤ 0.414       1
0.414 < D ≤ 2.414        2
2.414 < D                3

As shown in Fig. 12, D=0.414 means that the angle of the edge is 22.5° (obtained from tan 22.5° ≈ 0.414), D=2.414 means that the angle of the edge is 67.5° (obtained from tan 67.5° ≈ 2.414), D=-2.414 means that the angle of the edge is 112.5° (obtained from tan 112.5° ≈ -2.414), and D=-0.414 means that the angle of the edge is 157.5° (obtained from tan 157.5° ≈ -0.414).

Therefore, edge type 3, represented by "D ≤ -2.414, 2.414 < D" in Table 2, means that the angle of the edge is in the range from 67.5° to 112.5° (from 247.5° to 292.5°) (the representative angles are 90° and 270°), as shown in Fig. 12. Also, edge type 4, represented by "-2.414 < D ≤ -0.414" in Table 2, means that the angle of the edge is in the range from 112.5° to 157.5° (from 292.5° to 337.5°) (the representative angles are 135° and 315°), as shown in Fig. 12. Also, edge type 1, represented by "-0.414 < D ≤ 0.414" in Table 2, means that the angle of the edge is in the range from 157.5° to 202.5° (from 337.5° to 22.5°) (the representative angles are 0° (=360°) and 180°), as shown in Fig. 12. Also, edge type 2, represented by "0.414 < D ≤ 2.414" in Table 2, means that the angle of the edge is in the range from 22.5° to 67.5° (from 202.5° to 247.5°) (the representative angles are 45° and 225°), as shown in Fig. 12.

For example, in accordance with the classification in Table 2, for the block depicted in Figs. 10 and 11, since Sx=31 and Sy=28, we obtain D=0.90, and accordingly the edge type is determined as edge type 2, with the representative angles 45° and 225°, by the process of step S502.
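The classification of steps S501-S502 can be sketched as follows. Note that the case Sx=0 (a purely vertical gradient) is not spelled out in the text; treating it as edge type 3 is an assumption consistent with Table 2 in the limit where |D| grows without bound.

```python
def edge_type(sx, sy):
    """Edge type 1-4 per Table 2, from D = Sy/Sx (tangent of the edge angle)."""
    if sx == 0:
        return 3      # assumption: |D| -> infinity falls under "D <= -2.414, 2.414 < D"
    d = sy / sx
    if d <= -2.414 or d > 2.414:
        return 3      # edge around 90 deg (vertical)
    if d <= -0.414:
        return 4      # edge around 135 deg
    if d <= 0.414:
        return 1      # edge around 0 deg (horizontal)
    return 2          # edge around 45 deg
```

For the block of Figs. 10 and 11, edge_type(31, 28) evaluates D ≈ 0.90 and therefore returns type 2, matching the text.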

1-3 Details of the process at step S102

The following describes the details of the process performed at step S102, in accordance with the flowchart in Fig. 9.

After entering the process of step S102 shown in the flowchart in Fig. 4, as shown in the flowchart in Fig. 9, at step S601 the device 1 for deblocking according to this embodiment first obtains the information about the edge directions of all the blocks in the MB, obtained at step S101, from the information stored in the storage node 20 for information about the edge direction shown in Fig. 3, and checks, for each block boundary, the direction in which filtering should be performed.

Next, at step S602, the 8 pixels to be processed (the filtered pixels) are determined along the filtering direction checked at step S601.

As shown in Fig. 13, the pixels to be processed (the filtered pixels) are basically determined by selecting pixels arranged in a straight line orthogonal to the edge direction. In other words, since it is assumed that the texture (a line or the like) of the image is present in the direction perpendicular to the edge, the filtered pixels are selected so that the filter is applied along that direction.
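The basic configuration (pixels on a straight line orthogonal to the edge) can be sketched as below. The step vectors are assumptions derived from the representative angles of the four edge types, with the y-axis pointing downward; the exact per-case placements of Figs. 14A to 17G are not reproduced here.

```python
# Assumed step vector of the filtered-pixel line for each edge type,
# orthogonal to the representative edge angle (y axis pointing down).
# Either sign of a step vector describes the same line.
FILTER_STEP = {
    1: (0, 1),   # horizontal edge (0 deg)  -> vertical line of pixels
    2: (-1, 1),  # 45 deg edge              -> line along 135 deg
    3: (1, 0),   # vertical edge (90 deg)   -> horizontal line of pixels
    4: (1, 1),   # 135 deg edge             -> line along 45 deg
}

def filtered_pixel_line(x0, y0, etype, length=8):
    """Coordinates of `length` pixels on the line through (x0, y0)
    orthogonal to the edge direction of the given edge type."""
    dx, dy = FILTER_STEP[etype]
    half = length // 2
    return [(x0 + k * dx, y0 + k * dy) for k in range(-half, half)]
```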

Next, Figs. 14A to 17G depict specific examples that show how the 8 filtered pixels are determined in accordance with the edge direction of the block to be processed and the edge directions of the blocks located around that block, when a block boundary in the horizontal direction is to be processed.

The specific examples depicted in Figs. 14A to 14E describe how the 8 filtered pixels are determined in accordance with the edge directions of the blocks located around the block to be processed, when the edge direction of the block to be processed is edge type 1.

In other words, when the edge direction of the block to be processed is edge type 1, and the edge direction of the block adjacent to the block to be processed in the upward direction is edge type 1 or edge type 3, as shown in Fig. 14A, the 8 filtered pixels are determined in the same way as in the conventional technology.

Here, when the edge direction of the block adjacent to the block to be processed in the upward direction is edge type 3, the filtered pixels cannot be selected in accordance with the basic configuration shown in Fig. 13, because pixels lying on a line in the horizontal direction cannot be selected. Accordingly, in this case, the 8 filtered pixels are determined in the same way as in the conventional technology.

Also, when the edge direction of the block to be processed is edge type 1, the edge direction of the block adjacent to the block to be processed in the upward direction is edge type 2, and the edge direction of the block adjacent to the block to be processed in the upper-right direction is edge type 2, the 8 filtered pixels are determined in accordance with the method depicted in Fig. 14B.

In addition, when the edge direction of the block to be processed is edge type 1, the edge direction of the block adjacent to the block to be processed in the upward direction is edge type 4, and the edge direction of the block adjacent to the block to be processed in the upper-left direction is edge type 4, the 8 filtered pixels are determined in accordance with the method depicted in Fig. 14C.

Also, when the edge direction of the block to be processed is edge type 1, the edge direction of the block adjacent to the block to be processed in the upward direction is edge type 2, and the edge direction of the block adjacent to the block to be processed in the upper-right direction is edge type 3, the 8 filtered pixels are determined in accordance with the method depicted in Fig. 14D.

In addition, when the edge direction of the block to be processed is edge type 1, the edge direction of the block adjacent to the block to be processed in the upward direction is edge type 4, and the edge direction of the block adjacent to the block to be processed in the upper-left direction is edge type 3, the 8 filtered pixels are determined in accordance with the method depicted in Fig. 14E.

The specific examples depicted in Figs. 15A to 15G describe how the 8 filtered pixels are determined in accordance with the edge directions of the blocks located around the block to be processed, when the edge direction of the block to be processed is edge type 2.

Fig. 15A depicts a specific example in which the edge direction of the block to be processed is edge type 2, and the edge direction of the block adjacent to the block to be processed in the upward direction is edge type 3 or edge type 4. As shown in Fig. 15A, in these cases the 8 filtered pixels are determined in the same way as in the conventional technology.

In addition, Fig. 15B depicts a specific example in which the edge direction of the block to be processed is edge type 2, the edge direction of the block adjacent to the block to be processed in the upward direction is edge type 1, and the edge direction of the block adjacent to the block to be processed in the left direction is edge type 2.

Also, Fig. 15C depicts a specific example in which the edge direction of the block to be processed is edge type 2, the edge direction of the block adjacent to the block to be processed in the upward direction is edge type 2, the edge direction of the block adjacent to the block to be processed in the upper-right direction is edge type 2, and the edge direction of the block adjacent to the block to be processed in the left direction is edge type 2.

In addition, Fig. 15D depicts a specific example in which the edge direction of the block to be processed is edge type 2, the edge direction of the block adjacent to the block to be processed in the upward direction is edge type 2, the edge direction of the block adjacent to the block to be processed in the upper-right direction is edge type 3, and the edge direction of the block adjacent to the block to be processed in the left direction is edge type 2.

Also, Fig. 15E depicts a specific example in which the edge direction of the block to be processed is edge type 2, the edge direction of the block adjacent to the block to be processed in the upward direction is edge type 1, and the edge direction of the block adjacent to the block to be processed in the left direction is edge type 3.

In addition, Fig. 15F depicts a specific example in which the edge direction of the block to be processed is edge type 2, the edge direction of the block adjacent to the block to be processed in the upward direction is edge type 2, the edge direction of the block adjacent to the block to be processed in the upper-right direction is edge type 2, and the edge direction of the block adjacent to the block to be processed in the left direction is edge type 3.

Fig. 15G depicts a specific example in which the edge direction of the block to be processed is edge type 2, the edge direction of the block adjacent to the block to be processed in the upward direction is edge type 2, the edge direction of the block adjacent to the block to be processed in the upper-right direction is edge type 3, and the edge direction of the block adjacent to the block to be processed in the left direction is edge type 3.

The specific example depicted in Fig. 16 describes how the 8 filtered pixels are determined in accordance with the edge directions of the blocks located around the block to be processed, when the edge direction of the block to be processed is edge type 3.

As shown in that drawing, when the edge direction of the block to be processed is edge type 3, the 8 filtered pixels are determined in the same way as in the conventional technology, regardless of whether the edge direction of the block adjacent to the block to be processed in the upward direction is edge type 1, edge type 2, edge type 3, or edge type 4.

The specific examples depicted in Figs. 17A to 17G describe how the 8 filtered pixels are determined in accordance with the edge directions of the blocks located around the block to be processed, when the edge direction of the block to be processed is edge type 4.

Fig. 17A depicts a specific example in which the edge direction of the block to be processed is edge type 4, and the edge direction of the block adjacent to the block to be processed in the upward direction is edge type 3 or edge type 2. As shown in Fig. 17A, in these cases the 8 filtered pixels are determined in the same way as in the conventional technology.

Also, Fig. 17B depicts a specific example in which the edge direction of the block to be processed is edge type 4, the edge direction of the block adjacent to the block to be processed in the upward direction is edge type 1, and the edge direction of the block adjacent to the block to be processed in the right direction is edge type 4.

In addition, Fig. 17C depicts a specific example in which the edge direction of the block to be processed is edge type 4, the edge direction of the block adjacent to the block to be processed in the upward direction is edge type 4, the edge direction of the block adjacent to the block to be processed in the upper-left direction is edge type 4, and the edge direction of the block adjacent to the block to be processed in the right direction is edge type 4.

Also, Fig. 17D depicts a specific example in which the edge direction of the block to be processed is edge type 4, the edge direction of the block adjacent to the block to be processed in the upward direction is edge type 4, the edge direction of the block adjacent to the block to be processed in the upper-left direction is edge type 3, and the edge direction of the block adjacent to the block to be processed in the right direction is edge type 4.

In addition, Fig. 17E depicts a specific example in which the edge direction of the block to be processed is edge type 4, the edge direction of the block adjacent to the block to be processed in the upward direction is edge type 1, and the edge direction of the block adjacent to the block to be processed in the right direction is edge type 3.

Also, Fig. 17F depicts a specific example in which the edge direction of the block to be processed is edge type 4, the edge direction of the block adjacent to the block to be processed in the upward direction is edge type 4, the edge direction of the block adjacent to the block to be processed in the upper-left direction is edge type 4, and the edge direction of the block adjacent to the block to be processed in the right direction is edge type 3.

In addition, Fig. 17G depicts a specific example in which the edge direction of the block to be processed is edge type 4, the edge direction of the block adjacent to the block to be processed in the upward direction is edge type 4, the edge direction of the block adjacent to the block to be processed in the upper-left direction is edge type 3, and the edge direction of the block adjacent to the block to be processed in the right direction is edge type 3.

Therefore, at step S102 of the flowchart in Fig. 4, when the flowchart in Fig. 9 is executed, the filtered pixels to be processed by the deblocking filter are determined in accordance with the methods depicted in Figs. 14A to 17G, based on the edge directions of the blocks in the MB.

It should be noted that Figs. 14A to 17G depict typical specific examples in accordance with the conditions (frequency) of the textures that occur; in other specific examples, the filtered pixels to be processed by the deblocking filter can also be determined similarly to the specific examples depicted in Figs. 14A to 17G.

For example, when the edge direction of the block adjacent to the block to be processed in the upper-right direction, depicted in Fig. 14B, is not edge type 2 but edge type 1, the 8 filtered pixels are determined in the same way as in the conventional technology. The reason is that, since the texture is interrupted at the boundary between the block adjacent to the block to be processed in the upward direction and the block adjacent to the block to be processed in the upper-right direction, it is decided that it is better not to apply the deblocking filter.

In contrast, for example, when the edge direction of the block adjacent to the block to be processed in the upper-right direction, depicted in Fig. 14B, is not edge type 2 but edge type 4, the texture extends toward the upper right and is then forcibly turned toward the lower left. In other words, the texture is bent in the form "Λ". It should be noted that, in this case, the edge direction of the block adjacent to the block to be processed in the right direction is also determined. It is expected that there will be a large number of states (frequencies) in which the texture changes, including cases in which the texture is bent in the form "<" or ">", and the deblocking filter can be applied even in these cases.

In short, under the assumption that the texture (a line or the like) of the image appears in the direction perpendicular to the edge, a situation in which the line segment of the texture is interrupted can cause a deterioration in efficiency, and therefore such a situation is excluded from the candidates to which the deblocking filter is to be applied.

Figs. 14A to 17G describe how the 8 filtered pixels are determined in accordance with the edge direction of the block to be processed and the edge directions of the blocks adjacent to it when a block boundary in the horizontal direction is to be processed; however, even when a block boundary in the vertical direction is to be processed, the 8 filtered pixels are determined similarly. In other words, a block boundary in the vertical direction can be treated in the same way as when Figs. 14A to 17G are rotated by 90°.

As described above, at step S102 of the flowchart in Fig. 4, when the flowchart in Fig. 9 is executed, the 8 filtered pixels are determined by the methods depicted in Figs. 14A to 17G, based on the edge directions of the block to be processed and of the blocks adjacent to it (the blocks adjoining the block to be processed in the upward, downward, right, left and/or oblique directions). For this determination process, the storage node 30 for information that defines the filtered pixels, shown in Fig. 3, is provided.

Fig. 18 depicts an example of the data structure of the storage node 30 for information that defines the filtered pixels, provided for determining the filtered pixels.

As shown in that drawing, for the case in which the boundary of the block to be processed lies in the horizontal direction, the storage node 30 for information that defines the filtered pixels stores position information describing the positions of the pixels determined as the filtered pixels, for each combination of the edge type value (edge direction) of the block to be processed and the edge type values of the surrounding blocks. Likewise, for the case in which the boundary of the block to be processed lies in the vertical direction, the storage node 30 for information that defines the filtered pixels stores position information describing the positions of the pixels determined as the filtered pixels, for each combination of the edge type value of the block to be processed and the edge type values of the surrounding blocks.

With the storage node 30 for information that defines the filtered pixels having the above data structure, at step S601 the information stored in the storage node 20 for information about the edge direction, shown in Fig. 3, is referred to in order to identify the edge type of the block to be processed and the edge types of the blocks located around it; then, at step S602, the 8 filtered pixels are determined from the information stored in the storage node 30 for information that defines the filtered pixels, using the identified edge types and the information about the direction of the boundary of the block to be processed as a key.
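The determination at steps S601-S602 thus amounts to a table lookup. A hypothetical sketch of the data structure of the storage node 30 is given below; the key layout and the pixel positions are illustrative placeholders, not the actual contents of the figure described above.

```python
# Hypothetical layout: (boundary direction, edge type of the target block,
# edge types of the relevant surrounding blocks) -> 8 pixel positions
# relative to the boundary. The entries here are placeholders.
FILTERED_PIXEL_TABLE = {
    ("horizontal", 2, (2, 2)): [(k, -k) for k in range(-4, 4)],
}

def determine_filtered_pixels(boundary, target_type, neighbour_types):
    key = (boundary, target_type, tuple(neighbour_types))
    # default: the conventional pixels, perpendicular to the boundary,
    # used whenever no edge-adaptive entry exists for the combination
    conventional = [(0, k) for k in range(-4, 4)]
    return FILTERED_PIXEL_TABLE.get(key, conventional)
```

The fallback to the conventional perpendicular pixels mirrors the cases above in which the 8 filtered pixels are determined in the same way as in the conventional technology.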

1-4 Processes of steps S103 and S104

At step S103, it is decided whether the deblocking filter is to be applied. When making this decision, the standard decision conditions defined in H.264/AVC are used.

When it is decided at step S103 that the deblocking filter is to be applied, then at the next step S104 the standard deblocking filter specified in H.264/AVC is applied to the filtered pixels determined at step S102.
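For reference, the H.264/AVC sample-based decision and the core update of the normal filter (boundary strength less than 4) referred to in steps S103 and S104 can be sketched as below. The derivation of the thresholds alpha and beta and of the clipping value tc from the quantization parameter is omitted; p1, p0, q0, q1 denote the samples nearest the boundary on either side, so this is a simplified sketch rather than the complete standard process.

```python
def clip3(lo, hi, v):
    return max(lo, min(hi, v))

def should_filter(p1, p0, q0, q1, alpha, beta):
    # H.264/AVC decision conditions for one line of samples across a boundary
    return (abs(p0 - q0) < alpha and abs(p1 - p0) < beta
            and abs(q1 - q0) < beta)

def filter_boundary_pair(p1, p0, q0, q1, tc):
    # Core update of the H.264/AVC normal filter (bS < 4):
    # a clipped delta is added to p0 and subtracted from q0
    delta = clip3(-tc, tc, ((q0 - p0) * 4 + (p1 - q1) + 4) >> 3)
    return clip3(0, 255, p0 + delta), clip3(0, 255, q0 - delta)
```

For example, a step from 60 to 70 across the boundary with tc=4 is softened to 64 and 66, pulling the two boundary samples toward each other without flattening the whole edge.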

Whereas the standard deblocking filter can only modify the four sets of 8 pixels arranged in the direction perpendicular to the block boundary, the embodiment of the present invention described above makes it possible to apply the deblocking filter in oblique directions. As a result, the problems to be solved by the present invention can be solved, and improvements in subjective image quality and coding efficiency are expected.

Although the above description refers to 4×4 blocks, the principle of the present invention can also be applied to block sizes other than 4×4. It can likewise be applied in an identical manner not only to the luminance signal but also to the chrominance signal.

2. Regarding the experiment carried out to verify the effectiveness of this embodiment

The following describes the results of an experiment performed to verify the effectiveness of this embodiment.

In this experiment, the present embodiment was applied to the standard image Foreman used for standardization, and the image quality was examined. The specific conditions of the experiment are as follows.

Software used: KTA (Key Technical Area) version 1.8

Image type: Foreman

Image size: QCIF (176×144)

GOP structure: III... (All intra-coded)

Quantization parameter: 37 (fixed)

Number of edge directions: 4

Number of frames: 10

Frame skip: 1

It should be noted that the image and the software are available from the following.

http://media.xiph.org/video/derf/ (standard image)

http://iphome.hhi.de/suehring/tml/download/KTA/ (software)

Fig. 19A depicts the values of PSNR (peak signal-to-noise ratio), the objective image quality, of the respective frames, which were obtained by the experiment on the standard image Foreman, and Fig. 19B depicts the data graphically. In Fig. 19B, the horizontal axis represents the number of the processed frame, and the vertical axis represents the objective quality (PSNR) of the corresponding frame. In addition, the designation Standard means the standard deblocking filter, and EADF (edge-adaptive deblocking filter) means the filter according to the present embodiment.

From the above experimental results, it can be confirmed that the image quality is improved when this embodiment is used, and the effectiveness of this embodiment is thus verified.

SECOND EMBODIMENT

The following describes the device 1 for deblocking in accordance with the second embodiment of the present invention.

The first embodiment of the present invention uses a structure in which deblocking is performed using pixels arranged in a straight line orthogonal to the edge direction; accordingly, the quality of an image that includes slanted edges can be improved, but the quality of an image that includes complex texture can be degraded in specific frames.

Accordingly, with regard to this point, the second embodiment uses a structure in which the edge stability of each block is obtained. If the edge stability obtained for the block to be processed is high, in other words, for an image with a stable (sharp) inclined edge, the deblocking process is executed using a deblocking filter identical to the filter of the first embodiment. Conversely, if the edge stability obtained for the block to be processed is low, in other words, for an image comprising a complex texture or an image with a weak (barely visible) edge, the deblocking process is performed using the deblocking filter according to the conventional technology.

Fig. 20 depicts the configuration of the device 1 for deblocking in accordance with the second embodiment of the present invention.

The device 1 for deblocking according to this embodiment uses a structure in which a node 13α for determining the edge direction is provided instead of the node 13 for determining the edge direction provided in the first embodiment, a storage node 20α for information about the edge direction and stability is provided instead of the storage node 20 for information about the edge direction provided in the first embodiment, and the node 40 for determining pixels provided in the first embodiment is additionally provided with a node 43 for determining the edge stability and a node 44 for determining the final filtered pixels.

The node 13α for determining the edge direction determines the edge direction of each block using the edge components extracted by the node 11 for extracting the edge component in the direction along the x-axis and the node 12 for extracting the edge component in the direction along the y-axis, also calculates the edge stability, and stores them in the storage node 20α for information about the edge direction and stability.

In addition, the node 43 for determining the edge stability obtains the edge stability of each block in the MB by referring to the information stored in the storage node 20α for information about the edge direction and stability, and compares it with a predefined threshold value to determine whether the edge stability of each block in the MB is high.

In addition, for a block whose edge stability is determined to be low by the node 43 for determining the edge stability, the node 44 for determining the final filtered pixels replaces the filtered pixels determined by the node 42 for determining the filtered pixels with the filtered pixels used in the conventional technology (filtered pixels formed by pixels arranged in a straight line orthogonal to the direction of the block boundary).
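A minimal sketch of this gating follows. The passage above does not define the stability measure itself, so using the gradient magnitude sqrt(Sx² + Sy²) as the edge stability is an assumption made here purely for illustration; the switch between the edge-adaptive pixels and the conventional pixels follows the description of the node 44.

```python
import math

def final_filtered_pixels(sx, sy, adaptive_pixels, conventional_pixels,
                          threshold):
    """Sketch of nodes 43/44: keep the edge-adaptive filtered pixels only
    when the (assumed) edge stability exceeds the threshold."""
    stability = math.hypot(sx, sy)  # assumed stability measure, not the patent's
    if stability > threshold:
        return adaptive_pixels      # stable, sharp edge
    return conventional_pixels      # weak edge or complex texture
```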

Figs.-22 depict an example of a flowchart executed by the deblocking device 1 according to this embodiment to filter the boundaries of the 4×4 blocks in one MB.

The process performed by the deblocking device 1 of this embodiment is described in detail below in accordance with these flowcharts.

It should be noted that hereinafter in this description the block size is 4×4 and the processing target is the luminance signal, unless otherwise noted. In addition, it is assumed that there are four edge directions: horizontal (0°), vertical (90°), and oblique (45° and 135°).

As shown in the flowchart of Fig., at step S701 the deblocking device 1 of this embodiment detects the direction and stability of all edges in the sixteen 4×4 blocks of the MB, and stores them in the storage node 20α for edge-direction stability information. This embodiment differs from the first embodiment in that an edge-stability extraction process is also provided. The method of detecting the edge direction and edge stability is described below in steps S801-S804 (flowchart of Fig.).

Next, at step S702, the filtering direction is selected based on the edge directions obtained in step S701. After selecting the filtering direction, the 8 pixels required for the deblocking filter process are determined. The details of the method of selecting the filter direction and determining the pixels have already been described in steps S601 and S602 (flowchart of Fig. 9).

Next, at step S703, one block to be processed is selected (in the block order within the MB in which the filter is applied according to H.264/AVC), and the edge stability of the selected block obtained in step S701 is compared with a predefined threshold value. If the edge stability is higher than the threshold value, it is decided that the edge is stable and that texture appears in the oblique direction; therefore the filtered pixels determined at step S702 are defined as the final filtered pixels, and processing proceeds to step S705 without executing the process of step S704.

Conversely, if the edge stability is less than or equal to the threshold value, it is decided that the edge is unstable and that no texture appears in the oblique direction, and accordingly the process proceeds to step S704 to replace the filtered pixels determined in step S702. At step S704, in the same way as in the conventional technology, the pixels arranged in a straight line orthogonal to the direction of the block boundary are determined as the final filtered pixels, and the 8 pixels required for the deblocking filter process are thereby defined. In other words, the filter direction selected in step S702 is changed to the direction orthogonal to the block boundary.

After completion of the processes of steps S703 and S704, at step S705 a decision is made as to whether the deblocking filter should be applied. This decision uses the standard decision conditions defined in H.264/AVC. If the filter is to be applied, the process proceeds to step S706. If the filter is not to be applied, then in order to process the next block the process proceeds to step S707 to decide whether the position to be processed is that of the final block.

Furthermore, for a block that has not been processed at step S704, the actual filtering at step S706 uses the 8 pixels determined in step S702 (i.e., filtering in an oblique direction, as in the first embodiment). Conversely, for a block that has been processed at step S704, the actual filtering at step S706 uses the 8 pixels determined in step S704 (i.e., filtering only in the direction orthogonal to the block boundary, as in the conventional technology).

Next, at step S707, a decision is made as to whether the next block is to be filtered. If the processed block is not the final block, the process returns to step S703 to process the next block. If the final block has already been processed, the processing ends.
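The per-MB control flow of steps S703-S707 described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the helper functions `orthogonal_pixels`, `should_filter`, and `apply_deblocking_filter` are hypothetical placeholders standing in for processing the description delegates to step S702, to the H.264/AVC decision conditions, and to the filter itself.

```python
def orthogonal_pixels(block):
    # Placeholder for step S704: the conventional 8 filtered pixels on a
    # straight line orthogonal to the block boundary (an assumption here;
    # the exact pixel selection follows the standard technology).
    return block[:8]

def should_filter(block):
    # Placeholder for step S705: the standard H.264/AVC conditions that
    # decide whether the deblocking filter is applied at this boundary.
    return True

def apply_deblocking_filter(block, pixels):
    # Placeholder for step S706: the actual filtering operation.
    pass

def deblock_mb(blocks, stabilities, oblique_pixels, threshold):
    """Filter the 4x4 block boundaries of one MB (steps S703-S707).

    blocks: the sixteen 4x4 blocks in H.264/AVC filter order;
    stabilities[i]: edge stability M of block i from step S701;
    oblique_pixels[i]: the 8 pixels chosen in step S702 along the
    detected edge direction."""
    choices = []
    for i, block in enumerate(blocks):
        # Step S703: keep the oblique-direction pixels only when the
        # edge stability of the block exceeds the threshold.
        if stabilities[i] > threshold:
            pixels = oblique_pixels[i]
        else:
            # Step S704: fall back to pixels orthogonal to the boundary.
            pixels = orthogonal_pixels(block)
        # Steps S705-S706: decide whether to filter, then filter.
        if should_filter(block):
            apply_deblocking_filter(block, pixels)
        choices.append(pixels)
    # Step S707 corresponds to the loop advancing to the next block.
    return choices
```

The sketch returns the chosen pixel sets so the branch taken for each block can be inspected; in a real decoder the filtering would modify the reconstructed samples in place.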

The details of the process performed at step S701 are described below in accordance with the flowchart of Fig.

After entering the process of step S701 in the flowchart of Fig., at step S801 the deblocking device 1 of this embodiment first extracts the edge component in the x-axis direction (horizontal direction), as shown in the flowchart of Fig. The details of this process have already been described in steps S301-S302 of the first embodiment.

Next, at step S802, the edge component in the y-axis direction (vertical direction) is extracted. The details of this process have already been described in steps S401-S402 of the first embodiment.

Next, at step S803, the angle D of the edge contained in the block is obtained based on the sum Sx of the horizontal edge components obtained in step S801 and the sum Sy of the vertical edge components obtained in step S802, and the edge direction is determined on this basis. The details of this process have already been described in steps S501-S502 of the first embodiment.

Next, at step S804, the sum Sx of the horizontal edge components obtained in step S801 and the sum Sy of the vertical edge components obtained in step S802 are applied to the following equation to calculate the stability M of the edge contained in the block.

M = (Sx² + Sy²)^(1/2)

It should be noted that the threshold value with which the edge stability is compared in step S703 of the flowchart of Fig. may be determined, for example, by receiving frames of a plurality of video signals to be encoded and calculating the average value of the stabilities of the edges contained in those frames.
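The computations of steps S803-S804 can be sketched as follows. This is only an illustration under stated assumptions: the patent specifies the stability formula M = (Sx² + Sy²)^(1/2) but delegates the exact extraction of the edge-component sums to steps S301-S402, so simple finite differences over the 4×4 block are used here as a stand-in, and the function name `edge_info` is hypothetical.

```python
import math

def edge_info(block):
    """Per-block edge angle and stability M = (Sx^2 + Sy^2)^(1/2).

    block: a 4x4 list of lists of luma samples. Sx and Sy are
    approximated by sums of absolute finite differences along the
    x and y axes (an assumption, not the patented extraction)."""
    # Sum of horizontal edge components (pixel changes along the x axis).
    sx = sum(abs(block[r][c + 1] - block[r][c])
             for r in range(4) for c in range(3))
    # Sum of vertical edge components (pixel changes along the y axis).
    sy = sum(abs(block[r + 1][c] - block[r][c])
             for r in range(3) for c in range(4))
    # Step S803: angle from the component sums, quantised to the four
    # assumed directions 0, 45, 90 and 135 degrees.
    d = math.degrees(math.atan2(sy, sx))
    direction = min((0, 45, 90, 135), key=lambda a: abs(a - d))
    # Step S804: edge stability M.
    m = math.hypot(sx, sy)
    return direction, m

# A block of horizontal stripes: all change is along the y axis.
block = [[10, 10, 10, 10],
         [20, 20, 20, 20],
         [30, 30, 30, 30],
         [40, 40, 40, 40]]
direction, m = edge_info(block)  # direction = 90, m = 120.0
```

A strongly striped block like the one above yields a large M, so it would pass the stability threshold of step S703 and keep its oblique/directional filtering; a flat or noisy block yields a small M and falls back to the conventional orthogonal filtering.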

In accordance with the present embodiment, as described above, a deblocking filter can be applied corresponding to the edge direction while taking the edge stability into account. As a result, this method is effective in particular when there is a strong texture in an oblique direction and the direction of the filtered pixels is to be changed only in that part, compared with the first embodiment, which uses a structure in which the filter direction is determined based on the edge direction without taking the edge stability into account, even when the edge stability is weak.

THIRD EMBODIMENT

The deblocking device 1 in accordance with the third embodiment of the present invention is described below.

The differences between this embodiment and the second embodiment lie in the method of detecting the edge direction corresponding to step S701 and in the method of determination using the edge stability corresponding to step S703. It should be noted that the configuration of the deblocking device in accordance with the present embodiment is identical to that of the second embodiment (see Fig.).

First, the change in step S701 is described in detail in accordance with the flowchart of Fig. In the present embodiment, the edge direction is detected using intra-prediction information.

It should be noted that hereinafter in this description the block size is 4×4 and the processing target is the luminance signal, unless otherwise noted. In addition, it is assumed that there are four edge directions: horizontal (0°), vertical (90°), and oblique (45° and 135°).

When detecting the edge direction, as shown in the flowchart of Fig., the deblocking device 1 of this embodiment first determines, at step S901, whether the MB to be processed is encoded with intra-prediction. If intra-prediction coding is performed, the processing proceeds to step S902 to obtain the prediction mode information of the intra-prediction in the MB.

Conversely, if the MB to be processed is not encoded with intra-prediction but with inter-prediction, the processing proceeds to step S903 to perform intra-prediction on the MB to be processed. After completion of step S903, the process moves to step S904 to obtain the prediction mode information obtained by that intra-prediction.

Next, at step S905, the edge direction is determined in accordance with the prediction mode information obtained in step S902 or S904. When determining the edge direction, the following Table 3 is used, which is a correspondence table between prediction modes and edge directions.

Table 3
Prediction mode    Edge direction
0                  1
1                  3
2                  None
3                  2
4                  4
5                  4
6                  4
7                  2
8                  2

For example, since vertical prediction is selected in H.264/AVC when the prediction mode is 0, it is expected that pixels having the same pixel value are distributed in a line in the vertical direction. Accordingly, since the texture of the image is in the vertical direction, the edge is expected to correspond to the horizontal direction, in other words, to edge type 1 of Fig. in the first embodiment. In addition, since the MB is predicted by the mean value when the prediction mode is 2, the edge is assumed to be absent or very unstable, and accordingly the edge direction is determined as None.

Thus, in the present embodiment the edge direction is estimated based on the prediction mode when the MB to be processed is intra-predicted.
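The lookup of step S905 amounts to a direct mapping from the H.264/AVC 4×4 intra-prediction mode number to the edge-direction type of Table 3. A sketch (the dictionary and function names are illustrative, not taken from the patent):

```python
# Table 3: H.264/AVC intra 4x4 prediction mode -> edge direction type.
PREDICTION_MODE_TO_EDGE = {
    0: 1,     # vertical prediction -> horizontal texture edge (type 1)
    1: 3,
    2: None,  # DC prediction (mean value) -> edge absent or very unstable
    3: 2,
    4: 4,
    5: 4,
    6: 4,
    7: 2,
    8: 2,
}

def edge_direction_from_mode(mode):
    """Step S905: edge direction from the intra-prediction mode (0-8)."""
    return PREDICTION_MODE_TO_EDGE[mode]
```

The None entry for mode 2 is what triggers the fallback of step S704 described below: with no usable edge direction, the filter reverts to the direction orthogonal to the block boundary.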

The changes in step S703 are described in detail below.

When the edge direction is determined as None at step S905, the deblocking device 1 of this embodiment determines that the edge stability is less than or equal to the threshold value, and the process proceeds to step S704 to perform processing in the same direction as the standard deblocking filter. In other cases, the process of step S704 is not executed, so that the deblocking filter is applied in accordance with the edge direction.

In accordance with the present embodiment described above, a deblocking filter can be applied corresponding to the edge direction while taking the edge stability into account, similarly to the second embodiment. As a result, this method is effective in particular when there is a strong texture in an oblique direction and the direction of the filtered pixels is to be changed only in that part, compared with the first embodiment, which uses a structure in which the filter direction is determined based on the edge direction without taking the edge stability into account, even when the edge stability is weak.

It should be noted that a program for implementing each of the above-described processing steps may be recorded on a computer-readable recording medium, and the program recorded on this recording medium may be read and executed by a computer system to perform the various processes described above in relation to the signal encoder.

The term "computer system" used in this description may include an OS (operating system) and hardware such as peripheral devices. In addition, when the computer system uses the WWW (World Wide Web) over the Internet, the computer system may include a homepage providing environment (or display environment).

The computer-readable recording medium may be a flexible disk, a magneto-optical disk, a ROM (read-only memory), a rewritable nonvolatile memory such as a flash memory, a portable storage medium such as a CD-ROM (compact disk read-only memory), or a hard disk built into the computer system. In addition, the computer-readable recording medium may include a device in which the program is stored for a certain period of time, for example a volatile memory (such as a DRAM (dynamic random access memory)) in a computer system that becomes a server or client when the program is transmitted through a network such as the Internet or a communication line such as a telephone line.

The program may also be transferred from a computer system containing a storage device or the like on which the program is stored to another computer system through a medium, or by transmission waves in a transmission medium. Here, the transmission medium used for transferring the program is a medium having a function of transmitting information, such as a network (communication network) like the Internet or a communication line (communication circuit) like a telephone line. In addition, the program may be configured to implement part of the aforementioned functions. Furthermore, the program may be one that realizes the above functions in combination with a program already recorded in the computer system, a so-called differential file (differential program).

Although embodiments of the present invention have been described above, the present invention is not limited to these embodiments, and additions, omissions, substitutions, or other changes may be made without departing from the spirit of the present invention. The present invention is not limited by the foregoing description and is defined only by the appended claims.

INDUSTRIAL APPLICABILITY

The present invention can be used for deblocking employed in video coding devices and video decoding devices that perform prediction coding on a block basis. By applying the present invention, block noise can be reduced while the texture inherent in the image is preserved, thereby implementing deblocking that improves the subjective quality of the image.

DESCRIPTION OF REFERENCE SYMBOLS

1: deblocking device

10: edge detection node

11: node for extracting the edge component in the x-axis direction

12: node for extracting the edge component in the y-axis direction

13, 13α: edge-direction determination node

20, 20α: storage node for edge-direction information

30: storage node for filtered-pixel determination data

40: pixel determination node

41: node for checking the edges of adjacent blocks

42: filtered-pixel determination node

43: edge-stability determination node

44: final-filtered-pixel determination node

50: filter-application decision node

60: filtering node

70: process-end decision node

1. A deblocking method for reducing block distortion occurring in a video signal encoding scheme that performs prediction coding on a block basis and in a video signal decoding scheme for decoding a video signal encoded by said video signal encoding scheme, the method comprising:
a detection step of detecting, for each block, an edge direction indicating the direction of change of the pixel value in the block; a determination step of determining the direction in which a deblocking filter is to be applied to a block boundary, based on the edge direction detected for a block to be processed, which includes the block boundary subjected to deblocking, and on the edge direction detected for a block in contact with the block to be processed; and a filtering step of applying the deblocking filter to the block boundary in accordance with the determined direction.

2. The deblocking method according to claim 1, wherein, in the detection step, for each block, a horizontal component of the change of the pixel value in the block is detected, a vertical component of the change of the pixel value in the block is detected, and the edge direction is detected based on the detected horizontal component and the detected vertical component.

3. The deblocking method according to claim 1, wherein, in the detection step, for each block, information on the prediction mode used when the block is intra-coded is obtained, and the edge direction is detected based on the obtained prediction mode information.

4. The deblocking method according to claim 2, further comprising a calculation step of calculating, for each block, the edge stability based on the detected horizontal component and the detected vertical component, wherein, in the determination step, the edge stability calculated for the block to be processed is compared with a predefined threshold value, and, when the edge stability is less than or equal to the predefined threshold value, the direction in which the deblocking filter is to be applied, determined based on the edge direction, is changed to the direction orthogonal to the block boundary.

5. The deblocking method according to claim 3, wherein, when the prediction mode information for the block to be processed indicates a prediction mode that uses an average pixel value as the prediction signal, in the determination step the direction in which the deblocking filter is to be applied, determined based on the edge direction, is changed to the direction orthogonal to the block boundary.

6. The deblocking method according to any one of claims 1 to 5, wherein, in the determination step, the direction in which the deblocking filter is to be applied is determined by referring to information stored in a storage means that stores information describing the correspondence between the edge direction of the block to be processed, the edge direction of the block in contact with the block to be processed, and the direction in which the deblocking filter is to be applied, using the edge direction detected for the block to be processed and the edge direction detected for the block in contact with the block to be processed.

7. A deblocking apparatus for reducing block distortion occurring in a video signal encoding scheme that performs prediction coding on a block basis and in a video signal decoding scheme for decoding a video signal encoded by said video signal encoding scheme, the apparatus comprising:
a detection means for detecting, for each block, an edge direction indicating the direction of change of the pixel value in the block; a determination means for determining the direction in which a deblocking filter is to be applied to a block boundary, based on the edge direction detected for a block to be processed, which includes the block boundary subjected to deblocking, and on the edge direction detected for a block in contact with the block to be processed; and a filtering means for applying the deblocking filter to the block boundary in accordance with the determined direction.

8. The deblocking apparatus according to claim 7, further comprising a storage means for storing information describing the correspondence between the edge direction of the block to be processed, the edge direction of the block in contact with the block to be processed, and the direction in which the deblocking filter is to be applied, wherein the determination means is configured to determine the direction in which the deblocking filter is to be applied by referring to the information stored in the storage means, using the combination of the edge direction detected for the block to be processed and the edge direction detected for the block in contact with the block to be processed.

9. A computer-readable recording medium on which is recorded a deblocking program for causing a computer to execute the deblocking method according to any one of claims 1 to 5.



 

Same patents:

FIELD: information technology.

SUBSTANCE: method of compressing graphics files includes operations for changing geometrical dimensions of initial frames of a graphic image with subsequent decompression of frames of the graphic image and the qualitative estimate of parameters, wherein the peak signal-to-noise ratio is predetermined, a time iter value equal to zero is assigned; performing two-dimensional wavelet transformation over the initial frame of the graphic image A(l,h), wavelet coefficients of which form a matrix Y(l,h), which is then compressed and then decompressed, after which a zero matrix is formed and the elements are replaced with corresponding elements of the decompressed matrix; the reconstructed image is then formed by performing inverse two-dimensional wavelet transformation over the zero matrix with the changed elements, after which the peak signal-to-noise ratio is determined, which characterises the quality of the reconstructed frame compared with the initial frame and, if the calculated ratio is greater than a given value, the described operations are carried out while increasing the current value of the variable iter by one and replacing the matrix of the initial graphic image (l,h) with the formed matrix Y(I,h).

EFFECT: high degree of compression of graphics files and rate of transfer thereof over data channels for a given signal-to-noise ratio peak value.

6 dwg

FIELD: information technology.

SUBSTANCE: method comprises steps where each coefficient in an 8×8 matrix of encoded coefficients is scaled with one multiplier to create a matrix of scaled coefficients, scaled one-dimensional fixed-point transforms are repeatedly performed to transform the matrix of scaled coefficients into a matrix of transformed coefficients, the transformed coefficients are shifted to the right to create a matrix of corrected coefficients, wherein each corrected coefficient in the matrix of corrected coefficients approximates the corresponding value in the matrix of values, which can be created with application of an ideal two-dimensional inverse discrete cosine transform to the matrix of encoded coefficients, an 8×8 block of pixels is displayed, wherein each pixel in the said block includes a pixel component value taking into account the corrected coefficient.

EFFECT: eliminating approximation errors for calculating an inverse discrete cosine transform using fixed-point calculations.

27 cl, 12 dwg, 3 tbl

FIELD: physics.

SUBSTANCE: method of acquiring, processing, compressing and transmitting satellite images of the Earth is carried out using equipment installed on a satellite. An image is obtained using satellite equipment. Said image is processed at an image preprocessing step. A characteristic value of said image is obtained, as well as an image that is preprocessed by recognising predetermined natural objects on the obtained image and replacing such natural objects with standard objects. Said characteristic value is compared with a table values in which each value is associated with one compression algorithm. The compression algorithm corresponding to said characteristic value is used in compression means for compressing said image. Said compressed image is transmitted to remote image receiving means using transmitting means.

EFFECT: obtaining images of any regions of the Earth and compressing said images, corresponding to the type of observed objects in order to provide low-throughput transmission and increase compression efficiency.

9 cl, 3 dwg

FIELD: information technology.

SUBSTANCE: described techniques may analyse a plurality of quantisation levels associated with each individual coefficient to select the quantisation level for the individual coefficients that results in a lowest coding cost. Since CAVLC does not encode each coefficient independently, the techniques may compute the coding costs for each of the candidate quantisation levels associated with the individual coefficients based on quantisation levels selected for previously quantised coefficients and estimated (or predicted) quantisation levels for subsequent coefficients of a coefficient vector. The quantisation levels for each of the coefficients are selected based on computed coding costs to obtain a set of quantised coefficients which minimises a rate-distortion model.

EFFECT: method for quantisation of coefficients of video blocks through which a desirable balance of rate and distortion ca be achieved.

39 cl, 8 dwg

FIELD: information technology.

SUBSTANCE: analysis unit 621 compares image input data with image data of a previous image read from a storage unit 622. Based on the analysis result, a concealment header generating unit 623 generates a concealment header accessed by the receiver 603 during error concealment processing. Based on the concealment header, a loss analysis unit 631 performs error concealment if an error arises during transmission through suitable use of encoded data stored in the storage unit 632.

EFFECT: shorter waiting time when recovering data from packets and facilitating easy and fast processing, which can be utilised in an encoder.

14 cl, 52 dwg

FIELD: information technologies.

SUBSTANCE: device comprises a processor arranged as capable of realisation of a set of commands for calling a facility of intracycle filtration of blocking effect deletion and for universal correction of blocking effect in a decoded output signal during operation of a post-cycle filtration using the facility of intracycle filtration of blocking effect deletion, at the same time the universal correction of blocking effect includes the following: performance of an operation of strong filtration in respect to units in a decoded output signal for correction of an inherited blocking effect, at the same time units contain missed macrounits and units with a template of a coded unit, equal to zero, and inclusion of a facility of intracycle filtration of blocking effect removal for edges of a fragment of an image of fixed size, which are not arranged on the border of the unit of the appropriate intermediate macrounit, for correction of the inherited blocking effect; and a memory connected to the processor.

EFFECT: development of a method of universal correction of blocking effect, including inherited blocking effect.

19 cl, 23 dwg, 7 tbl

FIELD: information technology.

SUBSTANCE: plurality of different transforms are selectively applied to residual blocks based on the prediction mode of the video blocks. At least part of the plurality of transforms is separable directional transforms specifically trained for a corresponding prediction mode to provide better energy compaction for the residual blocks of the given prediction mode. Using separable directional transforms offers the benefits of lower computation complexity and storage requirements than use of non-separable directional transforms. Additionally, a scan order used to scan the coefficients of the residual block may be adjusted when applying separable directional transforms. In particular, the scan order may be adjusted based on statistics associated with one or more previously coded blocks.

EFFECT: ensuring effective grouping of non-zero coefficients near the front of the one-dimensional coefficient vector to improve the effectiveness of entropy coding.

49 cl, 8 dwg

FIELD: information technology.

SUBSTANCE: method and apparatus for illumination and colour compensation for multi-view video coding are disclosed. A video encoder includes an encoder for encoding a picture by enabling colour compensation of at least one colour component in prediction of the picture based on a correlation factor relating to colour data between the picture and another picture. The picture and the other picture have different view points and both correspond to multi-view content for the same or similar scene.

EFFECT: high efficiency of MVC during illumination mismatch between pairs of pictures.

81 cl, 5 dwg

FIELD: information technology.

SUBSTANCE: offered invention adapts scanning order based on statistics associated with previously encoded blocks which have been encoded in the same prediction mode, instead of using traditional line-by-line scanning. For each prediction mode, coefficient statistics is saved. This statistics indicates for instance probabilities that these coefficients are zero or non-zero. Periodic adjustment of scanning order is performed in order to ensure in a greater degree that non-zero coefficients are grouped, and zero-value coefficients are grouped. In this process, thresholds and threshold adjustments are used which can lower frequency at which scanning order adjustment is performed.

EFFECT: higher entropy encoding efficiency.

56 cl, 9 dwg

FIELD: information technology.

SUBSTANCE: method for scanning coefficients of video units is offered in which order of scanning is adapted. This order is used to scan two-dimensional coefficient unit into one-dimensional vector of coefficients on the basis of statistics data associated with one or more preliminary coded units. For example, statistics data which specify possibility that preset coefficient value in each position of two-dimensional unit is equal to zero or differ from zero can be collected for one or more preliminary coded units. Scanning order establishment can be performed for additional ensuring that non-zero coefficients will group in the front part of one-dimensional coefficient vector. Statistics data collection and establishment of scanning order can be performed individually for each possible prediction mode.

EFFECT: improved efficiency of video image coding.

54 cl, 8 dwg

FIELD: information technology.

SUBSTANCE: image composed of macroblocks measuring 16x16 is selected from a reference frame, wherein each macroblock is assigned a pixel band with width "a", which serves as field region, as a motion-compensated image and is considered as the input image for the filtration process. The value "a" is defined according to the number of filter branches with a finite impulse response. The filtration process is performed using a motion-compensated image as the input image and the predicted image measuring 16x16 pixels is transmitted to the output as the output image of the filtration process. The predicted image is added in an adder to the output image of the inverse orthogonal transformation circuit and the summation result is used as a macroblock making up the decoded frame.

EFFECT: generating a predicted image with high accuracy without increasing processor load.

6 cl, 1 dwg

FIELD: information technology.

SUBSTANCE: method involves selecting the high-frequency component of pixel values of an image component; subtracting, from the initial pixel values of the image component, corresponding values of the low-frequency component thereof; calculating mathematical expectation and mean-square deviation of the high-frequency component of all pixels of the image component; dividing the matrix of the high-frequency component into columns or rows and calculating values of mathematical expectation and mean-square deviation of each column or values of mathematical expectation and mean-square deviation of each row; correcting the values of the high-frequency component; an image with reduced noise is formed by summation of values of the low-frequency component and the corrected values of the high-frequency component.

EFFECT: reduced noise in an electronic image.

3 cl, 15 dwg

FIELD: information technology.

SUBSTANCE: deblocking filter 113 adjusts the value of disable_deblocking_filter-idc, slice_alpha_c0_offset_div2 or slice_beta_offset_div2 based on the Activity of an image calculated by an activity calculation unit 141, the total sum of orthogonal transformation coefficients of the image calculated by an orthogonal transformation unit 142, Complexity of the image calculated by the rate control unit 119, or the total sum of prediction errors of the image calculated by a prediction error addition unit 120.

EFFECT: improved image quality through correct deblocking.

8 cl, 7 dwg
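The adjustment described above can be sketched as a mapping from an activity measure to the H.264 offset slice_alpha_c0_offset_div2 (legal range -6..+6). The thresholds and the three-way rule are illustrative, not taken from the patent.

```python
def choose_alpha_offset(activity, low=500.0, high=2000.0):
    """Map a picture-activity measure to slice_alpha_c0_offset_div2:
    flat pictures get stronger deblocking, busy pictures weaker.
    Thresholds are illustrative, not from the patent."""
    if activity < low:
        return 3     # smooth picture: strengthen the filter
    if activity > high:
        return -3    # detailed picture: weaken it to keep texture
    return 0         # default strength
```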

FIELD: information technology.

SUBSTANCE: the method involves parallel processing of the component of each decomposition level; brightness-contrast transformation parameters are determined by forming a function for correcting brightness levels and a function for correcting contrast, forming a matrix of correction factors for the contrast of the third decomposition level using the function for correcting contrast, and reconstructing the family of matrices of scaled contrast-correction factors for spatial matching, on each level, of the correction factors with the values of the detail component.

EFFECT: high quality of displaying digital images.

8 dwg

FIELD: information technologies.

SUBSTANCE: the method includes the following operations: a digital copy of the initial printed document is produced in the RGB colour space; brightness differences are detected and the direction of the maximum gradient is determined; the current image sample is classified as belonging either to a brightness-difference area or to a uniform area without sharp changes of brightness; Gaussian smoothing of the current sample is performed if it is classified as belonging to a uniform area without sharp changes of brightness; the current sample is smoothed in an anisotropic manner if it is classified as belonging to a brightness-difference area.

EFFECT: invention makes it possible to carry out fast single-stage descreening of screen-type pattern images with preservation of contour differences and increased accuracy.

5 cl, 9 dwg
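The classify-then-smooth rule above can be sketched for a single 3x3 neighbourhood: a gradient test decides between isotropic Gaussian smoothing and anisotropic averaging along the edge. The gradient estimate, threshold and kernels are illustrative.

```python
import numpy as np

def descreen_pixel(patch):
    """Classify the centre sample of a 3x3 patch and smooth accordingly
    (minimal sketch; the gradient test and kernels are illustrative)."""
    gx = patch[1, 2] - patch[1, 0]          # horizontal brightness difference
    gy = patch[2, 1] - patch[0, 1]          # vertical brightness difference
    if gx * gx + gy * gy < 100.0:           # uniform area: Gaussian smoothing
        g = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0
        return float((patch * g).sum())
    # Brightness-difference area: average along the edge, i.e.
    # perpendicular to the direction of the maximum gradient.
    if abs(gx) >= abs(gy):                  # edge runs vertically
        return float(patch[:, 1].mean())    # smooth along the column
    return float(patch[1, :].mean())        # smooth along the row
```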

FIELD: information technologies.

SUBSTANCE: the target image forming the video image is divided into multiple division areas (DA); the pass band (PB) width applied to each DA is determined; an array of filtration ratios (FR) is calculated to realise the frequency characteristics corresponding to the band limitation, using the PB width; the image data are filtered using the FR array; an error information value between the obtained data and the initial image data is produced, and a distribution ratio (DR), used to determine the optimal PB width, is calculated on the basis of the produced value; the optimal PB width corresponding to the DR is defined for each DA, and an array of optimal FR is calculated to realise the frequency characteristics corresponding to the band limitation, using the optimal PB width; the image data of each division area are filtered using the array of optimal FR; and the produced data of each DA are synthesised.

EFFECT: generation of filtered image with specified value of image quality assessment.

29 cl, 27 dwg
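The per-area band selection above can be sketched with an FFT band-limit as a stand-in for the filtration-ratio array, and a simple error budget as a stand-in for the distribution-ratio step. Block size, cutoffs and the selection rule are all assumptions.

```python
import numpy as np

def lowpass(block, cutoff):
    """Band-limit a block by zeroing DFT frequencies above `cutoff`
    (a stand-in for the abstract's filtration-ratio array)."""
    spec = np.fft.fft2(block)
    fy = np.fft.fftfreq(block.shape[0])[:, None]
    fx = np.fft.fftfreq(block.shape[1])[None, :]
    spec[(np.abs(fy) > cutoff) | (np.abs(fx) > cutoff)] = 0
    return np.fft.ifft2(spec).real

def filter_by_regions(img, block=8, budget=1.0, cutoffs=(0.1, 0.2, 0.3, 0.5)):
    """For each division area pick the narrowest pass band whose
    filtering error stays under `budget`, then synthesise the result.
    Selection rule and values are illustrative, not from the patent."""
    out = np.empty_like(img, dtype=float)
    for y in range(0, img.shape[0], block):
        for x in range(0, img.shape[1], block):
            area = img[y:y + block, x:x + block].astype(float)
            best = area
            for c in cutoffs:                         # narrowest band first
                cand = lowpass(area, c)
                if np.abs(cand - area).mean() <= budget:
                    best = cand
                    break
            out[y:y + block, x:x + block] = best      # synthesise areas
    return out
```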

FIELD: information technology.

SUBSTANCE: first band pass (BP) is determined based on initial image data; a matrix of filter coefficients (FC) is calculated to obtain frequency characteristics corresponding to limitation of frequency band (FB) using the first BP; data of the first filtered image are generated by filtering data of the initial image using the matrix of first FC; an estimate value of the objective image quality of data of the first filtered image is obtained and the distribution coefficient (DC) is calculated, which is used to determine the optimum BP based on the estimate value of objective image quality; the optimum BP corresponding to the calculated DC is determined using a table in which the corresponding relationship between DC and optimum BP is defined; a matrix of optimum FC is calculated to obtain frequency characteristics corresponding to limitation of FB using the optimum BP; and data of the optimally filtered image is generated by filtering data of the initial image using the matrix of optimum FC.

EFFECT: adaptive image filtering process for providing high-quality image.

3 cl, 11 dwg
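The table-lookup step of the abstract above, mapping the distribution coefficient to the optimum band pass, can be sketched with a sorted-boundary search. The boundary and band-pass values are invented for the sketch.

```python
import bisect

# Illustrative table: distribution-coefficient upper bounds and the
# optimum band pass (normalised cutoff) associated with each range.
DC_BOUNDS = [0.2, 0.5, 0.8]
OPTIMUM_BP = [0.10, 0.25, 0.40, 0.50]

def optimum_band_pass(dc):
    """Look up the optimum band pass for a distribution coefficient,
    as in the abstract's table step (values invented for the sketch)."""
    return OPTIMUM_BP[bisect.bisect_right(DC_BOUNDS, dc)]
```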

FIELD: information technology.

SUBSTANCE: filtration of noise from digital images is based on determining the local structure of the image and on non-local averaging in accordance with the determined structure. The local structure of the image is determined by successively convolving predefined templates with the neighbouring pixels and by selecting the RPC template which gives the least error after convolution. Noise is filtered from the digital image through weighted averaging of pixel values in the search window.

EFFECT: fast filtration of noise in digital images which provides high quality of noise suppression without causing distortions.

4 cl, 16 dwg
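The weighted averaging in the search window can be sketched as classic non-local-means denoising of one pixel; the template-selection step of the abstract is folded into the patch comparison here, and the window sizes and smoothing parameter are assumptions.

```python
import numpy as np

def nlm_pixel(img, y, x, patch=1, search=3, h=10.0):
    """Denoise one interior pixel by non-local weighted averaging:
    pixels in the search window whose neighbourhoods resemble the
    target's neighbourhood get larger weights (parameters illustrative)."""
    p0 = img[y - patch:y + patch + 1, x - patch:x + patch + 1].astype(float)
    num = den = 0.0
    for j in range(y - search, y + search + 1):
        for i in range(x - search, x + search + 1):
            p = img[j - patch:j + patch + 1, i - patch:i + patch + 1].astype(float)
            w = np.exp(-((p - p0) ** 2).mean() / (h * h))  # patch similarity
            num += w * img[j, i]
            den += w
    return num / den
```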

FIELD: information technology.

SUBSTANCE: the size of the coding unit relative to the required printer resolution is evaluated. If the size of the unit is discernible by the human eye, the following steps are carried out: the approximate metric of discernibility of the distortion caused by the Gibbs effect is determined for each coding unit and stored in memory; the approximate metric of discernibility of block distortion is determined for each border of the unit; if the corresponding element of the approximate metric of discernibility of unit distortion exceeds a predefined threshold value, a filter which can suppress block distortions is applied to the given border of the unit; for each coding unit, if the corresponding element of the approximate metric of discernibility of distortion caused by the Gibbs effect exceeds a predefined threshold value, a filter which can suppress distortions is applied to the coding unit in order to suppress distortions caused by the Gibbs effect.

EFFECT: preservation of image sharpness.

7 cl, 6 dwg
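The threshold-gated filtering above can be sketched with simplified stand-ins for the discernibility metrics: a mean jump across block borders for blockiness, and the in-block standard deviation as a crude Gibbs-visibility proxy. Metrics, thresholds and filters are all assumptions; only vertical borders are handled, for brevity.

```python
import numpy as np

def deartifact(img, block=8, block_thr=4.0, ring_thr=12.0):
    """Apply artifact filters only where the distortion would be
    discernible (simplified stand-ins for the abstract's metrics)."""
    out = img.astype(float).copy()
    for x in range(block, img.shape[1], block):
        # Blockiness metric: mean jump across the vertical border.
        jump = np.abs(out[:, x] - out[:, x - 1]).mean()
        if jump > block_thr:
            # Smooth the two columns adjacent to the border.
            avg = (out[:, x] + out[:, x - 1]) / 2.0
            out[:, x - 1] = (out[:, x - 1] + avg) / 2.0
            out[:, x] = (out[:, x] + avg) / 2.0
    for y in range(0, img.shape[0], block):
        for x in range(0, img.shape[1], block):
            b = out[y:y + block, x:x + block]
            if b.std() > ring_thr:           # crude Gibbs-visibility proxy
                b[:] = (b + b.mean()) / 2.0  # pull outliers toward the mean
    return out
```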

FIELD: physics.

SUBSTANCE: invention proposes to use an imaging model with separation of effects associated with reflecting power of the surface R and effects associated with scene illumination characteristics L, for which: quality of a recorded image is evaluated and if there is need to correct the image, noise is filtered off; a smaller copy of the image is formed; borders for subsequent contrast enhancement at the correction step are defined on the smaller copy; the luminance channel of the initial image is selected and filtered; the image is corrected in accordance with an empirical equation of the LR imaging model:

where A is the lower boundary of contrast enhancement of the smaller copy of the image; Φ and ψ are lower and upper boundaries of contrast enhancement of the converted image; JB is the brightness component of the initial image after bilateral filtering; γ is a non-linear conversion parameter and JF is the brightness component of the enhanced image; and the resultant image is converted to RGB colour space.

EFFECT: higher image quality.

7 cl, 19 dwg, 2 tbl

FIELD: digital processing of images, possible use for global and local correction of brightness of digital photographs.

SUBSTANCE: the system and method for correcting dark tones in digital photographs contain a global contrasting module, a module for conversion from the RGB colour system, a module for determining the dark-tone amplification coefficient, a bilateral filtration module, a dark-tone correction module, a module for conversion to the RGB colour system, a random-access memory block and a display device. The global contrasting module can correct the global image contrast. The module for conversion from the RGB colour system converts the image from RGB to a three-component colour system in which one component is the image brightness and the other two encode colour; the module for conversion to the RGB colour system performs the reverse conversion back to RGB. The module for determining the dark-tone amplification coefficient computes the global image brightness histogram and determines the dark-tone amplification coefficient by analysing features calculated from that histogram. The bilateral filtration module performs bilateral filtration of the image brightness channel, and the dark-tone correction module corrects dark tones in the image brightness channel.

EFFECT: absence of halo-effect.

2 cl, 17 dwg
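Two of the modules above can be sketched on the brightness channel: deriving the amplification coefficient from the global brightness histogram, and applying it so that the gain fades out above the shadows. The histogram-to-gain mapping, the shadow weighting and all constants are assumptions; the bilateral filtration stage is omitted.

```python
import numpy as np

def dark_tone_gain(luma, dark_level=64, max_gain=1.8):
    """Derive a dark-tone amplification coefficient from the global
    brightness histogram (the feature-to-gain mapping is illustrative)."""
    hist, _ = np.histogram(luma, bins=256, range=(0, 256))
    dark_share = hist[:dark_level].sum() / max(hist.sum(), 1)
    return 1.0 + (max_gain - 1.0) * dark_share

def correct_dark_tones(luma, gain):
    """Boost dark tones only: the gain fades to 1 as brightness rises."""
    luma = luma.astype(float)
    weight = np.clip(1.0 - luma / 128.0, 0.0, 1.0)  # 1 in shadows, 0 above mid-grey
    return np.clip(luma * (1.0 + (gain - 1.0) * weight), 0.0, 255.0)
```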
