Image processing apparatus, method and program

FIELD: information technology.

SUBSTANCE: the deblocking filter 124 adjusts the value of disable_deblocking_filter_idc, slice_alpha_c0_offset_div2 or slice_beta_offset_div2 based on the Activity of an image calculated by the activity calculation unit 141, the total sum of the orthogonal transform coefficients of the image calculated by the orthogonal transformation unit 142, the Complexity of the image calculated by the rate control unit 119, or the total sum of the prediction errors of the image calculated by the prediction error summation unit 120.

EFFECT: improved image quality through correct deblocking.

8 cl, 7 dwg

 

Technical field to which the invention relates

The present invention relates to an image processing apparatus, method and program, and more particularly to an image processing apparatus, method and program capable of improving image quality.

Background art

Block noise is one of the factors that degrade the quality of images encoded using MPEG-2 (Moving Picture Experts Group, phase 2). Accordingly, in a device that encodes images using MPEG-4 (Moving Picture Experts Group, phase 4) or H.264/AVC (Advanced Video Coding), a deblocking filter performs deblocking processing to remove block noise (for example, see Patent Document 1). By using this deblocking processing, degradation of image quality can be suppressed even in, for example, an image with a low bit rate.

Patent document 1: Japanese patent No. 3489735

Disclosure of the invention

Technical problem

However, since deblocking processing performs low-pass filtering on the boundaries between blocks, there arises a problem in that block noise is removed at the cost of losing fine information about the structure and the like (hereinafter also referred to as texture) of the image.

The present invention was developed in view of the above circumstances and aims to improve image quality by performing deblocking processing appropriately.

Technical solution

An image processing apparatus in accordance with an aspect of the present invention includes a deblocking filter that performs deblocking processing on a decoded image obtained by decoding a second image, the second image being used for motion estimation of a first image when the first image is encoded and having been encoded before the first image, and detail-amount calculation means for calculating an amount of detail representing the complexity of the second image. The deblocking filter controls whether or not deblocking processing is applied to the decoded image, or controls the degree to which deblocking processing is applied to the decoded image, based on the amount of detail.

The detail-amount calculation means can calculate the encoding complexity of the second image as the amount of detail.

When the second image is a picture encoded using inter-frame prediction, the detail-amount calculation means can use, as the amount of detail, a value obtained by normalizing the encoding complexity of the second image by the encoding complexity of a third image encoded using intra-frame prediction before the second image.

The detail-amount calculation means can divide the second image, which has not yet been encoded, into a plurality of blocks and calculate the amount of detail based on the variance of the pixel values in each block.

The detail-amount calculation means can divide the second image, which has not yet been encoded, into a plurality of blocks and calculate the amount of detail based on the transform coefficients obtained by performing an orthogonal transformation on each block.

The detail-amount calculation means can calculate the amount of detail based on the prediction error, which represents the difference between the second image, which has not yet been encoded, and a predicted image for the second image generated on the basis of inter-frame prediction.

The image processing apparatus can encode images using H.264/AVC (Advanced Video Coding). The deblocking filter can control whether or not deblocking processing is applied to the decoded image, or control the degree to which deblocking processing is applied to the decoded image, by adjusting the values of disable_deblocking_filter_idc, slice_alpha_c0_offset_div2 or slice_beta_offset_div2.

The image processing apparatus can encode images using MPEG-4 (Moving Picture Experts Group, phase 4), H.264/AVC (Advanced Video Coding) or VC-1 (Video Codec 1).

An image processing method or program in accordance with an aspect of the present invention includes the steps of calculating an amount of detail representing the complexity of a second image, which is used for motion estimation of a first image when the first image is encoded and which has been encoded before the first image, and controlling whether or not deblocking processing is applied to a decoded image obtained by decoding the second image, or controlling the degree to which deblocking processing is applied to the decoded image, based on the amount of detail.

In one aspect of the present invention, an amount of detail representing the complexity of a second image, which is used for motion estimation of a first image when the first image is encoded and which has been encoded before the first image, is calculated, and whether or not deblocking processing is applied to a decoded image obtained by decoding the second image, or the degree to which deblocking processing is applied to the decoded image, is controlled based on the amount of detail.

Advantageous effects

In accordance with an aspect of the present invention, deblocking processing can be performed appropriately in accordance with the detail of the image. In addition, in accordance with an aspect of the present invention, image quality can be improved.

Brief description of drawings

Figure 1 shows a block diagram representing an embodiment of an image processing apparatus to which the present invention is applied.

Figure 2 shows a flowchart explaining the encoding processing performed by the image processing apparatus of figure 1.

Figure 3 shows a flowchart explaining a first embodiment of the deblocking control processing performed by the image processing apparatus of figure 1.

Figure 4 shows a flowchart explaining a second embodiment of the deblocking control processing performed by the image processing apparatus of figure 1.

Figure 5 shows a flowchart explaining a third embodiment of the deblocking control processing performed by the image processing apparatus of figure 1.

Figure 6 shows a flowchart explaining a fourth embodiment of the deblocking control processing performed by the image processing apparatus of figure 1.

Figure 7 shows a block diagram representing an example configuration of a personal computer.

Explanation of reference numerals

101 - image processing apparatus, 113 - detail-amount calculation unit, 114 - adder, 115 - orthogonal transformation unit, 119 - rate control unit, 120 - prediction error summation unit, 124 - deblocking filter, 126 - intra-frame prediction unit, 127 - motion estimation and compensation unit, 141 - activity calculation unit, 142 - orthogonal transformation unit.

Detailed description of the invention

Embodiments of the present invention will be described below with reference to the drawings.

Figure 1 shows a block diagram representing the configuration of an embodiment of an image processing apparatus to which the present invention is applied.

The image processing apparatus 101 is a device that encodes an input image using H.264/AVC (Advanced Video Coding) and outputs the encoded image to, for example, a recording device or transmission channel (not shown) at a subsequent stage.

The image processing apparatus 101 is configured to include an A/D (analog-to-digital) conversion unit 111, a screen rearrangement buffer 112, a detail-amount calculation unit 113, an adder 114, an orthogonal transformation unit 115, a quantization unit 116, a lossless encoding unit 117, an accumulation buffer 118, a rate control unit 119, a prediction error summation unit 120, a dequantization unit 121, an inverse orthogonal transformation unit 122, an adder 123, a deblocking filter 124, a frame memory 125, an intra-frame prediction unit 126 and a motion estimation and compensation unit 127. In addition, the detail-amount calculation unit 113 is configured to include an activity calculation unit 141 and an orthogonal transformation unit 142.

The A/D conversion unit 111 performs analog-to-digital conversion of an analog image input from outside to obtain a digital image, and transmits the converted digital image (hereinafter also referred to as the original image, as appropriate) to the screen rearrangement buffer 112.

The screen rearrangement buffer 112 rearranges the original images transmitted from the A/D conversion unit 111 based on the GOP (Group of Pictures) structure, and sequentially transmits the original images to the detail-amount calculation unit 113.

The detail-amount calculation unit 113 calculates the amount of detail representing the complexity of the original image. In addition, the detail-amount calculation unit 113 transmits the original image, for which calculation of the amount of detail has been completed, to the adder 114, the intra-frame prediction unit 126 and the motion estimation and compensation unit 127.

Among the elements of the detail-amount calculation unit 113, the activity calculation unit 141 divides the original image into a plurality of blocks and calculates the amount of detail of the original image based on the variance of the pixel values in each block, as described below with reference to figure 4. The activity calculation unit 141 transmits information indicating the calculated amount of detail to the deblocking filter 124.

In addition, as described below with reference to figure 5, the orthogonal transformation unit 142 divides the original image into a plurality of blocks and calculates the amount of detail of the original image based on the transform coefficients obtained by performing an orthogonal transformation on each block. The orthogonal transformation unit 142 transmits information indicating the calculated amount of detail to the deblocking filter 124.

For each macroblock, the adder 114 receives from the intra-frame prediction unit 126 or the motion estimation and compensation unit 127 either an intra-predicted image, predicted on the basis of intra-frame prediction, or an inter-predicted image, predicted on the basis of inter-frame prediction (motion-compensated prediction), for the original image. The adder 114 calculates, for each macroblock, the difference between the original image and the intra-predicted image or the inter-predicted image, and supplies the difference image formed from the prediction errors obtained by this calculation to the orthogonal transformation unit 115 and the prediction error summation unit 120.

The orthogonal transformation unit 115 performs an orthogonal transformation, such as the discrete cosine transform or the Karhunen-Loève transform, on the difference image for each block of a given size, and transmits the resulting transform coefficients to the quantization unit 116.

The quantization unit 116 quantizes the transform coefficients transmitted from the orthogonal transformation unit 115, using the quantizer scale controlled by the rate control unit 119, and transmits the quantized transform coefficients to the lossless encoding unit 117 and the dequantization unit 121.

The lossless encoding unit 117 receives intra-frame prediction information from the intra-frame prediction unit 126 and receives inter-frame prediction information from the motion estimation and compensation unit 127. The lossless encoding unit 117 combines the quantized transform coefficients, the intra-frame prediction information, the inter-frame prediction information and the like in a given order, and performs lossless encoding, such as variable-length coding, for example CAVLC (Context-Adaptive Variable-Length Coding), or arithmetic coding, for example CABAC (Context-Adaptive Binary Arithmetic Coding), on the combined data. The lossless encoding unit 117 transmits the encoded data to the accumulation buffer 118 for storage.

The accumulation buffer 118 outputs the data transferred from the lossless encoding unit 117, as an image encoded using H.264/AVC, to, for example, a recording device or transmission channel (not illustrated) at a subsequent stage.

The rate control unit 119 controls, based on the code amount of the encoded images stored in the accumulation buffer 118, the bit rate, which represents the amount of code per unit time assigned to the encoded images.

For example, the rate control unit 119 controls the bit rate, using the rate control method defined in MPEG-2 Test Model 5 (TM5), by controlling the value of the quantizer scale, which is a value that scales the transform coefficients when the quantization unit 116 performs quantization. In addition, as described below with reference to figure 3, the rate control unit 119 calculates the encoding complexity as the amount of detail representing the complexity of the original image, and transmits the calculated encoding complexity to the deblocking filter 124.

The prediction error summation unit 120 calculates, as described below with reference to figure 6, the amount of detail representing the complexity of the image, based on the prediction errors forming the difference image received from the adder 114. The prediction error summation unit 120 transmits information indicating the calculated amount of detail to the deblocking filter 124.

The dequantization unit 121 dequantizes the quantized transform coefficients transmitted from the quantization unit 116, and transmits the dequantized transform coefficients to the inverse orthogonal transformation unit 122.

The inverse orthogonal transformation unit 122 performs an inverse orthogonal transformation, such as the inverse discrete cosine transform or the inverse Karhunen-Loève transform, on the transform coefficients transmitted from the dequantization unit 121. As a result, the difference image is decoded. The inverse orthogonal transformation unit 122 transmits the decoded difference image to the adder 123.

The adder 123 receives from the intra-frame prediction unit 126 or the motion estimation and compensation unit 127 the intra-predicted image or the inter-predicted image that was used to generate the difference image, and adds the difference image and the received predicted image together. As a result, the original image is decoded. The adder 123 transmits the decoded image (hereinafter referred to as the decoded image, as appropriate) to the deblocking filter 124.

The deblocking filter 124 performs deblocking processing to remove block noise from the decoded image. It should be noted that, as described below with reference to figures 3-6, the deblocking filter 124 controls whether or not deblocking processing is applied to the decoded image, or controls the degree to which deblocking processing is applied to the decoded image, based on the amount of detail obtained from the rate control unit 119, the prediction error summation unit 120, the activity calculation unit 141 or the orthogonal transformation unit 142. The deblocking filter 124 transmits the decoded image that has undergone deblocking processing to the frame memory 125. In addition, the deblocking filter 124 directly transmits the decoded image that has not undergone deblocking processing to the frame memory 125, as an image for use in intra-frame prediction.

The frame memory 125 stores the decoded image transferred from the deblocking filter 124 as the image referenced when performing intra-frame prediction or inter-frame prediction (hereinafter referred to as the reference image, as appropriate).

The intra-frame prediction unit 126 performs, for each macroblock, intra-frame prediction using encoded pixels adjacent to the corresponding macroblock within the same frame stored in the frame memory 125, to generate an intra-predicted image for the original image. It should be noted that, as described above, pixels of the decoded image that has not undergone deblocking processing are used for intra-frame prediction.

The motion estimation and compensation unit 127 detects, for each macroblock, using a reference image in another frame stored in the frame memory 125, the motion vector of the original image relative to the reference image, and performs motion compensation on the reference image using the detected motion vector. In this way, the motion estimation and compensation unit 127 performs inter-frame prediction to generate an inter-predicted image for the original image.

In addition, the prediction mode applied to each macroblock is determined, for example, by a mode determination unit (not shown) using the Low Complexity Mode (high-speed mode) method. When the applied prediction mode is an intra-frame prediction mode, as shown in figure 1, the frame memory 125 and the intra-frame prediction unit 126 are connected together, and the intra-frame prediction unit 126, the adder 114 and the adder 123 are connected together. The intra-frame prediction unit 126 generates the intra-predicted image based on the selected prediction mode, and transmits the generated intra-predicted image to the adder 114 and the adder 123. In addition, the intra-frame prediction unit 126 transmits, as intra-frame prediction information for a macroblock subjected to intra-frame prediction, information about the applied prediction mode and the like to the lossless encoding unit 117.

In addition, when the applied prediction mode is an inter-frame prediction mode, although this is not shown in figure 1, the frame memory 125 and the motion estimation and compensation unit 127 are connected together, and the motion estimation and compensation unit 127, the adder 114 and the adder 123 are connected together. The motion estimation and compensation unit 127 generates the inter-predicted image based on the selected prediction mode, and transmits the generated inter-predicted image to the adder 114 and the adder 123. In addition, the motion estimation and compensation unit 127 transmits, as inter-frame prediction information for a macroblock subjected to inter-frame prediction, information about the applied prediction mode, the detected motion vector, the number of the referenced picture and the like to the lossless encoding unit 117.

The encoding processing performed by the image processing apparatus 101 of figure 1 will be described below with reference to the flowchart shown in figure 2. It should be noted that this processing starts, for example, when an image is input to the image processing apparatus 101 from outside.

At step S1, the image processing apparatus 101 starts encoding the image. That is, the individual components of the image processing apparatus 101 described above with reference to figure 1 start the operations for encoding the input image in accordance with H.264/AVC. In addition, the deblocking control processing, which will be described below with reference to figures 3-6, also starts.

At step S2, the image processing apparatus 101 determines whether all images have been encoded. Encoding of the video continues until it is determined at step S2 that all images input from outside have been encoded. When it is determined that all images input from outside have been encoded, the encoding processing ends.

Next, with reference to the flowchart shown in figure 3, a first embodiment of the deblocking control processing performed by the image processing apparatus 101 during the encoding processing described with reference to figure 2 will be described.

At step S21, the rate control unit 119 calculates the encoding complexity Complexity. Specifically, the rate control unit 119 receives the encoded image from the accumulation buffer 118. The rate control unit 119 calculates the encoding complexity Complexity as the amount of detail representing the complexity of the image, using the following equation (1):

Complexity = PictureGeneratedBits × PictureAverageQuant   (1)

Here, PictureGeneratedBits represents the amount of code generated for the picture. In addition, PictureAverageQuant represents the average value of the quantizer scale applied to the picture, and is calculated by the following equation (2):

[Expression 1]

PictureAverageQuant = (1/MBNum) × Σ(k=1..MBNum) Quant_k   (2)

Here, MBNum represents the number of macroblocks in the picture. In addition, Quant_k is the quantizer scale applied to the k-th macroblock in the picture, and it is calculated by the following equation (3):

[Expression 2]

Quant_k = 2^((QP_k − 4)/6)   (3)

Here, QP_k represents the quantization parameter of the k-th macroblock in the picture.

Thus, Complexity, calculated by equation (1), represents the value obtained by multiplying the generated code amount of the picture by the average value of the quantizer scale. The value of Complexity becomes smaller as the amount of motion in the image decreases. In addition, the value of Complexity becomes larger as the amount of motion in the image increases.
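The Complexity computation of equations (1) and (2) can be sketched as follows. This is a minimal illustration, not the patent's implementation; in particular, the mapping from the quantization parameter QP_k to the quantizer scale Quant_k is assumed here to be the standard H.264 step-size relation, in which the step doubles every 6 QP units.

```python
def quantizer_scale(qp):
    # Assumed H.264 relation between the quantization parameter QP and
    # the quantizer step size: the step doubles every 6 QP units.
    return 2.0 ** ((qp - 4) / 6.0)

def picture_complexity(generated_bits, macroblock_qps):
    # Equation (2): average quantizer scale over all macroblocks.
    avg_quant = sum(quantizer_scale(qp) for qp in macroblock_qps) / len(macroblock_qps)
    # Equation (1): generated code amount times the average quantizer scale.
    return generated_bits * avg_quant
```

A picture with little motion compresses into few bits at a fine quantizer, so both factors, and hence Complexity, stay small; a high-motion picture drives both factors up.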

At step S22, the deblocking filter 124 adjusts the parameters related to deblocking processing. Specifically, the rate control unit 119 transmits information indicating the calculated Complexity value to the deblocking filter 124. The deblocking filter 124 adjusts the values of disable_deblocking_filter_idc, slice_alpha_c0_offset_div2 and slice_beta_offset_div2 in accordance with the Complexity value of the image subjected to deblocking processing.

Disable_deblocking_filter_idc is a parameter that sets whether or not deblocking processing is applied, and it can be set for each slice. Disable_deblocking_filter_idc is set to 0 when deblocking processing is applied, is set to 1 when deblocking processing is not applied, and is set to 2 when deblocking processing is not applied at boundaries between slices.

Slice_alpha_c0_offset_div2 is a parameter for adjusting the degree to which deblocking processing is applied to the boundaries between blocks when a slice is divided into blocks of 4×4 pixels, and it can be set for each slice. Slice_alpha_c0_offset_div2 can be set within the range from -6 to +6. Decreasing its value decreases the degree to which deblocking processing is applied. Increasing its value increases the degree to which deblocking processing is applied.

Slice_beta_offset_div2 is a parameter for adjusting the degree to which deblocking processing is applied to the pixels within a block when a slice is divided into blocks of 4×4 pixels, and it can be set for each slice. Slice_beta_offset_div2 can be set within the range from -6 to +6. Decreasing its value decreases the degree to which deblocking processing is applied. Increasing its value increases the degree to which deblocking processing is applied.

At step S22, for example, in the case when Complexity is less than a specified threshold value Thc, disable_deblocking_filter_idc is set to 1. Thus, for an image in which only a small amount of block noise is generated and in which there is very little motion, deblocking processing is not applied.

In addition, for example, in the case when the Complexity value is equal to or greater than the threshold value Thc, the values of slice_alpha_c0_offset_div2 and slice_beta_offset_div2 are adjusted in accordance with the Complexity value. For example, as the Complexity value becomes smaller, slice_alpha_c0_offset_div2 and slice_beta_offset_div2 are set closer to -6. In addition, as the Complexity value becomes larger, slice_alpha_c0_offset_div2 and slice_beta_offset_div2 are set closer to +6. Thus, for an image in which block noise is less likely to be generated and in which there is little motion, the degree to which deblocking processing is applied is reduced. In addition, for an image in which block noise is more likely to be generated and in which there is a lot of motion, the degree to which deblocking processing is applied is increased.
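The threshold test and the monotonic offset adjustment of step S22 can be sketched as below. The linear mapping and the c_min/c_max scaling endpoints are illustrative assumptions; the text only requires that the offsets move toward -6 for small Complexity and toward +6 for large Complexity.

```python
def adjust_deblocking_params(complexity, thc, c_min, c_max):
    # Below the threshold Thc: disable deblocking for the slice entirely.
    if complexity < thc:
        return {"disable_deblocking_filter_idc": 1,
                "slice_alpha_c0_offset_div2": 0,
                "slice_beta_offset_div2": 0}
    # Otherwise map Complexity onto the legal offset range [-6, +6].
    t = (complexity - c_min) / (c_max - c_min)
    t = min(max(t, 0.0), 1.0)  # clamp to [0, 1]
    offset = round(-6 + 12 * t)
    return {"disable_deblocking_filter_idc": 0,
            "slice_alpha_c0_offset_div2": offset,
            "slice_beta_offset_div2": offset}
```

With Thc equal to c_min, a picture just above the threshold receives the weakest filtering (-6) and the filtering strength grows with Complexity until it saturates at +6.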

At step S23, the deblocking filter 124 performs deblocking processing, and the deblocking control processing ends. The decoded image subjected to deblocking processing is stored as a reference image in the frame memory 125. It should be noted that when disable_deblocking_filter_idc is set to 1, deblocking processing is not performed.

As described above, deblocking processing is appropriately performed on the decoded image in accordance with the Complexity value, and a reference image from which block noise has been removed while the texture is preserved is generated. Thus, image quality can be improved for images that are encoded using inter-frame prediction with this reference image.

It should be noted that the threshold value Thc can be changed in accordance with the picture type, that is, in accordance with whether the picture is an I-picture, a P-picture or a B-picture.

In addition, the encoding complexity of P-pictures and B-pictures, which are pictures encoded using inter-frame prediction, can be normalized using the Complexity value of an I-picture, which is a picture encoded using intra-frame prediction before the picture in question, and deblocking processing can be controlled on the basis of the normalized value (Norm_Complexity). Norm_ComplexityPpic, which is obtained by normalizing the Complexity value of a P-picture, and Norm_ComplexityBpic, which is obtained by normalizing the Complexity value of a B-picture, are estimated using the following equations (4)-(8):

ComplexityIpic = PictureGeneratedBitsIpic × PictureAverageQuantIpic   (4)

ComplexityPpic = PictureGeneratedBitsPpic × PictureAverageQuantPpic   (5)

ComplexityBpic = PictureGeneratedBitsBpic × PictureAverageQuantBpic   (6)

Norm_ComplexityPpic = ComplexityPpic / ComplexityIpic   (7)

Norm_ComplexityBpic = ComplexityBpic / ComplexityIpic   (8)

It should be noted that ComplexityIpic, PictureGeneratedBitsIpic and PictureAverageQuantIpic represent the encoding complexity, the generated code amount and the average quantizer scale of the I-picture, respectively. In addition, ComplexityPpic, PictureGeneratedBitsPpic and PictureAverageQuantPpic represent the encoding complexity, the generated code amount and the average quantizer scale of the P-picture, respectively. In addition, ComplexityBpic, PictureGeneratedBitsBpic and PictureAverageQuantBpic represent the encoding complexity, the generated code amount and the average quantizer scale of the B-picture, respectively.

For example, in the case when the picture subjected to deblocking processing is a P-picture or a B-picture, if Norm_Complexity is less than a specified threshold value Thcn, disable_deblocking_filter_idc is set to 1. Thus, for an image in which only a small amount of block noise is generated and in which there is very little motion, deblocking processing is not applied.

In addition, for example, when Norm_Complexity is equal to or greater than the threshold value Thcn, the values of slice_alpha_c0_offset_div2 and slice_beta_offset_div2 are adjusted in accordance with the Norm_Complexity value. For example, as the Norm_Complexity value becomes smaller, slice_alpha_c0_offset_div2 and slice_beta_offset_div2 are set closer to -6. In addition, as Norm_Complexity becomes larger, slice_alpha_c0_offset_div2 and slice_beta_offset_div2 are set closer to +6. Thus, for an image in which block noise is less likely to be generated and in which there is little motion, the degree to which deblocking processing is applied is reduced. In addition, for an image in which block noise is more likely to be generated and in which there is a lot of motion, the degree to which deblocking processing is applied is increased.

Because Norm_ComplexityPpic and Norm_ComplexityBpic represent the motion of the P-picture and the B-picture relative to the motion of the I-picture taken as 1, the complexity of the motion of each picture can thus be identified more accurately, and deblocking processing can be performed more appropriately. Thus, image quality for images that are encoded using inter-frame prediction can be further improved.
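The normalization across a sequence of pictures can be sketched as follows. Pictures are assumed to arrive in coding order, and each P- or B-picture is normalized by the Complexity of the most recently coded I-picture, the preference stated in the text; the function name and the tuple format are illustrative.

```python
def normalize_complexities(pictures):
    # pictures: list of (picture_type, complexity) pairs in coding order.
    # I-pictures keep their raw Complexity; P- and B-pictures are divided
    # by the Complexity of the last coded I-picture, as in the
    # reconstructed equations (7) and (8).
    result = []
    last_i_complexity = None
    for ptype, complexity in pictures:
        if ptype == "I":
            last_i_complexity = complexity
            result.append((ptype, complexity))
        else:
            result.append((ptype, complexity / last_i_complexity))
    return result
```

The normalized values express each inter-coded picture's motion on a scale where the I-picture's complexity equals 1, which is what makes a single threshold Thcn usable across pictures of different types.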

It should be noted that the threshold value Thcn can be changed in accordance with the picture type, that is, in accordance with whether the picture is a P-picture or a B-picture.

In addition, it is preferable that the I-picture used for normalization be the I-picture encoded most recently before the picture in question, or the I-picture that is referenced when encoding the picture.

In addition, for I-pictures, deblocking processing is controlled based on the Complexity value, as described above.

Next, with reference to the flowchart shown in figure 4, a second embodiment of the deblocking control processing performed by the image processing apparatus 101 during the encoding processing described with reference to figure 2 will be described.

At step S41, the activity calculation unit 141 calculates the Activity value. Specifically, the activity calculation unit 141 calculates Activity as the amount of detail representing the complexity of the image to be encoded, using the following equation (9):

[Expression 3]

Activity = (1/MBNum) × Σ(k=1..MBNum) act_k   (9)


Here, act_k represents the activity of the k-th macroblock in the picture, and it is calculated using the following equation (10):

[Expression 4]

act_k = 1 + min(sblk=1..8) var_sblk   (10)

Here, one macroblock is divided into four sub-blocks of 8×8 pixels each, var_sblk is the value denoting the variance of the pixel values of a divided sub-block, and it is estimated using the following equations (11) and (12):

[Expression 5]

var_sblk = (1/64) × Σ(k=1..64) (P_k − P_mean)²   (11)

P_mean = (1/64) × Σ(k=1..64) P_k   (12)

Here, P_k represents the pixel value of the k-th pixel in the sub-block. In addition, var_sblk is obtained for each sub-block in two cases, the frame DCT coding mode and the field DCT coding mode, and min(sblk=1..8) var_sblk in equation (10) represents the minimum of the obtained var_sblk values.

Thus, Activity, calculated by equation (9), represents the average activity of the individual macroblocks in the picture, and is the value used, for example, for the rate control defined in MPEG-2 Test Model 5 (TM5). The value of Activity becomes smaller as the degree of variation in pixel values decreases. In addition, the value of Activity becomes larger as the degree of variation in pixel values increases.
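Equations (9)-(12) can be sketched as follows. Each macroblock is represented here as a list of sub-block pixel lists (up to eight per macroblock: the four frame-DCT and four field-DCT 8×8 partitions); this data layout is an assumption made for illustration.

```python
def subblock_variance(pixels):
    # Equations (11) and (12): mean and variance of the pixel values
    # of one 8x8 sub-block (64 pixels).
    n = len(pixels)
    mean = sum(pixels) / n
    return sum((p - mean) ** 2 for p in pixels) / n

def macroblock_activity(subblocks):
    # Equation (10): one plus the minimum sub-block variance.
    return 1.0 + min(subblock_variance(s) for s in subblocks)

def picture_activity(macroblocks):
    # Equation (9): average macroblock activity over the picture.
    activities = [macroblock_activity(mb) for mb in macroblocks]
    return sum(activities) / len(activities)
```

Taking the minimum over the sub-block variances means a single flat sub-block drives act_k down to 1, so macroblocks containing any smooth region register as low-activity even if part of the macroblock is busy.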

At step S42, the deblocking filter 124 adjusts the parameters related to the deblocking processing. In particular, the activity calculation unit 141 transmits information indicating the calculated Activity to the deblocking filter 124. The deblocking filter 124 adjusts the values of disable_deblocking_filter_idc, slice_alpha_c0_offset_div2 and slice_beta_offset_div2 in accordance with the value of Activity of the image subjected to the deblocking processing.

For example, when the value of Activity is less than a predetermined threshold value Tha, disable_deblocking_filter_idc is set to 1. That is, for a smooth image in which only a small amount of block noise is generated and only very small changes in the pixel values occur, the deblocking processing is not applied.

In addition, for example, when the value of Activity is equal to or greater than the threshold value Tha, the values of slice_alpha_c0_offset_div2 and slice_beta_offset_div2 are adjusted in accordance with the value of Activity. For example, as the value of Activity becomes smaller, slice_alpha_c0_offset_div2 and slice_beta_offset_div2 are set to values close to -6, and as the value of Activity becomes larger, they are set to values close to +6. Thus, for an image in which block noise is less likely to be generated and in which few changes in the pixel values occur, the degree to which the deblocking processing is applied is reduced. Conversely, for a complex image, in which block noise is more likely to be generated and in which many changes in the pixel values occur, the degree to which the deblocking processing is applied is increased.
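The control rule of step S42 might be sketched as below. The specific threshold, the activity range and the linear mapping onto the offset range [-6, +6] are all assumptions: the embodiment only states the direction of the adjustment, not the mapping itself.

```python
def deblocking_params_from_activity(activity, tha, act_min=1.0, act_max=1000.0):
    """Sketch of the step-S42 control rule. tha, act_min and act_max
    are assumed tuning values, not taken from the embodiment."""
    if activity < tha:
        # smooth image: do not apply deblocking at all
        return {"disable_deblocking_filter_idc": 1,
                "slice_alpha_c0_offset_div2": 0,
                "slice_beta_offset_div2": 0}
    # map Activity linearly into [-6, +6]: weaker filtering for
    # simpler images, stronger filtering for more complex images
    t = (activity - act_min) / (act_max - act_min)
    t = min(max(t, 0.0), 1.0)
    offset = round(-6 + 12 * t)
    return {"disable_deblocking_filter_idc": 0,
            "slice_alpha_c0_offset_div2": offset,
            "slice_beta_offset_div2": offset}
```

The same shape of rule applies, with different inputs, to the DCtotal-based and Et-based embodiments described later.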

At step S43, as in the above-described processing at step S23 in Fig. 3, the deblocking processing is performed, and the deblocking control processing ends.

As described above, the deblocking processing is appropriately performed in accordance with the value of Activity, so that a decoded image and a reference image from which block noise has been removed are generated while the texture is maintained. Thus, the image quality of an image subjected to inter-frame predictive coding using that reference image can be improved.

In addition, when the value of Activity is used, the amount of detail representing the complexity of the image can be obtained before the image is encoded.

In addition, as described above, because the value of Activity is the value used for rate control as defined, for example, in MPEG-2 Test Model 5 (TM5), the value of Activity can also be calculated by the rate control module 119.

Note that, instead of the above-described value of Activity, i.e. the average of the activities of the individual macroblocks, another value reflecting the magnitude of the dispersion of the pixel values of the image can be used, such as the total sum of the activities of the individual macroblocks.

A third embodiment of the deblocking control processing performed by the image processing device 101 in the encoding processing, which was described above with reference to Fig. 2, will be described with reference to the flowchart shown in Fig. 5.

At step S61, the orthogonal transformation unit 142 calculates the total sum of the orthogonal transform coefficients. In particular, the orthogonal transformation unit 142 divides the image to be encoded into blocks having a predetermined size. It should be noted that, below, an example is described in which the image is divided into blocks of 4×4 pixels and the Hadamard transform is used as the orthogonal transform. The orthogonal transformation unit 142 performs the Hadamard transform on each block, using the following equation (13):

P' = H · P · H^T ... (13)

Here, P is the matrix of pixels of a 4×4-pixel block before the Hadamard transform is performed, and P' is the 4×4 matrix of transform coefficients after the Hadamard transform is performed. In addition, H is the Hadamard matrix of the fourth order, represented by the following equation (14), and H^T is the transpose of the fourth-order Hadamard matrix.

[Expression 6]

        | 1  1  1  1 |
H =     | 1 -1  1 -1 |
        | 1  1 -1 -1 |
        | 1 -1 -1  1 |     ... (14)

The orthogonal transformation unit 142 calculates, for each block, the sum Ph of the absolute values of the transform coefficients other than the coefficient at coordinates (0,0) of the transform coefficient matrix P' (the DC component coefficient). Thus, Ph is the sum of the absolute values of the AC component transform coefficients, which is correlated with the amount of code generated for the block after the Hadamard transform is performed. In addition, the orthogonal transformation unit 142 calculates the total DCtotal of Ph over all blocks of the image. It should be noted that DCtotal becomes smaller as the image becomes simpler, with its frequency components concentrated, and becomes larger as the image becomes more complex, with its frequency components dispersed.
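A minimal sketch of the Ph and DCtotal computation of equations (13) and (14), using plain Python lists. The function names are assumptions, and a production encoder would use a fast butterfly implementation rather than explicit matrix products.

```python
# 4th-order Hadamard matrix of equation (14)
H = [[1,  1,  1,  1],
     [1, -1,  1, -1],
     [1,  1, -1, -1],
     [1, -1, -1,  1]]

def matmul(a, b):
    """4x4 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def block_ac_sum(p):
    """Ph: sum of the absolute AC coefficients of P' = H * P * H^T
    (equation (13)) for one 4x4 block P, i.e. every coefficient of P'
    except the DC coefficient at coordinates (0,0)."""
    ht = [[H[j][i] for j in range(4)] for i in range(4)]  # H transposed
    pp = matmul(matmul(H, p), ht)
    return sum(abs(pp[i][j]) for i in range(4) for j in range(4)
               if (i, j) != (0, 0))

def dctotal(blocks):
    """DCtotal: total of Ph over all 4x4 blocks of the picture."""
    return sum(block_ac_sum(b) for b in blocks)
```

A flat block produces Ph = 0 (all energy in the DC coefficient), while any spatial variation spreads energy into the AC coefficients and raises DCtotal, matching the behaviour described above.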

At step S62, the deblocking filter 124 adjusts the parameters related to the deblocking processing. In particular, the orthogonal transformation unit 142 transmits information indicating the calculated value of DCtotal to the deblocking filter 124. The deblocking filter 124 adjusts the values of disable_deblocking_filter_idc, slice_alpha_c0_offset_div2 and slice_beta_offset_div2 in accordance with the value of DCtotal of the image subjected to the deblocking processing.

For example, when DCtotal is less than a predetermined threshold value Thd, disable_deblocking_filter_idc is set to 1. Thus, for a very simple image, in which only a small amount of block noise is generated and the frequency components are concentrated, the deblocking processing is not applied.

In addition, for example, when DCtotal is equal to or greater than the threshold value Thd, the values of slice_alpha_c0_offset_div2 and slice_beta_offset_div2 are adjusted in accordance with the value of DCtotal. For example, as DCtotal becomes smaller, slice_alpha_c0_offset_div2 and slice_beta_offset_div2 are set to values close to -6, and as DCtotal becomes larger, they are set to values close to +6. Thus, for a simple image, in which block noise is less likely to be generated and the frequency components are concentrated, the degree to which the deblocking processing is applied is reduced. Conversely, for a complex image, in which block noise is more likely to be generated and the frequency components are dispersed, the degree to which the deblocking processing is applied is increased.

At step S63, as in the above-described processing at step S23 in Fig. 3, the deblocking processing is performed, and the deblocking control processing ends.

It should be noted that although an example has been described above in which the Hadamard transform is used as the orthogonal transform, other types of orthogonal transform can be used, such as the DCT (discrete cosine transform).

In addition, regardless of the type of the orthogonal transform, the size of the blocks obtained by the division is not limited to the above-described 4×4 pixels. For example, the block size can be set to any desired dimensions, for example, 8×8 pixels.

As described above, the deblocking processing is correctly performed in accordance with DCtotal, so that a decoded image and a reference image from which block noise has been removed are generated while the texture is maintained. Thus, the image quality of an image subjected to inter-frame predictive coding using that reference image can be improved.

In addition, since DCtotal represents the sum of the transform coefficients obtained by decomposing the image into frequency components through the orthogonal transform, its correlation with the encoding complexity of the image is high. Thus, the complexity of the image can be expressed with higher accuracy than with the value of Activity.

In addition, when DCtotal is used, the amount of detail representing the complexity of the image can be obtained before the image is encoded.

In addition, the orthogonal transform coefficients can be calculated by the orthogonal transformation module 115 instead of the orthogonal transformation unit 142.

In addition, instead of the above-described DCtotal, i.e. the total sum of the orthogonal transform coefficients of the AC components of the image, another value reflecting the magnitude of the orthogonal transform coefficients of the AC components of the image can be used, such as, for example, their average value.

A fourth embodiment of the deblocking control processing performed by the image processing device 101 in the encoding processing, which was described above with reference to Fig. 2, will be described with reference to the flowchart shown in Fig. 6.

At step S81, the prediction error summation module 120 calculates the total sum of the prediction errors. In particular, when the image to be encoded, that is, the image for which the difference is calculated by the adder 114, is a P-picture or a B-picture, the prediction error summation module 120 sums, over one image, the prediction errors transmitted from the adder 114. Thus, the total sum Et of the prediction errors is computed. It should be noted that the more easily the motion of the image is predicted, that is, the smaller and simpler the motion of the image, the smaller the value of Et. Conversely, the more difficult the motion of the image is to predict, that is, the larger and more complex the motion contained in the image, the larger the value of Et.
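The summation performed by the module 120 might be sketched as follows. Summing absolute differences is an assumption: the embodiment only states that the prediction errors delivered by the adder 114 are summed over one picture, and the function name is likewise assumed.

```python
def total_prediction_error(pred_frame, actual_frame):
    """Et: sum over the picture of the absolute prediction errors
    between the predicted image and the image to be encoded, both
    given as lists of pixel rows (a sketch; absolute differences
    are an assumed error measure)."""
    return sum(abs(p - a)
               for pred_row, act_row in zip(pred_frame, actual_frame)
               for p, a in zip(pred_row, act_row))
```

A perfectly predicted picture yields Et = 0, and Et grows with the amount and complexity of motion, matching the description above.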

At step S82, the deblocking filter 124 adjusts the parameters related to the deblocking processing. In particular, the prediction error summation module 120 transmits to the deblocking filter 124 information indicating the calculated total sum Et of the prediction errors. The deblocking filter 124 adjusts the values of disable_deblocking_filter_idc, slice_alpha_c0_offset_div2 and slice_beta_offset_div2 in accordance with the value of Et.

For example, when Et is less than a predetermined threshold value The, disable_deblocking_filter_idc is set to 1. Thus, for an image in which only a small amount of block noise is generated and very little motion occurs, the deblocking processing is not applied.

In addition, for example, when the value of Et is equal to or greater than the threshold value The, the values of slice_alpha_c0_offset_div2 and slice_beta_offset_div2 are adjusted in accordance with the value of Et. For example, as the value of Et becomes smaller, slice_alpha_c0_offset_div2 and slice_beta_offset_div2 are set to values close to -6, and as the value of Et becomes larger, they are set to values close to +6. Thus, for a simple image, in which block noise is less likely to be generated and little motion occurs, the degree to which the deblocking processing is applied is reduced. Conversely, for a complex image, in which block noise is more likely to be generated and more motion occurs, the degree to which the deblocking processing is applied is increased.

At step S83, as in the above-described processing at step S23 in Fig. 3, the deblocking processing is performed, and the deblocking control processing ends.

As described above, the deblocking processing is appropriately performed in accordance with the value of Et, so that a decoded image and a reference image from which block noise has been removed are generated while the texture is maintained. Thus, the image quality of an image subjected to inter-frame predictive coding using that reference image can be improved.

In addition, when Et is used, the amount of detail representing the complexity of the image can be obtained before the image is encoded.

In addition, instead of the above-described Et, which represents the total sum of the prediction errors, another value reflecting the magnitude of the prediction errors of the image can be used, such as, for example, the average value of the prediction errors.

As described above, the deblocking processing can be properly performed in accordance with the properties of the image. As a result, the subjective image quality can be improved.

It should be noted that although examples have been described above in which any one of the values Complexity, Activity, DCtotal and Et is used individually to adjust the values of disable_deblocking_filter_idc, slice_alpha_c0_offset_div2 and slice_beta_offset_div2, the complexity of the image may instead be determined using a combination of these values, and the values of disable_deblocking_filter_idc, slice_alpha_c0_offset_div2 and slice_beta_offset_div2 may be adjusted based on the result.

In addition, although an example has been described above in which encoding is performed using the H.264/AVC method, the present invention is also applicable to the case in which encoding is performed using another encoding method in which a deblocking filter is applied in the loop, such as, for example, MPEG-4 (Moving Picture Experts Group, phase 4) or VC-1 (Video Codec 1).

The above-described processing sequence can be performed using hardware or software. When the processing sequence is performed by software, a program constituting this software is installed, from a program recording medium, into a computer built into dedicated hardware or, for example, into a general-purpose personal computer configured to perform various functions based on the various programs installed in it.

Fig. 7 shows a block diagram representing an example configuration of a personal computer 300 that performs the above-described processing sequence using a program. The CPU (Central Processing Unit) 301 executes various types of processing in accordance with a program stored in the ROM (Read-Only Memory) 302 or in the recording module 308. The program executed by the CPU 301, data, etc. are retained, as appropriate, in the RAM (Random Access Memory) 303. The CPU 301, the ROM 302 and the RAM 303 are connected to each other via the bus 304.

The input/output interface 305 is connected to the CPU 301 via the bus 304. The input module 306, consisting of a keyboard, a mouse, a microphone, etc., and the output module 307, consisting of a display, a loudspeaker, etc., are connected to the input/output interface 305. The CPU 301 executes various types of processing in accordance with instructions entered through the input module 306, and outputs the processing results to the output module 307.

The recording module 308, connected to the input/output interface 305, consists, for example, of a hard disk. The recording module 308 stores the program to be executed by the CPU 301 and various data. The communication module 309 communicates with external devices through a network such as the Internet or a local area network.

In addition, the program can be obtained via the communication module 309 and stored in the recording module 308.

When a removable medium 311, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is installed in the drive 310, which is connected to the input/output interface 305, the drive 310 drives the removable medium 311 and reads the program and data recorded on it. The obtained program and data are transferred to and stored in the recording module 308 when necessary.

The program recording medium, which is installed in the computer and contains the program executable by the computer, consists of the removable medium 311, which is a packaged medium such as a magnetic disk (including a flexible disk), an optical disk (including a CD-ROM (Compact Disc Read-Only Memory) or a DVD (Digital Versatile Disc)), a magneto-optical disk or a semiconductor memory; the ROM 302, in which the program is recorded temporarily or permanently; or the hard disk forming the recording module 308, as shown in Fig. 7. The program is stored on the program recording medium, when necessary, via the communication module 309, which is an interface such as a router or a modem, using a wired or wireless communication medium such as a local area network, the Internet or digital satellite broadcasting.

It should be noted that, in this specification, the steps describing the program stored on the program recording medium include not only processing performed in time series in accordance with the written order but also processing performed in parallel or independently; such processing need not be carried out in time sequence.

In addition, embodiments of the present invention are not limited to the above-described embodiments, and various changes can be made without departing from the essence of the present invention.

1. An image processing device comprising:
a deblocking filter which performs deblocking processing on a decoded image obtained by decoding a second image that is used for motion estimation of a first image when the first image is encoded, and that has been encoded prior to the first image; and
a means of calculating an amount of detail, configured to calculate an amount of detail representing the encoding complexity of the second image,
in which the deblocking filter controls whether or not the deblocking processing is to be applied to the decoded image, or controls the degree to which the deblocking processing is to be applied to the decoded image, on the basis of the amount of detail in the case when the second image is an image encoded using inter-frame prediction, and the means of calculating the amount of detail sets, as the amount of detail, a value obtained by normalizing the encoding complexity of the second image by the encoding complexity of a third image encoded using intra-frame prediction before the second image is encoded.

2. The image processing device according to claim 1, in which the means of calculating the amount of detail divides the second image, which has not been encoded, into a plurality of blocks and calculates the amount of detail based on the variance of the pixel values of each block.

3. The image processing device according to claim 1, in which the means of calculating the amount of detail divides the second image, which has not been encoded, into a plurality of blocks and calculates the amount of detail on the basis of the transform coefficients obtained by performing an orthogonal transform on each block.

4. The image processing device according to claim 1, in which the means of calculating the amount of detail calculates the amount of detail on the basis of the prediction error, which is the difference between a predicted image for the second image, predicted using inter-frame prediction, and the second image, which has not been encoded.

5. The image processing device according to claim 1, in which the image is encoded using the H.264/AVC (Advanced Video Coding) method, and in which the deblocking filter controls whether or not the deblocking processing is to be applied to the decoded image, or controls the degree to which the deblocking processing is to be applied to the decoded image, by adjusting the values of disable_deblocking_filter_idc, slice_alpha_c0_offset_div2 or slice_beta_offset_div2.

6. The image processing device according to claim 1, in which the image is encoded using the MPEG-4 (Moving Picture Experts Group, phase 4), H.264/AVC (Advanced Video Coding) or VC-1 (Video Codec 1) method.

7. An image processing method comprising the following steps:
calculating an amount of detail representing the encoding complexity of a second image that is used for motion estimation of a first image when the first image is encoded, and that has been encoded prior to the first image; and
controlling whether or not deblocking processing is to be applied to a decoded image obtained by decoding the second image, or controlling the degree to which the deblocking processing is to be applied to the decoded image, on the basis of the amount of detail in the case when the second image is an image encoded using inter-frame prediction, wherein the amount of detail is set as a value obtained by normalizing the encoding complexity of the second image by the encoding complexity of a third image encoded using intra-frame prediction before the second image is encoded.

8. A recording medium containing a program for causing a computer to execute an image processing method comprising the following steps:
calculating an amount of detail representing the encoding complexity of a second image that is used for motion estimation of a first image when the first image is encoded, and that has been encoded prior to the first image; and
controlling whether or not deblocking processing is to be applied to a decoded image obtained by decoding the second image, or controlling the degree to which the deblocking processing is to be applied to the decoded image, on the basis of the amount of detail in the case when the second image is an image encoded using inter-frame prediction, wherein the amount of detail is set as a value obtained by normalizing the encoding complexity of the second image by the encoding complexity of a third image encoded using intra-frame prediction before the second image is encoded.



 

Same patents:

FIELD: information technology.

SUBSTANCE: disclosed is an image decoding method comprising steps of parsing network abstraction layer (NAL) units of a base view (S200); decoding an image of the base view (S202); parsing multiview video coding (MVC) extension parameters of a non-base view (S204); searching whether or not prefix NAL units for a base view are present (S205); either calculating MVC extension parameters for the base view when no prefix NAL units are present (S206) or parsing the MVC extension parameters of the base view when prefix NAL units for the base view are present (S207); and decoding the non-base view using the MVC extension parameters of the base view and the MVC extension parameters of the non-base view (S210).

EFFECT: providing multiview video coding methods of multiview video decoding methods, even when prefix NAL units are not used.

2 cl, 23 dwg

FIELD: information technologies.

SUBSTANCE: method of video coding includes establishment of candidates of reference pixels for pixels within a previously specified range of distances measured from a target coding unit; generation of a predicted signal by means of serial selection of reference pixels used for inner prediction of the target coding unit, among reference pixels-candidates, whenever a condition of distance from the target coding unit varies, and by generation of a predicted signal by reference pixels for each condition of distance; calculation of costs for coding to implement coding with inner prediction of the target coding unit using each generated predicted signal; final detection of reference pixels used for inner prediction of the target coding unit, on the basis of each calculated cost for coding; and coding of information indicating position of detected reference pixels.

EFFECT: provision of efficient internal prediction of an image, which contains eclipses or noise, or to an image, where signals arise that have similar spatial frequencies, such images may not be processed by means of a regular internal prediction.

10 cl 18 dwg

FIELD: information technologies.

SUBSTANCE: image coding device determines a basic factor of quantisation (QPmb), serving as a predictable quantisation factor in case, when a generated code of the main coding approaches the target size of the code during coding of the input image. The image coding device codes the input image for each increment during control with feedback by performance of quantisation with the help of an adapted quantisation parameter (QPt) on the basis of an average parameter.

EFFECT: invention provides for compression of a generated code size for each image increment below the target size of a code in a proper manner without use of a quantisation factor serving as the basis for a quantisation pitch, with high deviation.

14 cl, 7 dwg

FIELD: physics, communication.

SUBSTANCE: invention relates to picture digital signal encoder/decoder in appropriate chroma signal format. Encoder comprises module to divide input bit flow into appropriate colour components, module to divide colour component input signal into blocks so that coding unit area signal can be generated, module to generate P-frame for said signal, module to determine prediction mode used for encoding by efficiency P-frame prediction, module to encode prediction error for encoding difference between predicted frame corresponding to prediction mode as defined by appropriate module and colour component input signal, and module for encoding with variable length of prediction mode code, output signal of prediction error encoding module and colour component identification flag indicating colour component wherein belongs said input bit flow resulted from division of colour components.

EFFECT: higher efficiency.

2 cl, 98 dwg

FIELD: information technology.

SUBSTANCE: disclosed is a liquid crystal display device comprising: means (14) of converting frame frequency via interpolation of an image which is corrected based on an interframe motion vector between frames of input video signals. During interpolation of signals which are represented by two or three successive identical images generated in television projector equipment for use in the next interpolated image, motion vectors are recalculated based on motion vectors (Va) used in generating the first interpolated image, thereby identifying motion vectors having high accuracy and fewer errors; a unit (10) for determining a repeated image, and uses an interval during which identical images successively appear without the need for identifying vectors for further increase in accuracy of motion vectors identified between the image and the other immediately preceding image.

EFFECT: high accuracy of identified vectors and generating interpolated images of higher quality, even if the device processes such data as three to two pull-down or two to two pull-down, having high probability of deterioration of accuracy of identified vectors due to a wide interval of movement.

3 cl, 6 dwg

FIELD: information technology.

SUBSTANCE: disclosed is a method for scalable encoding of video information, which calculates a weighting coefficient and indicates change in brightness between the encoded target region of the image and the region of a reference image in an overlying layer, calculates a motion vector, and generates a prediction signal by applying the weighting coefficient to the decoded signal of the region of the reference image indicated by the motion vector, and compensating for movement. If the immediate underlying region of the image has performed interframe prediction in the immediate underlying layer, the method identifies the region of the reference image of the immediate underlying layer which was used by the immediate underlying region of the image as a prediction reference for predicting movement, and calculates a weighting coefficient by applying the weighting coefficient which was used by the immediate underlying region of the image in weighted prediction of movement, to the DC component of the region of the image in the overlying layer, and accepting the result of application as the DC component of the immediate underlying region of the image.

EFFECT: high efficiency of scalable encoding.

26 cl, 24 dwg, 2 tbl

FIELD: information technology.

SUBSTANCE: target frame of a reference vector and a reference vector from already encoded frames are selected; information for labelling each frame is encoded; the reference vector is set to indicate a region in the target frame of the reference vector relative the target encoding region; the reference vector is encoded; the corresponding regions are searched using image information of the target region of the reference vector belonging to the target frame of the reference vector and is indicated through the reference vector and the reference frame; the reference region in the reference frame is determined based on the search result; a predicted image is formed using reference frame image information corresponding to the reference region; and the difference information between the target encoding region information and the predicted image is encoded.

EFFECT: efficient encoding of vector information used for inter-frame predictive coding, even when the reference frame used in the inter-frame predictive coding differs between the target encoding region and its neighbouring region.

24 cl, 17 dwg

FIELD: information technologies.

SUBSTANCE: method is proposed to code images, in which a coding object pixel value is forecasted, and the forecasted value is produced; data of probability distribution is calculated, which indicates the value that the initial pixel value has for the produced predicted value, by means of shifting, according to the predicted value, data of differential distribution of difference between the initial value of the pixel and the forecasted value when coding with forecasting. Data of differential distribution are stored in advance; the produced data of probability distribution are cut-off to hold data in the range from the lower limit to the upper limit for possible values of the initial pixel value; and the coding object pixel value is coded using cut-off data of probability distribution of the initial pixel value from the lower limit to the upper limit.

EFFECT: higher efficiency of coding with forecasting by rejecting calculation of difference between an initial pixel value and its forecasted value when doing time and space forecasting.

10 cl, 19 dwg

FIELD: information technology.

SUBSTANCE: videocoding method is suggested to be formed on the basis of information about mismatch between already coded reference image of camera and being coded target image of camera. This method includes step in which for each predetermined unit sections in differential image of one of following groups are selected: group of decoded differential image obtained using decoded differential image between already coded image of camera and image with compensated discrepancy, and group of decoded camera image obtained using decoding already coded camera image by means of determination if there is image with compensated discrepancy in corresponding position or not, i.e. if corresponding pixel in the image with compensated discrepancy has effective value or not.

EFFECT: higher efficiency of coding video with multiple viewpoints applying movement compensation to differential image, lower prediction difference in the part with both time redundancy and redundancy between cameras.

14 cl, 9 dwg

FIELD: information technology.

SUBSTANCE: image processing device is suggested including acquisition module intended to obtain data of moving image, containing multiple sequential frames and one or more image data corresponding to the frames and having higher spatial resolution then these frames; movement prediction module intended to detect movement vector between frames using moving image data; difference value calculation module intended to calculate difference value between given frame and frame corresponding to image data; and image generation module which allows to generate image data with compensated movement which data corresponds to given frame, on the basis of frame corresponding to image data and movement vector.

EFFECT: prevention of noise in the high-frequency component of the image data when generating data for a high-spatial-resolution image, by predicting motion from a sequence of low-spatial-resolution image data and performing motion compensation using high-spatial-resolution image data.

10 cl, 7 dwg
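The motion-vector detection step in the abstract can be illustrated with classic exhaustive block matching. This is a sketch under my own assumptions (the patent does not specify the matching criterion; sum of absolute differences is used here, and images are plain 2-D lists of luminance values):

```python
def motion_vector(ref, cur, block, pos, search):
    """Estimate the motion vector of one block by exhaustive block
    matching: try every displacement within the search range and keep
    the one with the smallest sum of absolute differences (SAD)."""
    by, bx = pos
    h, w = len(ref), len(ref[0])
    best, best_sad = (0, 0), float('inf')
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # Skip displacements that fall outside the reference frame.
            if not (0 <= by + dy and by + dy + block <= h and
                    0 <= bx + dx and bx + dx + block <= w):
                continue
            sad = sum(abs(cur[by + y][bx + x] - ref[by + dy + y][bx + dx + x])
                      for y in range(block) for x in range(block))
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best
```

In the device described above, the resulting vector would then be used to warp the high-resolution image data toward the given low-resolution frame.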

FIELD: information technology.

SUBSTANCE: disclosed is a method of forming a composite image from a separate image, wherein the separate image is obtained, the separate image is compared with the composite image, a mismatch value is obtained from a pixel-by-pixel comparison of the separate image and the composite image, and the mismatch value is reduced by changing at least one of the composite image and the separate image.

EFFECT: high accuracy and speed of forming a composite image based on a separate image.

44 cl, 6 dwg
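One iteration of the mismatch-reduction loop described above can be sketched as follows. This is an illustrative guess at one possible update rule (blending the composite toward the separate image), not the patented method; images are flattened lists of pixel values and the blend rate is my own parameter:

```python
def update_composite(composite, separate, rate=0.5):
    """Measure the per-pixel mismatch between the composite image and a
    newly acquired separate image, then move the composite toward the
    separate image to reduce that mismatch. Returns the updated
    composite and the mismatch measured before the update."""
    mismatch = sum(abs(c - s) for c, s in zip(composite, separate))
    updated = [c + rate * (s - c) for c, s in zip(composite, separate)]
    return updated, mismatch
```

Repeating the call with each new separate image drives the mismatch value down monotonically for any rate in (0, 1].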

Video camera // 2473968

FIELD: information technology.

SUBSTANCE: the video camera has a portable housing with a light-focusing lens, a light-sensitive device that converts the focused light into source video data, a storage device installed in the housing, and an image processing system configured to pre-distort and compress the source video data, such that the compressed source video data remain essentially visually lossless after decompression, and to store the compressed source video data in the storage device.

EFFECT: reduced loss of quality of a compressed image during decompression and display.

22 cl, 18 dwg

FIELD: physics, computer engineering.

SUBSTANCE: the invention relates to a technique for obtaining digital images of an object, primarily for aerial photography and reconnaissance purposes. An image with a large dynamic range is formed by summing images of the object obtained at different exposures, which are rescaled to real brightness. The value of a parameter characterising capture quality is determined for each pixel of each image. The most informative elements of each image are assigned the maximum weight coefficient during summation.

EFFECT: high quality of the resultant image.
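The weighted summation described in the abstract can be sketched as follows. This is a minimal illustration under my own assumptions: images are flattened lists, exposure-scaled values stand in for "real brightness", and the hat-shaped weight function (favouring well-exposed, mid-range pixels) is my choice, not the patent's:

```python
def hat(v):
    """Example capture-quality weight: largest for mid-range pixel
    values, zero for fully dark or saturated pixels (8-bit range)."""
    return max(0.0, 1.0 - abs(v / 255.0 - 0.5) * 2.0)

def fuse_exposures(images, exposures, weight_fn):
    """Fuse multi-exposure captures of one scene: rescale each pixel to
    real brightness (divide by its exposure), then take a per-pixel
    weighted average so the most informative samples dominate."""
    n = len(images[0])
    num, den = [0.0] * n, [0.0] * n
    for img, exp in zip(images, exposures):
        for i, v in enumerate(img):
            w = weight_fn(v)          # informativeness of this sample
            num[i] += w * (v / exp)   # value rescaled to real brightness
            den[i] += w
    return [num[i] / den[i] if den[i] else 0.0 for i in range(n)]
```

A saturated pixel in the long exposure gets zero weight, so the fused value is taken entirely from the shorter exposures at that position.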

FIELD: information technologies.

SUBSTANCE: an image processing device with improved sharpening is implemented, containing: a smoothing unit for smoothing the brightness of an input image; a subtraction unit for subtracting the smoothed image from the input image brightness; a brightness-related gain calculation unit for calculating the brightness-related gain from the input image; a first multiplication unit for multiplying the difference image by the brightness-related gain; a summation unit for adding the multiplication result to the input image brightness; a colour-contrast gain calculation unit for calculating the colour-contrast gain from the colour contrast of the input image and the brightness-related gain; and a second multiplication unit for multiplying the colour contrast of the input image by the colour-contrast gain.

EFFECT: improved image processing with higher sharpness, achieved by changing brightness only, without changing hue.

9 cl, 6 dwg
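The brightness path of the device above (smooth, subtract, scale, add back) is the classic unsharp-masking pipeline, which can be sketched on a 1-D luminance signal. The box filter, radius, and constant gain are my own simplifications; the patent computes the gain from the image itself:

```python
def sharpen(lum, gain, radius=1):
    """Unsharp masking on a 1-D luminance signal: smooth the input,
    subtract to obtain the difference (detail) image, multiply it by a
    gain, and add the result back to the input brightness."""
    n = len(lum)
    smoothed = []
    for i in range(n):
        # Simple box filter, shrinking the window at the borders.
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        smoothed.append(sum(lum[lo:hi]) / (hi - lo))
    diff = [v - s for v, s in zip(lum, smoothed)]
    return [v + gain * d for v, d in zip(lum, diff)]
```

Flat regions have a zero difference image and pass through unchanged; edges and peaks are amplified in proportion to the gain.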

FIELD: information technology.

SUBSTANCE: to map individual single-pixel indicators to individual logical pixels, a control table is used that determines the order in which image data are displayed on the screen of a display device. One single-pixel indicator and one logical pixel correspond to each element of the control table. An indicator of a given colour is activated to emit light flux according to data of the same colour selected from the logical pixel. The selection of these data from the logical pixel and the activation of that single-pixel indicator are repeated at high speed. The arrangement, on the image data plane, of the geometric centres of non-overlapping or partially overlapping groups of logical pixels coincides with the arrangement, on the screen surface, of the notional geometric centres of the non-overlapping groups of single-pixel indicators that correspond to those groups of logical pixels. Each group of logical pixels corresponds one-to-one to a group of single-pixel indicators. To increase the brightness of the image displayed on the screen, the screen contains white single-pixel indicators, which are activated to emit light flux according to data calculated from the data contained in the logical pixels corresponding to those white single-pixel indicators.

EFFECT: broader functional capabilities, in particular the display of colour raster image data on the screen of a display device, and higher brightness of the image displayed on that screen.

16 cl, 21 dwg

FIELD: information technology.

SUBSTANCE: a source RGB image is obtained; the RGB image noise is filtered; the global contrast of the RGB image is adjusted; a brightness component is extracted from the R, G, B components of the colour image by converting RGB into a three-component colour system. Shadow tones of the image are adjusted by adding, to the brightness value of each image pixel, the product of three factors: the difference between pixel brightness values, corresponding to the image of details in the shadow tones; the inverted, doubled result of bilateral filtering in the dark half of the image brightness range, raised to a power that determines the width of the tonal range; and the shadow-tone amplification coefficient.

EFFECT: broader functional capabilities owing to amplification of local contrast in the highlights, shadows and midtones of an image.

9 dwg

FIELD: information technology.

SUBSTANCE: the method of integrating digital grayscale television and thermal images involves obtaining the original images, integrating them by per-pixel summation, forming a resultant image and normalising its brightness range. The average brightness is calculated over all pixel brightness values of the second-channel image, as well as the average absolute difference between the average brightness of the second-channel image and the brightness values of all its pixels. For each pixel of the integrated image, the sum of the brightness value of the main-channel image pixel and the absolute difference between the second-channel image brightness value and the average second-channel brightness is calculated; the average absolute difference between the second-channel pixel brightness and the average second-channel brightness is then subtracted from the obtained sum.

EFFECT: high quality of an image containing information elements of images of the same scene obtained in different spectral ranges.

6 dwg
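The per-pixel rule spelled out above translates directly into code. This sketch assumes flattened, equally sized images and my own function name:

```python
def fuse_tv_thermal(main, second):
    """Fuse a main-channel (TV) image with a second-channel (thermal)
    image: to each main-channel pixel, add the absolute deviation of
    the second-channel pixel from the second channel's mean brightness,
    then subtract the mean of those deviations so the overall
    brightness level of the main channel is preserved."""
    mean2 = sum(second) / len(second)          # average second-channel brightness
    dev = [abs(v - mean2) for v in second]     # per-pixel absolute deviation
    mad = sum(dev) / len(dev)                  # mean absolute deviation
    return [m + d - mad for m, d in zip(main, dev)]
```

Pixels where the thermal channel deviates strongly from its mean (hot or cold spots) are emphasised, while the mean brightness of the fused image stays equal to that of the main channel.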

FIELD: information technology.

SUBSTANCE: the method, intended for functional medical imaging, involves adaptively partitioning functional imaging data as a function of a spatially varying error model. The functional image data are partitioned according to an optimisation strategy. The data may be visualised or used to plan a course of treatment. In one variant, the image data are partitioned so as to vary their spatial resolution; in another, the number of clusters is varied based on the error model.

EFFECT: high efficiency of functional medical imaging owing to accounting for noise effects and other uncertainties in functional imaging.

40 cl, 6 dwg

FIELD: information technology.

SUBSTANCE: the presence or absence of a point in the first colour signal X of the second image B corresponding to each pixel position of the first colour signal X of the first image A is estimated, as is the position of each corresponding point. Each pixel position so estimated in colour signal Y of image A is assigned the image information of the corresponding position in colour signal Y of the second image B. Colour signal Y is generated at those pixel positions of image A for which no corresponding point was found, by image interpolation using the colour signal Y image information assigned to the pixels that do have corresponding points.

EFFECT: reduced image quality deterioration.

7 cl, 11 dwg

FIELD: information technology.

SUBSTANCE: image data are formed in image C using image A and image B, which has a higher bit depth than image A. Image C, having the same bit depth as image B, is formed by increasing the bit depth of image A through the application of tone mapping. The presence or absence of points on image B corresponding to each pixel position in image C, as well as the position of each corresponding point, is determined. Each pixel position on image C for which a corresponding point was found is assigned the image data from the corresponding position on image B. Image data at each pixel position on image C for which it was determined that no corresponding point exists are formed using the image data assigned where corresponding points do exist.

EFFECT: improved quality of image with low bit depth.

17 cl, 14 dwg

FIELD: information technology.

SUBSTANCE: the method involves parallel processing of the components of each decomposition level; brightness-contrast transformation parameters are determined by forming a brightness-level correction function and a contrast correction function, forming a matrix of contrast correction factors for the third decomposition level using the contrast correction function, and reconstructing a family of matrices of scaled contrast correction factors for spatial matching of the correction factors on each level with the values of the detail component.

EFFECT: high quality of displaying digital images.

8 dwg
