Method for automatically forming a procedure for generating a predicted pixel value, image encoding method, image decoding method, corresponding devices, corresponding programs, and data media storing the programs

FIELD: information technology.

SUBSTANCE: the method is implemented by having a computer automatically construct a prediction procedure that is suitably applied to an input image. The technical result is achieved by providing an image encoding device that encodes an image using predicted pixel values generated by a predetermined predicted value generation procedure, which predicts the value of a target pixel to be encoded using previously decoded pixels. The predicted value generation procedure having the best evaluation cost is selected from among predicted value generation procedures serving as parents and offspring, where the sum of the information content required to represent the tree structure and the amount of code estimated from the predicted pixel values obtained through the tree structure is used as the evaluation cost. The final predicted value generation procedure is formed by repeating the relevant operation.

EFFECT: increased efficiency of encoding and decoding, and further reduction of the relevant amount of code.

12 cl, 14 dwg

 

Technical field

The present invention relates to a method for automatically generating a predicted pixel value generation procedure, which automatically generates a predicted pixel value generation procedure used to realize highly accurate prediction of pixel values, and a corresponding device; an image encoding method for efficiently encoding an image using a predicted pixel value generation procedure generated by the above method, and a corresponding device; an image decoding method for efficiently decoding encoded data generated by the relevant image encoding, and a corresponding device; programs used to implement the above methods; and computer-readable storage media storing the programs.

This application claims priority from Japanese Patent Application No. 2008-275811, filed October 27, 2008, the contents of which are incorporated herein by reference.

Background art

Generally, when encoding an image, the value of each target pixel to be encoded is predicted using previously decoded pixels on the same line or on preceding lines, and the prediction residual is encoded.

In such predictive encoding, when encoding a target pixel (denoted "x"), the predicted value of x is generated by exploiting the fact that previously decoded peripheral pixels (e.g., Inw, In, Ine, and Iw in Fig. 14) generally have a high correlation with x, that is, by actually using such peripheral pixels. Hereinafter, the predicted value of x is denoted by x'. In the next step, the prediction error x - x' is subjected to entropy encoding.

For example, the lossless mode of JPEG (see Non-Patent Document 1) has seven types of predictors, and one of them is selected and used for predicting and encoding the pixel value.

In an example referred to as "average prediction", which is one of the JPEG prediction methods, the prediction is performed by computing the average of In and Iw as follows:

x' = (In + Iw)/2    Formula (1)

There are also six other prediction methods (in addition to the above), which include:

x' = In + Iw - Inw    (plane prediction)    Formula (2)

x' = In    (previous value prediction)    Formula (3)

x' = Inw + (In - Iw)/2    (complex prediction)    Formula (4)

JPEG-LS (see Non-Patent Document 2), which achieves higher efficiency than JPEG, uses a slightly more complex prediction method called "MED prediction", shown below.

if Inw ≥ max(Iw, In) then
  x' = min(Iw, In)
else if Inw ≤ min(Iw, In) then
  x' = max(Iw, In)
else
  x' = Iw + In - Inw

where max(x, y) is a function that returns the larger of x and y, and min(x, y) is a function that returns the smaller of x and y.
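
The MED rule above can be written directly in code. The following is a minimal Python sketch, provided only as an illustration of the rule (the function names med_predict and average_predict are arbitrary and not part of the standards cited above):

def med_predict(Iw, In, Inw):
    # MED (Median Edge Detector) prediction used in JPEG-LS: detect a
    # horizontal or vertical edge, otherwise fall back to the plane
    # prediction Iw + In - Inw.
    if Inw >= max(Iw, In):
        return min(Iw, In)
    elif Inw <= min(Iw, In):
        return max(Iw, In)
    else:
        return Iw + In - Inw

def average_predict(Iw, In):
    # Average prediction of Formula (1).
    return (In + Iw) // 2

# Example: med_predict(100, 60, 60) returns 100 (an edge is detected),
# while average_predict(100, 60) returns 80.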

In addition, a known method uses a weighted average of the peripheral pixels as the predicted value. In such a method, the weight of each peripheral pixel can be computed by the least-squares method for each image, or the coefficients can be optimized so as to minimize the relevant amount of code (see Non-Patent Document 3).

Additionally, although it does not relate to predictive coding, Non-Patent Document 4 discloses optimization of encoding parameters for image or video coding using a genetic algorithm (GA), where the "template" used to generate the context for encoding a binary image is modified by a genetic algorithm, which improves coding efficiency. Here, the template is treated as a parameter, and a fixed encoding procedure is used.

As a similar method in this direction, Non-Patent Document 5 discloses the use of a genetic algorithm to dynamically change the divided form of the unit block to be encoded, which increases coding efficiency. As with the template in Non-Patent Document 4, the encoding procedure in this case is fixed.

Prior art documents

Non-patent documents

Non-patent document 1: ISO/IEC SC29/WG1, ISO/IEC 10918-1, "Digital compression and coding of continuous-tone still images", p. 133, 1993.

Non-patent document 2: M. Weinberger, G. Seroussi, and G. Sapiro, "The LOCO-I Lossless Image Compression Algorithm: Principles and Standardization into JPEG-LS", IEEE Trans. Image Processing, Vol. 9, No. 8, pp. 1309-1324, August 2000.

Non-patent document 3: Ichiro Matsuda, Nau Ozaki, Yuji Umezu, and Susumu Itoh, "Lossless Coding Using Variable Block-Size Adaptive Prediction Optimized for Each Image", Proceedings of 13th European Signal Processing Conference (EUSIPCO 2005), WedAmPO3, Sep. 2005.

Non-patent document 4: Masaharu Tanaka, Hidenori Sakanashi, Masanobu Mizoguchi and Tetsuya Higuchi, "A Bi-level Image Coding for Digital Printing Using Genetic Algorithm", Proceedings of IEICE, D-II, Vol. J83-D-II, No. 5, pp. 1274-1283, May 2000.

Non-patent document 5: Koh'ichi Takagi, Atsushi Koike, Shuichi Matsumoto, and Hideo Yamamoto, "Moving Picture Coding Based on Region Segmentation Using Genetic Algorithm", Proceedings of IEICE, D-II, Vol. J83-D-II, No. 6, pp. 1437-1445, June 2000.

Disclosure of the invention

Problem to be solved by the invention

As described above, the conventional prediction methods are flexible only in the optimization of numerical parameters such as weighting coefficients for each image, whereas the prediction procedure itself, that is, the determination of which pixels are used to compute the prediction or of the formula used for conditional branching, is fixed.

Thus, conventionally, a new prediction procedure can be created only by a person, manually and by trial and error. Consequently, the structure of such a predictor cannot be more complex than what a person can comprehend.

In addition, there is no conventional way of creating a dedicated prediction procedure anew for each input image.

Additionally, in ordinary image processing of this kind, the target image to be obtained by the processing (i.e., training information or a reference) must be generated and provided manually by a person.

In light of the above circumstances, an object of the present invention is to provide novel techniques that improve encoding and decoding efficiency by having a computer automatically construct a prediction procedure suitably applied to the input image, thereby further reducing the relevant amount of code. Here, as in the conventional prediction methods, the present invention also uses previously decoded peripheral pixels to generate the predicted value.

Means for solving the problem

<1> Structure of the device for automatically generating a predicted pixel value generation procedure according to the present invention

First of all, the structure of the device for automatically generating a predicted pixel value generation procedure in accordance with the present invention will be described.

The device for automatically generating a predicted pixel value generation procedure in accordance with the present invention automatically generates a predicted value generation procedure that predicts the value of a target pixel to be encoded by using previously decoded pixels. The device has a structure which includes:

(1) a first device that generates a parent population by randomly generating predicted value generation procedures, each of which is represented by a tree structure;

(2) a second device that selects a set of predicted value generation procedures as parents from the parent population and generates one or more predicted value generation procedures as offspring on the basis of a predetermined tree-structure development (or evolution) method, which subjects the selected predicted value generation procedures to development (or evolution), where an existing predicted value generation function can be a leaf node of the tree;

(3) a third device that:

selects, from among the predicted value generation procedures serving as parents and offspring, the predicted value generation procedure having the minimum evaluation cost, where the sum of the information content required to represent the tree structure and the amount of code estimated from the predicted pixel values obtained through the tree structure is used as the evaluation cost, and the selected predicted value generation procedure therefore has the best evaluation value for the target image to be encoded; and

stores the selected predicted value generation procedure and one or more other predicted value generation procedures in the parent population; and

(4) a fourth device that causes the processes performed by the second and third devices to be repeated until a predetermined condition is satisfied, and outputs the predicted value generation procedure having the best evaluation value, obtained as a result of the repetition, as the final predicted value generation procedure.

In the above structure, the second device may generate the predicted value generation procedures as offspring on the basis of the predetermined tree-structure development method, where a function that outputs the coordinates of the pixel in the image can be a leaf node of the tree.

In addition, the first device may generate the parent population so that an existing predicted value generation function is included in the parent population.

The method for automatically generating a predicted pixel value generation procedure according to the present invention, which is implemented through the operations of the above-described processing devices, can also be implemented by a computer program. The computer program is provided by being stored on an appropriate computer-readable storage medium or via a network, and, when implementing the present invention, the program is installed and executed by a control device such as a CPU, whereby the present invention is realized.

In the device for automatically generating a predicted pixel value generation procedure having the above-described structure, when the parent population has been generated by randomly generating predicted value generation procedures, each of which is represented by a tree structure, a set of predicted value generation procedures is selected as parents from the parent population. Then, one or more predicted value generation procedures are generated as offspring on the basis of the predetermined tree-structure development method, which subjects the selected predicted value generation procedures to development. Then, the predicted value generation procedure having the minimum cost is selected, where the sum of the information content required to represent the tree structure (which can be obtained according to Algorithm 1 described below) and the amount of code estimated from the predicted pixel values obtained through the tree structure (which can be obtained according to Algorithm 2 described below) is used as the evaluation cost, and the selected predicted value generation procedure has the best evaluation value for the target image to be encoded. The selected predicted value generation procedure and some other predicted value generation procedures are stored in the parent population. A new predicted pixel value generation procedure is automatically generated by repeating the above processes.

Accordingly, the device for automatically generating a predicted pixel value generation procedure according to the present invention realizes highly accurate prediction of pixel values by automatically generating a predicted pixel value generation procedure on the basis of a tree-structure development method such as genetic programming. Since the sum of the information content required to represent the tree structure and the amount of code estimated from the predicted pixel values obtained through the tree structure is used as the evaluation cost, it is possible to automatically generate a predicted pixel value generation procedure that realizes highly efficient image coding while suppressing the growth of the tree.

In addition, since the tree-structure development method is performed under the condition that an existing predicted value generation function may be a leaf node of the tree, it is possible to achieve at least the same level of prediction performance as the conventional method.

To further ensure this effect, when generating the parent population, the parent population can be generated so that an existing predicted value generation function is included in the parent population.

In addition, the tree-structure development method may be provided such that a function that outputs the coordinates of the pixel in the image can be a leaf node of the tree. Accordingly, the predicted pixel value generation procedure can be switched locally, using the x and y coordinates, in accordance with the internal structure of the relevant image.

<2> Structure of the image encoding device and the image decoding device according to the present invention (first type)

When implementing a function of transmitting the automatically formed predicted pixel value generation procedure to the decoding side, the image encoding device and the image decoding device in accordance with the present invention have the following structures.

<2-1> Structure of the image encoding device according to the present invention

When implementing a function of transmitting the predicted pixel value generation procedure to the decoding side, the image encoding device in accordance with the present invention has a structure which includes:

(1) a first device that forms a predicted value generation procedure having the best evaluation value for the target image to be encoded, through the operation of the device for automatically generating a predicted pixel value generation procedure in accordance with the present invention;

(2) a second device that encodes the predicted value generation procedure formed by the first device (using, for example, Algorithm 3 described below);

(3) a third device that generates the predicted value of each pixel included in the target image on the basis of the predicted value generation procedure formed by the first device (using, for example, Algorithm 2 described below); and

(4) a fourth device that encodes a prediction residual signal computed using the predicted pixel values generated by the third device.

The image encoding method according to the present invention, which is implemented through the operations of the above-described processing devices, can also be implemented by a computer program. The computer program is provided by being stored on an appropriate computer-readable storage medium or via a network, and, when implementing the present invention, the program is installed and executed by a control device such as a CPU, whereby the present invention is realized.

In accordance with the above-described structure, the image encoding device according to the present invention realizes highly accurate prediction of pixel values on the basis of the predicted pixel value generation procedure formed by the automatic generation according to the present invention, and encodes the image using this predicted pixel value generation procedure. This makes it possible to realize highly efficient image coding.

<2-2> Structure of the image decoding device according to the present invention

In order to decode encoded data generated by the image encoding device according to the present invention described in the above section <2-1>, the image decoding device in accordance with the present invention has a structure which includes:

(1) a first device that decodes the encoded data of the predicted value generation procedure formed through the operation of the device for automatically generating a predicted pixel value generation procedure in accordance with the present invention (using, for example, Algorithm 4 described below), where the encoded data is generated on the encoding side;

(2) a second device that generates the predicted value of each pixel included in the target image to be decoded, on the basis of the predicted value generation procedure decoded by the first device (using, for example, Algorithm 2 described below);

(3) a third device that decodes the encoded data of the prediction residual signal computed using the predicted pixel values generated on the basis of the predicted value generation procedure decoded by the first device, where the encoded data is generated on the encoding side; and

(4) a fourth device that reproduces the target image to be decoded on the basis of the predicted pixel values generated by the second device and the prediction residual signal decoded by the third device.

The image decoding method according to the present invention, which is implemented through the operations of the above-described processing devices, can also be implemented by a computer program. The computer program is provided by being stored on an appropriate computer-readable storage medium or via a network, and, when implementing the present invention, the program is installed and executed by a control device such as a CPU, whereby the present invention is realized.

In accordance with the above-described structure, the image decoding device according to the present invention decodes the encoded data generated by the image encoding device according to the present invention described in the above section <2-1>.

<3> Structure of the image encoding device and the image decoding device according to the present invention (second type)

For each image encoded on the encoding side, the corresponding decoded image is also generated on the decoding side, so the same decoded image can exist on both the encoding and decoding sides. Thus, the transmission of the predicted pixel value generation procedure from the encoding side to the decoding side, which is required for implementing the present invention, can be eliminated.

To implement this elimination in the present invention, the image encoding device and the image decoding device in accordance with the present invention have the following structures.

<3-1> Structure of the image encoding device according to the present invention

When implementing a function of not transmitting the predicted pixel value generation procedure to the decoding side, the image encoding device in accordance with the present invention has a structure which includes:

(1) a first device that encodes a partial image of the target image to be encoded, the partial image having a predetermined size, using an existing predicted pixel value generation procedure that is formed without using the tree-structure development method;

(2) a second device that forms a predicted value generation procedure having the best evaluation value for encoding the decoded image obtained in the partial encoding of the target image by the first device, through the operation of the device for automatically generating a predicted pixel value generation procedure in accordance with the present invention, in which the information content required to represent the tree structure is evaluated as zero;

(3) a third device that generates the predicted value of each pixel included in the remaining partial image of the target image, which has not been encoded by the first device, on the basis of the predicted value generation procedure formed by the second device (using, for example, Algorithm 2 described below); and

(4) a fourth device that encodes a prediction residual signal computed using the predicted pixel values generated by the third device.

The image encoding method according to the present invention, which is implemented through the operations of the above-described processing devices, can also be implemented by a computer program. The computer program is provided by being stored on an appropriate computer-readable storage medium or via a network, and, when implementing the present invention, the program is installed and executed by a control device such as a CPU, whereby the present invention is realized.

In accordance with the above-described structure, the image encoding device according to the present invention realizes highly accurate prediction of pixel values on the basis of the predicted pixel value generation procedure formed by the automatic generation according to the present invention, and encodes the image using this predicted pixel value generation procedure. This makes it possible to realize highly efficient image coding.

In addition, the above-described image encoding device according to the present invention encodes a partial image of the target image, having a predetermined size, using an existing predicted pixel value generation procedure formed independently of the tree-structure development method, thereby generating a decoded image for the relevant partial image, where the decoded image can exist on both the encoding and decoding sides. The decoded image is used for forming the predicted pixel value generation procedure that has the best evaluation value and can also be formed on the decoding side. This makes it possible to dispense with transmitting the predicted pixel value generation procedure to the decoding side.

<3-2> Structure of the image decoding device according to the present invention

In order to decode encoded data generated by the image encoding device according to the present invention described in the above section <3-1>, the image decoding device in accordance with the present invention has a structure which includes:

(1) a first device that decodes the encoded data of a partial image of the target image to be decoded, which has a predetermined size and has been encoded using an existing predicted pixel value generation procedure formed without using the tree-structure development method, where the encoded data is generated on the encoding side;

(2) a second device that forms a predicted value generation procedure having the best evaluation value for encoding the partial image of the target image obtained by the first device, through the operation of the device for automatically generating a predicted pixel value generation procedure in accordance with the present invention, in which the information content required to represent the tree structure is evaluated as zero;

(3) a third device that generates the predicted value of each pixel included in the remaining partial image of the target image, which has not been decoded by the first device, on the basis of the predicted value generation procedure formed by the second device (using, for example, Algorithm 2 described below);

(4) a fourth device that decodes the encoded data of the prediction residual signal computed using the predicted pixel values generated on the basis of the predicted value generation procedure formed by the second device, where the encoded data is generated on the encoding side; and

(5) a fifth device that reproduces the remaining partial image of the target image, which has not been decoded by the first device, on the basis of the predicted pixel values generated by the third device and the prediction residual signal decoded by the fourth device.

The image decoding method according to the present invention, which is implemented through the operations of the above-described processing devices, can also be implemented by a computer program. The computer program is provided by being stored on an appropriate computer-readable storage medium or via a network, and, when implementing the present invention, the program is installed and executed by a control device such as a CPU, whereby the present invention is realized.

In accordance with the above-described structure, the image decoding device according to the present invention decodes the encoded data generated by the image encoding device according to the present invention described in the above section <3-1>.

In addition, the above-described image decoding device according to the present invention decodes the encoded data of the partial image of the target image, which has a predetermined size and has been encoded using an existing predicted pixel value generation procedure formed independently of the tree-structure development method, thereby generating a decoded image for the relevant partial image, where the decoded image can exist on both the encoding and decoding sides. The decoded image is used for forming the predicted pixel value generation procedure that has the best evaluation value and is also formed on the encoding side. This makes it possible to dispense with transmitting the predicted pixel value generation procedure from the encoding side.

Effect of the invention

As described above, in accordance with the present invention, (i) the pixel value prediction procedure is automatically modified by a computer while the information content of the predicted pixel value generation procedure is evaluated, or (ii) the development computation is also performed on the decoding side using pixels previously encoded by an existing method. Thus, it is possible to use a predictor that reduces the information content of the residual, and thereby to encode the image with a smaller amount of code.

In addition, since the candidate leaf nodes (in the present invention) also include a predictor function based on conventional methods, the prediction efficiency is greater than or equal to (at worst, equal to) that achievable by the conventional methods. Furthermore, the candidates may include the coordinates of each pixel to be encoded, which makes it possible to switch the prediction procedure in accordance with the internal structure of the relevant image.

Additionally, since the input image is naturally not fixed, the evolutionary image processing method (see Reference Document 4 described below) proposed by Nagao et al. is assumed to be applicable to various input images in general. However, it is practically impossible to guarantee that such a method can be favorably applied to every unknown input image. In contrast, the present invention is aimed only at efficiently encoding the current input image, and there is no need to consider such unknown inputs. Thus, the present invention has a high level of practicality.

In addition, since the sum of the information content of the residual and the information content of the tree is the quantity to be minimized in the present invention, there is no need for the "training information" that must be prepared by a person in general image processing applications.

Brief description of the drawings

Fig. 1 is a diagram showing the tree-structure representation of the average prediction.

Fig. 2 is a diagram explaining the crossover operation.

Fig. 3 is a diagram explaining the mutation operation.

Fig. 4 is a diagram explaining the inversion operation.

Fig. 5 is a diagram showing the structure of a device for forming a developed predictor as an embodiment of the present invention.

Fig. 6 is a flowchart of the operation performed by the device for forming a developed predictor according to the embodiment.

Fig. 7 is a diagram showing the structures of an image encoding device and an image decoding device as an embodiment of the present invention.

Fig. 8 is a flowchart of the operation performed by the image encoding device according to the embodiment.

Fig. 9 is a flowchart of the operation performed by the image decoding device according to the embodiment.

Fig. 10 is a diagram showing the structures of an image encoding device and an image decoding device as another embodiment of the present invention.

Fig. 11 is a flowchart of the operation performed by the image encoding device according to the embodiment.

Fig. 12 is a flowchart of the operation performed by the image decoding device according to the embodiment.

Fig. 13 is a diagram explaining an experiment carried out to verify the effectiveness of the present invention.

Fig. 14 is a diagram showing previously decoded pixels that surround the target pixel.

Preferred embodiments of the invention

The present invention uses genetic programming (GP) to have a computer automatically construct a prediction procedure that is suitably applied to an input video or still image (hereinafter simply referred to as an "image"), thereby further reducing the relevant amount of code.

The basic idea of the present invention is described below.

<1> Representation of the prediction procedure by a tree structure

For example, the average prediction expressed by the above Formula (1) can be represented by the tree structure shown in Fig. 1. For convenience, a "symbolic expression" (S-expression) can be used as a description format equivalent to such a tree-structure representation.

In the genetic programming explained below, a symbolic expression is commonly used to represent a tree structure.

For example, in symbolic expression form, the above max(x, y) is written as (max x y), and the above MED prediction is described as follows:

(T (sub (Inw) (max (Iw) (In))) (min (Iw) (In)) (T (sub (min (Iw) (In)) (Inw)) (max (Iw) (In)) (add (Iw) (sub (In) (Inw))))),

where each line has a specific meaning.

The above function T takes three arguments and performs conditional branching as follows:

(T A B C) = B    if A ≥ 0
          = C    if A < 0    Formula (5)

where T is taken from the initial letter of "ternary".

As described above, an arbitrary algorithm can be represented in the form of a "tree", and thus an algorithm for predicting a pixel value can likewise be represented as a tree.
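
As an illustration of the correspondence between a symbolic expression and its tree, the following Python sketch represents a tree as a nested tuple whose first element is the function name and whose remaining elements are the child subtrees; this nested-tuple encoding is an assumption made for the sketches in this description, not a notation prescribed by the invention:

def T(a, b, c):
    # Ternary operator of Formula (5): returns b if a >= 0, otherwise c.
    return b if a >= 0 else c

# Average prediction of Formula (1): (div (add In Iw) 2)
AVERAGE_TREE = ("div", ("add", "In", "Iw"), 2)

# MED prediction, matching the symbolic expression given above.
MED_TREE = ("T", ("sub", "Inw", ("max", "Iw", "In")),
                 ("min", "Iw", "In"),
                 ("T", ("sub", ("min", "Iw", "In"), "Inw"),
                       ("max", "Iw", "In"),
                       ("add", "Iw", ("sub", "In", "Inw"))))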

Instead of the above "T", the relevant function can use addition, subtraction, multiplication, division, trigonometric function, quadratic function, square root, exponent, logarithm, absolute value, minimum value, maximum value, and so on.

Since this function uses the arguments, it is in a different position than the relevant sheet of wood, and, thus, often referred to as "non-leaf node". The function can be prepared in advance, or can be set dynamically (see Reference document 1).

Reference document 1: J. Koza, "Genetic Programming II, Automatic Disovery of Reusable Programs", The MIT Press, pp. 41, 1998.

In addition, a numerical value, for example, 0,148 or value of the peripheral pixel, for example, Iw, In, Ine or Inw (see Fig. 14) can play the role of the destination node, which itself has value and is assigned to a leaf of a tree.

<2> Characteristics of the leaf nodes in the present invention

In the present invention, the candidate leaf nodes include a function that outputs a predicted value using an existing encoding method.

Ordinarily, a function requires arguments and therefore cannot be assigned to a leaf node. However, since a function that outputs a predicted value using an existing encoding method is based on that existing encoding method, the inputs it uses are determined by the method itself, and so this function can also be assigned to a leaf node.

As with the above peripheral pixel values, the value output by the "function that outputs a predicted value using an existing encoding method" is determined individually for each target pixel to be encoded.

The predicted value output by such a function may be, for example, a predicted value based on the least-squares method, a value obtained by plane prediction, a predicted value according to CALIC (see Reference Document 2 below), a predicted value according to JPEG-LS, and so on.

Reference document 2: X. Wu and N. Memon: "Context-Based, Adaptive, Lossless Image Coding", IEEE Transactions on Communications, Vol.45, No.4, pp. 437-444, Apr. 1997.

As described above, when the candidate leaf nodes include a function that outputs a predicted value using an existing encoding method, it is possible to achieve at least the same level of prediction performance as the conventional methods, essentially without any excessive load.

Thus, in the present invention, as explained below, the prediction procedure (i.e., the tree structure) for predicting pixel values is developed (or evolved) using genetic programming so as to automatically generate a predictor (i.e., a prediction procedure) having improved prediction efficiency, where the candidate leaf nodes include a function that outputs a predicted value using an existing encoding method. Accordingly, a conventional predictor can also be a target of the relevant development.

Thus, if a conventional predictor provides higher prediction efficiency than any other automatically generated predictor, that conventional predictor is ultimately generated automatically by the genetic programming, so that the same prediction efficiency as the conventional method is achieved essentially without any excessive load.

If a combination of a predictor developed (or evolved) by genetic programming and a conventional predictor provides more effective prediction, that combination is used for encoding.

Additionally, in the present invention, the candidate leaf nodes also include a function that outputs the coordinates of the target pixel to be encoded.

The coordinates output by this function may be normalized values, for example, x = -1 for pixels at the left edge and x = 1 for pixels at the right edge, and y = -1 for pixels at the top edge and y = 1 for pixels at the bottom edge of the image, or may be the actual coordinate values.

The function that outputs the coordinates of the target pixel can output the coordinates within the image plane without taking any arguments. Thus, this function can also be assigned to a leaf node.

As described above, when the candidate leaf nodes also include a function that outputs the coordinates (within the image plane) of the target pixel to be encoded, the processing can be switched locally, using the x and y coordinates, in accordance with the internal structure of the relevant image.

For example, it is possible to construct a predictor that switches its processing in accordance with the value of y, so that one prediction procedure is applied to the upper 5/6 of the image and a different prediction procedure is applied to the remaining lower 1/6.
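
Such switching can be expressed inside the tree itself. Below is a minimal sketch under the same nested-tuple assumption, where "y" denotes the normalized vertical coordinate described above (-1 at the top edge, 1 at the bottom edge); the threshold 2/3 corresponds to the boundary between the upper 5/6 and the lower 1/6 of the image and is purely illustrative:

# A = 2/3 - y is non-negative for the upper 5/6 of the image, so the
# ternary operator T selects the first sub-predictor there and the
# second sub-predictor for the lower 1/6.
SWITCHING_TREE = ("T", ("sub", 2.0 / 3.0, "y"),
                       ("add", "Iw", ("sub", "In", "Inw")),  # upper 5/6: plane prediction
                       "In")                                 # lower 1/6: previous value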

<3> Evaluation values for the prediction procedure, the information content of the tree, and the method of computing the predicted value

<3-1> Evaluation values for the prediction procedure

In order to develop (or evolve) the prediction procedure as explained below, an evaluation measure is required.

In the present invention, the sum (X + Y) of the following values is used as the evaluation value (referred to as "fitness" in genetic programming) of each individual, where an individual corresponds to a prediction procedure:

(i) the information content X (amount of information) required to represent the tree structure; and

(ii) the information content Y of the prediction residual, obtained by actually predicting the pixel values using the prediction procedure based on the above tree structure.

In the present invention, the evaluation value of an individual (a tree is referred to as an "individual" in genetic programming) is determined not only on the basis of the information content Y of the prediction residual, but also with regard to the information content X of the tree. One reason is the need to transmit the prediction procedure to the decoding side.

Another reason is that determining the evaluation value with regard to the information content X of the tree prevents the "bloat" problem (unlimited expansion of the tree) known in genetic programming.

<3-2> Information content X required to represent the tree structure

The information content X required to represent the tree structure is the total information content of all nodes of the tree.

The information content required to represent the tree structure can be computed using the following recursive function. It is assumed that a numerical value assigned to a node of the tree is expressed, for example, as a 10-bit fixed-point number.

Algorithm 1

function tree_info(t)
begin
  if t is a numeric value then
    return FUNCINFO + 10
  else begin
    s := FUNCINFO // information for the function number of t
    foreach (all child nodes c connected to t) begin
      s := s + tree_info(c)
    end
    return s
  end
end

It is assumed that the individual functions are assigned function numbers from 0 to N-1.

FUNCINFO has the following value, which indicates the amount of code generated when a function number is encoded with a fixed length:

FUNCINFO = log2(N+1)    Formula (6)

where (N+1) is used so as to also account for numerical values (for example, 2 or 1/4) in addition to the functions.

Although fixed-length coding is assumed above, variable-length coding or arithmetic coding that takes the frequency of each function into account may also be performed.

Then, given that "root" denotes the topmost node of the target prediction procedure (tree), the information content X of the tree can be computed as follows:

X = tree_info(root)    Formula (7)
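
A minimal Python transcription of Algorithm 1, under the nested-tuple tree representation assumed in the earlier sketches; the function set, which also contains zero-argument leaves such as Iw and In, is illustrative only:

import math

FUNCTIONS = ("add", "sub", "div", "min", "max", "T", "Iw", "In", "Inw", "Ine", "x", "y")
N = len(FUNCTIONS)
FUNCINFO = math.log2(N + 1)   # Formula (6): one extra code number for numeric values
NUMBITS = 10                  # 10-bit fixed-point constants, as assumed above

def tree_info(t):
    # Information content X required to represent the tree t (Algorithm 1).
    if isinstance(t, (int, float)):      # numeric leaf
        return FUNCINFO + NUMBITS
    if isinstance(t, str):               # zero-argument function such as "In" or "y"
        return FUNCINFO
    s = FUNCINFO                         # information for the function number of t
    for child in t[1:]:                  # all child nodes connected to t
        s += tree_info(child)
    return s

# X = tree_info(root) corresponds to Formula (7).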

<3-3> Method of computing the predicted value

The predicted value based on the prediction procedure represented by the relevant tree can be computed using a recursive function, as shown below.

Algorithm 2

function tree_eval(t)
begin
  if t is a numeric value then // immediate value
    return the numeric value
  else if t has no argument then // for example, In
    return the value of function t
  else if t has one argument then // for example, sqrt(A)
    return function t (tree_eval(first child node of t))
  else if t has two arguments then // for example, add(A, B)
    return function t (tree_eval(first child node of t), tree_eval(second child node of t))
  else if t has three arguments then // for example, the ternary operator T
    return function t (tree_eval(first child node of t), tree_eval(second child node of t), tree_eval(third child node of t))
end

Although the above algorithm assumes that the number of arguments is limited to three or less, similar processing can be performed even if the upper limit on the number of arguments is 4, 5, and so on.

Then, given that "root" denotes the topmost node of the target prediction procedure (tree), the predicted value x' for the current target pixel can be computed as follows:

x' = tree_eval(root)    Formula (8)
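
A minimal Python transcription of Algorithm 2 under the same nested-tuple assumption; the dictionary of leaf values (peripheral pixel values and coordinates for the current target pixel) is supplied by the caller and is an assumption of this sketch, as is the restricted set of supported functions:

def tree_eval(t, leaf_values):
    # Evaluate the predicted value given by tree t (Algorithm 2).
    if isinstance(t, (int, float)):      # immediate numeric value
        return t
    if isinstance(t, str):               # zero-argument function, e.g. "In"
        return leaf_values[t]
    name = t[0]
    args = [tree_eval(c, leaf_values) for c in t[1:]]
    if name == "add": return args[0] + args[1]
    if name == "sub": return args[0] - args[1]
    if name == "div": return args[0] / args[1]
    if name == "min": return min(args)
    if name == "max": return max(args)
    if name == "T":   return args[1] if args[0] >= 0 else args[2]
    raise ValueError("unknown function: " + name)

# Example: the plane prediction tree ("add", "Iw", ("sub", "In", "Inw"))
# evaluated with {"Iw": 100, "In": 60, "Inw": 60} returns 100.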

The information content Y of the prediction residual can be computed by the following formula.

[Formula 1]

In the above formula, h_d denotes the number of occurrences (i.e., the histogram) of the prediction error d (= x - x') over the entire image, and W and H respectively denote the numbers of pixels in the horizontal and vertical directions.

Additionally, as is done in CALIC, the information content can be further reduced by using methods called "context separation", "error feedback", or "error translation".
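
Since Formula 1 is not reproduced here, the following Python sketch only illustrates the kind of zeroth-order entropy estimate suggested by the surrounding description (the histogram h_d of prediction errors over a W x H image); this concrete formula is an assumption of the sketch, not a quotation of Formula 1:

import math
from collections import Counter

def residual_content(errors, W, H):
    # Estimated information content Y of the prediction residual,
    # computed from the histogram h_d of the errors d = x - x' over
    # the whole W x H image (assumed zeroth-order entropy estimate).
    hist = Counter(errors)               # h_d
    total = W * H
    return sum(h * -math.log2(h / total) for h in hist.values())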

<4> Encoding and decoding of the prediction procedure

<4-1> Encoding of the prediction procedure

The prediction procedure can also be encoded using the following recursive procedure, which is similar to the evaluation of the information content.

Algorithm 3

procedure tree_encode(t)
begin
  if t is a numeric value then begin
    encode the number N (which indicates a numeric value) using FUNCINFO bits
    encode the numeric value (fixed point) using 10 bits
  end else begin
    encode the function number (0, ..., N-1) of t using FUNCINFO bits
    foreach (all child nodes c connected to t) begin
      tree_encode(c)
    end
  end
end

Then, given that "root" denotes the topmost node of the target prediction procedure (tree), executing tree_encode(root) encodes the relevant tree, where the lower bound on the required amount of code coincides with tree_info(root).
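
A minimal Python transcription of Algorithm 3, assuming the nested-tuple representation and the illustrative function table used in the earlier sketches; bits are collected into a list of 0/1 values, and the 10-bit fixed-point packing of constants is a toy choice made for this sketch only:

FUNCTIONS = ("add", "sub", "div", "min", "max", "T", "Iw", "In", "Inw", "Ine", "x", "y")
N = len(FUNCTIONS)          # the code number N itself marks a numeric value
FUNCBITS = N.bit_length()   # fixed-length code able to represent N + 1 symbols
NUMBITS = 10

def put_bits(bits, value, width):
    # Append 'value' to the bit list as a fixed-length code of 'width' bits.
    for i in reversed(range(width)):
        bits.append((value >> i) & 1)

def tree_encode(t, bits):
    # Encode tree t into the bit list (Algorithm 3).
    if isinstance(t, (int, float)):
        put_bits(bits, N, FUNCBITS)                         # marker for a numeric value
        put_bits(bits, int(round(t * 4)) & 0x3FF, NUMBITS)  # toy 10-bit fixed-point packing
    else:
        name = t if isinstance(t, str) else t[0]
        put_bits(bits, FUNCTIONS.index(name), FUNCBITS)     # function number 0..N-1
        if not isinstance(t, str):
            for child in t[1:]:                             # all child nodes of t
                tree_encode(child, bits)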

<4-2> Decoding of the prediction procedure

The prediction procedure encoded according to Algorithm 3 can be decoded using a similar recursive procedure, shown below.

Algorithm 4

function tree_decode()
begin
  generate an empty tree T
  decode FUNCINFO bits to obtain the function number n
  if n = N then begin // numeric value
    decode 10 bits to obtain the 10-bit fixed-point value x
    T := x
  end else begin
    obtain the function F corresponding to the function number n
    T := F
    for i = 1 to (number of arguments required by F) begin
      the i-th child node of T := tree_decode()
    end
  end
  return T
end

The above "number of arguments required by F" is the number of values (known to both the encoding and decoding sides) that the relevant function uses to produce its output value. If F = add, the relevant number is 2, and if F = T, the relevant number is 3.

Here, F takes the child nodes corresponding to this number of arguments as its own arguments.

Then, when tree_decode() is executed, the tree is decoded from the relevant bit stream and returned.
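
A matching Python transcription of Algorithm 4, which reads back the bit list produced by the tree_encode sketch above; the function table, arities, bit widths and fixed-point packing are the same illustrative assumptions:

FUNCTIONS = ("add", "sub", "div", "min", "max", "T", "Iw", "In", "Inw", "Ine", "x", "y")
ARITY = {"add": 2, "sub": 2, "div": 2, "min": 2, "max": 2, "T": 3,
         "Iw": 0, "In": 0, "Inw": 0, "Ine": 0, "x": 0, "y": 0}
N = len(FUNCTIONS)
FUNCBITS = N.bit_length()
NUMBITS = 10

class BitReader:
    def __init__(self, bits):
        self.bits, self.pos = bits, 0
    def get(self, width):
        v = 0
        for _ in range(width):
            v = (v << 1) | self.bits[self.pos]
            self.pos += 1
        return v

def tree_decode(r):
    # Decode one tree from the bit reader r (Algorithm 4).
    n = r.get(FUNCBITS)                  # function number
    if n == N:                           # a numeric value follows
        return r.get(NUMBITS) / 4.0      # undo the toy fixed-point packing (non-negative values only)
    name = FUNCTIONS[n]
    k = ARITY[name]                      # number of arguments required by F
    if k == 0:
        return name                      # zero-argument leaf such as "In"
    return (name,) + tuple(tree_decode(r) for _ in range(k))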

<5> Automatic development of the prediction procedure using genetic programming

In the present invention, the predictor is developed through the following well-known procedures of genetic programming (including selection for reproduction, generation of offspring, and selection of survivors).

In genetic programming, each tree is called an "individual". The following explanation is based on this terminology.

1. First, an initial population is generated using random numbers or existing prediction algorithms (e.g., the above plane prediction or MED prediction).

2. A set of parents (a parent population) is selected from the population (selection for reproduction).

3. A number of offspring individuals are generated from the parents (generation of offspring) and evaluated (using the evaluation measure explained above).

4. Based on the evaluation results, survivors are selected from among the offspring individuals (selection of survivors).

In the above procedure, each offspring is generated by applying one of the following processes to the individuals selected as parents:

(i) crossover, shown in Fig. 2, in which crossover points are randomly selected in parents 1 and 2 and subtrees of the parent trees are exchanged at the crossover points (a minimal sketch of this operation is given after this list);

(ii) mutation, shown in Fig. 3, in which a mutation point is randomly selected and the subtree of the parent tree at the mutation point is replaced with a mutated tree; or

(iii) inversion, shown in Fig. 4, in which sibling subtrees are exchanged.
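
A minimal sketch of the crossover operation (i) on nested-tuple trees; the index-path addressing of subtrees and the use of Python's random module are choices made for this sketch only:

import random

def all_paths(t, path=()):
    # Enumerate the positions of every subtree of t as index paths.
    yield path
    if isinstance(t, tuple):
        for i, child in enumerate(t[1:], start=1):
            yield from all_paths(child, path + (i,))

def get_subtree(t, path):
    for i in path:
        t = t[i]
    return t

def replace_subtree(t, path, new):
    if not path:
        return new
    i = path[0]
    return t[:i] + (replace_subtree(t[i], path[1:], new),) + t[i + 1:]

def crossover(parent1, parent2, rng=random):
    # Randomly choose a crossover point in each parent and exchange the
    # subtrees at those points, producing two offspring (cf. Fig. 2).
    p1 = rng.choice(list(all_paths(parent1)))
    p2 = rng.choice(list(all_paths(parent2)))
    s1, s2 = get_subtree(parent1, p1), get_subtree(parent2, p2)
    return (replace_subtree(parent1, p1, s2),
            replace_subtree(parent2, p2, s1))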

The selection for reproduction and the selection of survivors are together called a "generation alternation model", for which the well-known MGG (Minimal Generation Gap) method proposed in the following Reference Document 3 can be applied.

Reference document 3: Hiroshi Sato, Isao Ono, and Shigenobu Kobayashi, "A New Generation Alternation Model for Genetic Algorithms and Its Assessment", Journal of Japanese Society for Artificial Intelligence, Vol. 12, No. 5, pp. 734-744, 1996.

The following Reference Document 4 is a representative disclosure of the development of processing procedures by means of genetic programming, as a method relating to image processing procedures.

Reference document 4: Wataru Fujishima and Tomoharu Nagao, "PT-ACTIT; Tunable Parameter-Automatic Construction of Tree-structural Image Transformation", Journal of Institute of Image Information and Television Engineers, Vol. 59, No. 11, pp. 1687-1693, 2005.

However, no method has been proposed for developing a procedure for encoding images, as provided by the present invention. The methods disclosed in the above Non-Patent Documents 4 and 5 are merely optimizations of encoding parameters.

Below, the present invention will be explained in detail using embodiments.

Fig. 5 shows the structure of a device 1 for forming a developed predictor as an embodiment of the present invention.

The device 1 for forming a developed predictor according to the present embodiment automatically generates a predictor for generating predicted pixel values by using genetic programming (in which each tree is referred to as an individual). Hereinafter, the predictor formed in the present embodiment is referred to as the developed predictor.

To implement this automatic generation, as shown in Fig. 5, the device includes a parent population generation unit 10, a parent population storage unit 11, a parent individual selection and duplication unit 12, an offspring individual generation unit 13, a mutation information storage unit 14, an evaluation value computation unit 15, a surviving individual determination unit 16, a convergence determination unit 17, and a developed predictor determination unit 18.

The parent population generation unit 10 generates a parent population by randomly generating predictor individuals as initial candidates for the developed predictor, and stores the parent population in the parent population storage unit 11. In this process, an existing predicted value generation function (as an individual) is included in the generated and stored parent population.

The parent population generation unit 10 also requests the evaluation value computation unit 15 to compute an evaluation value for each individual stored in the parent population storage unit 11, and receives the evaluation value returned from the evaluation value computation unit 15 in response to the request. The parent population generation unit 10 stores each evaluation value in the parent population storage unit 11 in association with the corresponding (stored) individual.

The parent individual selection and duplication unit 12 selects and duplicates a set of individuals stored in the parent population storage unit 11, thereby generating a set of parent individuals.

The parent individual selection and duplication unit 12 also removes the individuals that served as the sources of the generated parent individuals from the parent population storage unit 11.

On the basis of genetic programming, the offspring individual generation unit 13 generates offspring individuals by subjecting the parent individuals generated by the parent individual selection and duplication unit 12 to the crossover shown in Fig. 2; the mutation shown in Fig. 3, which uses the mutation information stored in the mutation information storage unit 14; the inversion shown in Fig. 4; and so on.

The offspring individual generation unit 13 also obtains an evaluation value for each generated offspring individual by requesting the evaluation value computation unit 15 to compute the evaluation value and receiving the evaluation value returned from the evaluation value computation unit 15 in response to the request.

The mutation information storage unit 14 stores mutation information (i.e., mutated trees) used when the offspring individual generation unit 13 subjects a parent individual to mutation, and the mutation information includes a function (as an individual) that outputs the relevant coordinates in the image and an existing predicted value generation function (as an individual).

When the evaluation value computation unit 15 receives a request to compute an evaluation value for a given individual, it computes the sum of the information content (the above X: the information content of the individual) required to represent the corresponding tree structure and the information content (the above Y: the information content of the prediction residual) of the prediction residual over the entire image, for which the prediction of the pixel values was actually performed using the prediction procedure based on the relevant tree structure. The evaluation value computation unit 15 returns the computed total information content, as the evaluation value of the individual, to the unit that issued the computation request.

On the basis of the evaluation values (retrieved from the parent population storage unit 11) of the parent individuals generated by the parent individual selection and duplication unit 12 and the evaluation values assigned to the offspring individuals generated by the offspring individual generation unit 13, the surviving individual determination unit 16 selects the individual having the best evaluation value, and stores the selected individual and one or more other individuals in the parent population storage unit 11 together with the corresponding evaluation values.

On the basis of the evaluation values output from the evaluation value computation unit 15 and the like, the convergence determination unit 17 determines whether a convergence condition indicating the completion of the formation of the developed predictor is satisfied. If it is determined that the condition is satisfied, the convergence determination unit 17 instructs the developed predictor determination unit 18 to determine the developed predictor.

Upon receiving the determination request from the convergence determination unit 17, the developed predictor determination unit 18 identifies the individual having the best evaluation value among the individuals stored in the parent population storage unit 11, and determines and outputs this individual as the developed predictor.

Fig. 6 shows a flowchart of the operation performed by the device 1 for forming a developed predictor having the above-described structure.

The operation performed by the device 1 for forming a developed predictor will be explained in detail in accordance with this flowchart.

According to the flowchart shown in Fig. 6, upon receiving a request to form a developed predictor for an image to be encoded, the device 1 for forming a developed predictor first generates a parent population (i.e., a set of individuals as the source of variation for the relevant development), which includes an individual that outputs a predicted value using an existing encoding method (see step S101).

In the next step S102, for each individual in the parent population, the sum of the information content X required to represent the corresponding tree and the information content Y of the prediction residual over the entire image, for which the prediction of the pixel values was actually performed using the prediction procedure based on the relevant tree structure, is computed as the evaluation value.

The information content X required to represent the corresponding tree is computed using the above-described Algorithm 1.

The predicted values obtained by the prediction procedure based on the tree structure are computed using the above-described Algorithm 2.

In the next step S103, each individual in the parent population is stored in the parent population storage unit 11 together with the evaluation value assigned to the individual.

In the next step S104, N parent individuals are selected from among the individuals stored in the parent population storage unit 11, and their assigned evaluation values are also retrieved.

In the next step S105, the selected N individuals are duplicated and are also removed from the parent population storage unit 11.

In the next step S106, M offspring individuals are generated from the N duplicated parent individuals by applying, for example, the crossover shown in Fig. 2, the mutation shown in Fig. 3, or the inversion shown in Fig. 4, by means of genetic programming, which may use the mutation information stored in the mutation information storage unit 14.

In the above process, the candidate individuals (trees) added by mutation include a function that generates a predicted value using a conventional method and a function that outputs the x and y coordinates of the pixel to be encoded.

In the next step S107, for each of the M generated offspring individuals, the sum of the information content X required to represent the corresponding tree structure and the information content Y of the prediction residual obtained by actually predicting the pixel values using the prediction procedure based on the relevant tree structure is computed as the evaluation value.

In the next step S108, from among the selection targets consisting of the M generated offspring individuals and the N duplicated parent individuals, the individual having the best evaluation value is selected, and N-1 other individuals are selected at random, as the surviving individuals.

In the next step S109, the selected surviving individuals are stored in the parent population storage unit 11 together with their assigned evaluation values.

In the next step S110, it is determined whether a predetermined convergence condition is satisfied. If it is determined that the convergence condition is not yet satisfied, it is judged that the development so far is insufficient, and the operation returns to step S104.

The convergence condition used may be, for example, that the rate of reduction of the evaluation value (X + Y) is less than a fixed value (for example, 0.1), or that the number of evaluation value computations exceeds a fixed value (for example, 10,000).

If it is determined in the above step S110 that the predetermined convergence condition is satisfied, the operation proceeds to step S111, where the individual having the best evaluation value is selected from among the individuals of the parent population stored in the parent population storage unit 11 and is output as the finally developed individual (i.e., the developed predictor). The operation is then terminated.
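
The control flow of steps S101 to S111 can be summarized by the following Python sketch; the three callables passed in (random individual generation, the X + Y evaluation of the earlier sections, and offspring generation by crossover, mutation or inversion) are placeholders standing for the units described above, so the sketch outlines the loop rather than a complete encoder:

import random

def evolve_predictor(make_random_individual, evaluate, make_offspring,
                     pop_size=50, n_parents=2, n_offspring=8,
                     max_evaluations=10000, rng=random):
    # S101-S103: generate and evaluate the parent population.
    population = [(ind, evaluate(ind))
                  for ind in (make_random_individual() for _ in range(pop_size))]
    evaluations = pop_size
    while evaluations < max_evaluations:               # S110: convergence condition
        parents = rng.sample(population, n_parents)    # S104: select N parents
        for p in parents:
            population.remove(p)                       # S105: duplicate and remove
        offspring = []
        for _ in range(n_offspring):                   # S106-S107: generate and evaluate M offspring
            child = make_offspring([ind for ind, _ in parents])
            offspring.append((child, evaluate(child)))
            evaluations += 1
        pool = parents + offspring
        best = min(pool, key=lambda pair: pair[1])     # S108: best individual survives,
        pool.remove(best)                              #        plus N-1 chosen at random
        survivors = [best] + rng.sample(pool, n_parents - 1)
        population.extend(survivors)                   # S109
    return min(population, key=lambda pair: pair[1])[0]  # S111: best individual overall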

As described above, the device 1 for forming a developed predictor according to the present embodiment can automatically form a developed predictor that realizes highly accurate prediction of pixel values, by using genetic programming.

To implement such automatic formation, the device 1 for forming a developed predictor according to the present embodiment uses, as the evaluation value, the sum of the information content X required to represent the tree structure and the information content Y of the prediction residual over the entire image, for which the prediction of the pixel values was actually performed using the prediction procedure based on the relevant tree structure. Thus, it is possible to automatically generate a predictor that performs pixel value prediction realizing highly efficient image coding.

In the above operation, the candidates for added individuals include a function that generates a predicted value using a conventional method. Thus, it is possible to achieve at least the same level of prediction performance as the conventional method provides.

In addition, when an individual corresponding to a function that generates a predicted value using a conventional method is included in the parent population at the time of generating the parent population, the aforementioned "same level of prediction performance" can be realized more reliably.

In addition, since the candidates for added individuals include a function that outputs the x and y coordinates of the pixel to be encoded, it is also possible to switch the developed predictor locally, using the x and y coordinates, in accordance with the internal structure of the relevant image.

In Fig. 7 shows embodiments of device 100 encoding images and device 200 decoding images that use the device 1 forming a developed predictor according to the SNO this option implementation.

The device 100 coded image shown in Fig. 7, includes a block 101 of forming a developed predictor that generates a developed predictor applied to the target image encoding in accordance with the operation performed by the device 1 forming a developed predictor according to the above variant implementation, the block 102 encoding a developed predictor for coding a developed predictor formed by block 101 forming a developed predictor, block 103 encoding images to encode the target image coding using a developed predictor formed by block 101 forming a developed predictor, and the block 107 transmission of the coded data, which transmits to the device 200 of the image decoding (in the present embodiment) of the coded data generated by the unit 102 encoding a developed predictor and the block 103 encoding of images.

The above image encoding block 103 includes a predicted-pixel-value generator 104, which predicts the pixel value using the developed predictor formed by the developed-predictor forming block 101; a prediction-residual calculation block 105, which computes the prediction residual on the basis of the pixel value predicted by the predicted-pixel-value generator 104; and a prediction-residual encoder 106 for encoding the prediction residual calculated by the prediction-residual calculation block 105.

The image decoding device 200 shown in Fig. 7 includes an encoded-data receiving block 201 for receiving the encoded data transmitted from the image encoding device 100 according to the present embodiment; a developed-predictor decoding block 202 for decoding the developed predictor generated by the image encoding device 100, by decoding the encoded data of the developed predictor received by the encoded-data receiving block 201; and an image decoding block 203 for decoding the image encoded by the image encoding device 100, on the basis of the developed predictor decoded by the developed-predictor decoding block 202 and the encoded data received by the encoded-data receiving block 201.

To decode the image encoded by the image encoding device 100, the image decoding block 203 includes a predicted-pixel-value generator 204 for predicting the predicted values using the developed predictor decoded by the developed-predictor decoding block 202; a prediction-residual decoder 205 for decoding the encoded data of the prediction residual received by the encoded-data receiving block 201; and an image reproducer 206 for reproducing the image encoded by the image encoding device 100, based on the pixel value predicted by the predicted-pixel-value generator 204 and the prediction residual decoded by the prediction-residual decoder 205.

Fig. 8 shows a flowchart of the operation performed by the image encoding device 100 shown in Fig. 7, and Fig. 9 shows a flowchart of the operation performed by the image decoding device 200 shown in Fig. 7.

The operations of the image encoding device 100 and the image decoding device 200 having the structure shown in Fig. 7 will be explained with reference to these flowcharts.

According to the flowchart shown in Fig. 8, when the image encoding device 100 having the structure shown in Fig. 7 receives a request to encode a target image of encoding, it first generates a developed predictor applied to the target image of encoding, on the basis of the operations performed by the above-described device 1 for forming a developed predictor (see step S201). In the next step S202, the formed developed predictor is encoded using the above-described algorithm 3.

Then, to encode the target image of encoding, the predicted pixel value (the above p') is generated using the formed developed predictor (see step S203), and the prediction residual (the above p-p') is calculated based on the generated predicted pixel value (see step S204).

In the next step S205, the computed prediction residual is encoded, and in the next step S206 it is determined whether the encoding has been completed for all pixels contained in the target image of encoding. If it is determined that the encoding of all pixels has not been completed, the operation returns to step S203. If it is determined that the encoding of all pixels is completed, the current operation is terminated.
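The encoding flow of steps S201 through S206 can be summarized by the sketch below, in which the callables stand in for blocks 101, 102, 105 and 106 of Fig. 7 and the image is assumed to be a 2D list of pixel values; it is a simplified illustration, not the actual implementation.

def encode_image(image, form_developed_predictor, encode_predictor,
                 encode_residual):
    predictor = form_developed_predictor(image)               # step S201
    bitstream = [encode_predictor(predictor)]                 # step S202
    for y, row in enumerate(image):
        for x, pixel in enumerate(row):
            predicted = predictor(image, x, y)                # step S203: p'
            residual = pixel - predicted                      # step S204: p - p'
            bitstream.append(encode_residual(residual))       # step S205
    return bitstream   # transmitted by the encoded-data transmission block 107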

According to the flowchart shown in Fig. 9, when the image decoding device 200 having the structure shown in Fig. 7 receives the encoded data generated by the image encoding device 100, the image decoding device 200 first decodes the encoded data of the developed predictor on the basis of the above-described algorithm 4, to obtain the developed predictor generated by the image encoding device 100 (see step S301).

Then, to decode the target image of decoding, in step S302 the predicted pixel value (the above p') is generated using the decoded developed predictor, and in the next step S303 the encoded data of the prediction residual are decoded to obtain the decoded prediction residual (i.e. the above-described p-p'). In the next step S304, the pixel value is generated and output based on the previously generated predicted pixel value and the decoded prediction residual.

In the next step S305, it is determined whether the decoding has been completed for all pixels included in the target image of decoding. If it is determined that the decoding of all pixels has not been completed, the operation returns to step S302. If it is determined that the decoding of all pixels is completed, the current operation is terminated.
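The mirror-image decoding flow of steps S301 through S305 may be sketched as follows, under the same assumptions about the data layout as in the encoder sketch above.

def decode_image(coded_data, decode_predictor, decode_residual, height, width):
    predictor = decode_predictor(coded_data[0])               # step S301
    image = [[0] * width for _ in range(height)]
    residual_stream = iter(coded_data[1:])
    for y in range(height):
        for x in range(width):
            predicted = predictor(image, x, y)                # step S302: p'
            residual = decode_residual(next(residual_stream)) # step S303: p - p'
            image[y][x] = predicted + residual                # step S304
    return image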

As described above, the image encoding device 100 having the structure shown in Fig. 7 forms a developed predictor, encodes the image using the developed predictor, and also encodes the developed predictor. The image decoding device 200 having the structure shown in Fig. 7 obtains the developed predictor generated by the image encoding device 100 by decoding the encoded data of the developed predictor, and decodes the relevant image using the developed predictor.

Also, as described above, the device 1 for forming a developed predictor automatically generates a developed predictor which realizes high-precision prediction of pixel values.

Thus, with the image encoding device 100 and the image decoding device 200, which encode and decode the image using the developed predictor generated by the device 1 for forming a developed predictor, it is possible to achieve high coding efficiency.

Fig. 10 shows other embodiments, namely an image encoding device 100' and an image decoding device 200', that use the above-described device 1 for forming a developed predictor.

The image encoding device 100 and the image decoding device 200 shown in Fig. 7 require the developed predictor to be encoded and transferred. However, the image encoding device 100' and the image decoding device 200' shown in Fig. 10 do not perform such encoding and transfer of the developed predictor; instead, a developed predictor identical to the one formed on the encoding side can be formed on the decoding side by using previously transmitted pixels, so that the encoding side and the decoding side can encode and decode the image using the same developed predictor.

To realize the aforesaid functions, the image encoding device 100' shown in Fig. 10 has a first image encoding block 110 for encoding a partial image which belongs to the target image of encoding and has a predefined size, using an existing predictor; a developed-predictor forming block 111, which forms a developed predictor applied to the decoded image of the encoded partial image, where the decoded image is obtained by decoding performed by the first image encoding block 110 in the course of the relevant encoding; a second encoding block 112 for encoding the remaining partial image of the target image of encoding using the developed predictor formed by the developed-predictor forming block 111; and an encoded-data transmission block 116, which transmits the encoded data generated by the first image encoding block 110 and the second encoding block 112 to the image decoding device 200'.

To encode the partial image that is not encoded by the first image encoding block 110, the above-mentioned second encoding block 112 includes a predicted-pixel-value generator 113, which predicts the pixel value using the developed predictor formed by the developed-predictor forming block 111; a prediction-residual calculation block 114, which computes the prediction residual on the basis of the pixel value predicted by the predicted-pixel-value generator 113; and a prediction-residual encoder 115 for encoding the prediction residual calculated by the prediction-residual calculation block 114.

The image decoding device 200' shown in Fig. 10 has an encoded-data receiving block 210 for receiving the encoded data transmitted from the image encoding device 100'; a first image decoding block 211 for decoding the encoded data generated by the first image encoding block 110 and contained in the encoded data received by the encoded-data receiving block 210; a developed-predictor forming block 212, which forms a developed predictor applied to the partial image decoded by the first image decoding block 211; a second image decoding block 213 for decoding the partial image not decoded by the first image decoding block 211, on the basis of the developed predictor formed by the developed-predictor forming block 212 and the encoded data generated by the second encoding block 112 and contained in the encoded data received by the encoded-data receiving block 210; and an image synthesizer 217 for generating the image which is the target of decoding, by synthesizing the image decoded by the first image decoding block 211 and the image decoded by the second image decoding block 213.

To decode the partial image not decoded by the first image decoding block 211, the above-mentioned second image decoding block 213 includes a predicted-pixel-value generator 214 for predicting the predicted values using the developed predictor formed by the developed-predictor forming block 212; a prediction-residual decoder 215 for decoding the encoded data of the prediction residual received by the encoded-data receiving block 210 and assigned to the partial image not decoded by the first image decoding block 211; and an image reproducer 216 for reproducing the partial image not decoded by the first image decoding block 211, based on the pixel value predicted by the predicted-pixel-value generator 214 and the prediction residual decoded by the prediction-residual decoder 215.

Fig. 11 shows a flowchart of the operation performed by the image encoding device 100' shown in Fig. 10, and Fig. 12 shows a flowchart of the operation performed by the image decoding device 200' shown in Fig. 10.

The operations of the image encoding device 100' and the image decoding device 200' will be explained with reference to these flowcharts.

According to the flowchart shown in Fig. 11, when the image encoding device 100' receives a request to encode an image, it starts by encoding a partial image (having N pixels, as a predetermined condition) which belongs to the target image of encoding and has a predefined size, using an existing predictor (see step S401). In the next step S402, the encoding continues until the completion of the encoding for the relevant partial image is confirmed, whereby the encoding of the partial image is completed.

For example, such a partial image belonging to the target image of encoding and having a predefined size is encoded using JPEG-LS.

In the next step S403, a developed predictor applied to the decoded image (of the encoded partial image) obtained in the encoding of the above-described step S401 is formed on the basis of the operations performed by the above-described device 1 for forming a developed predictor.

As described above, basically, a developed predictor having a preferable estimate is formed in accordance with the evaluation value defined as the sum of the information content X for representing the tree structure and the information content Y of the prediction residual over the entire image, for which the pixel-value prediction was actually carried out using the prediction procedure based on the relevant tree structure. However, since in the present embodiment it is not required to transfer the developed predictor, the estimated value is calculated by setting the content X equal to 0 (the development procedure itself does not change), and the developed predictor is formed on the basis of the evaluation values calculated in this way.

Then, to encode the remaining partial image, which also belongs to the target image of encoding but was not encoded in the above-described step S401, the predicted pixel value (the above p') is generated using the formed developed predictor (see step S404), and in the next step S405 the prediction residual (the above p-p') is calculated based on the generated predicted pixel value.

In the next step S406, the calculated prediction residual is encoded, and in the next step S407 it is determined whether the encoding has been completed for all pixels contained in the target image of encoding. If it is determined that the encoding of all pixels has not been completed, the operation returns to step S404. If it is determined that the encoding of all pixels is completed, the current operation is terminated.
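The flow of steps S401 through S407 might be sketched as follows; the raster-order split after the first N pixels, the callables and the tree_bits=0 parameter are assumptions used only to illustrate that the tree content X is set to zero and that no developed predictor is transmitted.

def encode_without_predictor_transfer(image, n_pixels, encode_with_existing,
                                      form_developed_predictor, encode_residual):
    # steps S401-S402: encode the partial image with an existing predictor
    partial_code, decoded_partial = encode_with_existing(image, n_pixels)
    # step S403: form the developed predictor on the decoded partial image, X = 0
    predictor = form_developed_predictor(decoded_partial, tree_bits=0)
    bitstream = [partial_code]
    width = len(image[0])
    for index in range(n_pixels, width * len(image)):         # remaining pixels
        y, x = divmod(index, width)
        # lossless coding is assumed, so previously coded pixels equal the original
        predicted = predictor(image, x, y)                    # step S404: p'
        residual = image[y][x] - predicted                    # step S405: p - p'
        bitstream.append(encode_residual(residual))           # step S406
    return bitstream   # the developed predictor itself is never transmitted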

According to the flowchart shown in Fig. 12, when the image decoding device 200' receives the encoded data generated by the image encoding device 100', the image decoding device 200' starts by decoding the encoded data of the partial image (having N pixels, as a predetermined condition) encoded by the image encoding device 100' using an existing predictor (see step S501). In the next step S502, the decoding continues until the completion of the decoding for the relevant partial image is confirmed, whereby the decoding of the partial image is completed.

For example, the partial image having the predefined size is decoded using JPEG-LS.

In the next step S503, a developed predictor applied to the decoded image obtained by the decoding in the above step S501 is formed on the basis of the operations performed by the above-described device 1 for forming a developed predictor.

As described above, basically, a developed predictor having a preferable estimate is formed in accordance with the evaluation value defined as the sum of the information content X for representing the tree structure and the information content Y of the prediction residual over the entire image, for which the pixel-value prediction was actually carried out using the prediction procedure based on the relevant tree structure. However, since in the present embodiment it is not required to transfer the developed predictor, the estimated value is calculated by setting the content X equal to 0 (the development procedure itself does not change), and the developed predictor is formed on the basis of the evaluation values calculated in this way.

Then, to decode the remaining partial image, which also belongs to the target image of decoding, the predicted pixel value (the above p') is generated using the formed developed predictor (see step S504), and in the next step S505 the prediction residual (the above p-p') is obtained by decoding the encoded data of the prediction residual.

In the next step S506, the pixel value is generated and output based on the previously generated predicted pixel value and the decoded prediction residual.

In the next step S507, it is determined whether the decoding has been completed for all pixels included in the remaining partial image of the target image of decoding. If it is determined that the decoding of all pixels has not been completed, the operation returns to step S504. If it is determined that the decoding of all pixels is completed, the current operation is terminated.
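A matching sketch of steps S501 through S507 is given below. The essential point is that form_developed_predictor must be the same deterministic operation as on the encoding side (for example, with any random choices seeded identically, which is an assumption of this sketch), so that both sides obtain the same developed predictor from the same decoded partial image; decode_with_existing is assumed to return a full-size image buffer whose first N pixels are already decoded.

def decode_without_predictor_transfer(coded_data, n_pixels, height, width,
                                      decode_with_existing,
                                      form_developed_predictor, decode_residual):
    # steps S501-S502: decode the partial image coded with the existing predictor
    image = decode_with_existing(coded_data[0], height, width)
    # step S503: form the developed predictor on the decoded partial image, X = 0
    predictor = form_developed_predictor(image, tree_bits=0)
    residual_stream = iter(coded_data[1:])
    for index in range(n_pixels, height * width):             # steps S504-S507
        y, x = divmod(index, width)
        predicted = predictor(image, x, y)                    # p'
        image[y][x] = predicted + decode_residual(next(residual_stream))
    return image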

As described above, the image encoding device 100' having the structure shown in Fig. 10 encodes a portion of the target image of encoding using an existing encoding method, generates a developed predictor using the decoded image obtained in this encoding, and encodes the remaining partial image using the formed developed predictor.

The image decoding device 200' having the structure shown in Fig. 10 decodes a portion of the target image of decoding by decoding the relevant encoded data in accordance with the existing decoding method, generates a developed predictor using the decoded image, and decodes the remaining partial image using the formed developed predictor.

In addition, as described above, the device 1 for forming a developed predictor automatically generates a developed predictor which realizes high-precision prediction of pixel values.

Thus, with the image encoding device 100' and the image decoding device 200', which encode and decode the image using the developed predictor generated by the device 1 for forming a developed predictor, it is possible to achieve high coding efficiency.

In an experiment conducted by the inventors to test the effectiveness of the invention, when the estimated value X+Y was minimized for a test image, the following relatively simple developed predictor could be generated.

(add (sub 0.5 (sub (div (Igap) (Ine)) (Igap))) (div (Inw) (Igap))),

where Igap denotes a nonlinear predicted value obtained from the peripheral pixels, and Ine and Inw are the values of the peripheral pixels shown in Fig. 14.
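Written out as ordinary arithmetic, this tree computes p' = (0.5 - (Igap/Ine - Igap)) + Inw/Igap, as in the sketch below; the guard against division by zero is an assumption added here, since the text does not specify how div behaves in that case.

def example_developed_predictor(igap, ine, inw):
    # The tree (add (sub 0.5 (sub (div Igap Ine) Igap)) (div Inw Igap))
    # evaluated directly.
    def div(a, b):
        return a / b if b != 0 else a   # protected division (assumption)
    return (0.5 - (div(igap, ine) - igap)) + div(inw, igap)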

The estimated value X+Y was 1,170,235 bits, which is better than the value of 1,176,090 bits obtained with the best of the currently available existing predictors.

The relevant developed predictor performs division between predicted values, which indicates that the present invention makes it possible to form procedures for generating predicted values that cannot be expected from the traditional predictors.

Below are the results of another experiment carried out to test the effectiveness of the invention. In this experiment, existing functions for generating predicted values (as individuals) were not included in the parent population.

In this experiment, the comparative prediction methods were a predictor based on least squares (LS), which performs linear prediction; a predictor based on minimum entropy (LE), which also performs linear prediction; the GAP predictor of CALIC, which performs non-linear prediction (using four peripheral pixels, similarly to the present invention (see Fig. 14)); and the MED predictor of JPEG-LS, which also performs non-linear prediction (using three peripheral pixels).

In the above-mentioned "LE" prediction, the five coefficients of the LS prediction are used as initial values, and Y is minimized through a multidimensional search based on Powell's method. LE prediction provides the highest level of performance among the linear prediction methods.
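One way such an LE predictor could be fitted is sketched below using SciPy's Powell method; the use of SciPy and the callable residual_entropy_bits (which applies the linear predictor with the given five coefficients and returns the residual information content Y in bits) are assumptions, not part of the patent.

import numpy as np
from scipy.optimize import minimize

def fit_le_coefficients(ls_coefficients, residual_entropy_bits):
    # Minimise Y over the five linear coefficients, starting from the LS solution.
    result = minimize(residual_entropy_bits,
                      np.asarray(ls_coefficients, dtype=float),
                      method="Powell")
    return result.x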

Fig. 13 shows the residual information content (X+Y) for each image (512×512 pixels, 8-bit gradation, luminance data only) used in the experiment, where the content also takes into account the overhead (50 bits for LS and LE, 0 for MED, and the X bits for the method proposed according to the present invention). Fig. 13 also shows, for each image, the increase of each method with respect to the content of the proposed method. In addition, the bottom row of Fig. 13 shows, for each image, the content X of the tree according to the present invention. The unit of residual content is bpp (bits per pixel).

The results of the experiment confirmed that the predictor automatically generated in accordance with the present invention has the highest level of efficiency. The average content X of the tree predictor automatically generated in accordance with the present invention is 726 bits, which indicates only a small increase in complexity compared with the GAP and MED predictors (which correspond to 349.5 bits and 116.0 bits, respectively, and were set equal to zero in this experiment).

For the image "Lena", the development was additionally carried out separately, without considering the tree content X, so as to minimize only the residual information content Y. In comparison with the results presented in Fig. 13, the results of this separate calculation for the proposed method according to the present invention show that Y (the value of Y itself is not shown in Fig. 13) decreased by 0.06%, but X increased by about three times (i.e., X=2795 bits), so that X+Y increased by 0.14%. Although such growth of the tree is the problem referred to as "bloat", which is inherent to GP, the present invention naturally prevents its occurrence owing to the term X.

For the image "Baboon", a prediction tree was formed that performs different processing for the bottom 1/6 of the image area and for the remaining area. The 1/6 area and the remaining area correspond to whether or not the region contains only the beard. This prediction tree demonstrates the high-level search capability of GP.

The effectiveness of the present invention can be verified on the basis of the above experimental results.

Industrial application

As described above, the present invention can be applied to the encoding and decoding of video or still images, to implement high-precision pixel-value prediction. A computer can thus automatically generate a prediction procedure suitable for each input image, and the relevant code volume can be further reduced.

The list of symbols

1 device for forming a developed predictor

10 parent population generation block

11 parent population storage block

12 parent individual selection and duplication block

13 child individual generation block

14 mutation information storage block

15 estimated value calculation block

16 surviving individual determination block

17 convergence determination block

18 developed predictor determination block.

1. An image encoding device for encoding an image using a predicted pixel value generated by a predetermined procedure for generating a predicted value which predicts the value of a target pixel of encoding using previously decoded pixels, the device comprising: a first device which, when a request to encode a target image of encoding is received, generates a procedure for generating a predicted value having the best evaluation value for encoding the target image of encoding, through the operation of a device for automatic formation of a procedure of generating a predicted pixel value; a second device that encodes the procedure for generating a predicted value formed by the first device; a third device that generates a predicted value of each pixel included in the target image of encoding, based on the procedure for generating a predicted value formed by the first device; a fourth device that encodes a prediction residual signal for each pixel, said signal being calculated based on the predicted pixel value generated by the third device; and a fifth device that transmits the data encoded by the second and fourth devices; wherein said device for automatic formation of a procedure of generating a predicted pixel value includes: a first device that generates a parent population by randomly generating procedures for generating a predicted value, each of which is specified by a tree structure; a second device that selects a set of procedures for generating a predicted value as parents from the parent population and produces one or more procedures for generating a predicted value as descendants on the basis of a predetermined tree-structure development method which develops the selected procedures for generating a predicted value, where an existing function for generating a predicted value may be a leaf node of the tree; a third device that selects, from among the procedures for generating a predicted value as the parents and the descendants, the procedure for generating a predicted value having the minimum evaluation value, where the total of the information content for representing the tree structure and the code volume estimated from the predicted pixel values obtained through the tree structure is used as the evaluation value, so that the selected procedure for generating a predicted value is the one having the best evaluation value for encoding the target image of encoding, and stores the selected procedure for generating a predicted value and one or more other procedures for generating a predicted value in the parent population; and a fourth device that controls repetition of the processes carried out by the second and third devices until a predetermined condition is satisfied, and outputs, as the final procedure for generating a predicted value, the procedure for generating a predicted value having the best evaluation value as a result of the repetition.

2. The image encoding device according to claim 1, wherein the second device of said device for automatic formation of a procedure of generating a predicted pixel value forms the procedures for generating a predicted value as descendants on the basis of a predetermined tree-structure development method which carries out the development, where a function that outputs the coordinates of a pixel in the image may be a leaf node of the tree.

3. The image encoding device according to claim 1, wherein the first device of said device for automatic formation of a procedure of generating a predicted pixel value generates the parent population so that an existing function for generating a predicted value is included in the parent population.

4. An image decoding device for decoding encoded data of an image encoded using a predicted pixel value generated by a predetermined procedure for generating a predicted value which predicts the value of a target pixel of encoding using previously decoded pixels, the device comprising: a first device that receives and decodes encoded data of the procedure for generating a predicted value generated through the operation of a device for automatic formation of a procedure of generating a predicted pixel value, where the encoded data are generated on the encoding side; a second device that generates a predicted value of each pixel included in the target image of decoding, based on the procedure for generating a predicted value decoded by the first device; a third device that decodes encoded data of a prediction residual signal for each pixel, said signal having been calculated using the predicted pixel value generated by the procedure for generating a predicted value decoded by the first device, where the encoded data are generated on the encoding side; and a fourth device that reproduces the target image of decoding based on the predicted pixel value generated by the second device and the prediction residual signal decoded by the third device; wherein said device for automatic formation of a procedure of generating a predicted pixel value includes: a first device that generates a parent population by randomly generating procedures for generating a predicted value, each of which is specified by a tree structure; a second device that selects a set of procedures for generating a predicted value as parents from the parent population and produces one or more procedures for generating a predicted value as descendants on the basis of a predetermined tree-structure development method which develops the selected procedures for generating a predicted value, where an existing function for generating a predicted value may be a leaf node of the tree; a third device that selects, from among the procedures for generating a predicted value as the parents and the descendants, the procedure for generating a predicted value having the minimum evaluation value, where the total of the information content for representing the tree structure and the code volume estimated from the predicted pixel values obtained through the tree structure is used as the evaluation value, so that the selected procedure for generating a predicted value is the one having the best evaluation value for encoding the target image of encoding, and stores the selected procedure for generating a predicted value and one or more other procedures for generating a predicted value in the parent population; and a fourth device that controls repetition of the processes carried out by the second and third devices until a predetermined condition is satisfied, and outputs, as the final procedure for generating a predicted value, the procedure for generating a predicted value having the best evaluation value as a result of the repetition.

5. The image decoding device according to claim 4, wherein the second device of said device for automatic formation of a procedure of generating a predicted pixel value forms the procedures for generating a predicted value as descendants on the basis of a predetermined tree-structure development method which carries out the development, where a function that outputs the coordinates of a pixel in the image may be a leaf node of the tree.

6. The image decoding device according to claim 4, wherein the first device of said device for automatic formation of a procedure of generating a predicted pixel value generates the parent population so that an existing function for generating a predicted value is included in the parent population.

7. An image encoding device for encoding an image using a predicted pixel value generated by a predetermined procedure for generating a predicted value which predicts the value of a target pixel of encoding using previously decoded pixels, the device comprising: a first device which, when a request to encode a target image of encoding is received, encodes a partial image of the target image of encoding having a predefined size, using an existing procedure for generating a predicted pixel value which is not formed on the basis of a tree-structure development method; a second device that forms a procedure for generating a predicted value having the best evaluation value for encoding, applied to the decoded image obtained during the partial encoding of the target image of encoding by the first device, through the operation of a device for automatic formation of a procedure of generating a predicted pixel value which evaluates the information content for representing the tree structure as equal to zero; a third device that generates a predicted value of each pixel included in the remaining partial image of the target image of encoding, which is not encoded by the first device, based on the procedure for generating a predicted value formed by the second device; a fourth device that encodes a prediction residual signal for each pixel, said signal being calculated based on the predicted pixel value generated by the third device; and a fifth device that transmits the data encoded by the first and fourth devices; wherein said device for automatic formation of a procedure of generating a predicted pixel value includes: a first device that generates a parent population by randomly generating procedures for generating a predicted value, each of which is specified by a tree structure; a second device that selects a set of procedures for generating a predicted value as parents from the parent population and produces one or more procedures for generating a predicted value as descendants on the basis of a predetermined tree-structure development method which develops the selected procedures for generating a predicted value, where an existing function for generating a predicted value may be a leaf node of the tree; a third device that selects, from among the procedures for generating a predicted value as the parents and the descendants, the procedure for generating a predicted value having the minimum evaluation value, where the total of the information content for representing the tree structure and the code volume estimated from the predicted pixel values obtained through the tree structure is used as the evaluation value, so that the selected procedure for generating a predicted value is the one having the best evaluation value for encoding the target image of encoding, and stores the selected procedure for generating a predicted value and one or more other procedures for generating a predicted value in the parent population; and a fourth device that controls repetition of the processes carried out by the second and third devices until a predetermined condition is satisfied, and outputs, as the final procedure for generating a predicted value, the procedure for generating a predicted value having the best evaluation value as a result of the repetition.

8. The image encoding device according to claim 7, wherein the second device of said device for automatic formation of a procedure of generating a predicted pixel value forms the procedures for generating a predicted value as descendants on the basis of a predetermined tree-structure development method which carries out the development, where a function that outputs the coordinates of a pixel in the image may be a leaf node of the tree.

9. The image encoding device according to claim 7, wherein the first device of said device for automatic formation of a procedure of generating a predicted pixel value generates the parent population so that an existing function for generating a predicted value is included in the parent population.

10. An image decoding device for decoding encoded data of an image encoded using a predicted pixel value generated by a predetermined procedure for generating a predicted value which predicts the value of a target pixel of encoding using previously decoded pixels, the device comprising: a first device that receives and decodes encoded data of a partial image of the target image of decoding, which has a predefined size and was encoded using an existing procedure for generating a predicted pixel value that is not formed on the basis of a tree-structure development method, where the encoded data are generated on the encoding side; a second device that forms a procedure for generating a predicted value having the best evaluation value for encoding, applied to the partial image of the target image of decoding received and decoded by the first device, through the operation of the device for automatic formation of a procedure of generating a predicted pixel value of claim 8, which evaluates the information content for representing the tree structure as equal to zero; a third device that generates a predicted value of each pixel included in the remaining partial image of the target image of decoding, which is not decoded by the first device, based on the procedure for generating a predicted value formed by the second device; a fourth device that decodes encoded data of a prediction residual signal for each pixel, said signal having been calculated using the predicted pixel value generated by the procedure for generating a predicted value formed by the second device, where the encoded data are generated on the encoding side; and a fifth device that reproduces the remaining partial image of the target image of decoding, which is not decoded by the first device, based on the predicted pixel value generated by the third device and the prediction residual signal decoded by the fourth device; wherein said device for automatic formation of a procedure of generating a predicted pixel value includes: a first device that generates a parent population by randomly generating procedures for generating a predicted value, each of which is specified by a tree structure; a second device that selects a set of procedures for generating a predicted value as parents from the parent population and produces one or more procedures for generating a predicted value as descendants on the basis of a predetermined tree-structure development method which develops the selected procedures for generating a predicted value, where an existing function for generating a predicted value may be a leaf node of the tree; a third device that selects, from among the procedures for generating a predicted value as the parents and the descendants, the procedure for generating a predicted value having the minimum evaluation value, where the total of the information content for representing the tree structure and the code volume estimated from the predicted pixel values obtained through the tree structure is used as the evaluation value, so that the selected procedure for generating a predicted value is the one having the best evaluation value for encoding the target image of encoding, and stores the selected procedure for generating a predicted value and one or more other procedures for generating a predicted value in the parent population; and a fourth device that controls repetition of the processes carried out by the second and third devices until a predetermined condition is satisfied, and outputs, as the final procedure for generating a predicted value, the procedure for generating a predicted value having the best evaluation value as a result of the repetition.

11. The image decoding device according to claim 10, wherein the second device of said device for automatic formation of a procedure of generating a predicted pixel value forms the procedures for generating a predicted value as descendants on the basis of a predetermined tree-structure development method which carries out the development, where a function that outputs the coordinates of a pixel in the image may be a leaf node of the tree.

12. The image decoding device according to claim 10, wherein the first device of said device for automatic formation of a procedure of generating a predicted pixel value generates the parent population so that an existing function for generating a predicted value is included in the parent population.



 
