Method and apparatus for image encoding and decoding using intraframe prediction

FIELD: physics, computer engineering.

SUBSTANCE: the invention relates to computer engineering. An apparatus for encoding an image using intraframe prediction comprises a unit for determining an intraframe prediction mode, which determines the intraframe prediction mode of the current block to be encoded, wherein the intraframe prediction mode indicates a particular direction from among a plurality of directions, the particular direction being indicated either by a number dx in the horizontal direction together with a fixed number in the vertical direction, or by a number dy in the vertical direction together with a fixed number in the horizontal direction; and a unit for performing intraframe prediction, which performs intraframe prediction on the current block in accordance with the intraframe prediction mode, wherein the intraframe prediction includes a step of determining the position of adjacent pixels through a shift operation based on the position of the current pixel and one of the parameters dx and dy indicating the particular direction, the adjacent pixels being located on the left side or on the upper side of the current block.

EFFECT: high efficiency of compressing images through the use of intraframe prediction modes having different directions.

9 cl, 21 dwg, 4 tbl

 

The technical field to which the invention relates

[1] Illustrative embodiments of the present disclosure relate to encoding and decoding an image, and more particularly, to a method and apparatus for encoding and decoding an image using intraframe prediction, which can improve image compression efficiency through the use of intra-frame prediction modes having various directions.

Background art

[2] To encode an image in accordance with an image compression standard, such as the Moving Picture Experts Group (MPEG)-1, MPEG-2 or MPEG-4 standard, or the H.264/MPEG-4 Advanced Video Coding (AVC) standard, the image is divided into macroblocks. Each of the macroblocks is encoded using any of the inter-frame prediction and intra-frame prediction encoding modes, an encoding mode being selected in accordance with the bit rate required to encode the macroblock and the acceptable distortion between the original macroblock and the reconstructed macroblock; the macroblock is then encoded using the selected encoding mode.

[3] With the development of hardware for reproducing and storing high-resolution, high-quality video content, the need increases for a video codec that effectively encodes or decodes high-resolution, high-quality video content.

Disclosure of the invention

Technical problem

[4] In a conventional video codec, a video signal is encoded using a limited set of encoding modes based on a macroblock having a predetermined size.

The solution to the problem

[5] Illustrative embodiments provide a method and apparatus for encoding and decoding an image using intraframe prediction through the use of intra-frame prediction modes having various directions.

[6] Illustrative embodiments also provide a method and apparatus for encoding and decoding an image using intraframe prediction, which can reduce the amount of calculation performed in the course of intraframe prediction.

Advantageous effects of invention

[7] Since intra-frame prediction is performed in various directions, image compression efficiency can be improved.

[8] The amount of calculation performed to determine a reference pixel in the course of intraframe prediction can be reduced.

Brief description of the drawings

[9] Fig.1 depicts a block diagram illustrating an apparatus for encoding an image, in accordance with an illustrative embodiment;

[10] Fig.2 depicts a table illustrating the number of intra-frame prediction modes according to the size of the current block, in accordance with an illustrative embodiment;

[11] Fig.3 depicts a table for explaining intra-frame prediction modes applied to a block having a predetermined size, in accordance with an illustrative embodiment;

[12] Fig.4 depicts a graphical representation illustrating the directions of the intra-frame prediction modes shown in Fig.3, in accordance with an illustrative embodiment;

[13] Fig.5 depicts a graphical representation for explaining the method of intra-frame prediction performed on the block illustrated in Fig.3, in accordance with an illustrative embodiment;

[14] Fig.6 depicts a graphical representation for explaining intra-frame prediction modes applied to a block having a predetermined size, in accordance with another illustrative embodiment;

[15] Fig.7 depicts an illustrative graphical view for explaining intra-frame prediction modes having various directions, in accordance with an illustrative embodiment;

[16] Fig.8 depicts an illustrative graphical representation for explaining the process of forming a predictor (extrapolation function) in a situation where an extended line having a predetermined angle of inclination passes between, and not through, actual positions of neighboring pixels, in accordance with an illustrative embodiment;

[17] Fig.9 depicts an illustrative graphical representation for explaining the process of forming a predictor (extrapolation function) in a situation where an extended line having a predetermined angle of inclination passes between the actual positions of neighboring pixels, in accordance with another illustrative embodiment;

[18] Fig.10 depicts an illustrative graphical representation for explaining the bidirectional prediction mode, in accordance with an illustrative embodiment;

[19] Fig.11 depicts a graphical representation for explaining the process of generating a prediction value of the intra-frame prediction mode of the current block, in accordance with an illustrative embodiment;

[20] Fig.12 and 13 depict illustrative graphical representations for explaining the mapping process for harmonizing intra-frame prediction modes of blocks having different sizes, in accordance with illustrative embodiments;

[21] Fig.14 depicts an illustrative graphical representation for explaining the process of mapping the intra-frame prediction mode of a neighboring block to one of the representative intra-frame prediction modes, in accordance with an illustrative embodiment;

[22] Fig.15 depicts a graphical representation for explaining the relationship between the current pixel and neighboring pixels located on an extended line with a direction (dx, dy), in accordance with an illustrative embodiment;

[23] Fig.16 depicts a graphical representation for explaining the change of the neighboring pixel located on an extended line with a direction (dx, dy) according to the position of the current pixel, in accordance with an illustrative embodiment;

[24] Fig.17 and 18 depict graphical representations for explaining the method of determining the direction of an intra-frame prediction mode, in accordance with illustrative embodiments;

[25] Fig.19 depicts a flowchart illustrating a method of encoding an image using intraframe prediction, in accordance with an illustrative embodiment;

[26] Fig.20 depicts a block diagram illustrating an apparatus for decoding an image, in accordance with an illustrative embodiment; and

[27] Fig.21 depicts a flowchart illustrating a method of decoding an image using intraframe prediction, in accordance with an illustrative embodiment.

A preferred embodiment of the invention

[28] According to an aspect of an illustrative embodiment, there is provided a method of encoding an image using intraframe prediction, comprising the steps of: dividing a current frame of the image into at least one block having a predetermined size; determining a pixel of a neighboring block, from among pixels of the neighboring block that were reconstructed before the pixel of the at least one block, by using an extended line having a predetermined angle of inclination about the pixel of the at least one block; and predicting the pixel of the at least one block by using the determined pixel of the neighboring block.

[29] In accordance with another aspect of an illustrative embodiment, there is provided a method of decoding an image using intraframe prediction, comprising the steps of: dividing a current frame of the image into at least one block having a predetermined size; extracting from a bitstream information about an intra-frame prediction mode, which indicates the intraframe prediction mode applied to the at least one block; and performing intra-frame prediction on the at least one block in accordance with the intraframe prediction mode indicated by the extracted information about the intra-frame prediction mode, wherein in the intraframe prediction mode a pixel of a neighboring block predicts a pixel of the at least one block, the pixel of the neighboring block being determined, from among pixels of the neighboring block that were reconstructed before the pixel of the at least one block, by using an extended line having a predetermined angle of inclination about the pixel of the at least one block.

[30] In accordance with another aspect of an illustrative embodiment, there is provided an apparatus for encoding an image using intraframe prediction, comprising an intra-frame prediction unit, which determines a pixel of a neighboring block, from among neighboring pixels that were reconstructed before the current pixel of the image block, by using an extended line having a predetermined angle of inclination about the pixel of the current block, and predicts the pixel of the current block by using the determined pixel of the neighboring block.

[31] In accordance with another aspect of an illustrative embodiment, there is provided an apparatus for decoding an image using intraframe prediction, comprising an intra-frame prediction unit, which extracts from a bitstream information about an intra-frame prediction mode, which indicates the intraframe prediction mode applied to the current block of the image, and performs intra-frame prediction on the current block in accordance with the intraframe prediction mode indicated by the extracted information about the intra-frame prediction mode, wherein in the intraframe prediction mode a pixel of a neighboring block predicts the pixel of the current block, the pixel of the neighboring block being determined, from among pixels of the neighboring block that were reconstructed before the pixel of the current block, by using an extended line having a predetermined angle of inclination about the pixel of the current block.

The implementation of the invention

[32] Hereinafter, illustrative embodiments will be described in more detail with reference to the accompanying drawings, in which the illustrative embodiments are shown.

[33] Fig.1 depicts a block diagram illustrating a device 100 for encoding an image, in accordance with an illustrative embodiment.

[34] As shown in Fig.1, the device 100 includes an intra-frame prediction unit 110, a motion estimation unit 120, a motion compensation unit 125, a frequency conversion unit 130, a quantization unit 140, an entropy encoder 150, an inverse quantization unit 160, an inverse frequency conversion unit 170, a deblocking unit 180, and a spatial filtering unit 190.

[35] The motion estimation unit 120 and the motion compensation unit 125 perform interframe prediction, which divides the current frame 105 of the current image into blocks, each having a predetermined size, and searches a reference frame for a prediction value of each block.

[36] The intraframe prediction unit 110 performs intra-frame prediction, which searches for a prediction value of the current block by using pixels of neighboring blocks of the current frame of the image. In particular, in addition to the traditional intraframe prediction modes, the intraframe prediction unit 110 additionally performs intra-frame prediction in modes having various directions defined through the use of the parameters (dx, dy). The added intraframe prediction modes, in accordance with this illustrative embodiment, will be explained later.

[37] Residual values of the current block are generated based on the prediction value output from the intra-frame prediction unit 110 or the motion compensation unit 125, and are output as quantized transform coefficients by the frequency conversion unit 130 and the quantization unit 140.

[38] The quantized transform coefficients are restored to residual values by the inverse quantization unit 160 and the inverse frequency conversion unit 170, after which the restored residual values are processed by the deblocking unit 180 and the spatial filtering unit 190 and output as a reference frame 195. The quantized transform coefficients can also be output as a bitstream 155 by the entropy encoder 150.

[39] Next, the intra-frame prediction performed by the intra-frame prediction unit 110 illustrated in Fig.1 will be explained in detail. The intra-frame prediction method for improving image compression efficiency will be explained using the example of a codec that can perform compression encoding through the use of a block having a size larger or smaller than 16×16, rather than the example of a traditional codec, such as H.264, which performs encoding on the basis of a macroblock having a size of 16×16.

[40] Fig.2 depicts a table illustrating the number of intra-frame prediction modes according to the size of the current block, in accordance with an illustrative embodiment.

[41] The number of intra-frame prediction modes applied to a block may vary depending on the block size. For example, as shown in Fig.2, when the size of the block to which intra-frame prediction is to be applied is N×N, the number of intraframe prediction modes actually performed for each of the blocks having the respective sizes 2×2, 4×4, 8×8, 16×16, 32×32, 64×64 and 128×128 can be set to 5, 9, 9, 17, 33, 5 and 5 (in the example of Fig.2). The number of actually performed intraframe prediction modes varies depending on the block size because the overhead for transmitting information about the prediction encoding mode varies depending on the block size. In other words, when the block is small, despite the fact that the block occupies a small part of the whole image, the overhead for transmitting additional information, such as the prediction mode of the small block, can be large. Accordingly, if a small block is encoded using too many prediction modes, the bit rate can increase, reducing compression efficiency. In addition, since a large block, for example a block larger than 64×64, is often chosen for flat areas of the image, encoding a large block using too many prediction modes may also reduce compression efficiency.

[42] Accordingly, as shown in Fig.2, if block sizes are coarsely classified into at least three classes, N1×N1 (2≤N1≤8, N1 is an integer), N2×N2 (16≤N2≤32, N2 is an integer) and N3×N3 (64≤N3, N3 is an integer), and the number of intra-frame prediction modes to be performed on a block of size N1×N1 is A1 (A1 is a positive integer), the number on a block of size N2×N2 is A2 (A2 is a positive integer), and the number on a block of size N3×N3 is A3 (A3 is a positive integer), then it is preferable that the number of intra-frame prediction modes performed for each block size be set such that A3≤A1≤A2 is satisfied. That is, if the current image is coarsely divided into small blocks, medium blocks and large blocks, it is preferable that a medium block have the largest number of prediction modes, and that a small block and a large block have a relatively small number of prediction modes. However, this illustrative embodiment is not limited thereto, and a small block and a large block may also have a large number of prediction modes. The numbers of prediction modes per block size illustrated in Fig.2 are illustrative and may be changed.
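
The coarse size classification described above can be sketched as a small lookup. This is an illustrative sketch only: the function name is hypothetical, and the per-size counts are taken from the Fig.2 example in the text, not from the claims.

```python
def num_intra_modes(n):
    """Illustrative mapping from block size n x n to the number of
    intra-frame prediction modes, using the example counts from Fig.2:
    2x2 -> 5, 4x4 -> 9, 8x8 -> 9, 16x16 -> 17, 32x32 -> 33,
    64x64 and larger -> 5."""
    if 2 <= n <= 8:          # small blocks, class N1 x N1
        return 9 if n in (4, 8) else 5
    if 16 <= n <= 32:        # medium blocks, class N2 x N2
        return 17 if n == 16 else 33
    if n >= 64:              # large blocks, class N3 x N3
        return 5
    raise ValueError("unsupported block size")
```

With these values the preferred ordering A3 ≤ A1 ≤ A2 from the text holds: large blocks get the fewest modes, medium blocks the most.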

[43] Fig.3 depicts a table for explaining intra-frame prediction modes applied to a block having a predetermined size, in accordance with an illustrative embodiment.

[44] As shown in Fig.2 and 3, when intra-frame prediction is performed on a block having a size of 4×4, the block may have a vertical prediction mode (mode 0), a horizontal prediction mode (mode 1), a direct-current (DC) prediction mode (mode 2), a diagonal down-left prediction mode (mode 3), a diagonal down-right prediction mode (mode 4), a vertical-right prediction mode (mode 5), a horizontal-down prediction mode (mode 6), a vertical-left prediction mode (mode 7), and a horizontal-up prediction mode (mode 8).

[45] Fig.4 depicts a graphical representation illustrating the directions of the intra-frame prediction modes shown in Fig.3, in accordance with an illustrative embodiment. In Fig.4, the number at the end of each arrow indicates the corresponding mode value when prediction is performed in the direction indicated by the arrow. Mode 2 is the DC prediction mode, has no direction, and is therefore not shown.

[46] Fig.5 depicts a graphical representation for explaining the method of intra-frame prediction performed on the block illustrated in Fig.3, in accordance with an illustrative embodiment.

[47] As shown in Fig.5, a prediction block is generated by using the neighboring pixels A-M of the current block, which are available in the intra-frame prediction mode determined by the block size. For example, an encoding operation using a prediction of the current 4×4 block in mode 0 of Fig.3, i.e., the vertical prediction mode, will be explained. First, the values of the pixels A to D adjacent above the current 4×4 block are predicted to be the values of the pixels of the current 4×4 block. That is, the value of pixel A is predicted to be the value of the four pixels included in the first column of the current 4×4 block, the value of pixel B is predicted to be the value of the four pixels included in the second column, the value of pixel C is predicted to be the value of the four pixels included in the third column, and the value of pixel D is predicted to be the value of the four pixels included in the fourth column. Then the residual between the actual values of the pixels included in the original current 4×4 block and the values of the pixels of the current 4×4 block predicted by using pixels A to D is obtained and encoded.
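
The vertical-mode operation just described can be sketched as follows. The helper names are hypothetical and the sketch assumes 0-based row/column indexing; it is an illustration of mode 0, not code from the patent.

```python
def predict_vertical_4x4(top):
    """Mode 0 (vertical) prediction for a 4x4 block: each column is
    filled with the value of the reconstructed neighboring pixel
    directly above it. `top` holds the four pixels A, B, C, D."""
    assert len(top) == 4
    return [list(top) for _ in range(4)]  # 4 identical rows: A B C D

def residual_4x4(block, pred):
    """Residual to be transformed and encoded: original minus prediction."""
    return [[block[y][x] - pred[y][x] for x in range(4)] for y in range(4)]
```

For example, with top neighbors A-D equal to (10, 20, 30, 40), every row of the prediction block is (10, 20, 30, 40), and a block identical to its prediction yields an all-zero residual.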

[48] Fig.6 depicts a graphical representation for explaining intra-frame prediction modes applied to a block having a predetermined size, in accordance with another illustrative embodiment.

[49] As shown in Fig.2 and 6, when intra-frame prediction is performed on a block having a size of 2×2 or 128×128, the block may have 5 modes: a vertical prediction mode, a horizontal prediction mode, a DC prediction mode, a planar prediction mode, and a diagonal down-right prediction mode.

[50] Meanwhile, if, as shown in Fig.2, a block having a size of 32×32 includes 33 intra-frame prediction modes, it is necessary to set the directions of the 33 intra-frame prediction modes. For setting the intraframe prediction modes having various directions, other than the intra-frame prediction modes illustrated in Fig.4 and 6, the prediction direction for selecting the neighboring pixel used as a reference pixel about a pixel of the block is set by the parameters (dx, dy). For example, if each of the 33 prediction modes is denoted mode N (N is an integer from 0 to 32), mode 0 can be set as a vertical prediction mode, mode 1 as a horizontal prediction mode, mode 2 as a DC prediction mode, mode 3 as a planar prediction mode, and each of modes 4-32 can be defined as a prediction mode having a direction tan⁻¹(dy/dx) represented by one of the pairs (dx, dy) expressed as (1, -1), (1, 1), (1, 2), (2, 1), (1, -2), (2, -1), (2, -11), (5, -7), (10, -7), (11, 3), (4, 3), (1, 11), (1, -1), (12, -3), (1, -11), (1, -7), (3, -10), (5, -6), (7, -6), (7, -4), (11, 1), (6, 1), (8, 3), (5, 3), (5, 7), (2, 7), (5, -7) and (4, -3), as shown in Table 1.

[51] Table 1

Mode #    dx    dy        Mode #    dx    dy
Mode 4     1    -1        Mode 18    1   -11
Mode 5     1     1        Mode 19    1    -7
Mode 6     1     2        Mode 20    3   -10
Mode 7     2     1        Mode 21    5    -6
Mode 8     1    -2        Mode 22    7    -6
Mode 9     2    -1        Mode 23    7    -4
Mode 10    2   -11        Mode 24   11     1
Mode 11    5    -7        Mode 25    6     1
Mode 12   10    -7        Mode 26    8     3
Mode 13   11     3        Mode 27    5     3
Mode 14    4     3        Mode 28    5     7
Mode 15    1    11        Mode 29    2     7
Mode 16    1    -1        Mode 30    5    -7
Mode 17   12    -3        Mode 31    4    -3

Mode 0 is a vertical prediction mode, mode 1 is a horizontal prediction mode, mode 2 is a DC prediction mode, mode 3 is a planar prediction mode, and mode 32 is a bidirectional prediction mode.
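
Table 1 can be represented in code as a lookup from mode number to the (dx, dy) pair; the direction of the mode is then tan⁻¹(dy/dx). The dictionary below is a sketch that simply transcribes the tabulated values; the names are hypothetical.

```python
import math

# Directional modes 4..31 and their (dx, dy) parameters from Table 1.
DX_DY = {
     4: (1, -1),   5: (1, 1),    6: (1, 2),    7: (2, 1),
     8: (1, -2),   9: (2, -1),  10: (2, -11), 11: (5, -7),
    12: (10, -7), 13: (11, 3),  14: (4, 3),   15: (1, 11),
    16: (1, -1),  17: (12, -3), 18: (1, -11), 19: (1, -7),
    20: (3, -10), 21: (5, -6),  22: (7, -6),  23: (7, -4),
    24: (11, 1),  25: (6, 1),   26: (8, 3),   27: (5, 3),
    28: (5, 7),   29: (2, 7),   30: (5, -7),  31: (4, -3),
}

def mode_angle_deg(mode):
    """Direction of a mode in degrees, i.e. tan^-1(dy/dx)."""
    dx, dy = DX_DY[mode]
    return math.degrees(math.atan2(dy, dx))
```

For example, mode 5 with (dx, dy) = (1, 1) corresponds to a 45-degree direction.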

[52] The last mode, mode 32, can be set as a bidirectional prediction mode using bidirectional interpolation, as will be described later with reference to Fig.10.

[53] Fig.7 depicts an illustrative graphical view for explaining intra-frame prediction modes having various directions, in accordance with an illustrative embodiment.

[54] As described with reference to Table 1, the intra-frame prediction modes may have various directions tan⁻¹(dy/dx) through the use of the multiple parameters (dx, dy).

[55] As shown in Fig.7, the neighboring pixels A and B located on an extended line 700 having an angle tan⁻¹(dy/dx), determined according to the parameters (dx, dy) of each mode of Table 1, about the current pixel P to be predicted in the current block, can be used as the predictor (extrapolation function) for the current pixel P. In this case, it is preferable that the neighboring pixels A and B used as the predictor be pixels of neighboring blocks located on the upper, left, upper-right and lower-left sides of the current block, which have been previously encoded and reconstructed. In addition, if the extended line 700 passes between, and not through, actual positions of neighboring pixels, then the neighboring pixel closer to the current pixel P from among the neighboring pixels close to the extended line 700 can be used as the predictor, or prediction can be performed by using the neighboring pixels close to the extended line 700. For example, the average value of the neighboring pixels close to the extended line 700, or a weighted average value considering the distances between the intersection of the extended line 700 and the neighboring pixels close to the extended line 700, can be used as the predictor for the current pixel P. In addition, as shown in Fig.7, which neighboring pixels, for example the neighboring pixels A and B, will be used as the predictor for the current pixel P can be determined, from among the neighboring pixels on the X axis and the neighboring pixels on the Y axis, in accordance with the prediction direction.
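
The geometry of following the extended line back to a reconstructed neighbor can be sketched as below. This is a simplified floating-point illustration, not the patented shift-based procedure: the coordinate convention (pixel (x, y) inside the block, neighbors in row -1 above and column -1 to the left) and the sign handling are assumptions made for the sketch.

```python
def neighbor_position(x, y, dx, dy):
    """Follow the extended line with direction (dx, dy) from the current
    pixel (x, y) back to the previously reconstructed neighbors: the row
    above the block (row -1) or the column to its left (column -1).
    Returns ('up', column) or ('left', row).  Assumes dx != 0, as in
    every (dx, dy) pair of Table 1."""
    if dy != 0:
        # horizontal coordinate where the line meets row -1
        xi = x + dx * (-1 - y) / dy
        if xi >= -1:
            return ('up', int(xi))  # truncation stands in for rounding
    # otherwise the line meets column -1 first
    yj = y + dy * (-1 - x) / dx
    return ('left', int(yj))
```

For example, for mode 4 with (dx, dy) = (1, -1), the line from pixel (2, 2) reaches the upper neighbor row at column 5.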

[56] Fig.8 depicts an illustrative graphical representation for explaining the process of forming a predictor (extrapolation function) in a situation where an extended line 800 having a predetermined angle passes between, and not through, actual positions of neighboring pixels, in accordance with an illustrative embodiment.

[57] As shown in Fig.8, if the extended line 800 having an angle tan⁻¹(dy/dx), determined according to the parameters (dx, dy) of each mode, passes between the actual positions of the neighboring pixel A 810 and the neighboring pixel B 820, then a weighted average value considering the distances between the intersection of the extended line 800 and the neighboring pixels A 810 and B 820 located close to the extended line 800 can be used as the predictor for the current pixel P, as described above. For example, when the distance between the intersection of the extended line 800 having the angle tan⁻¹(dy/dx) and the neighboring pixel A 810 is f, and the distance between the intersection of the extended line 800 and the neighboring pixel B 820 is g, the predictor for the current pixel P can be obtained by computing (A*g+B*f)/(f+g). In this case, it is preferable that f and g be normalized distances expressed as integers. In a software or hardware implementation, the predictor for the current pixel P can be obtained by the shift operation (g*A+f*B+2)>>2. As shown in Fig.8, if the extended line 800 passes through the first quarter, closest to the neighboring pixel A 810, of the four parts obtained by dividing the segment between the actual positions of the neighboring pixel A 810 and the neighboring pixel B 820 into four parts, the predictor for the current pixel P can be obtained by computing (3*A+B)/4. This operation can be performed by a shift operation with rounding to the nearest integer, such as (3*A+B+2)>>2.
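
The division-free form of this two-tap interpolation can be sketched directly. The function name is hypothetical; the sketch assumes, as the text suggests, that the distances f and g have been normalized so that f + g = 4.

```python
def weighted_predictor(A, B, f, g):
    """Predictor for the current pixel P when the extended line crosses
    between neighbors A and B.  f is the distance from the intersection
    to A, g the distance to B; the nearer pixel gets the larger weight.
    Assumes f + g is normalized to 4, so the shift form from the text,
    (g*A + f*B + 2) >> 2, applies (the +2 rounds to nearest)."""
    assert f + g == 4, "sketch assumes distances normalized so f + g = 4"
    return (g * A + f * B + 2) >> 2
```

For example, when the line crosses the quarter nearest A (f = 1, g = 3), this reduces to (3*A + B + 2) >> 2, exactly the expression in the text.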

[58] Fig.9 depicts an illustrative graphical representation for explaining the process of forming a predictor in a situation where an extended line having a predetermined angle of inclination passes between the actual positions of neighboring pixels, in accordance with another illustrative embodiment.

[59] As shown in Fig.9, if the extended line having an angle tan⁻¹(dy/dx), determined according to the parameters (dx, dy) of each mode, passes between the actual positions of the neighboring pixel A 910 and the neighboring pixel B 920, the segment between the neighboring pixel 910 and the neighboring pixel 920 may be divided into a predetermined number of regions, and a weighted average value considering the distances between the intersection and the neighboring pixel A 910 and the neighboring pixel B 920 in each divided region may be used as the prediction value. For example, as shown in Fig.9, the segment between the neighboring pixel 910 and the neighboring pixel 920 can be divided into five fragments P1-P5, a representative weighted average value considering the distances between the intersection and the neighboring pixel A 910 and the neighboring pixel B 920 can be determined for each fragment, and this representative weighted average value can be used as the predictor for the current pixel P. In more detail, if the extended line passes through fragment P1, the value of the neighboring pixel A 910 may be determined as the predictor for the current pixel P. If the extended line passes through fragment P2, the weighted average (3*A+1*B+2)>>2, considering the distance between the neighboring pixel A 910 and the neighboring pixel B 920 and the midpoint of fragment P2, may be determined as the predictor for the current pixel P. If the extended line passes through fragment P3, the weighted average (2*A+2*B+2)>>2, considering the distance between the neighboring pixel A 910 and the neighboring pixel B 920 and the midpoint of fragment P3, may be determined as the predictor for the current pixel P. If the extended line passes through fragment P4, the weighted average (1*A+3*B+2)>>2, considering the distance between the neighboring pixel A 910 and the neighboring pixel B 920 and the midpoint of fragment P4, may be determined as the predictor for the current pixel P.
If the extended line passes through fragment P5, the value of the neighboring pixel B 920 may be determined as the predictor for the current pixel P.
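
The five-fragment rule just described can be sketched as one weight table. The names are hypothetical; note that encoding P1 as weights (4, 0) and P5 as (0, 4) still yields exactly A and B under the (…+2)>>2 rounding, so a single expression covers all five cases.

```python
# Weights (wA, wB) for fragments P1..P5; predictor = (wA*A + wB*B + 2) >> 2.
REGION_WEIGHTS = {1: (4, 0), 2: (3, 1), 3: (2, 2), 4: (1, 3), 5: (0, 4)}

def region_predictor(A, B, fragment):
    """Representative weighted average for the fragment the extended
    line passes through, per the five-fragment scheme above."""
    wA, wB = REGION_WEIGHTS[fragment]
    return (wA * A + wB * B + 2) >> 2
```

For example, fragment P1 returns A itself, fragment P5 returns B, and fragment P3 returns the rounded midpoint (2*A + 2*B + 2) >> 2.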

[60] In addition, if two neighboring pixels, i.e., the neighboring pixel A located on the upper side and the neighboring pixel B located on the left side, intersect the extended line 700, as shown in Fig.7, the average value of the neighboring pixel A and the neighboring pixel B can be used as the predictor for the current pixel P. Alternatively, if the value of (dx*dy) is positive, the neighboring pixel A located on the upper side can be used, and if the value of (dx*dy) is negative, the neighboring pixel B located on the left side can be used.

[61] It is preferable that the intraframe prediction modes having various directions, as shown in Table 1, be preset at the encoding end and the decoding end, and that only the index of the intraframe prediction mode set for each block be transmitted.

[62] Fig.10 depicts an illustrative graphical representation for explaining the bidirectional prediction mode, in accordance with an illustrative embodiment.

[63] As shown in Fig.10, in the bidirectional prediction mode a geometric average value is calculated considering the distances to the upper, lower, left and right boundaries of the current pixel P and the pixels located on the boundaries about the current pixel P to be predicted in the current block, and the calculation result is used as the predictor for the current pixel P. That is, in the bidirectional prediction mode, the geometric average value considering the distances to the upper, lower, left and right boundaries of the current pixel P and the pixel A 1061, the pixel B 1002, the pixel D 1006 and the pixel E 1007, located on the lower, right, upper and left boundaries of the current pixel P, may be used as the predictor for the current pixel P 1060. In this case, since the bidirectional prediction mode is one of the intra-frame prediction modes, the neighboring pixels located on the upper and left sides, which have been previously encoded and then reconstructed, should be used as reference pixels in the prediction process. Consequently, actual pixel values of the current block are not used as the pixel A 1061 and the pixel B 1002; instead, the values of virtual pixels generated by using the neighboring pixels located on the upper and left sides are used.

[64] In more detail, the virtual pixel C 1003, located at the lower right side of the current block, is calculated by using the average value of the neighboring pixel 1004 RightUpPixel, located at the rightmost position on the upper side, and the neighboring pixel 1005 LeftDownPixel, located at the lowest position on the left side, adjacent to the current block, as shown in Equation 1.

[65] the Equation 1

C=0.5(LeftDownPixel+RightUpPixel)

[66] Equation 1 can be calculated by a shift operation, such as C=(LeftDownPixel+RightUpPixel+1)>>1.

[67] When the current pixel P 1060 is extended downward, considering the distance W1 to the left boundary and the distance W2 to the right boundary of the current pixel P 1060, the value of the virtual pixel A 1061 located on the lower boundary can be determined by using the average value of the neighboring pixel 1005 LeftDownPixel, located at the lowest position on the left side, and the pixel C 1003. For example, the value of the pixel A 1061 can be calculated by using one of the expressions presented in Equation 2.

[68] Equation 2

A=(C*W1+LeftDownPixel*W2)/(W1+W2);

A=(C*W1+LeftDownPixel*W2+((W1+W2)/2))/(W1+W2)

[69] As shown in equation 2, when the value of W1+W2 is a power of 2, such as 2^n,

A=(C*W1+LeftDownPixel*W2+((W1+W2)/2))/(W1+W2) can be calculated by a shift operation as A=(C*W1+LeftDownPixel*W2+2^(n-1))>>n, without division.
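The replacement of the division by a shift in equation 2 can be sketched as follows (an illustrative Python sketch, assuming W1+W2 is a power of 2; the function name is hypothetical):

```python
def virtual_bottom_pixel(c: int, left_down_pixel: int, w1: int, w2: int) -> int:
    """A = (C*W1 + LeftDownPixel*W2 + 2^(n-1)) >> n, valid for W1 + W2 = 2^n.

    The added 2^(n-1) term rounds the weighted average to the nearest
    integer, matching the ((W1+W2)/2) rounding term of equation 2.
    """
    n = (w1 + w2).bit_length() - 1          # W1 + W2 == 2**n is assumed
    assert w1 + w2 == 1 << n, "W1 + W2 must be a power of 2"
    return (c * w1 + left_down_pixel * w2 + (1 << (n - 1))) >> n
```

For example, with C=102, LeftDownPixel=100 and W1=3, W2=5 (so W1+W2=8=2^3), the exact weighted average 100.75 is rounded to 101 using only multiplications and a shift.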

[70] Similarly, when the current pixel 1060 P is extended to the right, with h1 being the distance from the current pixel 1060 P to the upper border and h2 the distance to the lower border, the value of the virtual pixel 1002 B located on the right boundary can be determined as the average of the neighboring pixel 1004 RightUpPixel on the upper right side and the pixel 1003 C, weighted by the distances h1 and h2. For example, the value of the pixel 1002 B can be calculated by using one of the equations presented in equation 3.

[71] Equation 3

B=(C*h1+RightUpPixel*h2)/(h1+h2);

B=(C*h1+RightUpPixel*h2+((h1+h2)/2))/(h1+h2)

[72] As shown in equation 3, when the value of h1+h2 is a power of 2, such as 2^m,

B=(C*h1+RightUpPixel*h2+((h1+h2)/2))/(h1+h2) can be calculated by a shift operation as B=(C*h1+RightUpPixel*h2+2^(m-1))>>m, without division.

[73] After the values of the virtual pixel 1061 A located on the lower boundary of the current pixel 1060 P and the virtual pixel 1002 B located on the right boundary of the current pixel 1060 P are determined through the use of equations 1-3, the predictor of the current pixel 1060 P can be determined by using the average value of A, B, D and E. In more detail, a weighted average value that takes into account the distances between the current pixel 1060 P and the virtual pixel 1061 A, the virtual pixel 1002 B, the pixel 1006 D and the pixel 1007 E, or the simple average value of A, B, D and E, can be used as the predictor of the current pixel 1060 P. For example, if the size of the block shown in Fig.10 is 16×16 and a weighted average is used, the predictor of the current pixel 1060 P can be obtained as (h1*A+h2*D+W1*B+W2*E+16)>>5. Such bidirectional prediction is applied to all pixels in the current block, and the prediction block of the current block in the bidirectional prediction mode is thereby formed.
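The weighted-average predictor for a single pixel of a 16×16 block can be sketched as follows (an illustrative Python sketch; the function name and argument order are hypothetical, while the weight expression follows the text above):

```python
def bidir_predict_pixel(a: int, b: int, d: int, e: int,
                        w1: int, w2: int, h1: int, h2: int) -> int:
    """Weighted-average predictor (h1*A + h2*D + W1*B + W2*E + 16) >> 5.

    a and b are the virtual pixels on the lower and right boundaries
    (equations 2 and 3); d and e are the neighboring pixels on the upper
    and left boundaries; w1/w2 are the distances to the left/right
    borders and h1/h2 the distances to the upper/lower borders. For a
    16x16 block w1+w2 = h1+h2 = 16, so the four weights sum to 32 and
    adding 16 before the 5-bit shift gives a rounded division by 32.
    """
    assert w1 + w2 == 16 and h1 + h2 == 16
    return (h1 * a + h2 * d + w1 * b + w2 * e + 16) >> 5
```

When all four reference values are equal, the predictor reproduces that value exactly, as expected of an average.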

[74] Since prediction encoding is performed in accordance with intra-frame prediction modes that change depending on the block size, more efficient compression may be achieved in accordance with the characteristics of the image.

[75] Meanwhile, since in accordance with this illustrative embodiment more intra-frame prediction modes are used than in traditional codecs, compatibility with traditional codecs can become problematic. Accordingly, it may be necessary to convert the available intra-frame prediction modes having different directions into one of a smaller number of intra-frame prediction modes. That is, if the number of intra-frame prediction modes available to the current block is N1 (N1 is an integer), then, to make the available intra-frame prediction modes of the current block compatible with a block that has N2 (N2 is an integer different from N1) intra-frame prediction modes, the intra-frame prediction mode of the current block can be converted to the intra-frame prediction mode having the most similar direction from among the N2 intra-frame prediction modes. For example, it is assumed that a total of 33 intra-frame prediction modes are available to the current block, as shown in table 1, and that the intra-frame prediction mode ultimately applied to the current block is mode 14, that is, (dx, dy)=(4, 3), having a direction of tan-1(3/4)=36.87 (degrees). In this case, to map the intra-frame prediction mode applied to the current block to one of the 9 intra-frame prediction modes shown in Fig.4, mode 4 (down right), whose direction is most similar to 36.87 (degrees), can be selected. That is, mode 14 presented in table 1 can be converted to mode 4 illustrated in Fig.4.
Similarly, if mode 15, that is, (dx, dy)=(1, 11), is selected as the intra-frame prediction mode applied to the current block from among the 33 available intra-frame prediction modes presented in table 1, then, since the direction of the intra-frame prediction mode applied to the current block is tan-1(11)=84.80 (degrees), mode 15 can be converted to mode 0 (vertical) depicted in Fig.4, which has the direction most similar to 84.80 (degrees).
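The mapping to the most similar of the 9 modes can be sketched as follows (an illustrative Python sketch; the angle table is a hypothetical approximation of the Fig.4 directions, with the non-directional DC mode omitted, and the function name is invented):

```python
import math

# Hypothetical direction, in degrees from the horizontal axis, for each
# directional representative mode of Fig.4 (DC, mode 2, has no direction).
REPRESENTATIVE_ANGLES = {
    0: 90.0,    # vertical
    1: 0.0,     # horizontal
    3: 135.0,   # diagonal down-left
    4: 45.0,    # diagonal down-right
    5: 67.5,    # vertical-right
    6: 22.5,    # horizontal-down
    7: 112.5,   # vertical-left
    8: 157.5,   # horizontal-up
}

def to_representative_mode(dx: int, dy: int) -> int:
    """Map a directional mode given by (dx, dy) to the representative
    mode whose direction tan^-1(dy/dx) is most similar."""
    angle = math.degrees(math.atan2(dy, dx)) % 180.0
    def angular_distance(mode: int) -> float:
        d = abs(REPRESENTATIVE_ANGLES[mode] - angle)
        return min(d, 180.0 - d)  # prediction directions wrap modulo 180
    return min(REPRESENTATIVE_ANGLES, key=angular_distance)
```

With the two examples from the text, (dx, dy)=(4, 3) at 36.87 degrees maps to mode 4 (down right), and (dx, dy)=(1, 11) at 84.80 degrees maps to mode 0 (vertical).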

[76] Meanwhile, decoding a block encoded using intra-frame prediction requires information on the intra-frame prediction mode by which the current block was encoded. Accordingly, in the process of encoding the image, information on the intra-frame prediction mode of the current block is added to the bitstream; if this information were added to the bitstream unchanged for each block, the header information would grow, resulting in reduced compression efficiency. Accordingly, instead of transmitting the information on the intra-frame prediction mode of the current block, determined as the result of encoding the current block, as-is, only the value of the difference between the value of the actual intra-frame prediction mode and the value of the intra-frame prediction mode predicted from the neighboring blocks can be transmitted.

[77] If intra-frame prediction modes having different directions are used in accordance with this illustrative embodiment, the number of available intra-frame prediction modes may vary depending on the block size. Accordingly, to predict the intra-frame prediction mode of the current block, it is necessary to convert the intra-frame prediction modes of the neighboring blocks into typical intra-frame prediction modes. In this case, it is preferable that the typical intra-frame prediction modes be a smaller number of modes than the number of intra-frame prediction modes available to the neighboring blocks, for example the 9 intra-frame prediction modes shown in Fig.14.

[78] Fig.11 depicts a graphical representation for explaining the process of generating the prediction value of the intra-frame prediction mode of the current block, in accordance with an illustrative embodiment.

[79] As shown in Fig.11, if the current block is the block 110 A, the intra-frame prediction mode of the current block 110 A can be predicted on the basis of the intra-frame prediction modes determined from the neighboring blocks. For example, if the intra-frame prediction mode determined on the basis of the left block 111 B of the current block 110 A is mode 3, and the intra-frame prediction mode determined on the basis of the upper block 112 C is mode 4, the intra-frame prediction mode of the current block 110 A can be predicted to be mode 3, which has the smaller value from among the prediction modes of the upper block 112 C and the left block 111 B. If the intra-frame prediction mode determined by the actual encoding of the current block 110 A using intra-frame prediction is mode 4, then only the difference of 1 from mode 3, i.e. from the value of the intra-frame prediction mode predicted on the basis of the neighboring blocks 111 and 112, is transmitted as information on the intra-frame prediction mode. When decoding the image, the prediction value of the intra-frame prediction mode of the current block is generated in the same fashion, the mode difference transmitted through the bitstream is added to the value of the predicted intra-frame prediction mode, and the information on the intra-frame prediction mode actually applied to the current block is thereby obtained. Although only the neighboring blocks located on the upper and left sides of the current block are used above, the intra-frame prediction mode of the current block 110 A can also be predicted through the use of other neighboring blocks, as shown in Fig.11.
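The differential signalling of the mode described above can be sketched as follows (a minimal Python sketch with hypothetical function names):

```python
def predict_intra_mode(left_mode: int, above_mode: int) -> int:
    """Predict the intra-frame prediction mode of the current block as
    the smaller of the modes of the left and upper neighboring blocks."""
    return min(left_mode, above_mode)

def encode_mode(actual_mode: int, left_mode: int, above_mode: int) -> int:
    # Only the difference from the predicted mode is written to the bitstream.
    return actual_mode - predict_intra_mode(left_mode, above_mode)

def decode_mode(mode_diff: int, left_mode: int, above_mode: int) -> int:
    # The decoder rebuilds the same prediction and adds the transmitted difference.
    return predict_intra_mode(left_mode, above_mode) + mode_diff
```

With the example from the text (left block mode 3, upper block mode 4, actual mode 4), only the difference 1 is transmitted, and the decoder recovers mode 4.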

[80] Meanwhile, since the intra-frame prediction modes actually performed vary depending on the block size, the intra-frame prediction mode predicted from the neighboring blocks may not match any intra-frame prediction mode of the current block. Accordingly, to predict the intra-frame prediction mode of the current block on the basis of neighboring blocks having different sizes, a unified process of converting the intra-frame prediction modes of blocks having different sets of intra-frame prediction modes is required.

[81] Fig.12 and 13 depict illustrative graphical representations for explaining the process of converting, for unification, the intra-frame prediction modes of blocks having different sizes, in accordance with illustrative embodiments.

[82] As shown in Fig.12, it is assumed that the current block 120 A has a size of 16×16, the left block 121 has a size of 8×8, and the upper block 122 has a size of 4×4. In addition, as shown in example 1 of Fig.2, it is assumed that the numbers of intra-frame prediction modes available to blocks having sizes of 4×4, 8×8 and 16×16 are 9, 9 and 33, respectively. In this case, since the numbers of intra-frame prediction modes available to the left block 121 and the upper block 122 differ from the number of intra-frame prediction modes available to the current block 120, an intra-frame prediction mode predicted on the basis of the left block 121 and the upper block 122 is not suitable for use as the prediction value of the intra-frame prediction mode of the current block 120 A. Thus, in Fig.12 the intra-frame prediction modes of the neighboring blocks 121 and 122 are converted respectively into the first and second typical intra-frame prediction modes having the most similar directions from among a predetermined number of typical intra-frame prediction modes, as shown in Fig.14, and the mode having the smaller mode value is selected from the first and second typical intra-frame prediction modes as the final typical intra-frame prediction mode. Then, the intra-frame prediction mode having the direction most similar to that of the final typical intra-frame prediction mode is selected, from among the intra-frame prediction modes available according to the size of the current block 120 A, as the predicted intra-frame prediction mode of the current block 120 A.

[83] In an alternative embodiment, as shown in Fig.13, it is assumed that the current block 130 A has a size of 16×16, the left block 133 B has a size of 32×32, and the upper block 132 C has a size of 8×8. In addition, as shown in example 1 of Fig.2, it is assumed that the numbers of intra-frame prediction modes available to blocks having sizes of 8×8, 16×16 and 32×32 are 9, 33 and 32, respectively. In addition, it is assumed that the intra-frame prediction mode of the left block 133 B is mode 31, and the intra-frame prediction mode of the upper block 132 C is mode 4. In this case, since the intra-frame prediction modes of the left block 133 B and the upper block 132 C are incompatible with each other, each of them is converted into one of the typical intra-frame prediction modes shown in Fig.14. Since mode 31, the intra-frame prediction mode of the left block 133 B, has a direction of (dx, dy)=(4, -3), as shown in table 1, mode 31 is converted to mode 5, which has the direction most similar to tan-1(-3/4) from among the typical intra-frame prediction modes shown in Fig.14; furthermore, since mode 4 of the upper block 132 C has the same direction as mode 4 from among the typical intra-frame prediction modes shown in Fig.14, mode 4, that is, the intra-frame prediction mode of the upper block 132 C, is converted to mode 4.

[84] Then mode 4, having the smaller mode value from among mode 5, converted from the intra-frame prediction mode of the left block 133 B, and mode 4, converted from the intra-frame prediction mode of the upper block 132 C, may be determined as the prediction value of the intra-frame prediction mode of the current block 130 A, and only the value of the difference between the actual intra-frame prediction mode and the predicted intra-frame prediction mode of the current block 130 A may be encoded as information on the prediction mode of the current block 130 A.

[85] Fig.14 depicts an illustrative graphical representation for explaining the process of converting the intra-frame prediction modes of the neighboring blocks into one of the typical intra-frame prediction modes, in accordance with an illustrative embodiment. In Fig.14, the following are set as typical intra-frame prediction modes: mode 0, vertical prediction; mode 1, horizontal prediction; mode 2, averaged (DC) prediction (not shown); mode 3, diagonal down-left prediction; mode 4, diagonal down-right prediction; mode 5, vertical-right prediction; mode 6, horizontal-down prediction; mode 7, vertical-left prediction; and mode 8, horizontal-up prediction. However, the typical intra-frame prediction modes are not limited to these and can be set to have a different number of directions.

[86] As shown in Fig.14, a predetermined number of typical intra-frame prediction modes is set in advance, and the intra-frame prediction modes of the neighboring blocks are converted into the typical intra-frame prediction mode having the most similar direction. For example, if the determined intra-frame prediction mode of a neighboring block is the intra-frame prediction mode MODE_A 140 with the direction shown, it is converted to mode 1, which has the most similar direction from among the 9 preset typical intra-frame prediction modes. If the determined intra-frame prediction mode of a neighboring block is the intra-frame prediction mode MODE_B 141 with the direction shown, it is converted to mode 5, which has the most similar direction from among the 9 preset typical intra-frame prediction modes.

[87] Also, if the intra-frame prediction modes available to the neighboring blocks are not the same, the intra-frame prediction modes of the neighboring blocks are converted to typical intra-frame prediction modes, and the mode having the smallest mode value from among the converted intra-frame prediction modes of the neighboring blocks is selected as the final typical intra-frame prediction mode. The reason the typical intra-frame prediction mode with the smaller mode value is selected is that smaller mode values are assigned to more frequently occurring intra-frame prediction modes. That is, if different intra-frame prediction modes are predicted from the neighboring blocks, the intra-frame prediction mode with the smaller mode value has the higher probability of occurrence, and it is therefore preferable, when different prediction modes are present, to select the prediction mode with the smaller mode value as the predictor of the prediction mode of the current block.

[88] Sometimes, although a typical intra-frame prediction mode is selected on the basis of the neighboring blocks, the typical intra-frame prediction mode cannot be used directly as the predictor of the intra-frame prediction mode of the current block. For example, if the current block 120 A has 33 intra-frame prediction modes and there are only 9 typical intra-frame prediction modes, as described with reference to Fig.12, an intra-frame prediction mode of the current block 120 A exactly corresponding to the typical intra-frame prediction mode may simply not exist. In this case, similarly to the conversion of the intra-frame prediction modes of the neighboring blocks into a typical intra-frame prediction mode described above, the intra-frame prediction mode with the direction most similar to that of the typical intra-frame prediction mode, selected from among the intra-frame prediction modes available according to the size of the current block, may be selected as the final predictor of the intra-frame prediction mode of the current block. For example, if the typical intra-frame prediction mode ultimately selected on the basis of the neighboring blocks shown in Fig.14 is mode 6, then the intra-frame prediction mode with the direction most similar to that of mode 6, from among the intra-frame prediction modes available according to the size of the current block, may ultimately be selected as the predictor of the intra-frame prediction mode of the current block.

[89] Meanwhile, as described above with reference to Fig.7, if the predictor of the current pixel P is formed by using neighboring pixels located on the continued line 700 or close to it, the continued line 700 actually has a direction of tan-1(dy/dx). Because the division (dy/dx) is required to calculate this direction in hardware or software, and the calculation is carried out down to decimal places, the amount of computation increases. Accordingly, when the prediction direction for selecting the neighboring pixels to be used as reference pixels for a pixel in the block is set by using the parameters (dx, dy), as described with reference to table 1, it is preferable to set the parameters dx and dy so as to reduce the amount of computation.

[90] Fig.15 depicts a graphical representation for explaining the relationship between the current pixel and the neighboring pixels located on a continued line having a direction of (dx, dy), in accordance with an illustrative embodiment.

[91] As shown in Fig.15, it is assumed that the position of the current pixel 1510 P, located at the i-th coordinate (i is an integer) relative to the upper boundary of the current block and the j-th coordinate (j is an integer) relative to the left boundary of the current block, is P(j, i), and that the upper neighboring pixel and the left neighboring pixel located on a continued line passing through the current pixel 1510 P and having a direction, i.e. an angle of inclination, of tan-1(dy/dx) are the pixel 1520 A and the pixel 1530 B, respectively. In addition, if it is assumed that the positions of the upper neighboring pixels correspond to the X axis of the coordinate plane and the positions of the left neighboring pixels correspond to the Y axis of the coordinate plane, then, by using trigonometric relations, the upper neighboring pixel 1520 A at which the continued line intersects the X axis is located at the position (j+i*dx/dy, 0), and the left neighboring pixel 1530 B at which the continued line intersects the Y axis is located at the position (0, i+j*dy/dx). Accordingly, determining the upper neighboring pixel 1520 A and the left neighboring pixel 1530 B for predicting the current pixel 1510 P requires a division such as dx/dy or dy/dx. Because division is computationally expensive, as described above, the calculation speed in software or hardware can be low.

[92] Accordingly, the value of at least one of the parameters dx and dy, which determine the direction of the prediction mode used to select the neighboring pixels for intra-frame prediction, may be set to a power of 2. That is, when n and m are integers, the parameters dx and dy can be equal to 2^n and 2^m, respectively.

[93] As shown in Fig.15, if the left neighboring pixel 1530 B is used as the predictor of the current pixel 1510 P and dx has a value of 2^n, then the value j*dy/dx, required to determine (0, i+j*dy/dx), i.e. the position of the left neighboring pixel 1530 B, becomes (j*dy)/(2^n), and division by such a power of 2 is easily performed by the shift operation (j*dy)>>n, thereby decreasing the amount of computation.

[94] Similarly, if the upper neighboring pixel 1520 A is used as the predictor of the current pixel 1510 P and dy has a value of 2^m, then the value i*dx/dy, required to determine (j+i*dx/dy, 0), i.e. the position of the upper neighboring pixel 1520 A, becomes (i*dx)/(2^m), and division by such a power of 2 is easily performed by the shift operation (i*dx)>>m.
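The two shift operations of paragraphs [93] and [94] can be sketched together as follows (an illustrative Python sketch, assuming dx = 2^n and dy = 2^m; the function name is hypothetical):

```python
def neighbor_positions(i: int, j: int, dx: int, dy: int, n: int, m: int):
    """Positions of the upper neighboring pixel A and the left neighboring
    pixel B on the continued line through the current pixel P(j, i),
    assuming dx = 2**n and dy = 2**m so both divisions reduce to shifts.

    A is at (j + i*dx/dy, 0) and B is at (0, i + j*dy/dx).
    """
    assert dx == 1 << n and dy == 1 << m
    upper = (j + ((i * dx) >> m), 0)   # i*dx/dy replaced by (i*dx) >> m
    left = (0, i + ((j * dy) >> n))    # j*dy/dx replaced by (j*dy) >> n
    return upper, left
```

For example, with i=4, j=8, dx=16 (n=4) and dy=4 (m=2), the upper pixel lies at (24, 0) and the left pixel at (0, 6), computed without any division.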

[95] Fig.16 depicts a graphical representation for explaining the change of the neighboring pixel located on a continued line having a direction of (dx, dy) according to the position of the current pixel, in accordance with an illustrative embodiment.

[96] One of the upper neighboring pixel and the left neighboring pixel located on the continued line passing through the current pixel is selected as the neighboring pixel required for prediction, in accordance with the position of the current pixel and the angle of inclination of the continued line.

[97] As shown in Fig.16, if the current pixel 1610 at position P(j, i) is predicted by using a neighboring pixel located on a continued line having a certain angle of inclination, the upper pixel A is used to predict the current pixel 1610 P. If the current pixel 1620 is at position Q(b, a), the left pixel B is used to predict the current pixel 1620 Q.

[98] If only the component dy in the Y-axis direction of the parameters (dx, dy) indicating the prediction direction is a power of 2, such as 2^m, then the upper pixel A shown in Fig.16 can be determined without division through a shift operation such as (j+(i*dx)>>m, 0), but determining the left pixel B requires a division, as in (0, a+b*2^m/dx). Accordingly, to avoid division when forming the predictors for all pixels of the current block, both of the parameters dx and dy should be powers of 2.

[99] Fig.17 and 18 depict graphical representations for explaining a method of determining the directions of intra-frame prediction modes, in accordance with illustrative embodiments.

[100] In general, there are many cases where linear patterns appearing in an image or video signal are vertical or horizontal. Accordingly, when intra-frame prediction modes having different directions are defined by using the parameters (dx, dy), the encoding efficiency of the image may be improved by appropriately determining the values of the parameters dx and dy. For example, the absolute values of dx and dy are set so that the distance between prediction directions close to the horizontal or vertical direction is short, and the distance between prediction directions close to the diagonal directions is long.

[101] In more detail, as shown in Fig.17, if dy has a constant value of 2^n, then the absolute value of dx may be set so that the distance between prediction directions close to the vertical direction is short, and the distance between prediction directions closer to the horizontal direction is longer. In other words, the absolute value of dx may be set so that the distance between prediction directions close to the vertical direction is short, and the distance between prediction directions close to the diagonal directions (+45 or -45 degrees) is longer. That is, if dy has a constant value that is a power of 2, the distance can be set smaller as the absolute value of dx approaches 0, so that the distance decreases as the direction of the continued line approaches the vertical direction, and the distance can be set greater as the absolute value of dx moves away from 0, so that the distance increases as the direction of the continued line approaches the horizontal direction. For example, as shown in Fig.17, if dy has a value of 2^4, i.e. 16, then the value of dx may be set to 1, 2, 3, 4, 6, 9, 12, 16, 0, -1, -2, -3, -4, -6, -9, -12 and -16, so that the distance between continued lines close to the vertical direction is short, and the distance between continued lines close to the horizontal direction is long.

[102] Similarly, if dx has a constant value of 2^n, then the absolute value of dy may be set so that the distance between prediction directions close to the horizontal direction is short, and the distance between prediction directions closer to the vertical direction is longer. In other words, the absolute value of dy may be set so that the distance between prediction directions close to the horizontal direction is short, and the distance between prediction directions close to the diagonal directions (+45 or -45 degrees) is longer. That is, if dx has a constant value that is a power of 2, the distance can be set smaller as the absolute value of dy approaches 0, so that the distance decreases as the direction of the continued line approaches the horizontal direction, and the distance can be set greater as the absolute value of dy moves away from 0, so that the distance increases as the direction of the continued line approaches the vertical direction. For example, as shown in Fig.18, if dx has a value of 2^4, i.e. 16, then the value of dy may be set to 1, 2, 3, 4, 6, 9, 12, 16, 0, -1, -2, -3, -4, -6, -9, -12 and -16, so that the distance between continued lines close to the horizontal direction is short, and the distance between continued lines close to the vertical direction is long.

[103] In addition, if the value of one of the parameters dx and dy is constant, then the value of the other parameter can be set so as to increase in accordance with the prediction mode. In more detail, if the value of dy is constant, then the spacing between the values of dx may be set to increase by a predetermined value. For example, if the value of dy is constant and equal to 16, then the values of dx may be set so that the difference between the absolute values of adjacent dx values increases by 1, as in 0, 1, 3, 6 and 8. In addition, the angle between the horizontal direction and the vertical direction can be divided into predetermined sectors, and the increase can be set separately for each of the divided angles. For example, if the value of dy is constant, then the values of dx may be set to increase by a value "a" within the sector of less than 15 degrees, by a value "b" between 15 and 30 degrees, and by a value "c" within the sector of more than 30 degrees. In this case, to obtain the shape shown in Fig.17, the values of dx may be set so as to satisfy the relation a<b<c.
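The effect described in paragraphs [101]-[103] can be checked numerically (an illustrative Python sketch using the non-negative dx values of the Fig.17 example with dy = 16; the variable names are hypothetical):

```python
import math

DX_VALUES = [0, 1, 2, 3, 4, 6, 9, 12, 16]  # from the Fig.17 example, dy = 2^4
DY = 16

# Direction of each continued line, in degrees; dx = 0 is the vertical
# direction (90 degrees) and dx = 16 is the 45-degree diagonal.
angles = [math.degrees(math.atan2(DY, dx)) for dx in DX_VALUES]

# Angular gaps between successive directions: small (about 3.5 degrees)
# near the vertical direction, larger (about 7 to 9 degrees) toward the
# diagonal, so the directions are densest near the vertical direction.
gaps = [round(a - b, 2) for a, b in zip(angles, angles[1:])]
```

Printing the gaps shows the intended non-uniform spacing of the prediction directions.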

[104] The prediction modes described with reference to Fig.15-18 can be defined as prediction modes having a direction of tan-1(dy/dx) by using the parameters (dx, dy), as shown in tables 2-4.

[105] Table 2

dy = 32 constant: dx = -32, -26, -21, -17, -13, -9, -5, -2, 0, 2, 5, 9, 13, 17, 21, 26, 32
dx = 32 constant: dy = 26, 21, 17, 13, 9, 5, 2, 0, -2, -5, -9, -13, -17, -21, -26, -32

[106] Table 3

dy = 32 constant: dx = -32, -25, -19, -14, -10, -6, -3, -1, 0, 1, 3, 6, 10, 14, 19, 25, 32
dx = 32 constant: dy = 25, 19, 14, 10, 6, 3, 1, 0, -1, -3, -6, -10, -14, -19, -25, -32

[107] Table 4

dy = 32 constant: dx = -32, -27, -23, -19, -15, -11, -7, -3, 0, 3, 7, 11, 15, 19, 23, 27, 32
dx = 32 constant: dy = 27, 23, 19, 15, 11, 7, 3, 0, -3, -7, -11, -15, -19, -23, -27, -32

[108] As described above with reference to Fig.15, the position of the current pixel P, located at the i-th coordinate relative to the upper boundary of the current block and the j-th coordinate relative to the left boundary of the current block, is P(j, i), and the upper neighboring pixel A and the left neighboring pixel B located on the continued line passing through the current pixel P and having the angle of inclination tan-1(dy/dx) are located at (j+i*dx/dy, 0) and (0, i+j*dy/dx), respectively. Accordingly, when intra-frame prediction is performed in software or hardware, calculations such as i*dx/dy or j*dy/dx are required.

[109] If a calculation such as i*dx/dy is required, the available values of dx/dy, or of C*dx/dy obtained by multiplying by a predetermined constant C, can be stored in a table, and the positions of the neighboring pixels used for intra-frame prediction of the current pixel can be determined during the actual intra-frame prediction by using the values stored in the previously prepared table. That is, the various values of (dx, dy) defined in accordance with the prediction modes, as shown in table 1, and the available values of i*dx/dy for the values of i determined in accordance with the block size, may be stored in a table in advance and used during intra-frame prediction. In more detail, if C*dx/dy has N distinct integer values, the N distinct values of C*dx/dy can be stored as dyval_table[n] (n = 0, ..., N-1).

[110] Similarly, if a calculation such as j*dy/dx is required, the available values of dy/dx, or of C*dy/dx obtained by multiplying by a predetermined constant C, can be stored in a table in advance, and the positions of the neighboring pixels used for intra-frame prediction of the current pixel can be determined during the actual intra-frame prediction by using the values stored in the previously prepared table. That is, the various values of (dx, dy) defined in accordance with the prediction modes, as shown in table 1, and the available values of j*dy/dx for the values of j determined in accordance with the block size, may be stored in a table in advance and used during intra-frame prediction. In more detail, if C*dy/dx has N distinct integer values, the N distinct values of C*dy/dx can be stored as dxval_table[n] (n = 0, ..., N-1).

[111] Thus, once the values of C*dx/dy or C*dy/dx are stored in a table, the positions of the pixels of the neighboring block used for predicting the current pixel can be determined by using the stored values corresponding to i*dx/dy and j*dy/dx, without additional computation.

[112] For example, it is assumed that, to form prediction modes of a shape similar to that shown in Fig.17, the value of dy is 32, the value of dx is one of 0, 2, 5, 9, 13, 17, 21, 26 and 32, and the constant C is 32. In this case, since C*dy/dx is 32*32/dx and, depending on the value of dx, has one of the values 0, 512, 205, 114, 79, 60, 49, 39 and 32, these values can be stored in a table and used for intra-frame prediction.
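The table of paragraph [112] can be generated as follows (an illustrative Python sketch; the name dxval_table follows paragraph [110], while DX_VALUES is a hypothetical name):

```python
# dx values from the example of paragraph [112], with dy = 32 and C = 32.
DX_VALUES = [0, 2, 5, 9, 13, 17, 21, 26, 32]

# C*dy/dx = 32*32/dx, rounded to the nearest integer; dx = 0 corresponds
# to the vertical direction and is stored as 0.
dxval_table = [0 if dx == 0 else round(32 * 32 / dx) for dx in DX_VALUES]
# dxval_table == [0, 512, 205, 114, 79, 60, 49, 39, 32]

# During prediction, j*dy/dx can then be approximated without a runtime
# division as j*(C*dy/dx)/C, i.e. (j * dxval_table[k]) >> 5 for C = 32.
```

This reproduces exactly the values 0, 512, 205, 114, 79, 60, 49, 39 and 32 listed in the text.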

[113] Fig.19 depicts a graphical representation of an algorithm illustrating a method of encoding an image using intra-frame prediction, in accordance with an illustrative embodiment.

[114] As shown in Fig.19, in step 1910 the current frame of the image is divided into at least one block having a predetermined size. As described above, the current frame of the image is not limited to macroblocks having a size of 16×16; it can also be divided into blocks having sizes of 2×2, 4×4, 8×8, 16×16, 32×32, 64×64, 128×128 or more.

[115] In step 1920, the pixel of the neighboring block used for predicting each of the pixels of the current block is determined, from among the previously reconstructed pixels of the neighboring block, through the use of a continued line having a predetermined angle of inclination. As described above, the position of the current pixel P, located at the i-th coordinate relative to the upper boundary of the current block and the j-th coordinate relative to the left boundary of the current block, is P(j, i), and the upper neighboring pixel and the left neighboring pixel located on the continued line passing through the current pixel P and having the angle of inclination tan-1(dy/dx) are located at (j+i*dx/dy, 0) and (0, i+j*dy/dx), respectively. To reduce the amount of computation of dx/dy and dy/dx required to determine the position of the neighboring pixel, it is preferable that the value of at least one of the parameters dx and dy be a power of 2. In addition, if the available values of dx/dy and dy/dx, or the values obtained by multiplying the values of dx/dy and dy/dx by a predetermined constant, are stored in a table in advance, the pixel of the neighboring block may be determined by looking up the corresponding values in the table without additional computation.

[116] In step 1930, each pixel of the current block is predicted by using the determined pixel of the neighboring block. That is, the value of the neighboring-block pixel is used as the predicted value of the current-block pixel, and the prediction block of the current block is formed by repeating the above operations for each pixel of the current block.
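The per-pixel loop of steps 1920–1930 can be sketched as below. This is a simplified illustration, not the patent's reference implementation: it assumes dy = 32, uses only the row of reconstructed pixels above the block, clamps to the last available top-reference sample when the extended line runs past it, and omits boundary availability checks:

```python
def predict_block(top_ref, size, dx, dy=32):
    """Form the prediction block by repeating the neighbor lookup per pixel.

    top_ref[k] holds reconstructed pixels of the row above the block.
    """
    shift = dy.bit_length() - 1            # dy is assumed to be a power of two
    pred = [[0] * size for _ in range(size)]
    for i in range(size):                  # vertical (row) coordinate
        for j in range(size):              # horizontal (column) coordinate
            x = j + ((i * dx) >> shift)    # upper neighbor (j + i*dx/dy, 0)
            pred[i][j] = top_ref[min(x, len(top_ref) - 1)]
    return pred

# dx = 32 with dy = 32 gives a 45-degree diagonal; dx = 0 is pure vertical:
diag = predict_block([10, 20, 30, 40, 50, 60, 70, 80], 4, 32)
vert = predict_block([10, 20, 30, 40, 50, 60, 70, 80], 4, 0)
```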

[117] Fig.20 depicts a block diagram illustrating an image decoding apparatus 2000, in accordance with an exemplary embodiment.

[118] As shown in Fig.20, the apparatus 2000 includes a parsing unit 2010, an entropy decoding unit 2020, an inverse quantization unit 2030, an inverse frequency transform unit 2040, an intra-frame prediction unit 2050, a motion compensation unit 2060, a deblocking unit 2070 and a spatial filtering unit 2080. Here, the intra-frame prediction unit 2050 corresponds to an apparatus for decoding an image using intra-frame prediction.

[119] A bitstream 2005 passes through the parsing unit 2010, and encoding information necessary to decode the image data of the current block to be decoded is extracted. The encoded image data is output as inverse-quantized data through the entropy decoding unit 2020 and the inverse quantization unit 2030, and is reconstructed as residual values through the inverse frequency transform unit 2040.

[120] The motion compensation unit 2060 and the intra-frame prediction unit 2050 form the prediction block of the current block by using the parsed encoding information of the current block. In particular, the intra-frame prediction unit 2050 determines the pixel of a neighboring block to be used to predict each pixel of the current block, from among pixels of neighboring blocks that have been previously reconstructed, by using an extended line having a predetermined angle of inclination in accordance with the intra-frame prediction mode included in the bitstream. As described above, to reduce the amount of computation of dx/dy and dy/dx required to determine the position of the neighboring pixel, it is preferable that the value of at least one of the parameters dx and dy be a power of two. In addition, the intra-frame prediction unit 2050 may store in advance, in a table, the available values of dx/dy and dy/dx, or values obtained by multiplying dx/dy and dy/dx by a predetermined constant, determine the pixel of the neighboring block by looking up the corresponding value in the table, and perform intra-frame prediction by using the determined pixel of the neighboring block.

[121] The prediction block generated in the motion compensation unit 2060 or the intra-frame prediction unit 2050 is added to the residual values to reconstruct the current frame 2095. The reconstructed current frame can be used as a reference frame 2085 for the next block after passing through the deblocking unit 2070 and the spatial filtering unit 2080.
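The reconstruction step in this paragraph is a pixel-wise sum of the prediction block and the residual values, clipped to the valid sample range. A minimal sketch (the 8-bit sample range is an assumption for illustration, not stated in the text):

```python
def reconstruct(pred, residual, bit_depth=8):
    """Add residuals to the prediction block, clipping to [0, 2**bit_depth - 1]."""
    lo, hi = 0, (1 << bit_depth) - 1
    return [[min(hi, max(lo, p + r)) for p, r in zip(prow, rrow)]
            for prow, rrow in zip(pred, residual)]

print(reconstruct([[250, 4]], [[10, -9]]))  # [[255, 0]] after clipping
```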

[122] Fig.21 depicts a flowchart illustrating a method of decoding an image using intra-frame prediction, in accordance with an exemplary embodiment.

[123] As shown in Fig.21, in step 2110 the current picture is divided into at least one block having a predetermined size.

[124] In step 2120, information about the intra-frame prediction mode applied to the current block to be decoded is extracted from the bitstream. The information about the intra-frame prediction mode may be the value of the difference between the actual intra-frame prediction mode and a predicted intra-frame prediction mode predicted from neighboring blocks of the current block, or the values of the intra-frame prediction modes having various directions determined by using the parameters (dx, dy), as described above. If the mode difference value is transmitted as the prediction mode information, the intra-frame prediction unit 2050 may predict and determine the predicted intra-frame prediction mode of the current block from the intra-frame prediction modes of neighboring blocks that have been previously decoded, and determine the intra-frame prediction mode of the current block by adding the mode difference value extracted from the bitstream to the value of the predicted intra-frame prediction mode.
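The mode-difference signalling described here can be sketched as follows. How the predicted mode is derived from the neighboring modes is not specified in this paragraph; taking the minimum of the left and upper neighboring modes, as below, is an illustrative assumption borrowed from common practice in block-based codecs, not necessarily the patent's rule:

```python
def predict_intra_mode(left_mode, upper_mode):
    """Illustrative predictor: the smaller of the two neighboring modes."""
    return min(left_mode, upper_mode)

def reconstruct_intra_mode(left_mode, upper_mode, mode_diff):
    """Actual mode = predicted mode + difference signalled in the bitstream."""
    return predict_intra_mode(left_mode, upper_mode) + mode_diff

# The encoder sends (actual - predicted); the decoder inverts that:
actual = 7
diff = actual - predict_intra_mode(3, 5)        # 7 - 3 = 4
assert reconstruct_intra_mode(3, 5, diff) == actual
```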

[125] In step 2130, the intra-frame prediction unit 2050 determines the pixel of a neighboring block to be used to predict each pixel of the current block, from among pixels of neighboring blocks that have been previously reconstructed, by using an extended line having a predetermined angle of inclination in accordance with the extracted prediction mode. As described above, when the position of the current pixel P, located at the i-th coordinate from the upper boundary of the current block and the j-th coordinate from the left boundary of the current block, is P(j, i), the upper neighboring pixel and the left neighboring pixel lying on an extended line that passes through the current pixel P at an angle of tan-1(dy/dx) are located at (j+i*dx/dy, 0) and (0, i+j*dy/dx), respectively. To reduce the amount of computation of dx/dy and dy/dx required to determine the position of the neighboring pixel, it is preferable that the value of at least one of the parameters dx and dy be a power of two. In addition, the available values of dx/dy and dy/dx, or values obtained by multiplying dx/dy and dy/dx by a predetermined constant, may be stored in a table in advance, and the pixel of the neighboring block may be determined by looking up the corresponding value in the table. The intra-frame prediction unit 2050 uses the value of the determined pixel of the neighboring block as the predicted value of the pixel of the current block, and the prediction block of the current block is formed by repeating the above operations for each pixel of the current block.

[126] The exemplary embodiments may be implemented as computer programs and may be carried out on general-purpose digital computers that execute the programs by using a computer-readable recording medium. Examples of the computer-readable recording medium include magnetic recording media (for example, ROM, floppy disks, hard disks, etc.) and optical recording media (for example, CD-ROMs or DVDs).

[127] The encoder and decoder apparatuses of the exemplary embodiments may include a bus coupled to each unit, at least one processor (for example, a CPU, microprocessor, etc.) connected to the bus for controlling the operation of the apparatuses to implement the above-described functions and execute commands, and a memory connected to the bus to store the commands and the received and generated messages.

[128] While the present invention has been particularly shown and described with reference to preferred embodiments thereof, those of ordinary skill in the art will understand that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. The preferred embodiments should be considered in a descriptive sense only and not for purposes of limitation. Therefore, the scope of the invention is defined not by the detailed description of the invention but by the appended claims, and all differences within that scope will be construed as being included in the present invention.

1. An apparatus for encoding an image using intra-frame prediction, the apparatus comprising:
an intra-frame prediction mode determining unit which determines an intra-frame prediction mode of a current block to be encoded, the intra-frame prediction mode indicating a certain direction among a plurality of directions, the certain direction being indicated by one of a number dx in the horizontal direction and a constant number in the vertical direction, and a number dy in the vertical direction and a constant number in the horizontal direction; and
an intra-frame prediction performing unit which performs intra-frame prediction on the current block in accordance with the intra-frame prediction mode,
wherein the intra-frame prediction mode determining unit and the intra-frame prediction performing unit are implemented by at least one processor,
and wherein the intra-frame prediction comprises a step in which
the position of neighboring pixels is determined by a shift operation based on the position of the current pixel and one of the parameters dx and dy indicating the certain direction, the neighboring pixels being located on the left side of the current block or on the upper side of the current block.

2. The apparatus according to claim 1, wherein the certain direction is indicated by an angle equal to tan-1(dy/dx), dx and dy are integers, and the value of at least one of the parameters dx and dy is a power of two.

3. A method of decoding an image using intra-frame prediction, the method comprising the steps of:
extracting from a bitstream an intra-frame prediction mode of a current block, the intra-frame prediction mode indicating a certain direction among a plurality of directions, the certain direction being indicated by one of a number dx in the horizontal direction and a constant number in the vertical direction, and a number dy in the vertical direction and a constant number in the horizontal direction; and
performing intra-frame prediction on the current block in accordance with the intra-frame prediction mode,
wherein the intra-frame prediction comprises a step in which
the position of neighboring pixels is determined by a shift operation based on the position of the current pixel and one of the parameters dx and dy indicating the certain direction, the neighboring pixels being located on the left side of the current block or on the upper side of the current block.

4. The method according to claim 3, wherein the certain direction is indicated by an angle equal to tan-1(dy/dx), dx and dy are integers, and the value of at least one of the parameters dx and dy is a power of two.

5. The method according to claim 3, wherein the certain direction is indicated by an angle equal to tan-1(dy/dx) (dx and dy are integers), and the absolute values of dx and dy are set so that the distance between prediction directions close to the horizontal direction or the vertical direction is short, and the distance between prediction directions close to the diagonal direction is long.

6. The method according to claim 5, wherein the certain direction is indicated by an angle equal to tan-1(dy/dx) (dx and dy are integers), dy is a constant value that is a power of two, the distance between extended lines decreases as the absolute value of dx approaches 0, so that the distance between extended lines decreases as the direction of a certain extended line, determined in accordance with the values of dx and dy, approaches the vertical direction, and the distance between extended lines increases as the absolute value of dx moves away from 0, so that the distance between extended lines increases as the direction of the certain extended line approaches the horizontal direction.

7. The method according to claim 5, wherein the certain direction is indicated by an angle equal to tan-1(dy/dx) (dx and dy are integers), dx is a constant value that is a power of two, the distance between extended lines decreases as the absolute value of dy approaches 0, so that the distance between extended lines decreases as the direction of a certain extended line, determined in accordance with the values of dx and dy, approaches the horizontal direction, and the distance between extended lines increases as the absolute value of dy moves away from 0, so that the distance between extended lines increases as the direction of the certain extended line approaches the vertical direction.

8. The method according to claim 3, wherein the certain direction is indicated by an angle equal to tan-1(dy/dx) (dx and dy are integers), a pixel located at the i-th coordinate (i is an integer) with respect to the upper boundary of the at least one block and at the j-th coordinate (j is an integer) with respect to the left boundary of the at least one block is at position (j, i), and an upper neighboring pixel and a left neighboring pixel lying on an extended line passing through the pixel (j, i) are located at (j+i*dx/dy, 0) and (0, i+j*dy/dx), respectively.

9. The method according to claim 8, wherein, for the intra-frame prediction mode, a number of available values of C*(dx/dy), where C is a constant, is stored, and the positions of the upper neighboring pixel and the left neighboring pixel are determined by using the stored values.



 
