Method and apparatus for image encoding and decoding using large transformation unit
IPC classes for Russian patent Method and apparatus for image encoding and decoding using large transformation unit (RU 2510945):
Method of encoding/decoding multi-view video sequence based on adaptive local adjustment of brightness of key frames without transmitting additional parameters (versions) / 2510944
Invention relates to computer engineering and specifically to digital signal processing techniques. Disclosed is a method for adaptive local adjustment of change in brightness of a key frame for encoding a multi-view video sequence, in which a pixel value of the current encoded unit belonging to the encoded frame and a pixel value of the reference unit belonging to the key frame are obtained. The method further involves obtaining recovered, i.e. encoded and then decoded, pixel values that are neighbouring with respect to the current unit of the encoded frame, and pixel values that are neighbouring with respect to the reference unit of the key frame. Pixels differing from the overall set of recovered pixels are then excluded from consideration according to a predetermined criterion.
Description of aggregated units of media data with backward compatibility / 2510908
Invention relates to audio-video data transmission systems based on the RTP protocol. The data transmission system and method use a mechanism to designate elements such as redundant coded images, temporal level switching points, points of access to gradual refinement of encoding, view identifiers and points of random access to a view. An intermediate unit and/or receiver can then apply this data to determine whether particular sets of coded data may be transmitted and/or processed.
Image encoding apparatus, image decoding apparatus, image encoding method and image decoding method / 2510592
Loop filter 6 includes a region classification unit 12 for extracting an estimate value of each of the regions making up a local decoded image, in order to classify each region into the class to which it belongs according to the estimate value. The loop filter 6 also includes a filter design and processing unit 13 which, for each class to which one or more regions belong, generates a Wiener filter that minimises the errors arising between the input image and the local decoded image in each of those regions, and compensates for distortion in each region using the Wiener filter.
System for combining plurality of views of real-time streaming interactive video / 2510591
Apparatus for transmitting streaming interactive video comprises a plurality of servers running one or more twitch video games to obtain a plurality of streams of uncompressed low-latency streaming interactive video, and a compression unit which compresses one or more streams of the uncompressed streaming interactive video obtained by said servers into a new stream of compressed streaming interactive video for transmission in packet form through a network connection to a plurality of client devices associated with a corresponding plurality of users. A user provides control input for at least one of the servers, wherein at least one user is geographically remote from the territory where at least one of the servers is located; and wherein the new stream of compressed streaming interactive video is compressed with a worst-case round-trip latency of 90 ms from the user control input to display of the response to that input on the user's client device, for a transmission distance of up to 2414 km.
System and method of compressing streaming interactive video / 2510590
Server centre for hosting low-latency streaming interactive video includes a plurality of servers that run one or more twitch video games or applications; an inbound routing network that receives packet streams from client devices via a first network interface and routes said packet streams to one or more of said servers. The packet streams include user control input for at least one of the one or more twitch video games or applications, wherein the one or more of said servers is operable to compute video data in response to the user control input; a compression unit connected to receive the video data from the one or more of the servers and output compressed low-latency streaming interactive video therefrom; and an output routing network that routes the compressed low-latency streaming interactive video to each client device over a corresponding communication channel via a second wireless interface connected to the Internet. The compressed low-latency streaming interactive video is compressed with a worst-case round-trip signal latency of 90 ms for a transmission distance of up to 2414 km.
Method of encoding digital video image / 2510589
Method for encoding a digital video image, in which an initial digital video image, taken in any format and having any resolution exceeding that required, is encoded. During the encoding stage, the initial digital video image is segmented into a plurality of video frames. Each video frame of the plurality of video frames is segmented into a plurality of units consisting of pixels. An encoded digital video image is formed from a sequence of video frames as follows: each subsequent frame is added to the encoded video image if said frame is entirely different from the previous video frame; if each subsequent video frame repeats the previous video frame, then instead of said subsequent video frame being added to the formed digital video image, a command is added to repeat the previous video frame; if a video frame is not entirely different, then a command is added to the formed encoded video image to repeat the previous frame taking into account the differing units; the video is encoded in such a way that the pixels in the encoded digital video image are square in shape, regardless of the extent of compression; the encoded video image is saved on at least one media server.
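The frame-repeat scheme described in this entry can be sketched as follows. This is only an illustrative reading of the claim, not the patented implementation; representing each video frame as a flat list of equal-sized pixel blocks, and the command tuples `("frame", …)`, `("repeat",)` and `("repeat_with", …)`, are assumptions made for the sketch.

```python
def encode_frames(frames):
    """Sketch of the frame-repeat encoding described above.

    frames: list of frames, each frame a list of pixel blocks (assumed
    representation). Emits, per frame:
      ("frame", frame)        -- frame entirely different from the previous one
      ("repeat",)             -- frame repeats the previous one exactly
      ("repeat_with", diffs)  -- repeat previous frame, patching differing units
    """
    encoded = []
    prev = None
    for frame in frames:
        if prev is None or all(a != b for a, b in zip(frame, prev)):
            encoded.append(("frame", frame))        # entirely different frame
        elif frame == prev:
            encoded.append(("repeat",))             # exact repeat command
        else:
            # Only some blocks differ: store (index, new block) pairs.
            diffs = [(i, b) for i, (a, b) in enumerate(zip(prev, frame)) if a != b]
            encoded.append(("repeat_with", diffs))
        prev = frame
    return encoded
```

For example, a sequence of four two-block frames where the second repeats the first and the third changes only one block produces one full frame, one repeat command, one partial-repeat command, and one more full frame.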
Synchronising remote audio with fixed video / 2510587
Multimedia device including a separating entity configured to separate a multimedia stream into audio frames and video frames, a sequencing entity configured to add a sequence number to at least one audio frame, a transceiver configured to transmit audio frames to a remote audio device and a controller connected to a video player. The controller is configured to determine a delay associated with transmitting the audio frames to the remote audio device based upon the sequence number and to control the presentation of the video frames in the video player based on the delay.
Set top box, system and method for internet protocol television (iptv) channel recording and playing / 2510152
Set top box is configured to store information of channels and playing addresses of the channels downloaded from an electronic program guide (EPG) server; when receiving a channel recording and/or channel switching command of a user, acquiring a corresponding playing address according to a channel selected by the user, and acquiring a multimedia stream of a channel program corresponding to the playing address from a multimedia server for recording or playing; and when the user needs to record and play simultaneously in the same procedure, the set top box switches the recording to be performed in a background mode, and sets the playing to be performed in a viewing mode.
Encoding device, encoding method, recording medium and program therefor and decoding device, decoding method, recording medium and program therefor / 2510151
The device comprises: a first decoding unit capable of decoding second encoded data and generating a predictive image; a first high-frequency sampling processing unit capable of high-frequency sampling of image data generated by the first decoding unit, to generate first predictive image data sampled at high frequency; a second high-frequency sampling processing unit capable of high-frequency sampling of predictive image data generated by the first decoding unit, to generate second predictive image data sampled at high frequency; a selection unit capable of selecting the first predictive image data or the second predictive image data, according to flag data, as the predictive image data for predicting subsequent image data; and a second decoding unit capable of decoding first encoded data using the predictive image data selected by the selection unit.
Method of processing digital file, particularly of image, video and/or audio type / 2510150
Disclosed is a method of processing a digital file of the image, video and/or audio type, which comprises a phase of arranging into a line, per colour layer and/or per audio channel, the digital data of any audio, image or video file; a compression phase using an algorithm in which each compressed value VCn of position N is obtained by subtracting from the value Vn of the same position N of the original file a predetermined number of successive compressed values (VCn-1, VCn-2, …) calculated previously; and a restoration phase using an algorithm in which each restored value VDn of position N is obtained by adding to the value VCn of the same position of the compressed file a predetermined number of successive compressed values (VCn-1, VCn-2, …).
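The compression and restoration phases described in this entry can be sketched as follows, assuming the "predetermined number" of previous compressed values is a parameter `k` (here defaulting to 2, an arbitrary choice for illustration). Because restoration adds back exactly what compression subtracted, the round trip is lossless.

```python
def compress(values, k=2):
    """Each compressed value VCn = Vn minus the sum of the k previously
    computed compressed values (a hypothetical reading of the scheme)."""
    compressed = []
    for n, v in enumerate(values):
        prev = sum(compressed[max(0, n - k):n])  # up to k prior compressed values
        compressed.append(v - prev)
    return compressed

def restore(compressed, k=2):
    """Restoration mirrors compression: VDn = VCn plus the same k
    previously computed compressed values."""
    restored = []
    for n, vc in enumerate(compressed):
        prev = sum(compressed[max(0, n - k):n])
        restored.append(vc + prev)
    return restored
```

For instance, `compress([10, 12, 15])` yields `[10, 2, 3]`, and restoring that list recovers the original values exactly.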
Method and apparatus for generating recommendation for content element / 2420908
Like or dislike of a content element played on a personalised content channel is determined based on feedback from the user; a profile is updated based on the determined like or dislike, wherein the profile is associated with the personalised content channel and contains a plurality of attributes and attribute values associated with said content element, and during the update, if a like has been determined, a classification flag associated with each of said attributes and attribute values is set; a degree of liking is determined for at least one next content element based on said profile; and that at least one next content element is selected for playing on the personalised content channel based on the calculated degree of liking.
Method to grant license to client device corresponding to coded content and system of conversion to manage digital rights, applying this method / 2421806
A method of operating a digital rights management conversion system to grant a license to a client device corresponding to coded content consists in the following. First content of a first digital rights management content type and a first license corresponding to the first content are converted in order to generate second content of a second digital rights management content type and a second license corresponding to the second content. A license request is received corresponding to the second content distributed by superdistribution to a third party. The second license corresponding to the second content distributed by superdistribution is requested from a server corresponding to the second digital rights management. The second license corresponding to the second content distributed by superdistribution is received and sent to the third party.
Server device, method of license distribution and content receiving device / 2447585
A network server of an Internet Protocol television (IPTV) system randomly sets the time of the request for receiving the main license, within a time period starting from the time of the broadcast transmission and ending at a preset time, in accordance with a request for receiving a license for playback of encrypted content, where the request comes from an IPTV client terminal. The network server transmits to the IPTV client terminal information about the time of the request for receiving the main license, together with a temporary license comprising a temporary content key corresponding to playback of the broadcast content from the broadcast start time until the preset time. The license server transmits the main license, including the main content key corresponding to full playback of the content, in response to the request for receiving the main license, which is made by the IPTV client terminal based on the information about the request time.
Connecting devices to multimedia sharing service / 2449353
Multimedia content purchasing system comprising: a memory area associated with a multimedia service; a multimedia server connected to the multimedia service via a data communication network; a portable computing device associated with a user; and a processor associated with the portable computing device, said processor being configured to execute computer-executable instructions for: establishing a connection to the multimedia server when the multimedia server and the portable computing device are within a predefined proximity; authenticating the multimedia server and the user with respect to the authenticated multimedia server; transmitting digital content distribution criteria; receiving, in response, promotional copies of one or more of the multimedia content items and associated metadata; and purchasing, when the multimedia server and the portable computing device are outside the predefined proximity, at least one of said one or more multimedia content items.
Device and method to process and read file having storage of media data and storage of metadata / 2459378
Device (600) processes data packets (110; 112) stored in a media data container (104) and related meta information stored in a metadata container (106), the related meta information including transport timing information and location information indicating where the stored data packets are located in the media data container (104). The device comprises a processor (602) for deriving, based on the stored data packets (110; 112) and the stored related meta information (124; 128), decoding information (604; 704) for the media payload of the stored data packets, where the decoding information (604; 704) indicates at which moment in time which payload of the stored data packets is to be reproduced.
Integrated interface device and method of controlling integrated interface device / 2465740
Provided is an integrated interface device for performing a hierarchical operation for specifying a desired content list. The interface device has a function to display a content list, content specified by the content list, or the like, efficiently using a vacant area in the lower part of the display: icons indicating a hierarchical relationship, for example "display in a row", are shown in the upper part of the screen, thereby freeing a large space in the lower part of the display.
Method and system to generate recommendation for at least one additional element of content / 2475995
A personalised content channel makes it possible to play multiple content elements (programs) meeting multiple selection criteria. At least one additional content element is recommended by a recommendation mechanism (107), wherein the at least one additional content element meets a smaller number of the criteria. In one embodiment, at least one recommended additional content element is selected, and the multiple selection criteria are adjusted by a scheduler (109) based on at least one characteristic of the selected recommended additional content element.
Wireless transmission system, relay device, wireless recipient device and wireless source device / 2480943
Wireless transmission system includes: a device (1) which wirelessly transmits AV content and a plurality of wireless recipient devices (5, 6) for reproducing the transmitted AV content. The device (1) for transmitting content has a group identification table which stores a group identifier for identification of a group formed by the wireless recipient device (5, 6). The device (1) adds the group identifier extracted from the group identification table to a control command for controlling recipient devices (5, 6) and wirelessly transmits the control command having the group identifier. The recipient devices (5, 6) receive the wirelessly transmitted control command from the device (1) if the corresponding group identifier has been added to the control command. The device (1) for transmitting content consists of a wired source device and a relay device which is connected by wire to the wired source device, and the relay device is wirelessly connected to the wireless recipient device and mutually converts the wired control command transmitted to the wired source device, and the wireless control command transmitted to the wireless recipient device, wherein the wired source device and the relay device are connected via HDMI (High-Definition Multimedia Interface).
FIELD: information technology. SUBSTANCE: image decoding method comprises: determining coding units having a hierarchical structure for decoding an image, at least one prediction unit for predicting each of the coding units, and at least one transformation unit for inversely transforming each of the coding units, using information about the division shape of a coding unit, the at least one prediction unit and the at least one transformation unit; obtaining transformation coefficients by parsing the bitstream, and reconstructing encoded data of the at least one prediction unit by performing entropy decoding, inverse quantisation and inverse transformation on the transformation coefficients; and performing intra prediction or inter prediction on the reconstructed encoded data, thereby reconstructing the encoded video, wherein the coding units are split hierarchically according to the depth of the coding unit, and wherein the at least one transformation unit comprises a transformation unit having a larger size than the size of the at least one prediction unit. EFFECT: high efficiency of decoding images. 4 cl, 18 dwg
Technical field. Exemplary embodiments of the invention relate to a method and apparatus for encoding and decoding images, and in particular to a method and apparatus for encoding and decoding an image by transforming the image from the pixel domain into coefficients in the frequency domain. Background art. To perform image compression, most methods and devices for encoding and decoding images encode the image by transforming it from the pixel domain into coefficients in the frequency domain. The discrete cosine transform (DCT), one of the frequency transform methods, is a well-known technique widely used for image or sound compression. An image encoding method using DCT involves performing DCT on an image in the pixel domain to form discrete cosine coefficients, quantising the formed discrete cosine coefficients, and performing entropy encoding on the quantised discrete cosine coefficients. Summary of the invention. Exemplary embodiments provide a method and apparatus for encoding and decoding an image using a more effective discrete cosine transform (DCT), as well as a computer-readable recording medium having recorded thereon a program for executing the method. Advantageous effects. In accordance with one or more exemplary embodiments, the transformation unit can be set larger than the prediction unit, and DCT can be performed so that the image is effectively compressed and encoded.
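The DCT pipeline sketched in the background (transform to the frequency domain, quantise, then entropy-encode) can be illustrated with a naive implementation. This is not the patented method: the direct four-loop DCT-II, the block size and the uniform quantisation step are all illustrative assumptions; real codecs use fast separable transforms and standardised quantisation matrices.

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of a square pixel block (illustrative only)."""
    n = len(block)
    def c(k):  # orthonormal scale factors
        return math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
    coeffs = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            coeffs[u][v] = c(u) * c(v) * s
    return coeffs

def quantize(coeffs, step=10):
    """Uniform quantisation; entropy encoding of the quantised
    coefficients would follow in a real encoder."""
    return [[round(c / step) for c in row] for row in coeffs]
```

For a flat (constant-valued) block, all energy lands in the DC coefficient and the remaining coefficients are zero, which is why flat regions compress so well after quantisation.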
Brief description of drawings. The above and other features of exemplary embodiments will become more apparent from the description of exemplary embodiments with reference to the accompanying drawings, in which: Figure 1 is a block diagram of an apparatus for encoding an image in accordance with an exemplary embodiment; Figure 2 is a block diagram of an apparatus for decoding an image according to another exemplary embodiment; Figure 3 is a diagram of hierarchical coding units in accordance with another exemplary embodiment; Figure 4 is a block diagram of an image encoder based on coding units, in accordance with another exemplary embodiment; Figure 5 is a block diagram of an image decoder based on coding units, in accordance with another exemplary embodiment; Figure 6 illustrates a maximum coding unit, sub-coding units and prediction units in accordance with another exemplary embodiment; Figure 7 is a diagram of a coding unit and transformation units in accordance with another exemplary embodiment; Figures 8A and 8B illustrate division shapes of a maximum coding unit, prediction units and transformation units in accordance with another exemplary embodiment; Figure 9 is a block diagram of an apparatus for encoding an image in accordance with another exemplary embodiment; Figure 10 is a diagram of a transform module; Figures 11A-11C illustrate types of transformation units in accordance with another exemplary embodiment; Figure 12 illustrates various transformation units in accordance with another exemplary embodiment; Figure 13 is a block diagram of an apparatus for decoding an image according to another exemplary embodiment; Figure 14 is a flowchart of an image encoding method in accordance with an exemplary embodiment; and Figure 15 is a flowchart of an image decoding method in accordance with another exemplary embodiment.
Disclosure of the invention. In accordance with an aspect of an exemplary embodiment, there is provided a method of encoding an image, including operations of: setting a transformation unit by selecting a plurality of adjacent prediction units; transforming the plurality of adjacent prediction units into the frequency domain in accordance with the transformation unit, thereby forming frequency-component coefficients; quantising the frequency-component coefficients; and performing entropy encoding on the quantised frequency-component coefficients. The operation of setting the transformation unit may be performed based on a depth indicating the degree of size reduction that occurs gradually from a maximum coding unit of the current sequence of macroblocks or the current frame to the sub-coding unit containing the plurality of adjacent prediction units. The operation of setting the transformation unit may be performed by selecting a plurality of adjacent prediction units on which prediction is performed in accordance with the same prediction mode. The same prediction mode may be an inter prediction mode or an intra prediction mode. The image encoding method may further include an operation of setting an optimal transformation unit by repeating the aforementioned operations on transformation units of various sizes, the aforementioned operations including: setting a transformation unit by selecting a plurality of adjacent prediction units; transforming the plurality of adjacent prediction units into the frequency domain in accordance with the transformation unit and forming frequency-component coefficients; quantising the frequency-component coefficients; and performing entropy encoding on the quantised frequency-component coefficients.
In accordance with another aspect of an exemplary embodiment, there is provided an apparatus for encoding an image, comprising: a transform module for setting a transformation unit by selecting a plurality of adjacent prediction units, transforming the plurality of adjacent prediction units into the frequency domain in accordance with the transformation unit, and forming frequency-component coefficients; a quantisation module for quantising the frequency-component coefficients; and an entropy encoding module for performing entropy encoding on the quantised frequency-component coefficients. In accordance with another aspect of an exemplary embodiment, there is provided a method of decoding an image, including operations of: performing entropy decoding on frequency-component coefficients that were formed by transformation into the frequency domain in accordance with a transformation unit; performing inverse quantisation on the frequency-component coefficients; and performing an inverse transform of the frequency-component coefficients into the pixel domain, thereby reconstructing the plurality of adjacent prediction units contained in the transformation unit. In accordance with another aspect of an exemplary embodiment, there is provided an apparatus for decoding an image, comprising: an entropy decoder for performing entropy decoding of frequency-component coefficients that were formed by transformation into the frequency domain in accordance with a transformation unit; an inverse quantiser for performing inverse quantisation of the frequency-component coefficients; and an inverse transform module for inversely transforming the frequency-component coefficients into the pixel domain and reconstructing the plurality of adjacent prediction units contained in the transformation unit. In accordance with another aspect of an exemplary embodiment, there is provided a computer-readable recording medium having recorded thereon a program for executing the image encoding and decoding methods.
Mode for the invention. Hereinafter, exemplary embodiments will be described in detail with reference to the accompanying drawings. In the exemplary embodiments, the term "unit" may or may not refer to a unit of a certain size, depending on the context in which it is used, and the term "image" may refer to a still image (frame) of a video or to a moving image, that is, the video itself. Figure 1 is a block diagram of an apparatus 100 for encoding an image in accordance with an exemplary embodiment. Referring to Figure 1, the apparatus 100 includes a maximum coding unit division module 110, a coding depth determination module 120, an image data encoder 130 and an encoding information encoder 140. The maximum coding unit division module 110 may divide the current frame or sequence of macroblocks based on a maximum coding unit, which is the coding unit of the largest size. That is, the module 110 may divide the current frame or sequence of macroblocks into at least one maximum coding unit. In accordance with an exemplary embodiment, a coding unit may be represented using a maximum coding unit and a depth. As described above, the maximum coding unit is the coding unit of the largest size among the coding units of the current frame, and the depth indicates the size of a sub-coding unit obtained by hierarchically reducing the coding unit. As the depth increases, a coding unit may decrease in size from the maximum coding unit to a minimum coding unit; the depth of the maximum coding unit is defined as the minimum depth, and the depth of the minimum coding unit is defined as the maximum depth.
Because the size of a coding unit decreases from the maximum coding unit as the depth increases, a sub-coding unit of the k-th depth may include a plurality of sub-coding units of the (k+n)-th depth (where k and n are integers equal to or greater than 1). As the size of the frame to be encoded increases, encoding the image in larger coding units can lead to a higher image compression ratio. However, if a single large coding unit is fixed, the image cannot be efficiently encoded given the constantly changing characteristics of the image. For example, when a flat area such as the sea or the sky is encoded, the larger the coding unit, the higher the compression ratio may be; when a complex area such as people or buildings is encoded, the smaller the coding unit, the higher the compression ratio may be. Thus, in accordance with an exemplary embodiment, a different maximum image coding unit and a different maximum depth are set for each frame or sequence of macroblocks. Since the maximum depth indicates the maximum number of times a coding unit may decrease in size, the size of each minimum coding unit included in a maximum coding unit may be set variably in accordance with the maximum depth. The coding depth determination module 120 determines the maximum depth. The maximum depth may be determined based on a Rate-Distortion (R-D) cost calculation, and may be determined differently for each frame or sequence of macroblocks, or for each maximum coding unit. The determined maximum depth is provided to the encoding information encoder 140, and the image data according to maximum coding units is provided to the image data encoder 130. The maximum depth refers to the coding unit with the smallest size that may be included in a maximum coding unit, i.e. the minimum coding unit.
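The hierarchical reduction described above, where a maximum coding unit is recursively divided into four sub-coding units per depth increment, can be sketched as a quadtree. The `cost` callback below is a hypothetical stand-in for the encoder's R-D split decision (the text only says the decision is based on an R-D cost calculation); everything else in the sketch is an assumption for illustration.

```python
def split_coding_units(size, max_depth, cost, depth=0, x=0, y=0):
    """Recursive quadtree division of a maximum coding unit (sketch).

    cost(x, y, size) -> bool is a stand-in for the rate-distortion
    decision: True means splitting this unit is worthwhile. Splitting
    also stops once max_depth is reached. Returns leaf sub-coding units
    as (x, y, size, depth) tuples.
    """
    if depth == max_depth or not cost(x, y, size):
        return [(x, y, size, depth)]          # leaf sub-coding unit
    half = size // 2
    units = []
    for dx in (0, half):                      # four quadrants per split
        for dy in (0, half):
            units += split_coding_units(half, max_depth, cost,
                                        depth + 1, x + dx, y + dy)
    return units
```

With a cost rule that splits anything larger than 32 pixels, a 64x64 maximum coding unit yields four 32x32 sub-coding units at depth 1; a flat region would instead keep the single large unit.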
In other words, a maximum coding unit may be divided into sub-coding units having different sizes based on different depths. This is described in more detail below with reference to Figs. 8A and 8B. In addition, the sub-coding units of different sizes included in the maximum coding unit may be predicted or transformed based on processing units of different sizes. In other words, the apparatus 100 can perform a variety of processing operations for image encoding based on processing units of various sizes and shapes. To encode image data, processing operations such as prediction, transform and entropy encoding are performed; a processing unit of the same size may be used for every operation, or a different processing unit may be used for each operation. For example, the apparatus 100 may select a processing unit for prediction that differs from the coding unit. When the size of a coding unit is 2N×2N (where N is a positive integer), the processing unit for prediction may be 2N×2N, 2N×N, N×2N or N×N. In other words, motion prediction may be performed based on a processing unit having a shape in which at least one of the height and width of a coding unit is divided into two equal parts. Hereinafter, the processing unit that serves as the basis for prediction is defined as a "prediction unit". The prediction mode may be at least one of an intra mode, an inter mode and a skip mode, and a specific prediction mode can be performed with respect to only prediction units of a particular size or shape. For example, the intra mode can be performed only with respect to prediction units of size 2N×2N and N×N, whose shape is square. Additionally, the skip mode can be applied only to a prediction unit of size 2N×2N.
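The four prediction-unit partitions listed above (2N×2N, 2N×N, N×2N, N×N) can be enumerated directly. This sketch only produces the candidate shapes; the function name and dictionary format are assumptions, and no mode decision is implied.

```python
def prediction_partitions(n):
    """Candidate prediction-unit shapes for a 2N x 2N coding unit,
    per the partition list in the text. Each entry maps a partition
    name to the list of (width, height) prediction units it produces."""
    return {
        "2Nx2N": [(2 * n, 2 * n)],       # one square unit (intra/skip eligible)
        "2NxN":  [(2 * n, n)] * 2,       # height halved: two horizontal units
        "Nx2N":  [(n, 2 * n)] * 2,       # width halved: two vertical units
        "NxN":   [(n, n)] * 4,           # both halved: four square units
    }
```

Note that every partition covers the same total area as the 2N×2N coding unit, and only the square partitions (2N×2N and N×N) would be eligible for the intra mode described above.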
If a coding unit contains a plurality of prediction units, after performing prediction for each prediction unit the prediction mode with the smallest encoding error can be selected. Alternatively, the apparatus 100 may perform a frequency transform on image data based on a processing unit having a size different from the coding unit. For the frequency transform within a coding unit, the transform can be performed based on a processing unit having a size equal to or smaller than that of the coding unit. Hereinafter, the processing unit that serves as the basis for the frequency transform is defined as a "transformation unit". The frequency transform may be a Discrete Cosine Transform (DCT) or a Karhunen-Loève Transform (KLT). The coding depth determination module 120 can determine the sub-coding units included in a maximum coding unit using R-D optimisation based on a Lagrange multiplier. In other words, the module 120 can determine the shape of the plurality of sub-coding units obtained by dividing the maximum coding unit, where the plurality of sub-coding units have different sizes according to their depths. The image data encoder 130 generates a bitstream by encoding the maximum coding unit based on the division shapes, i.e. the shapes into which the maximum coding unit is divided, as determined by the module 120. The encoding information encoder 140 encodes information about the encoding mode of the maximum coding unit determined by the module 120. In other words, the encoding information encoder 140 generates a bitstream by encoding information about the division shape of the maximum coding unit, information about the maximum depth, and information about the encoding mode of the sub-coding units for each depth.
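The R-D optimisation with a Lagrange multiplier mentioned above is conventionally the cost J = D + λ·R, where D is distortion, R is bit rate and λ trades one off against the other. The following sketch applies that convention to choosing between candidate configurations; the candidate tuples and all numbers are hypothetical, and the source does not specify this exact formulation.

```python
def rd_cost(distortion, rate, lam):
    """Lagrangian rate-distortion cost J = D + lambda * R
    (the conventional form of R-D optimisation with a Lagrange
    multiplier; illustrative, not taken from the patent text)."""
    return distortion + lam * rate

def pick_best(candidates, lam):
    """candidates: list of (label, distortion, rate) tuples (hypothetical
    format). Returns the candidate with the lowest Lagrangian cost."""
    return min(candidates, key=lambda c: rd_cost(c[1], c[2], lam))
```

A small λ favours low distortion (e.g. splitting into smaller units for a complex area), while a large λ penalises rate and favours the cheaper configuration, mirroring the flat-versus-complex trade-off described earlier.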
The information about the encoding mode of the sub-coding units may include information about the prediction unit of each sub-coding unit, information about the prediction mode for each prediction unit, and information about the transform unit of each sub-coding unit. Since each maximum coding unit contains sub-coding units of different sizes, and information about the encoding mode must be determined for each sub-coding unit, information about at least one encoding mode can be specified for one maximum coding unit. The apparatus 100 may form sub-coding units by dividing both the height and width of the maximum coding unit in half according to increasing depth. That is, when the size of a coding unit at the k-th depth is 2N×2N, the size of a coding unit at the (k+1)-th depth is N×N. Thus, the apparatus 100 according to an exemplary embodiment may determine the optimal division shape for each maximum coding unit based on the maximum size of the coding units and the maximum depth, taking into account the characteristics of the image. By variably adjusting the size of the maximum coding unit in view of the image characteristics, and by encoding the image through division of the maximum coding units into sub-coding units of different depths, images of various resolutions can be encoded more efficiently. Figure 2 is a block diagram of an apparatus 200 for decoding an image in accordance with an exemplary embodiment. Referring to Figure 2, the apparatus 200 includes an image data acquisition module 210, an encoding information extraction module 220, and an image data decoder 230. The image data acquisition module 210 acquires the image data on the basis of the maximum coding units by parsing the bitstream received by the apparatus 200, and outputs the image data to the image data decoder 230.
The image data acquisition module 210 can extract information about the maximum coding unit of the current frame or slice from the header of the current frame or slice. In other words, the module 210 divides the bitstream into the maximum coding units so that the image data decoder 230 can decode the image data based on the maximum coding units. The encoding information extraction module 220, by parsing the bitstream received by the apparatus 200, extracts from the header of the current frame information about the maximum coding unit, the maximum depth, the division shape of the maximum coding unit, and the encoding mode of the sub-coding units. The information about the division shape and the information about the encoding mode are provided to the image data decoder 230. The information about the division shape of the maximum coding unit may include information about the sub-coding units of different sizes according to depths included in the maximum coding unit, and the information about the encoding mode may include information about the prediction unit of each sub-coding unit, information about the prediction mode, and information about the transform unit. The image data decoder 230 restores the current frame by decoding the image data of each maximum coding unit based on the information extracted by the module 220. The image data decoder 230 can decode the sub-coding units included in the maximum coding unit based on the information about the division shape of the maximum coding unit. The decoding process may include a prediction process, including intra prediction and motion compensation, and an inverse transform process.
The image data decoder 230 performs intra prediction or inter prediction based on the information about the prediction unit and the information about the prediction mode, in order to predict the prediction unit. The image data decoder 230 can also perform an inverse transform for each sub-coding unit based on the information about the transform unit of the sub-coding unit. Figure 3 illustrates hierarchical coding units in accordance with an exemplary embodiment. Referring to Figure 3, the hierarchical coding units in accordance with an exemplary embodiment may include coding units whose width×height dimensions are 64×64, 32×32, 16×16, 8×8, and 4×4. In addition to these coding units of square shape, there may be coding units whose width×height dimensions are 64×32, 32×64, 32×16, 16×32, 16×8, 8×16, 8×4, and 4×8. Referring to Figure 3, for image data 310 whose resolution is 1920×1080, the size of the maximum coding unit is set to 64×64 and the maximum depth is set to 2. For image data 320, whose resolution is also 1920×1080, the size of the maximum coding unit is set to 64×64 and the maximum depth is set to 4. For image data 330, whose resolution is 352×288, the maximum size of the coding unit is set to 16×16 and the maximum depth is set to 1. When the resolution is high or the amount of data is large, it is preferable, but not necessary, for the maximum size of the coding unit to be relatively large, in order to increase the compression ratio and accurately reflect the image characteristics. Accordingly, for the image data 310 and 320, whose resolution is higher than that of the image data 330, 64×64 may be selected as the size of the maximum coding unit. The maximum depth indicates the total number of levels in the hierarchical coding units.
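Since each additional depth level halves the coding-unit size, the three configurations just described can be reproduced with a one-line helper (an illustrative sketch; the function name is an assumption):

```python
def coding_unit_sizes(max_size, max_depth):
    """Major-axis sizes of the coding units at each depth level:
    the size is halved once per depth increment, starting at max_size."""
    return [max_size >> d for d in range(max_depth + 1)]
```

This reproduces the sizes given in the text: 64/32/16 for image data 310, 64/32/16/8/4 for 320, and 16/8 for 330.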
Since the maximum depth of the image data 310 is 2, the coding units 315 of the image data 310 may include a maximum coding unit whose major-axis size is 64, and sub-coding units whose major-axis sizes are 32 and 16, according to increasing depth. On the other hand, since the maximum depth of the image data 330 is 1, the coding units 335 of the image data 330 may include a maximum coding unit whose major-axis size is 16, and coding units whose major-axis size is 8, according to increasing depth. However, since the maximum depth of the image data 320 is 4, the coding units 325 of the image data 320 may include a maximum coding unit whose major-axis size is 64, and sub-coding units whose major-axis sizes are 32, 16, 8, and 4, according to increasing depth. Since the image is encoded based on smaller sub-coding units as the depth increases, this exemplary embodiment is suitable for encoding images containing scenes with finer detail. Figure 4 is a block diagram of an image encoder 400 based on coding units, in accordance with an exemplary embodiment. An intra prediction module 410 performs intra prediction on prediction units of the intra mode in the current frame 405, and a motion estimation module 420 and a motion compensation module 425 perform inter prediction and motion compensation on prediction units of the inter mode using the current frame 405 and a reference frame 495. Residual values are generated based on the prediction units output from the intra prediction module 410, the motion estimation module 420, and the motion compensation module 425, and the generated residual values are output as quantized transform coefficients by passing through a transform module 430 and a quantization module 440.
The quantized transform coefficients are restored to residual values by passing through an inverse quantization module 460 and an inverse frequency transform module 470, and the restored residual values are post-processed by passing through a deblocking module 480 and a loop filtering module 490, and are output as the reference frame 495. The quantized transform coefficients may be output as a bitstream 455 by passing through an entropy encoder 450. To perform encoding based on the encoding method in accordance with an exemplary embodiment, the components of the image encoder 400, i.e. the intra prediction module 410, the motion estimation module 420, the motion compensation module 425, the transform module 430, the quantization module 440, the entropy encoder 450, the inverse quantization module 460, the inverse frequency transform module 470, the deblocking module 480, and the loop filtering module 490, perform the encoding processes based on the maximum coding units, the sub-coding units according to depths, the prediction units, and the transform units. Figure 5 is a block diagram of an image decoder 500 based on coding units, in accordance with an exemplary embodiment. A bitstream 505 passes through a parsing module 510, which parses the encoded image data to be decoded and the encoding information required for decoding. The encoded image data is output as inversely quantized data by passing through an entropy decoder 520 and an inverse quantization module 530, and is restored to residual values by passing through an inverse frequency transform module 540. The residual values are restored on the basis of coding units by being combined with the result of intra prediction by an intra prediction module 550 or the result of motion compensation by a motion compensation module 560. The restored coding units are used for predicting subsequent coding units or a subsequent frame by passing through a deblocking module 570 and a loop filtering module 580.
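The transform/quantization path of Figure 4 and its inverse in Figure 5 can be illustrated with a naive orthonormal DCT-II round trip over a residual signal (a self-contained sketch under simplifying assumptions; real codecs use fast integer transforms, not this O(n²) reference form):

```python
import math

def dct_1d(x):
    """Orthonormal DCT-II of a residual signal (reference implementation)."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

def idct_1d(X):
    """Inverse of dct_1d (orthonormal DCT-III): restores the residual values."""
    n = len(X)
    out = []
    for i in range(n):
        s = math.sqrt(1.0 / n) * X[0]
        s += sum(math.sqrt(2.0 / n) * X[k] * math.cos(math.pi * (i + 0.5) * k / n)
                 for k in range(1, n))
        out.append(s)
    return out

def quantize(coeffs, step):
    """Quantize frequency coefficients by a quantization step (module 440)."""
    return [round(c / step) for c in coeffs]

def dequantize(levels, step):
    """Inverse quantization (module 460/530)."""
    return [l * step for l in levels]
```

Without quantization the round trip is lossless up to floating-point error; the quantization step is the only lossy stage, exactly as in the forward/reconstruction paths described above.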
To perform decoding based on the decoding method in accordance with an exemplary embodiment, the components of the image decoder 500, i.e. the parsing module 510, the entropy decoder 520, the inverse quantization module 530, the inverse frequency transform module 540, the intra prediction module 550, the motion compensation module 560, the deblocking module 570, and the loop filtering module 580, perform the image decoding processes based on the maximum coding units, the sub-coding units according to depths, the prediction units, and the transform units. In particular, the intra prediction module 550 and the motion compensation module 560 determine the prediction unit and the prediction mode in a sub-coding unit by considering the maximum coding unit and the depth, and the inverse frequency transform module 540 performs the inverse transform by considering the size of the transform unit. Figure 6 illustrates a maximum coding unit, sub-coding units, and prediction units in accordance with an exemplary embodiment. The apparatus 100 and the apparatus 200 in accordance with exemplary embodiments use hierarchical coding units to perform encoding and decoding in view of the characteristics of the image. The maximum coding unit and the maximum depth can be set adaptively according to the characteristics of the image, or set variably according to the requirements of the user. The hierarchical structure 600 of coding units in accordance with an exemplary embodiment illustrates a maximum coding unit 610 whose height and width are 64 and whose maximum depth is 4. The depth increases along the vertical axis of the hierarchical structure 600, and as the depth increases, the heights and widths of the sub-coding units 620 to 650 decrease. The prediction units of the maximum coding unit 610 and of the sub-coding units 620 to 650 are shown along the horizontal axis of the hierarchical structure 600.
The maximum coding unit 610 has a depth of 0 and a coding unit size, i.e. height and width, of 64×64. The depth increases along the vertical axis, and there are: a sub-coding unit 620 whose size is 32×32 and depth is 1; a sub-coding unit 630 whose size is 16×16 and depth is 2; a sub-coding unit 640 whose size is 8×8 and depth is 3; and a sub-coding unit 650 whose size is 4×4 and depth is 4. The sub-coding unit 650, whose size is 4×4 and depth is 4, is a minimum coding unit, and the minimum coding unit may be divided into prediction units, each of which is smaller than the minimum coding unit. Referring to Figure 6, examples of prediction units are shown along the horizontal axis for each depth. That is, the prediction unit of the maximum coding unit 610, whose depth is 0, may be a prediction unit whose size equals that of the coding unit 610, i.e. 64×64, or a prediction unit 612 whose size is 64×32, a prediction unit 614 whose size is 32×64, or a prediction unit 616 whose size is 32×32, all of which are smaller than the coding unit 610 whose size is 64×64. The prediction unit of the coding unit 620, whose depth is 1 and size is 32×32, may be a prediction unit whose size equals that of the coding unit 620, i.e. 32×32, or a prediction unit 622 whose size is 32×16, a prediction unit 624 whose size is 16×32, or a prediction unit 626 whose size is 16×16, which are smaller than the coding unit 620 whose size is 32×32. The prediction unit of the coding unit 630, whose depth is 2 and size is 16×16, may be a prediction unit whose size equals that of the coding unit 630, i.e. 16×16, or a prediction unit 632 whose size is 16×8, a prediction unit 634 whose size is 8×16, or a prediction unit 636 whose size is 8×8, which are smaller than the coding unit 630 whose size is 16×16. The prediction unit of the coding unit 640, whose depth is 3 and size is 8×8, may be a prediction unit whose size equals that of the coding unit 640, i.e. 8×8, or a prediction unit 642 whose size is 8×4, a prediction unit 644 whose size is 4×8, or a prediction unit 646 whose size is 4×4, which are smaller than the coding unit 640 whose size is 8×8. Finally, the coding unit 650, whose depth is 4 and size is 4×4, is the minimum coding unit and a coding unit of the maximum depth; the prediction unit of the coding unit 650 may be a prediction unit 650 whose size is 4×4, a prediction unit 652 whose size is 4×2, a prediction unit 654 whose size is 2×4, or a prediction unit 656 whose size is 2×2. Figure 7 illustrates a coding unit and a transform unit in accordance with an exemplary embodiment. The apparatus 100 and the apparatus 200, in accordance with exemplary embodiments, perform encoding using the maximum coding units themselves, or using sub-coding units that are equal to or smaller than the maximum coding unit and are obtained by dividing the maximum coding unit. During encoding, the size of the transform unit for the frequency transform is selected so as not to be larger than the size of the corresponding coding unit. For example, when the coding unit 710 has a size of 64×64, the frequency transform can be performed using a transform unit 720 having a size of 32×32. Figures 8A and 8B illustrate division shapes of coding units, prediction units, and transform units in accordance with an exemplary embodiment. Figure 8A illustrates coding units and prediction units in accordance with an exemplary embodiment.
The left side of Figure 8A shows the division shape selected by the apparatus 100, in accordance with an exemplary embodiment, for encoding the maximum coding unit 810. The apparatus 100 divides the maximum coding unit 810 into various shapes, performs encoding, and selects the optimal division shape by comparing the encoding results of the various division shapes with one another on the basis of R-D cost. When it is optimal to encode the maximum coding unit 810 as it is, the maximum coding unit 810 may be encoded without dividing it, as illustrated in Figures 8A and 8B. As shown on the left side of Figure 8A, the maximum coding unit 810, whose depth is 0, is encoded by dividing it into sub-coding units whose depths are equal to or greater than 1. That is, the maximum coding unit 810 is divided into four sub-coding units whose depths are 1, and all or some of the sub-coding units whose depths are 1 are divided into sub-coding units whose depths are 2. Among the sub-coding units whose depths are 1, the sub-coding unit located in the upper left part and the sub-coding unit located in the lower left part are divided into sub-coding units whose depths are equal to or greater than 2. Some of the sub-coding units whose depths are equal to or greater than 2 may be divided into sub-coding units whose depths are equal to or greater than 3. The right side of Figure 8A shows the division shape of the prediction units for the maximum coding unit 810. As shown on the right side of Figure 8A, the prediction units 860 for the maximum coding unit 810 may be divided differently from the maximum coding unit 810. In other words, the prediction unit for each of the sub-coding units may be smaller than the corresponding sub-coding unit.
For example, the prediction unit for a sub-coding unit 854 located at the lower right among the sub-coding units whose depths are 1 can be smaller than the sub-coding unit 854. In addition, the prediction units for some (814, 816, 850, and 852) of the sub-coding units 814, 816, 818, 828, 850, and 852, whose depths are 2, can be smaller than the respective sub-coding units 814, 816, 850, and 852. In addition, the prediction units for the sub-coding units 822, 832, and 848, whose depths are 3, can be smaller than the respective sub-coding units 822, 832, and 848. The prediction units may have a shape whereby the corresponding sub-coding units are divided into two equal parts along the height or width, or a shape whereby the corresponding sub-coding units are divided into four equal parts along the height and width. Figure 8B illustrates prediction units and transform units in accordance with an exemplary embodiment. The left side of Figure 8B shows the division shape of the prediction units for the maximum coding unit 810 shown on the right side of Figure 8A, and the right side of Figure 8B shows the division shape of the transform units of the maximum coding unit 810. As shown on the right side of Figure 8B, the division shape of the transform units 870 can be set differently from that of the prediction units 860. For example, even though the prediction unit for the coding unit 854, whose depth is 1, is selected with a shape whereby the height of the coding unit 854 is divided into two equal parts, a transform unit can be selected with the same size as the coding unit 854. Likewise, even though the prediction units for the coding units 814 and 850, whose depths are 2, are selected with a shape whereby the height of each of the coding units 814 and 850 is divided into two equal parts, a transform unit can be selected with the same size as the original size of each of the coding units 814 and 850.
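The recursive quadtree split decision described above (encode unsplit, encode split into four, keep the cheaper alternative) can be sketched as follows. This is a hypothetical sketch: the cost function is an invented stand-in for the R-D comparison, and the names are assumptions:

```python
def best_partition(size, min_size, cost_fn):
    """Decide recursively whether to split a size×size coding unit into four
    sub-units. Returns (cost, tree), where tree is the size itself (leaf)
    or a list of four subtrees. cost_fn(size) is a stand-in for the R-D
    cost of encoding a block of that size unsplit."""
    leaf_cost = cost_fn(size)
    if size <= min_size:
        return leaf_cost, size
    sub = [best_partition(size // 2, min_size, cost_fn) for _ in range(4)]
    split_cost = sum(c for c, _ in sub)
    if split_cost < leaf_cost:
        return split_cost, [t for _, t in sub]
    return leaf_cost, size
```

With a cost that grows slowly with size, splitting never pays off and the maximum coding unit is kept whole; with a cost that grows fast (detailed content), the tree splits down to the minimum size, matching the behavior described for Figure 8A.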
A transform unit can also be selected to be smaller than a prediction unit. For example, when the prediction unit for the coding unit 852, whose depth is 2, is selected with a shape whereby the width of the coding unit 852 is divided into two equal parts, a transform unit can be selected with a shape whereby the coding unit 852 is divided into four equal parts along the height and width, which is smaller than the shape of the prediction unit. Figure 9 is a block diagram of an apparatus 900 for encoding an image in accordance with another exemplary embodiment. Referring to Figure 9, the image encoding apparatus 900 in accordance with the present exemplary embodiment includes a transform module 910, a quantization module 920, and an entropy encoder 930. The transform module 910 receives an image processing unit in the pixel domain and transforms the image processing unit into the frequency domain. The transform module 910 receives a plurality of prediction units containing residual values generated by intra prediction or inter prediction, and transforms the prediction units into the frequency domain. As a result of the transform into the frequency domain, coefficients of frequency components are generated. In accordance with this exemplary embodiment, the transform into the frequency domain can be performed by a discrete cosine transform (DCT) or a Karhunen-Loève transform (KLT), and the coefficients of the frequency domain are generated by the DCT or KLT. Hereinafter, the transform into the frequency domain may be the DCT; however, it should be obvious to those skilled in the relevant art that the transform into the frequency domain can be any transform that converts the image from the pixel domain into the frequency domain. Also, in accordance with the present exemplary embodiment, the transform module 910 sets the transform unit by grouping a plurality of prediction units, and performs the transform according to the transform unit.
This process will be described in detail with reference to Figures 10, 11A, 11B, and 12. Figure 10 is a diagram of the transform module 910. Referring to Figure 10, the transform module 910 includes a selection module 1010 and a transform performing module 1020. The selection module 1010 sets the transform unit by selecting a plurality of adjacent prediction units. An image encoding apparatus in accordance with the related art performs intra prediction or inter prediction based on a block of a predetermined size, i.e. based on a prediction unit, and performs the DCT based on a size that is less than or equal to the prediction unit. In other words, the image encoding apparatus in accordance with the related art performs the DCT using transform units that are less than or equal to the prediction unit. However, because many pieces of header information are added per transform unit, the overhead data grows as the transform units become smaller, thereby degrading the compression performance of the image encoding. In order to solve this problem, the image encoding apparatus 900 in accordance with the present exemplary embodiment groups a set of neighboring prediction units into a transform unit and performs the transform according to the transform unit formed by the grouping. There is a high probability that neighboring prediction units contain similar residual values, and thus, if the neighboring prediction units are grouped into one transform unit which is then transformed, the compression performance of the encoding operation can be considerably increased. To achieve this increase, the selection module 1010 selects the neighboring prediction units to be grouped into a transform unit. This process will be described in detail with reference to Figures 11A-11C and 12. Figures 11A-11C illustrate types of transform units in accordance with another exemplary embodiment.
Referring to Figures 11A-11C, the prediction units 1120 for a coding unit 1110 may have a division shape obtained by halving the width of the coding unit 1110. The coding unit 1110 may be a maximum coding unit, or may be a sub-coding unit whose size is smaller than the maximum coding unit. As illustrated in Figure 11A, the size of the transform unit 1130 may be smaller than the prediction unit 1120; as illustrated in Figure 11B, the size of the transform unit 1140 may be equal to the prediction unit 1120. As illustrated in Figure 11C, the size of the transform unit 1150 may be larger than the prediction unit 1120. That is, the transform units 1130 to 1150 can be set without reference to the prediction units 1120. Figure 11C illustrates an example in which the transform unit 1150 is set by grouping a plurality of the prediction units 1120 included in the coding unit 1110. However, a transform unit can also be set larger than a coding unit, when a plurality of prediction units that are included not in one coding unit but in many coding units are set as one transform unit. In other words, as described with reference to Figures 11A-11C, the transform unit can be set equal to or smaller than the size of the coding unit, or larger than the size of the coding unit. That is, the transform unit can be set without reference to the prediction unit and the coding unit. Although Figures 11A-11C illustrate examples in which the transform unit has a square shape, the transform unit may have a rectangular shape, depending on the method of grouping neighboring prediction units.
For example, in a case where the prediction units are set not as the rectangles illustrated in Figures 11A-11C but as four squares obtained by dividing the coding unit 1110 into four parts, the upper and lower prediction units, or the left and right prediction units, may be grouped so that the transform unit has a rectangular shape that is longer in the horizontal or vertical direction. Referring back to Figure 10, there are no restrictions on the criterion by which the selection module 1010 selects the neighboring prediction units. However, in accordance with an exemplary embodiment, the selection module 1010 may select the transform unit on the basis of depth. As described above, the depth indicates the degree of stepwise size reduction from the maximum coding unit of the current slice or the current frame down to the sub-coding units. As described above with reference to Figures 3 and 6, as the depth increases, the sizes of the sub-coding units decrease, and consequently the prediction units included in the sub-coding units also decrease. In this case, if the transform is performed according to a transform unit that is less than or equal to the prediction unit, the compression performance of the image encoding deteriorates, because header information is added to every transform unit. Thus, for sub-coding units at a depth corresponding to a predetermined value, it is preferable, but not necessary, for the prediction units included in the sub-coding units to be grouped and set as a transform unit, which is then transformed. For this purpose, the selection module 1010 sets the transform unit on the basis of the depth of the sub-coding units.
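The depth-based selection just described can be sketched with a threshold rule: below a predetermined depth, each prediction unit gets its own transform unit; beyond it, the prediction units of the sub-coding unit are merged into one transform unit to avoid per-unit header overhead. This is an illustrative sketch; the threshold parameter k and the function name are assumptions:

```python
def select_transform_units(prediction_units, depth, k):
    """prediction_units: list of prediction-unit identifiers within one
    sub-coding unit. Returns a list of transform units, each being the
    list of prediction units it groups."""
    if depth > k:
        # deep (small) sub-coding unit: group all PUs into one larger TU
        return [list(prediction_units)]
    # shallow sub-coding unit: one transform unit per prediction unit
    return [[pu] for pu in prediction_units]
```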
For example, in a case where the depth of the coding unit 1110 in Figure 11C is greater than k, the selection module 1010 groups the prediction units 1120 and sets them as the transform unit 1150. In accordance with another exemplary embodiment, the selection module 1010 may group a set of neighboring prediction units on which prediction is performed according to the same prediction mode, and may set them as one transform unit. The selection module 1010 groups the neighboring prediction units on which prediction is performed by intra prediction, or those on which prediction is performed by inter prediction, and then sets them as one transform unit. Because there is a high probability that neighboring prediction units on which prediction is performed according to the same prediction mode contain similar residual values, the neighboring prediction units can be grouped into a transform unit and then transformed together. When the selection module 1010 sets the transform unit, the transform performing module 1020 transforms the neighboring prediction units into the frequency domain according to the transform unit. The transform performing module 1020 performs the DCT on the neighboring prediction units according to the transform unit, and generates discrete cosine coefficients. Referring back to Figure 9, the quantization module 920 quantizes the coefficients of the frequency components generated by the transform module 910, for example the discrete cosine coefficients. The quantization module 920 can quantize the input discrete cosine coefficients according to a predetermined quantization step. The entropy encoder 930 performs entropy encoding on the coefficients of the frequency components quantized by the quantization module 920. The entropy encoder 930 may perform entropy encoding on the discrete cosine coefficients using context-adaptive binary arithmetic coding (CABAC) or context-adaptive variable-length coding (CAVLC).
The image encoding apparatus 900 can determine the optimal transform unit by repeatedly performing the DCT, quantization, and entropy encoding on different transform units. The procedure of selecting neighboring prediction units can be repeated in order to determine the optimal transform unit. The optimal transform unit can be determined by taking into account the RD cost calculation, as described in detail with reference to Figure 12. Figure 12 illustrates different transform units in accordance with another exemplary embodiment. Referring to Figure 12, the image encoding apparatus 900 repeatedly performs the encoding operation on different transform units. As illustrated in Figure 12, a coding unit 1210 can be predicted and encoded based on prediction units 1220 having a smaller size than the coding unit 1210. The transform is performed on the residual values generated as a result of the prediction, and here, as illustrated in Figure 12, the DCT may be performed on the residual values based on different transform units. The first illustrated transform unit 1230 has the same size as the coding unit 1210, its size being obtained by grouping all the prediction units included in the coding unit 1210. The second illustrated transform units 1240 have sizes obtained by halving the width of the coding unit 1210, i.e. sizes obtained by grouping every two prediction units adjacent to each other in the vertical direction, respectively. The third illustrated transform units 1250 have sizes obtained by halving the height of the coding unit 1210, i.e. sizes obtained by grouping every two prediction units adjacent to each other in the horizontal direction, respectively. The fourth illustrated transform units 1260 are used when the transform is performed based on transform units having the same size as the prediction units 1220. Figure 13 is a block diagram of an apparatus 1300 for decoding an image in accordance with another exemplary embodiment.
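The Figure 12 search over candidate groupings can be sketched as follows. This is a hypothetical illustration: the four grouping patterns loosely mirror the transform units 1230-1260 described above, and the cost function is a stand-in for the full DCT/quantization/entropy-coding RD evaluation:

```python
def tu_groupings(pus):
    """pus: four prediction units of one coding unit, ordered
    [top-left, top-right, bottom-left, bottom-right].
    Yields (name, groups) candidate transform-unit configurations."""
    tl, tr, bl, br = pus
    yield "whole_cu", [[tl, tr, bl, br]]            # all PUs in one TU
    yield "vertical_pairs", [[tl, bl], [tr, br]]    # pairs adjacent vertically
    yield "horizontal_pairs", [[tl, tr], [bl, br]]  # pairs adjacent horizontally
    yield "per_pu", [[tl], [tr], [bl], [br]]        # one TU per PU

def best_grouping(pus, cost_fn):
    """Pick the grouping whose summed per-TU cost is smallest."""
    return min(tu_groupings(pus),
               key=lambda g: sum(cost_fn(grp) for grp in g[1]))[0]
```

With a fixed per-transform-unit header cost, the single large transform unit wins, which is exactly the overhead argument made for grouping; a cost dominated by coefficient energy can instead favor many small transform units.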
Referring to Figure 13, the image decoding apparatus 1300 in accordance with the present exemplary embodiment includes an entropy decoder 1310, an inverse quantization module 1320, and an inverse transform module 1330. The entropy decoder 1310 performs entropy decoding on the coefficients of the frequency components of a predetermined transform unit. As described above with reference to Figures 11A-11C and 12, the predetermined transform unit may be a transform unit formed by grouping a plurality of neighboring prediction units. As described above with reference to the image encoding apparatus 900, the transform unit may be formed by grouping neighboring prediction units on the basis of depth, or may be formed by grouping a plurality of adjacent prediction units on which prediction is performed according to the same prediction mode, that is, according to the intra prediction mode or the inter prediction mode. The plurality of prediction units may be included not in one coding unit but in many coding units. In other words, as described above with reference to Figures 11A-11C, the transform unit that is entropy-decoded by the entropy decoder 1310 can be set equal to or smaller than the size of the coding unit, or larger than the coding unit. As described above with reference to Figure 12, the transform unit may be the optimal transform unit, selected by repeating the procedure of grouping a plurality of neighboring prediction units and by repeatedly performing the transform, quantization, and entropy encoding on different transform units. The inverse quantization module 1320 inversely quantizes the coefficients of the frequency components that have been entropy-decoded by the entropy decoder 1310. The inverse quantization module 1320 inversely quantizes the entropy-decoded coefficients of the frequency components according to the quantization step that was used when encoding the transform unit.
The inverse transformation module 1330 performs inverse transformation of the inversely quantized coefficients of the frequency components back into the pixel domain. The inverse transformation module may perform inverse DCT on the inversely quantized discrete cosine coefficients (i.e., the inversely quantized coefficients of the frequency components) and may then recreate the transformation unit in the pixel domain. The recreated transformation unit may include neighboring prediction units. Fig. 14 is a flowchart of a method of encoding an image in accordance with an exemplary embodiment. Referring to Fig. 14, in operation 1410, the image encoding apparatus sets a transformation unit by selecting a plurality of adjacent prediction units. The image encoding apparatus can select a plurality of adjacent prediction units according to depth, or can select a plurality of adjacent prediction units on which prediction is performed according to the same prediction mode. In operation 1420, the image encoding apparatus transforms the adjacent prediction units into the frequency domain according to the transformation unit set in operation 1410. The image encoding apparatus groups the adjacent prediction units, performs DCT on them, and thereby generates discrete cosine coefficients. In operation 1430, the image encoding apparatus quantizes the coefficients of the frequency components generated in operation 1420 according to a quantization step. In operation 1440, the image encoding apparatus performs entropy encoding on the coefficients of the frequency components quantized in operation 1430. The image encoding apparatus performs entropy coding on the discrete cosine coefficients using CABAC or CAVLC. A method of encoding an image in accordance with another exemplary embodiment may further include an operation of setting an optimal transformation unit by repeating operations 1410-1440 on different transformation units.
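Operations 1420 and 1430 (forward DCT, then uniform quantization) can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: it uses a naive O(n^4) 2-D DCT-II and simple scalar quantization, and omits the entropy-coding step of operation 1440 (which in the text is CABAC or CAVLC).

```python
import math

def dct2(block):
    """Naive 2-D DCT-II on a square block of residuals (operation 1420)."""
    n = len(block)
    def c(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = c(u) * c(v) * s
    return out

def quantize(coeffs, step):
    """Uniform scalar quantization of frequency coefficients (operation 1430)."""
    return [[round(value / step) for value in row] for row in coeffs]
```

For a constant 4x4 residual block of value 8, the DCT concentrates all energy in the DC coefficient (32), and quantization with step 4 leaves a single nonzero level of 8.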
That is, by repeating transformation, quantization and entropy encoding on different transformation units, as illustrated in Fig. 12, it is possible to set the optimal transformation unit. Fig. 15 is a flowchart of a method of decoding an image in accordance with another exemplary embodiment. Referring to Fig. 15, in operation 1510, the image decoding apparatus performs entropy decoding on coefficients of frequency components with respect to a predetermined transformation unit. The coefficients of the frequency components may be discrete cosine coefficients. In operation 1520, the image decoding apparatus inversely quantizes the coefficients of the frequency components that were entropy decoded in operation 1510. The image decoding apparatus inversely quantizes the discrete cosine coefficients by using the quantization step used in the encoding operation. In operation 1530, the image decoding apparatus inversely transforms the coefficients of the frequency components, which were inversely quantized in operation 1520, into the pixel domain and then recreates the transformation unit. The recreated transformation unit is set by grouping a plurality of adjacent prediction units. As described above, the transformation unit can be set by grouping neighboring prediction units based on depth, or may be set by grouping a plurality of neighboring prediction units on which prediction is performed according to the same prediction mode. In accordance with one or more exemplary embodiments, it is possible to set the transformation unit to be larger than the prediction unit and to perform DCT accordingly, so that an image can be effectively compressed and encoded. The exemplary embodiments can also be embodied as computer-readable code on a computer-readable recording medium. The computer-readable recording medium is any data storage device that can store data which can thereafter be read by a computer system.
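Operations 1520 and 1530 (inverse quantization, then inverse transformation back to the pixel domain) can be sketched as follows. This is an illustrative Python sketch under the same simplifying assumptions as above: scalar dequantization by the encoding-time step, and a naive 2-D inverse DCT (DCT-III) in place of an optimized transform.

```python
import math

def dequantize(levels, step):
    """Inverse quantization (operation 1520): scale the entropy-decoded
    levels back up by the quantization step used at encoding time."""
    return [[level * step for level in row] for row in levels]

def idct2(coeffs):
    """Naive 2-D inverse DCT (DCT-III), mapping frequency coefficients back
    into the pixel domain to recreate the transformation unit (operation 1530)."""
    n = len(coeffs)
    def c(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            s = 0.0
            for u in range(n):
                for v in range(n):
                    s += (c(u) * c(v) * coeffs[u][v]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[x][y] = s
    return out
```

Feeding in a 4x4 unit whose only nonzero level is a DC value of 8, with quantization step 4, recreates a constant pixel block of value 8, mirroring the forward example.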
Examples of computer-readable recording media include read-only memory (ROM), random access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices. The computer-readable recording medium can also be distributed over network-coupled computer systems so that the computer-readable code is stored and executed in a distributed fashion. For example, each of the image encoding apparatus, the image decoding apparatus, the image encoder and the image decoder in accordance with one or more embodiments may include a bus coupled to each module of the apparatus, as illustrated in Figs. 1-2, 4-5, 9-10 and 14, and at least one processor coupled to the bus. Each of the image encoding apparatus, the image decoding apparatus, the image encoder and the image decoder in accordance with one or more embodiments may include a memory coupled to the at least one processor, which is coupled to the bus, to store commands, received messages or generated messages, and to execute the commands. While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be clear to a person skilled in the relevant art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. The exemplary embodiments should be considered in a descriptive sense only and not for purposes of limitation. Therefore, the scope of the invention is defined not by the detailed description of the invention but by the appended claims, and all differences within the scope will be construed as being included in the present invention. 1. A method of decoding an image, comprising the steps of: 2. The method according to claim 1, in which the at least one prediction unit comprises a plurality of prediction units, and 3.
The method according to claim 1, in which the size of the at least one transformation unit differs from the size of the at least one prediction unit and from the size of the coding unit. 4. The method according to claim 1, wherein the video in the encoded video data is encoded based on information about a maximum size of a coding unit and a depth of the coding unit, and the coding unit is hierarchically split into coding units of coded depth in accordance with the depths