Method for determining motion vectors in direct prediction mode

FIELD: moving-picture coding systems, namely methods of encoding a moving picture aimed at increasing coding efficiency through the use of temporally distant reference frames.

SUBSTANCE: during encoding/decoding of each block of a B-frame in the direct prediction mode, the motion vectors are determined using the motion vector of the co-located block in a given frame used for encoding/decoding the B-frame; if the type of the given frame is a temporally distant reference frame, one of the motion vectors to be determined is set equal to the motion vector of the co-located block, while the other of the motion vectors to be determined is set equal to 0.

EFFECT: increased coding efficiency in the direct prediction mode and a reduced number of information bits for a frame in which a scene change occurs.

2 cl, 6 dwg

 

TECHNICAL FIELD OF THE INVENTION

[0001] The present invention relates to moving-picture coding systems and, in particular, to methods of encoding a moving picture aimed at improving coding efficiency through the use of temporally distant reference frames.

DESCRIPTION OF THE PRIOR ART

[0002] For optimal compression and coding of a sequence of moving pictures, scene changes in the sequence must be recognized. This is because many applications of video, for example news broadcasts, sports events, interviews and multi-point video conferencing, repeatedly involve scene changes. Such a scene change may occur over the entire frame or only in a certain part of the frame.

[0003] Upon detection of a scene change, the method of encoding the digital picture can be adjusted accordingly. For example, since the similarity between a frame in which a scene change occurs and the frame preceding it is very small, the frame with the scene change is encoded in the intra-frame prediction mode, in which the frame is encoded using predictions only from coded samples within the same frame, rather than in the inter-frame prediction mode, in which the frame is encoded with motion compensation from previously decoded reference frames.

[0004] At a more detailed level, a frame in which the scene change occurs over the entire image area is treated as an initial frame, all blocks of which are encoded in the intra-frame prediction mode. If the scene change occurs only in a certain part of the frame, all blocks within the area where the scene change occurs are encoded in the intra-frame prediction mode. Because intra-frame prediction generates a larger number of bits than inter-frame prediction, sequences in which scene changes occur very often run into serious problems when the information must be transmitted at a low bit rate.

[0005] Generally, when B-frames are used in a moving-picture coding system, the encoding order differs from the display order.

[0006] Figure 1 shows a display order in which every frame is displayed using two B-frames. As can be seen from figure 1, of all the frames to be displayed, the initial I-frame is displayed first. The two B-frames B1 and B2 are displayed next, following the initial I-frame. The P-frame P3 is displayed after the B-frames. The same pattern then repeats: the fourth and fifth frames B4 and B5 are displayed after the P-frame P3, and then the P-frame P6 is displayed.

[0007] However, the order in which the frames of the digital picture are encoded does not coincide with the order in which they are displayed. In other words, a P-frame is encoded before the B-frames.

[0008] Figure 2 presents the encoding order for the case in which every frame is displayed using two B-frames. As shown in figure 2, after the initial I-frame has been encoded, the P-frame P3 is encoded before the two B-frames B1 and B2, which are displayed before the P-frame P3. The frames P6, B4, B5, P9, B7, B8, P12, B10 and B11 are then encoded.

[0009] In this case, five encoding modes are used for B-frames, namely the intra-frame prediction mode, the forward prediction mode, the backward prediction mode, the bidirectional prediction mode and the direct prediction mode. In the bidirectional prediction mode two reference frames are used, both of which may be located before or after the B-frame, or one of them may be located before the B-frame and the other after it.

[0010] It should be noted that the direct prediction mode uses temporal redundancy, relying on the continuity of motion between two adjacent frames. In other words, in the direct prediction mode the forward motion vector and the backward motion vector for the B-frame are determined from the motion vector of the co-located block in the subsequent frame, i.e. the frame immediately following the B-frame. The direct prediction mode does not require any additional information bits, for example motion information, so the bit rate can be reduced.

[0011] In this case, the forward motion vector MVf and the backward motion vector MVb of the standard direct prediction mode are obtained by scaling the motion vector MV according to the time intervals between the frames, where MV is the motion vector of the co-located block in the subsequent frame. In other words, the forward motion vector MVf and the backward motion vector MVb are obtained using the following equations 1 and 2:

[0012] Equation 1:

MVf = (TRb × MV) / TRd

[0013] Equation 2:

MVb = ((TRb − TRd) × MV) / TRd

where MV is the motion vector of the co-located block in the subsequent frame, MVf is the forward motion vector of the direct prediction mode for the B-frame, MVb is the backward motion vector of the direct prediction mode for the B-frame, TRd is the time interval between the subsequent frame and the reference frame indicated by the motion vector of the co-located block in the subsequent frame, and TRb is the time interval between the B-frame and the reference frame indicated by the motion vector of the co-located block in the subsequent frame.

[0014] As a result, the direct prediction mode is an encoding mode that obtains two motion-compensated blocks using the two motion vectors MVf and MVb and produces the predicted block by averaging or interpolating the two motion-compensated blocks.
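For illustration only, the following minimal sketch (Python; the function and variable names are hypothetical and not part of the patent) shows how the standard scaling of equations 1 and 2 and the averaging of paragraph [0014] could be computed:

    # Standard direct prediction mode: scale the co-located block's motion
    # vector by the time intervals (equations 1 and 2), then average the two
    # motion-compensated blocks. Integer division is an illustrative choice.
    def standard_direct_mode_mvs(mv, tr_d, tr_b):
        mv_f = (tr_b * mv[0] // tr_d, tr_b * mv[1] // tr_d)            # equation 1
        mv_b = ((tr_b - tr_d) * mv[0] // tr_d,
                (tr_b - tr_d) * mv[1] // tr_d)                          # equation 2
        return mv_f, mv_b

    def predict_block(block_fwd, block_bwd):
        # Predicted block as the average of the two motion-compensated blocks.
        return [(a + b + 1) // 2 for a, b in zip(block_fwd, block_bwd)]

For example, with MV = (9, 6), TRd = 3 and TRb = 1 the sketch yields MVf = (3, 2) and MVb = (-6, -4).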

SUMMARY OF THE INVENTION

[0015] The aim of the present invention is to provide a method of coding a moving picture that substantially obviates one or more problems caused by the limitations and disadvantages of the related art.

[0016] Accordingly, an objective of the present invention is to provide a method of coding a moving picture that increases the coding efficiency of the direct prediction mode through the use of a temporally distant reference frame.

[0017] Another objective of the present invention is to provide a method of encoding a moving picture that uses the inter-frame prediction mode to reduce the number of information bits for a frame in which a scene change occurs.

[0018] Additional advantages, objectives and features of the present invention will in part be set forth in the following description and in part will become apparent to those skilled in the art upon examination of the following material or from practice of the invention. The objectives and other advantages of the invention may be understood from the particular structure pointed out in the description and claims as well as in the appended drawings.

[0019] To achieve these objectives and other advantages and in accordance with the purpose of the present invention, as embodied and broadly described herein, a method for determining motion vectors in the direct prediction mode for a B-frame consists in that, during encoding of each block of the B-frame in the direct prediction mode, the motion vectors of the direct prediction mode for the B-frame are varied depending on the type of the reference buffer in which the reference frame indicated by the motion vector of the co-located block in a given frame is stored.

[0020] It is desirable that the given frame be one of the temporally closest reference frames used for encoding the B-frame.

[0021] The type of the reference frame is established using the reference frame index defined beforehand for the co-located block in the given frame.

[0022] The value of the reference frame index is stored in the system buffer.

[0023] If the motion vector defined for the co-located block in the given frame points to a temporally distant reference frame, the forward motion vector of the direct prediction mode for the B-frame is set to the motion vector of the co-located block in the given frame, and the backward motion vector of the direct prediction mode for the B-frame is set to zero.

[0024] The motion vector defined for the co-located block in the given frame is stored in the system buffer.

[0025] If the motion vector defined for the co-located block in the given frame points to a temporally close reference frame, the motion vectors of the direct prediction mode for the B-frame are determined by scaling the motion vector of the co-located block in the given frame according to the time intervals between the frames.

[0026] The motion vector defined for the co-located block in the given frame is stored in the system buffer.

[0027] In accordance with another aspect of the present invention, a method for determining motion vectors in the direct prediction mode for a B-frame consists in that, during encoding of each block of the B-frame in the direct prediction mode, the motion vectors of the direct prediction mode for the B-frame are determined depending on the type of the reference buffer in which the given frame is stored.

[0028] The reference buffer includes a reference buffer for temporally distant reference frames and a reference buffer for temporally close reference frames.

[0029] It is desirable that the given frame be one of the reference frames, either temporally close or temporally distant.

[0030] If the given frame is stored in the reference buffer for temporally distant reference frames, the forward motion vector of the direct prediction mode for the B-frame is set to the motion vector of the co-located block in the given frame, and the backward motion vector of the direct prediction mode for the B-frame is set to zero.

[0031] If the given frame is stored in the reference buffer for temporally close reference frames, the motion vectors of the direct prediction mode for the B-frame are determined depending on the type of the reference buffer in which the reference frame indicated by the motion vector of the co-located block in the given frame is stored.

[0032] The type of the reference frame is established using the reference frame index defined beforehand for the co-located block in the given frame.

[0033] The value of the reference frame index is stored in the system buffer.

[0034] If the motion vector defined for the co-located block in the given frame points to a temporally distant reference frame, the forward motion vector of the direct prediction mode for the B-frame is set to the motion vector of the co-located block in the given frame, and the backward motion vector of the direct prediction mode for the B-frame is set to zero.

[0035] The motion vector defined for the co-located block in the given frame is stored in the system buffer.

[0036] If the motion vector defined for the co-located block in the given frame points to a temporally close reference frame, the motion vectors of the direct prediction mode for the B-frame are determined by scaling the motion vector of the co-located block in the given frame according to the time intervals between the frames.

[0037] The motion vector defined for the co-located block in the given frame is stored in the system buffer.

[0038] In accordance with another aspect of the present invention, a method of encoding a P-frame of a moving picture in the inter-frame prediction mode consists in that: (a) a scene change is detected in the P-frame, and (b) if a scene change occurs in the P-frame, the P-frame is encoded with reference to a temporally distant reference frame.
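The patent does not prescribe a particular scene-change detector for step (a); the following minimal sketch (Python; block size, SAD threshold and all names are illustrative assumptions, not taken from the patent) shows one simple way a complete or partial scene change could be flagged by comparing each block of the P-frame with the co-located block of the previously displayed frame:

    # Hypothetical per-block scene-change detector for step (a).
    def detect_scene_change(curr, prev, block_size=16, sad_threshold=3000):
        changed = []
        height, width = len(curr), len(curr[0])
        for by in range(0, height, block_size):
            for bx in range(0, width, block_size):
                sad = sum(abs(curr[y][x] - prev[y][x])
                          for y in range(by, min(by + block_size, height))
                          for x in range(bx, min(bx + block_size, width)))
                changed.append(sad > sad_threshold)
        if all(changed):
            return "complete"   # scene change over the entire frame
        if any(changed):
            return "partial"    # scene change only in part of the frame
        return None             # no scene change

Blocks flagged as changed would then be encoded with reference to a temporally distant reference frame, and the remaining blocks with reference to a temporally close reference frame, as described below.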

[0039] Preferably, the P-frame in which the scene change occurs is either a frame with a complete scene change or a frame with a partial scene change.

[0040] If the P-frame with the scene change is a frame with a partial scene change, the blocks within the area where the scene change occurs are encoded using a temporally distant reference frame.

[0041] The reference buffer in which the temporally distant reference frame is stored is a buffer for storing frames encoded earlier than a given time.

[0042] If the P-frame with the scene change is a frame with a partial scene change, the blocks located within the area where no scene change occurs are encoded using a temporally close reference frame.

[0043] The reference buffer in which the temporally close reference frame is stored is a buffer for storing frames encoded later than a given time.

[0044] In accordance with another aspect of the present invention, a method of encoding a sequence of moving pictures in a moving-picture coding system consists in that: (a) a scene change is detected in a P-frame; (b) if a scene change occurs in the P-frame, the P-frame is encoded in the inter-frame prediction mode with reference to a temporally distant reference frame; (c) during encoding of each block of a B-frame in the direct prediction mode, in accordance with the encoding order, the type of the reference buffer in which the given frame is stored is determined; and (d) the motion vectors of the direct prediction mode for the B-frame are determined in accordance with the type of the reference buffer, and the B-frame is encoded in the direct prediction mode.

[0045] The motion vector defined for the co-located block in the given frame is stored in the system buffer.

[0046] If, in operation (d), the given frame is stored in the reference buffer for temporally distant reference frames, the forward motion vector of the direct prediction mode for the B-frame is set to the motion vector of the co-located block in the given frame, and the backward motion vector of the direct prediction mode for the B-frame is set to zero.

[0047] If, in operation (d), the given frame is stored in the reference buffer for temporally close reference frames, the motion vectors of the direct prediction mode for the B-frame are determined depending on the type of the reference buffer in which the reference frame indicated by the motion vector of the co-located block in the given frame is stored.

[0048] The type of the reference frame is established using the reference frame index defined beforehand for the co-located block in the given frame.

[0049] The value of the reference frame index is stored in the system buffer.

[0050] If the motion vector defined for the co-located block in the given frame points to a temporally distant reference frame, the forward motion vector of the direct prediction mode for the B-frame is set to the motion vector of the co-located block in the given frame, and the backward motion vector of the direct prediction mode for the B-frame is set to zero.

[0051] The motion vector defined for the co-located block in the given frame is stored in the system buffer.

[0052] If the motion vector defined for the co-located block in the given frame points to a temporally close reference frame, the motion vectors of the direct prediction mode for the B-frame are determined by scaling the motion vector defined for the co-located block in the given frame according to the time intervals between the frames.

[0053] The motion vector defined for the co-located block in the given frame is stored in the system buffer.

[0054] The P-frame in which the scene change occurs is either a frame with a complete scene change or a frame with a partial scene change.

[0055] If the P-frame with the scene change is a frame with a partial scene change, the blocks belonging to the image area where the scene change occurs are encoded using a temporally distant reference frame.

[0056] The reference buffer in which the temporally distant reference frame is stored is a buffer for storing frames encoded earlier than a given time.

[0057] If the P-frame with the scene change is a frame with a partial scene change, the blocks belonging to the image area where no scene change occurs are encoded using a temporally close reference frame.

[0058] The reference buffer in which the temporally close reference frame is stored is a buffer for storing frames encoded later than a given time.

[0059] The reference buffer for temporally close reference frames is a FIFO buffer.

[0060] The given frame for encoding a B-frame in the direct prediction mode is one of the reference frames used for encoding the B-frame.

[0061] In accordance with another aspect of the present invention, a method of predicting a P-frame in the inter-frame prediction mode consists in that: (a) a reference frame previously decoded in the inter-frame prediction mode is read; and (b) the P-frame is predicted using motion compensation relative to the reference frame, where the reference frame includes at least a temporally distant reference frame.

[0062] The reference frame also includes a temporally close reference frame.

[0063] If a complete or partial scene change occurs in the P-frame, it is encoded with reference to a temporally distant reference frame.

[0064] If the P-frame in which the scene change occurs is a frame with a partial scene change, the blocks belonging to the area where no scene change occurs are encoded using a temporally close reference frame.

[0065] The blocks belonging to the area where the scene change does occur are encoded using temporally distant reference frames.

[0066] In accordance with another aspect of the present invention, a method for determining motion vectors in the direct prediction mode consists in that: (a) the co-located block in a given frame is read, a motion vector pointing to a reference frame being defined for the co-located block; and (b) at least one motion vector of the direct prediction mode is determined based on the type of the reference frame, where the reference frame includes a temporally distant reference frame.

[0067] The at least one motion vector is set equal to the motion vector of the co-located block if the type of the reference frame is a temporally distant reference frame.

[0068] The at least one motion vector is set equal to zero if the type of the reference frame is a temporally distant reference frame.

[0069] The at least one motion vector is at least one motion vector of the direct prediction mode for a B-frame.

[0070] If the type of the reference frame is a temporally close reference frame, the at least one motion vector of the direct prediction mode is obtained by scaling the motion vector defined for the co-located block.

[0071] It should be understood that both the foregoing general description and the following detailed description of the present invention are exemplary and explanatory and are intended to provide a more thorough understanding of the claimed invention.

BRIEF DESCRIPTION OF THE DRAWINGS

[0072] The accompanying drawings, which are included to facilitate understanding of the invention and constitute an integral part of this application, illustrate embodiments of the invention and together with the description serve to explain the principle of the invention. In the accompanying drawings:

[0073] figure 1 shows a display order in which each frame is displayed using two B-frames;

[0074] figure 2 shows an encoding order in which each frame is displayed using two B-frames;

[0075] figures 3A and 3B present a flowchart illustrating a method of encoding a sequence of moving pictures in a moving-picture coding system in accordance with a preferred embodiment of the present invention;

[0076] figure 4 shows a method of encoding a sequence of moving pictures in which a scene change occurs, in accordance with a preferred embodiment of the present invention; and

[0077] figure 5 shows the encoding of a B-frame in the direct prediction mode in accordance with a preferred embodiment of the present invention.

DETAILED DESCRIPTION OF THE INVENTION

[0078] Preferred embodiments of the present invention, which are illustrated in the accompanying drawings, are described in detail in this description. Wherever possible, the same reference numbers are used throughout the drawings to refer to the same or similar elements.

[0079] Before the exemplary embodiments of the present invention are described, the following terms are clarified: a frame in which a scene change occurs over the entire image is defined as a frame with a complete scene change, and a frame in which a scene change occurs only in some part of the image is defined as a frame with a partial scene change.

[0080] Figures 3A and 3B show a flowchart illustrating a method of encoding a sequence of moving pictures in a moving-picture coding system in accordance with a preferred embodiment of the present invention. As shown in figures 3A and 3B, the frames of the moving-picture sequence are input one after another (S111).

[0081] Next, the frame type is determined (S114). In other words, it is determined whether the input frame is a P-frame or a B-frame. In this embodiment of the present invention it is assumed that encoding of the initial I-frame has been performed in advance.

[0082] If the frame is a P-frame, it is determined whether a scene change occurs in this P-frame (S117). In this case the scene change is detected by comparing the current P-frame with the frame (P or B) displayed immediately before the P-frame.

[0083] If the determination made in step S117 shows that a complete scene change occurs in the P-frame, the current P-frame is a frame with a complete scene change. Accordingly, if the P-frame is identified as a frame with a complete scene change, it is encoded with reference to a temporally distant reference frame (S120).

[0084] If the current P-frame is not a frame with a complete scene change, it is determined whether it is a frame with a partial scene change (S123).

[0085] If the P-frame is a frame with a partial scene change, the blocks belonging to the image area where the scene change occurs are encoded with reference to a temporally distant reference frame, as in step S120 (S126).

[0086] The blocks in the image area within which no scene change occurs are encoded with reference to a temporally close reference frame (S129, S132).

[0087] In this case the temporally distant reference frame is a frame stored in the reference buffer for temporally distant reference frames, and the temporally close reference frame is a frame stored in the reference buffer for temporally close reference frames.

[0088] The reference buffer for temporally close reference frames is a FIFO (first in, first out) buffer, in which the frame entered first is the first to be removed; frames encoded a relatively short time earlier are stored in the reference buffer for temporally close reference frames.

[0089] Frames encoded a relatively long time earlier are stored in the buffer for temporally distant reference frames. The first pictures of the respective scene sequences, namely the initial frame, frames with a complete scene change, frames with a partial scene change and the like, are stored in the reference buffer for temporally distant reference frames.

[0090] If there are no frames with a complete scene change or frames with a partial scene change in the reference buffer for temporally distant reference frames, the frames with a scene change can be stored separately.
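The following minimal sketch (Python; the class and its methods are hypothetical, not part of the patent) illustrates the two kinds of storage described above: a FIFO buffer for temporally close reference frames and a separate store for temporally distant reference frames such as initial frames and frames with a complete or partial scene change.

    from collections import deque

    # Hypothetical reference-frame storage: a bounded FIFO for temporally close
    # reference frames and a list for temporally distant ones.
    class ReferenceBuffers:
        def __init__(self, short_term_size=4):
            self.close = deque(maxlen=short_term_size)   # oldest frame dropped first
            self.distant = []

        def add_close(self, frame):
            self.close.append(frame)

        def add_distant(self, frame):
            self.distant.append(frame)

        def is_distant(self, frame):
            return frame in self.distant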

[0091] Accordingly, as shown in figure 4, the initial frame I0, which is the first frame of scene sequence A1, the first frame with a complete scene change P50 of scene sequence B1, and the first frame with a partial scene change P120 can be stored in the reference buffer for temporally distant reference frames. In this case a scene sequence is a sequence of similar frames. For example, when a discussion programme is shown, the host appears on the screen first, then participant A appears, then the host appears again, and then participant A appears again. The scene in which the host appears for the first time belongs to scene sequence A, and the scene in which participant A then appears belongs to scene sequence B. The scene in which the host reappears belongs to scene sequence A, and the scene in which participant A reappears belongs to scene sequence B. As indicated above, when a scene change occurs, the P-frame is encoded in the inter-frame prediction mode with reference to a temporally close or temporally distant reference frame instead of in the intra-frame prediction mode. This approach reduces the number of required information bits and, accordingly, increases the coding efficiency.

[0092] Steps S117-S132 will now be described with reference to figure 4. As shown in figure 4, if the P-frame P200 to be encoded is a frame with a complete scene change belonging to scene sequence B2, the reference frames stored in the reference buffer for temporally close reference frames are not used. The reason is that the frame with the complete scene change P200 is the first frame of scene sequence B2, so its scene sequence differs from that of the temporally close reference frames, the preceding P-frames, which belong to scene sequence A2. Thus the similarity of the frame with the complete scene change P200 to the temporally close reference frames belonging to scene sequence A2 is significantly reduced, and accurate coding from such reference frames becomes impossible.

[0093] In this case the P-frame is encoded in the inter-frame prediction mode with reference to other reference frames, P50 and P120, which belong to scene sequence B1, a scene sequence that corresponds to scene sequence B2.

[0094] On the other hand, if a partial scene change occurs in the P-frame, the encoding is carried out in two different ways depending on the area. In other words, the blocks belonging to the area where the partial scene change occurs are encoded in the inter-frame prediction mode with reference to the temporally distant reference frames P50 and P120 stored in the reference buffer for temporally distant reference frames. The blocks belonging to the area where no scene change occurs are encoded in the inter-frame prediction mode with reference to the temporally close reference frames stored in the reference buffer for temporally close reference frames.

[0095] As described above, after one P-frame has been encoded, the next frame is input (S159). If this frame is a B-frame, the five prediction modes (the intra-frame prediction mode, the forward prediction mode, the backward prediction mode, the bidirectional prediction mode and the direct prediction mode) are tested and one of them is selected as the optimal encoding mode (S135, S138). This description is mainly concerned with the direct prediction mode.
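The patent does not fix a particular selection criterion for steps S135 and S138; as a purely illustrative sketch (Python; the cost function is supplied by the caller and all names are hypothetical), the mode decision could be expressed as choosing the candidate mode with the lowest coding cost for the block:

    # Hypothetical mode decision for a B-frame block (steps S135, S138).
    # cost_of_mode(block, mode) is assumed to return the coding cost, e.g. a
    # rate-distortion measure, of encoding the block in the given mode.
    def select_best_mode(block, cost_of_mode):
        modes = ["intra", "forward", "backward", "bidirectional", "direct"]
        return min(modes, key=lambda mode: cost_of_mode(block, mode))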

[0096] First, one of the blocks of the B-frame is read (S141). The remaining blocks may be read sequentially. Then the type of the reference buffer in which the given frame is stored is checked.

[0097] The given frame is selected from the frames preceding the B-frame in encoding order, irrespective of the display order. In other words, the given frame is one of the reference frames used for encoding the B-frame. Therefore the given frame may be either a temporally close or a temporally distant reference frame. Temporally close reference frames may be located before or after the B-frame in display order; they are stored in the reference buffer for temporally close reference frames. Temporally distant reference frames are stored in the reference buffer for temporally distant reference frames. If the given frame is a temporally distant reference frame, the forward motion vector of the direct prediction mode for the B-frame is set to the motion vector of the co-located block in the given frame, and the backward motion vector of the direct prediction mode for the B-frame is set to zero (S150). If, however, the given frame is a temporally close reference frame, the reference frame index and the motion vector defined for the co-located block in the given frame are read (S144). This reference frame index and this motion vector have been determined beforehand and stored in the system buffer. From the reference frame index it is determined whether the motion vector of the co-located block in the given frame points to a temporally distant reference frame (S147). As already noted, the reference frames are stored in the reference buffer, which includes the buffer for temporally close reference frames and the buffer for temporally distant reference frames.

[0098] If the motion vector of the co-located block in the given frame points to a temporally distant reference frame, the B-frame is encoded using the following expressions 3 and 4 (S150).

[0099] Expression 3:

MVf = MV,

where MV is the motion vector of the co-located block in the given frame, and MVf is the forward motion vector of the direct prediction mode for the B-frame.

[0100] Expression 4:

MVb = 0,

where MV is the motion vector of the co-located block in the given frame, and MVb is the backward motion vector of the direct prediction mode for the B-frame.

[0101] In other words, if the motion vector of the co-located block in the given frame points to a temporally distant reference frame, the forward motion vector of the direct prediction mode for the B-frame is set to the motion vector of the co-located block in the given frame, and the backward motion vector is set to zero.

[0102] As shown in figure 5, in step S150, if the motion vector of the co-located block in the given frame P200 points to the temporally distant reference frame P50, the parameters TRd and TRb of the standard equations 1 and 2 lose their meaning. In other words, because the parameters TRd and TRb would describe the time interval (spanning even another scene sequence, A2) between the given frame P200, which belongs to scene sequence B2, and the temporally distant reference frame P50, which belongs to scene sequence B1, the forward motion vector and the backward motion vector of the direct prediction mode cannot be calculated using the parameters TRd and TRb.

[0103] Referring to figure 5, the following should be further noted. When two B-frames are inserted in the moving-picture sequence and encoded, the first frame to be encoded is the P-frame P200, which precedes the frames B1 and B2 in encoding order. In this case, since the P-frame P200 is a frame with a complete scene change, it is encoded in the inter-frame prediction mode from the temporally distant reference frame P50 stored in the reference buffer for temporally distant reference frames. In accordance with the encoding order, the next frame to be encoded is the frame B1. Because the frame B1 belongs to scene sequence A2, most of its blocks are encoded in the forward prediction mode from temporally close reference frames belonging to scene sequence A2, or in the bidirectional prediction mode with both reference frames belonging to scene sequence A2. However, the intra-frame prediction mode, the backward prediction mode or the bidirectional prediction mode relative to the P-frame P200, which belongs to the different scene sequence B2, as well as the direct prediction mode, which would derive the motion vectors of the direct prediction mode from the co-located block in the P-frame P200, can probably not be used as encoding modes for the blocks of the frame B1.

[0104] By contrast, since not only the frame B2 but also the given frame P200, which is used to determine the motion vectors of the direct prediction mode for the frame B2, belong to the same scene sequence B2, the direct prediction mode is selected as the encoding method for most blocks of the frame B2. In other words, after the motion vector of each block in the given frame P200 has been obtained in the inter-frame prediction mode from the temporally distant reference frame P50, which belongs to scene sequence B1, the motion vectors of the direct prediction mode for the frame B2 are determined from the motion vector of the co-located block in the given frame P200. Because the frame B2 and the given frame P200 belong to scene sequence B2, the temporally distant reference frame P50 belongs to scene sequence B1, and the similarity between scene sequences B1 and B2 is very high, the direct prediction mode can be set as the encoding method for most of the blocks of the frame B2. The coding efficiency for the frame B2 accordingly increases.

[0105] On the other hand, if the motion vector of the co-located block in the given frame points to a temporally close reference frame, the B-frame is encoded using the standard expressions 1 and 2. In this case, since the temporally close reference frame stored in the corresponding buffer belongs to the same scene sequence as the given frame (and no other scene sequence lies between the given frame and the temporally close reference frame), the forward motion vector and the backward motion vector of the direct prediction mode are determined using the standard expressions 1 and 2, which involve the parameters TRd and TRb representing the time intervals.
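To summarize the two branches described in paragraphs [0097]-[0105], the following minimal sketch (Python; the function name and argument convention are hypothetical) determines the direct-mode motion vectors according to whether the co-located block's motion vector points to a temporally distant or a temporally close reference frame:

    # Illustrative direct-mode motion vector derivation: expressions 3 and 4 when
    # the co-located block's reference is temporally distant, the standard
    # scaling of equations 1 and 2 when it is temporally close.
    def direct_mode_mvs(mv, reference_is_distant, tr_d=None, tr_b=None):
        if reference_is_distant:
            mv_f = mv                      # expression 3: MVf = MV
            mv_b = (0, 0)                  # expression 4: MVb = 0
        else:
            mv_f = (tr_b * mv[0] // tr_d, tr_b * mv[1] // tr_d)            # equation 1
            mv_b = ((tr_b - tr_d) * mv[0] // tr_d,
                    (tr_b - tr_d) * mv[1] // tr_d)                          # equation 2
        return mv_f, mv_b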

[0106] After one block of the B-frame has been encoded, the next block of the B-frame is read and encoded, and so on (S156). Similar operations are performed for all blocks of the B-frame. After the B-frame has been encoded, the next frame is input and encoded so as to continue the encoding of the moving picture (S159).

[0107] As shown above, according to the method of coding a moving picture proposed in the present invention, the forward motion vector and the backward motion vector of the direct prediction mode for the B-frame are determined differently depending on the reference frame indicated by the motion vector of the co-located block in the given frame. For the B-frame, the direct prediction mode, which provides improved overall coding efficiency, is therefore mainly used as the encoding method.

[0108] According to the method of coding a moving picture proposed in the present invention, a P-frame in which a scene change occurs is encoded in the inter-frame prediction mode using motion compensation from temporally distant reference frames, so that the number of information bits is reduced and the coding efficiency is improved.

[0109] It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention. It is intended that the present invention covers such modifications and variations provided they come within the scope of the appended claims, including with regard to equivalent features.

1. A method for determining motion vectors in the direct prediction mode for a B-frame, consisting in that, during encoding/decoding of each block of the B-frame in the direct prediction mode, the motion vectors of the direct prediction mode are determined using the motion vector of the co-located block in a given frame used for encoding/decoding the B-frame, wherein, if the type of the given frame is a temporally distant reference frame, one of the motion vectors to be determined is set equal to the motion vector of the co-located block, and the other of the motion vectors to be determined is set equal to 0.

2. The method according to claim 1, wherein said one motion vector to be determined is the forward motion vector, and the other motion vector to be determined is the backward motion vector.

3. A method for determining motion vectors for a block of a B-frame in the direct prediction mode, comprising determining the motion vectors of the direct prediction mode using the motion vector of the co-located block in a given frame used for encoding/decoding the B-frame, where the type of the given frame is a temporally close reference frame, wherein, if the motion vector of the co-located block points to a temporally distant reference frame, one of the motion vectors to be determined is set equal to the motion vector of the co-located block and the other of the motion vectors to be determined is set equal to 0, and if the motion vector of the co-located block points to a temporally close reference frame, the motion vectors are determined by scaling the motion vector of the co-located block.

4. The method according to claim 3, wherein said one motion vector to be determined is the forward motion vector, and the other motion vector to be determined is the backward motion vector.

5. The method according to claim 3, wherein the motion vector of the co-located block in the given frame is stored in the system buffer.



 
