Method and device for weighted prediction for scalable video signal coding

FIELD: information technology.

SUBSTANCE: a scalable video encoder includes an encoder for encoding a block in an enhancement layer of a picture by applying a same weighting parameter to an enhancement layer reference picture as that applied to a lower layer reference picture used for encoding a block in a lower layer of the picture. The block in the enhancement layer corresponds to the block in the lower layer, and the enhancement layer reference picture corresponds to the lower layer reference picture.

EFFECT: increased efficiency of weighted prediction for scalable video encoding and decoding, without the need to store different sets of weighting parameters for the same reference picture in the enhancement layer.

31 cl, 6 dwg

 

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 60/701464, filed July 21, 2005 and entitled "METHOD AND APPARATUS FOR WEIGHTED PREDICTION FOR SCALABLE VIDEO CODING", which is incorporated by reference herein in its entirety.

TECHNICAL FIELD OF THE INVENTION

The present invention relates generally to video encoding and decoding and, more particularly, to methods and apparatus for weighted prediction for scalable video encoding and decoding.

BACKGROUND OF THE INVENTION

The International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) Moving Picture Experts Group-4 (MPEG-4) Part 10 Advanced Video Coding (AVC) standard/International Telecommunication Union, Telecommunication Sector (ITU-T) H.264 standard (hereinafter the "MPEG-4/H.264 standard" or simply the "H.264 standard") is the first international video coding standard to include a Weighted Prediction (WP) tool. Weighted prediction was adopted to improve coding efficiency. The Scalable Video Coding (SVC) standard, developed as an amendment to the H.264 standard, also adopts weighted prediction. However, the SVC standard does not explicitly specify the relationship of weights among the base layer and enhancement layers.

Weighted prediction (WP) is supported in the Main, Extended, and High profiles of the H.264 standard. The use of WP is indicated in the picture parameter set for P and SP slices using the weighted_pred_flag field, and for B slices using the weighted_bipred_idc field. There are two WP modes: an explicit mode and an implicit mode. The explicit mode is supported in P, SP, and B slices. The implicit mode is supported in only B slices.

A single weighting factor and offset are associated with each reference picture index for each color component in each slice. In explicit mode, these WP parameters may be coded in the slice header. In implicit mode, these parameters are derived based on the relative distance between the current picture and its reference pictures.
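As an illustration of how such a weighting factor and offset act on a prediction, the following sketch applies an explicit-mode weight to one uni-predicted sample, following the standard form (weight, rounded shift by the weight denominator, offset, clip). The function name and the 8-bit depth default are assumptions made for this example:

```python
def weighted_sample(p, w, o, log_wd, bit_depth=8):
    """Apply an explicit-mode weight w and offset o to one uni-predicted
    sample p; log_wd plays the role of luma_log2_weight_denom."""
    if log_wd >= 1:
        # Rounded right shift by the weight denominator, then add the offset.
        val = ((p * w + (1 << (log_wd - 1))) >> log_wd) + o
    else:
        val = p * w + o
    # Clip to the valid sample range for the given bit depth.
    return min(max(val, 0), (1 << bit_depth) - 1)
```

With w equal to 1 << log_wd and o equal to 0, a sample passes through unchanged; this identity weighting is the same default that the inheritance process below falls back to when no base layer match exists.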

For each macroblock or macroblock partition, the weighting parameters applied are based on the reference picture index (or indices, in the case of bi-prediction) of the current macroblock or macroblock partition. The reference picture indices are either coded in the bitstream or may be derived, e.g., for skipped or direct mode macroblocks. The use of the reference picture index to signal which weighting parameters to apply is bitrate efficient, as compared with requiring a weighting parameter index in the bitstream, because the reference picture index is already available based on other required bitstream fields.

Many different methods of scalability have been widely studied and standardized, including SNR (signal-to-noise ratio) scalability, spatial scalability, temporal scalability, and fine grain scalability, in scalability profiles of the MPEG-2 and H.264 standards, or are currently being developed as an amendment to the H.264 standard.

For spatial, temporal, and SNR scalability, a large degree of inter-layer prediction is incorporated. Intra-coded and inter-coded macroblocks can be predicted using the corresponding signals of previous layers. Moreover, the motion description of each layer can be used for a prediction of the motion description of following enhancement layers. These techniques fall into three categories: inter-layer intra texture prediction, inter-layer motion prediction, and inter-layer residue prediction.

In the Joint Scalable Video Model (JSVM) 2.0, an enhancement layer macroblock can exploit inter-layer motion prediction using scaled base layer motion data, using either the "BASE_LAYER_MODE" or the "QPEL_REFINEMENT_MODE", as in the case of dyadic (two-layer) spatial scalability. When inter-layer motion prediction is used, the motion vector (including its reference picture index and associated weighting parameters) of the corresponding (upsampled) base layer MB (macroblock) is used for motion prediction. If the enhancement layer and its previous layer have different pred_weight_table() values, different sets of weighting parameters need to be stored for the same reference picture in the enhancement layer.

SUMMARY OF THE INVENTION

To overcome these and other drawbacks and disadvantages of the prior art, the present invention is directed to methods and apparatus for weighted prediction for scalable video encoding and decoding.

According to an aspect of the present invention, there is provided a scalable video decoder. The scalable video decoder includes a decoder for decoding a block in an enhancement layer of a picture by applying a same weighting parameter to an enhancement layer reference picture as that applied to a lower layer reference picture used for decoding a block in a lower layer of the picture. The block in the enhancement layer corresponds to the block in the lower layer, and the enhancement layer reference picture corresponds to the lower layer reference picture.

According to another aspect of the present invention, there is provided a method for scalable video decoding. The method includes decoding a block in an enhancement layer of a picture by applying a same weighting parameter to an enhancement layer reference picture as that applied to a lower layer reference picture used for decoding a block in a lower layer of the picture. The block in the enhancement layer corresponds to the block in the lower layer, and the enhancement layer reference picture corresponds to the lower layer reference picture.

According to yet another aspect of the present invention, there is provided a storage medium having scalable video signal data encoded thereupon. The scalable video signal data includes a block encoded in an enhancement layer of a picture, formed by applying a same weighting parameter to an enhancement layer reference picture as that applied to a lower layer reference picture used for encoding a block in a lower layer of the picture. The block in the enhancement layer corresponds to the block in the lower layer, and the enhancement layer reference picture corresponds to the lower layer reference picture.

These and other aspects, features and advantages of the present invention will become apparent from the following detailed description of exemplary embodiments, which is to be read in connection with the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention may be better understood in accordance with the following exemplary figures, in which:

figure 1 shows a block diagram for an exemplary Joint Scalable Video Model (JSVM) 2.0 encoder to which the present principles may be applied;

figure 2 shows a block diagram for an exemplary decoder to which the present principles may be applied;

figure 3 is a flow diagram for an exemplary method for scalable video encoding of an image block using weighted prediction, in accordance with an exemplary embodiment of the present principles;

figure 4 is a flow diagram for an exemplary method for scalable video decoding of an image block using weighted prediction, in accordance with an exemplary embodiment of the present principles;

figure 5 is a flow diagram for an exemplary method for decoding level_idc and profile_idc syntax, in accordance with an exemplary embodiment of the present principles; and

figure 6 is a flow diagram for an exemplary method for decoding a weighted prediction constraint for an enhancement layer, in accordance with an exemplary embodiment of the present principles.

DETAILED DESCRIPTION

The present invention is directed to methods and apparatus for weighted prediction for scalable video encoding and decoding.

In accordance with the principles of the present invention, methods and apparatus are disclosed which re-use the base layer weights for enhancement layer weighted prediction. Advantageously, embodiments in accordance with the present principles can save on memory and/or complexity for both the encoder and decoder. Moreover, embodiments in accordance with the present principles can also save bits at very low bitrates.

The present description illustrates the principles of the present invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the invention and are included within its spirit and scope.

All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the principles of the invention and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions.

Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, that is, any elements developed that perform the same function, regardless of structure.

Thus, for example, it will be appreciated by those skilled in the art that the block diagrams presented herein represent conceptual views of illustrative circuitry embodying the principles of the invention. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudocode, and the like represent various processes which may be substantially represented in computer readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.

The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term "processor" or "controller" should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor ("DSP") hardware, read-only memory ("ROM") for storing software, random access memory ("RAM"), and non-volatile storage.

Other hardware, conventional and/or custom, may also be included. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.

In the claims hereof, any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements that performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function. The invention as defined by such claims resides in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein.

In accordance with embodiments of the present principles, a method and apparatus are disclosed which reuse the base layer weights for the enhancement layer. Since the base layer is simply a downsampled version of the enhancement layer, it is advantageous if the enhancement layer and the base layer have the same set of weighting parameters for the same reference picture.

In addition, other advantages/features are provided by the present principles. One advantage/feature is that only one set of weighting parameters needs to be stored for each enhancement layer, which can save memory usage. Moreover, when inter-layer motion prediction is used, the decoder needs to know which set of weighting parameters is used. A lookup table can be utilized to store the necessary information.
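Such a lookup table might be sketched as follows. The key choice (dependency_id plus picture order count) and all names here are illustrative assumptions for this example, not structures defined by the SVC specification:

```python
# Hypothetical lookup table: one set of weighting parameters per reference
# picture, keyed by (dependency_id, POC). Names are illustrative only.
weight_lut = {}

def store_weights(dependency_id, poc, luma_weight, luma_offset):
    weight_lut[(dependency_id, poc)] = (luma_weight, luma_offset)

def lookup_weights(dependency_id, poc, log2_denom=6):
    # Fall back to the identity weight when no entry exists for the picture.
    return weight_lut.get((dependency_id, poc), (1 << log2_denom, 0))
```

Keying by picture order count mirrors the POC-based matching used by the inheritance process described later in this document.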

Another advantage/feature is the reduced complexity of both the encoder and decoder. At the decoder, embodiments of the present principles can reduce the complexity of parsing and of the table lookups needed to locate the correct set of weighting parameters. At the encoder, embodiments of the present principles can reduce the complexity of the algorithms, and thus of the decision making, for estimating the weighting parameters. When an update step that takes weighted prediction into account is used, allowing multiple weighting parameters for the same reference picture index would make the derivation of the motion information at the inverse-update step in the decoder and at the update step in the encoder more complex.

Yet another advantage/feature is that, at very low bitrates, embodiments of the present principles may also have a slight coding efficiency advantage, because the weighting parameters are not explicitly transmitted in the slice header for the enhancement layer.

Referring to figure 1, an exemplary Joint Scalable Video Model version 2.0 (hereinafter "JSVM2.0") encoder to which the present invention may be applied is indicated generally by the reference numeral 100. The JSVM2.0 encoder 100 uses three spatial layers and motion-compensated temporal filtering. The JSVM encoder 100 includes a two-dimensional (2D) decimator 104, a 2D decimator 106, and a motion-compensated temporal filtering (MCTF) module 108, each of which has an input for receiving video signal data 102.

An output of the 2D decimator 106 is connected in signal communication with an input of an MCTF module 110. A first output of the MCTF module 110 is connected in signal communication with an input of a motion coder 112, and a second output of the MCTF module 110 is connected in signal communication with an input of a prediction module 116. A first output of the motion coder 112 is connected in signal communication with a first input of a multiplexer 114. A second output of the motion coder 112 is connected in signal communication with a first input of a motion coder 124. A first output of the prediction module 116 is connected in signal communication with an input of a spatial transformer 118. An output of the spatial transformer 118 is connected in signal communication with a second input of the multiplexer 114. A second output of the prediction module 116 is connected in signal communication with an input of an interpolator 120. An output of the interpolator 120 is connected in signal communication with a first input of a prediction module 122. A first output of the prediction module 122 is connected in signal communication with an input of a spatial transformer 126. An output of the spatial transformer 126 is connected in signal communication with a second input of the multiplexer 114. A second output of the prediction module 122 is connected in signal communication with an input of an interpolator 130. An output of the interpolator 130 is connected in signal communication with a first input of a prediction module 134. An output of the prediction module 134 is connected in signal communication with an input of a spatial transformer 136. An output of the spatial transformer 136 is connected in signal communication with a second input of the multiplexer 114.

An output of the 2D decimator 104 is connected in signal communication with an input of an MCTF module 128. A first output of the MCTF module 128 is connected in signal communication with a second input of the motion coder 124. A first output of the motion coder 124 is connected in signal communication with the first input of the multiplexer 114. A second output of the motion coder 124 is connected in signal communication with a first input of a motion coder 132. A second output of the MCTF module 128 is connected in signal communication with a second input of the prediction module 122.

A first output of the MCTF module 108 is connected in signal communication with a second input of the motion coder 132. An output of the motion coder 132 is connected in signal communication with the first input of the multiplexer 114. A second output of the MCTF module 108 is connected in signal communication with a second input of the prediction module 134. An output of the multiplexer 114 provides an output bitstream 138.

For each spatial layer, a motion-compensated temporal decomposition is performed. This decomposition provides temporal scalability. Motion information from lower spatial layers can be used for prediction of motion in the higher layers. For texture coding, spatial prediction between successive spatial layers can be applied to remove redundancy. The residual signal resulting from intra prediction or motion-compensated inter prediction is transform coded. A quality base layer residual provides minimum reconstruction quality at each spatial layer. This quality base layer can be encoded into an H.264 standard compliant stream if no inter-layer prediction is applied. For quality scalability, quality enhancement layers are additionally encoded. These enhancement layers can be chosen to provide either coarse or fine grain quality (SNR) scalability.

Referring to figure 2, an exemplary scalable video decoder to which the present invention may be applied is indicated generally by the reference numeral 200. An input of a demultiplexer 202 is available as an input to the scalable video decoder 200, for receiving a scalable bitstream. A first output of the demultiplexer 202 is connected in signal communication with an input of an SNR scalable entropy decoder 204 with inverse spatial transform. A first output of the SNR scalable entropy decoder 204 with inverse spatial transform is connected in signal communication with a first input of a prediction module 206. An output of the prediction module 206 is connected in signal communication with a first input of an inverse MCTF module 208.

A second output of the SNR scalable entropy decoder 204 with inverse spatial transform is connected in signal communication with a first input of a motion vector (MV) decoder 210. An output of the MV decoder 210 is connected in signal communication with a second input of the inverse MCTF module 208.

A second output of the demultiplexer 202 is connected in signal communication with an input of an SNR scalable entropy decoder 212 with inverse spatial transform. A first output of the SNR scalable entropy decoder 212 with inverse spatial transform is connected in signal communication with a first input of a prediction module 214. A first output of the prediction module 214 is connected in signal communication with an input of an interpolation module 216. An output of the interpolation module 216 is connected in signal communication with a second input of the prediction module 206. A second output of the prediction module 214 is connected in signal communication with a first input of an inverse MCTF module 218.

A second output of the SNR scalable entropy decoder 212 with inverse spatial transform is connected in signal communication with a first input of an MV decoder 220. A first output of the MV decoder 220 is connected in signal communication with a second input of the MV decoder 210. A second output of the MV decoder 220 is connected in signal communication with a second input of the inverse MCTF module 218.

A third output of the demultiplexer 202 is connected in signal communication with an input of an SNR scalable entropy decoder 222 with inverse spatial transform. A first output of the SNR scalable entropy decoder 222 with inverse spatial transform is connected in signal communication with an input of a prediction module 224. A first output of the prediction module 224 is connected in signal communication with an input of an interpolation module 226. An output of the interpolation module 226 is connected in signal communication with a second input of the prediction module 214.

A second output of the prediction module 224 is connected in signal communication with a first input of an inverse MCTF module 228. A second output of the SNR scalable entropy decoder 222 with inverse spatial transform is connected in signal communication with an input of an MV decoder 230. A first output of the MV decoder 230 is connected in signal communication with a second input of the MV decoder 220. A second output of the MV decoder 230 is connected in signal communication with a second input of the inverse MCTF module 228.

An output of the inverse MCTF module 228 is available as an output of the decoder 200, for a layer 0 signal. An output of the inverse MCTF module 218 is available as an output of the decoder 200, for a layer 1 signal. An output of the inverse MCTF module 208 is available as an output of the decoder 200, for a layer 2 signal.

In a first exemplary embodiment in accordance with the present principles, no new syntax is used. In this first exemplary embodiment, the enhancement layer reuses the base layer weights. The first exemplary embodiment can be implemented, for example, as a profile or level constraint. The requirement can also be signaled in the sequence or picture parameter sets.

In a second exemplary embodiment in accordance with the present principles, one syntax element, base_pred_weight_table_flag, is introduced in the slice header syntax in the scalable extension, as shown in Table 1, so that the encoder can adaptively select, on a slice basis, which mode is used for weighted prediction. When base_pred_weight_table_flag is not present, base_pred_weight_table_flag shall be inferred to be equal to 0. When base_pred_weight_table_flag is equal to 1, this indicates that the enhancement layer reuses the pred_weight_table() from its preceding layer.
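The conditions under which this flag is present can be sketched as a simplified reading model. BitSource and all parameter names here are assumptions standing in for a real slice-header bit reader:

```python
class BitSource:
    """Minimal stand-in for a slice header bit reader (hypothetical API)."""
    def __init__(self, bits):
        self.bits = list(bits)
    def read_u1(self):
        return self.bits.pop(0)

def read_base_pred_weight_table_flag(bs, slice_type, weighted_pred_flag,
                                     weighted_bipred_idc, base_id_plus1,
                                     adaptive_prediction_flag):
    # The flag is only coded for weighted EP/EB slices that have a base
    # layer (base_id_plus1 != 0) and adaptive prediction enabled.
    flag = 0  # when not present, the flag is inferred to be equal to 0
    if ((weighted_pred_flag and slice_type == "EP") or
            (weighted_bipred_idc == 1 and slice_type == "EB")):
        if base_id_plus1 != 0 and adaptive_prediction_flag == 1:
            flag = bs.read_u1()
    return flag
```

When the function returns 0, the slice header would still carry an explicit pred_weight_table(), matching the branch guarded by base_pred_weight_table_flag being equal to 0.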

Table 1 illustrates the syntax for weighted prediction for scalable video coding.

TABLE 1
slice_header_in_scalable_extension( ) {                                      C    Descriptor
    first_mb_in_slice                                                        2    ue(v)
    slice_type                                                               2    ue(v)
    pic_parameter_set_id                                                     2    ue(v)
    if( slice_type = = PR ) {
        num_mbs_in_slice_minus1                                              2    ue(v)
        luma_chroma_sep_flag                                                 2    u(1)
    }
    frame_num                                                                     u(v)
    if( !frame_mbs_only_flag ) {
        field_pic_flag                                                       2    u(1)
        if( field_pic_flag )
            bottom_field_flag                                                2    u(1)
    }
    if( nal_unit_type = = 21 )
        idr_pic_id                                                           2    ue(v)
    if( pic_order_cnt_type = = 0 ) {
        pic_order_cnt_lsb                                                    2    u(v)
        if( pic_order_present_flag && !field_pic_flag )
            delta_pic_order_cnt_bottom                                       2    se(v)
    }
    if( pic_order_cnt_type = = 1 && !delta_pic_order_always_zero_flag ) {
        delta_pic_order_cnt[ 0 ]                                                  se(v)
        if( pic_order_present_flag && !field_pic_flag )
            delta_pic_order_cnt[ 1 ]                                         2    se(v)
    }
    if( slice_type != PR ) {
        if( redundant_pic_cnt_present_flag )
            redundant_pic_cnt                                                2    ue(v)
        if( slice_type = = EB )
            direct_spatial_mv_pred_flag                                      2    u(1)
        key_picture_flag                                                     2    u(1)
        decomposition_stages                                                 2    ue(v)
        base_id_plus1                                                        2    ue(v)
        if( base_id_plus1 != 0 ) {
            adaptive_prediction_flag                                         2    u(1)
        }
        if( slice_type = = EP || slice_type = = EB ) {
            num_ref_idx_active_override_flag                                 2    u(1)
            if( num_ref_idx_active_override_flag ) {
                num_ref_idx_l0_active_minus1                                 2    ue(v)
                if( slice_type = = EB )
                    num_ref_idx_l1_active_minus1                             2    ue(v)
            }
        }
        ref_pic_list_reordering( )
        for( decLvl = temporal_level; decLvl < decomposition_stages; decLvl++ ) {
            num_ref_idx_update_l0_active[ decLvl + 1 ]                            ue(v)
            num_ref_idx_update_l1_active[ decLvl + 1 ]                            ue(v)
        }
        if( ( weighted_pred_flag && slice_type = = EP ) ||
            ( weighted_bipred_idc = = 1 && slice_type = = EB ) ) {
            if( ( base_id_plus1 != 0 ) && ( adaptive_prediction_flag = = 1 ) )
                base_pred_weight_table_flag                                  2    u(1)
            if( base_pred_weight_table_flag = = 0 )
                pred_weight_table( )                                         2
        }
        if( nal_ref_idc != 0 )
            dec_ref_pic_marking( )                                           2
        if( entropy_coding_mode_flag && slice_type != EI )
            cabac_init_idc                                                   2    ue(v)
    }
    slice_qp_delta                                                           2    se(v)
    if( deblocking_filter_control_present_flag ) {
        disable_deblocking_filter_idc                                        2    ue(v)
        if( disable_deblocking_filter_idc != 1 ) {
            slice_alpha_c0_offset_div2                                       2    se(v)
            slice_beta_offset_div2                                           2    se(v)
        }
    }
    if( slice_type != PR )
        if( num_slice_groups_minus1 > 0 && slice_group_map_type >= 3 &&
            slice_group_map_type <= 5 )
            slice_group_change_cycle                                         2    u(v)
    if( slice_type != PR && extended_spatial_scalability > 0 ) {
        if( chroma_format_idc > 0 ) {
            base_chroma_phase_x_plus1                                        2    u(2)
            base_chroma_phase_y_plus1                                        2    u(2)
        }
        if( extended_spatial_scalability = = 2 ) {
            scaled_base_left_offset                                          2    se(v)
            scaled_base_top_offset                                           2    se(v)
            scaled_base_right_offset                                         2    se(v)
            scaled_base_bottom_offset                                        2    se(v)
        }
    }
    SpatialScalabilityType = spatial_scalability_type( )
}

In the decoder, when the enhancement layer is to reuse the base layer weights, the pred_weight_table() from the base (or preceding) layer is remapped to the pred_weight_table() of the current enhancement layer. This process is used for the following cases: in the first case, the same reference picture index in the base layer and the enhancement layer indicates a different reference picture; in the second case, a reference picture used in the enhancement layer has no corresponding match in the base layer. For the first case, the picture order count (POC) number is used to map the weighting parameters from the base layer to the right reference picture index in the enhancement layer. If multiple weighting parameters are used in the base layer, the weighting parameters with the smallest reference picture index are preferably, but not necessarily, mapped first. For the second case, base_pred_weight_table_flag is treated as being set to 0 for the reference picture that is not available in the base layer. The remapping of the pred_weight_table() from the base (or preceding) layer to the pred_weight_table() of the current enhancement layer is derived as set forth below. This process is referred to as the inheritance process for pred_weight_table(). In particular, this inheritance process is invoked when base_pred_weight_table_flag is equal to 1. The outputs of this process are as follows:

- luma_weight_LX[] (with X being 0 or 1)

- luma_offset_LX[] (with X being 0 or 1)

- chroma_weight_LX[] (with X being 0 or 1)

- chroma_offset_LX[] (with X being 0 or 1)

- luma_log2_weight_denom

- chroma_log2_weight_denom

The derivation process for the base picture is invoked with basePic as output. For X being replaced by either 0 or 1, the following applies.

Let base_luma_weight_LX[] be the value of the syntax element luma_weight_LX[] of the base picture basePic.

Let base_luma_offset_LX[] be the value of the syntax element luma_offset_LX[] of the base picture basePic.

Let base_chroma_weight_LX[] be the value of the syntax element chroma_weight_LX[] of the base picture basePic.

Let base_chroma_offset_LX[] be the value of the syntax element chroma_offset_LX[] of the base picture basePic.

Let base_luma_log2_weight_denom be the value of the syntax element luma_log2_weight_denom of the base picture basePic.

Let base_chroma_log2_weight_denom be the value of the syntax element chroma_log2_weight_denom of the base picture basePic.

Let BaseRefPicListX be the reference index list RefPicListX of the base picture basePic.

For each reference index refIdxLX in the reference index list RefPicListX of the current slice (looping from 0 to num_ref_idx_lX_active_minus1), its associated weighting parameters in the current slice are inherited as set forth below:

Let refPic be the picture that refIdxLX refers to.

Let refPicBase, the corresponding base layer reference picture, be considered to exist if there is a picture for which all of the following conditions are true:

- The dependency_id syntax element for the picture refPicBase is equal to the variable DependencyIdBase of the picture refPic.

- The quality_level syntax element for the picture refPicBase is equal to the variable QualityLevelBase of the picture refPic.

- The fragment_order syntax element for the picture refPicBase is equal to the variable FragmentOrderBase of the picture refPic.

- The value of PicOrderCnt(refPic) is equal to the value of PicOrderCnt(refPicBase).

- There exists an index baseRefIdxLX equal to the lowest valid reference index in the base layer reference index list BaseRefPicListX that refers to refPicBase.

If refPicBase was found to exist, the following applies:

- baseRefIdxLX is marked as invalid for subsequent steps of the process.

luma_log2_weight_denom = base_luma_log2_weight_denom                    (1)
chroma_log2_weight_denom = base_chroma_log2_weight_denom                (2)
luma_weight_LX[refIdxLX] = base_luma_weight_LX[baseRefIdxLX]            (3)
luma_offset_LX[refIdxLX] = base_luma_offset_LX[baseRefIdxLX]            (4)
chroma_weight_LX[refIdxLX][0] = base_chroma_weight_LX[baseRefIdxLX][0]  (5)
chroma_offset_LX[refIdxLX][0] = base_chroma_offset_LX[baseRefIdxLX][0]  (6)
chroma_weight_LX[refIdxLX][1] = base_chroma_weight_LX[baseRefIdxLX][1]  (7)
chroma_offset_LX[refIdxLX][1] = base_chroma_offset_LX[baseRefIdxLX][1]  (8)

Otherwise

luma_log2_weight_denom = base_luma_log2_weight_denom                    (9)
chroma_log2_weight_denom = base_chroma_log2_weight_denom                (10)
luma_weight_LX[refIdxLX] = 1 << luma_log2_weight_denom                  (11)
luma_offset_LX[refIdxLX] = 0                                            (12)
chroma_weight_LX[refIdxLX][0] = 1 << chroma_log2_weight_denom           (13)
chroma_offset_LX[refIdxLX][0] = 0                                       (14)
chroma_weight_LX[refIdxLX][1] = 1 << chroma_log2_weight_denom           (15)
chroma_offset_LX[refIdxLX][1] = 0                                       (16)

The following is one exemplary way to implement the inheritance process:

for( baseRefIdxLX = 0; baseRefIdxLX <= base_num_ref_idx_lX_active_minus1; baseRefIdxLX++ )
    base_ref_avail[ baseRefIdxLX ] = 1
for( refIdxLX = 0; refIdxLX <= num_ref_idx_lX_active_minus1; refIdxLX++ ) {
    base_weights_avail_flag[ refIdxLX ] = 0
    for( baseRefIdxLX = 0; baseRefIdxLX <= base_num_ref_idx_lX_active_minus1; baseRefIdxLX++ ) {
        if( base_ref_avail[ baseRefIdxLX ] &&
            PicOrderCnt( RefPicListX[ refIdxLX ] ) == PicOrderCnt( BaseRefPicListX[ baseRefIdxLX ] ) ) {
            apply equations (1) through (8)
            base_ref_avail[ baseRefIdxLX ] = 0
            base_weights_avail_flag[ refIdxLX ] = 1
            break
        }
    }
    if( base_weights_avail_flag[ refIdxLX ] == 0 ) {
        apply equations (9) through (16)
    }
}    (17)
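For concreteness, the inheritance process above can be written as a small, runnable routine. The function name, the dictionary-based weight records, and the argument layout below are illustrative assumptions, not part of the SVC syntax; only the matching-and-fallback logic follows loop (17) and equations (1)-(16).

```python
def inherit_pred_weight_table(el_poc_list, bl_poc_list, bl_weights,
                              luma_log2_denom, chroma_log2_denom):
    """Remap base-layer weights onto enhancement-layer reference indices.

    el_poc_list / bl_poc_list: PicOrderCnt values of the reference pictures
    in list X for the enhancement and base layers (illustrative shapes).
    bl_weights: one dict per base-layer index, holding luma_weight,
    luma_offset, chroma_weight (2 entries) and chroma_offset (2 entries).
    """
    base_ref_avail = [True] * len(bl_poc_list)
    el_weights = []
    for poc in el_poc_list:
        match = None
        for base_idx, base_poc in enumerate(bl_poc_list):
            # A base-layer reference matches when it is still available and
            # has the same picture order count (the test in loop (17)).
            if base_ref_avail[base_idx] and poc == base_poc:
                match = base_idx
                base_ref_avail[base_idx] = False  # mark as used / unavailable
                break
        if match is not None:
            # Equations (1)-(8): inherit the base-layer weights.
            el_weights.append(dict(bl_weights[match]))
        else:
            # Equations (9)-(16): default (identity) weights, zero offsets.
            el_weights.append({
                "luma_weight": 1 << luma_log2_denom,
                "luma_offset": 0,
                "chroma_weight": [1 << chroma_log2_denom] * 2,
                "chroma_offset": [0, 0],
            })
    return el_weights
```

As in the pseudocode, weights with the lowest reference index are remapped first, and each base-layer entry is consumed at most once via the availability flags.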

If the enhancement layer picture and the base layer picture have the same slice partitioning, the remapping of the base (or lower) layer pred_weight_table() to the pred_weight_table() of the current enhancement layer can be performed on a slice basis. However, if the enhancement layer and the base layer have different slice partitioning, the remapping of the base (or lower) layer pred_weight_table() to the pred_weight_table() of the current enhancement layer must be performed on a macroblock basis. For example, when the base layer and the enhancement layer have the same two slices, the inheritance process can be invoked once per slice. In contrast, if the base layer has two slices and the enhancement layer has three slices, the inheritance process is invoked on a macroblock basis.

Referring to figure 3, an exemplary method for scalable encoding of an image block of a video signal using weighted prediction is indicated generally by the reference numeral 300.

Start block 305 begins encoding the current enhancement layer (EL) picture and passes control to decision block 310. Decision block 310 determines whether or not a base layer (BL) picture exists for the current EL picture. If so, control is passed to function block 315. Otherwise, control is passed to function block 350.

Function block 315 obtains the weights from the BL picture and passes control to function block 320. Function block 320 remaps the BL pred_weight_table() to the enhancement layer pred_weight_table() and passes control to function block 325. Function block 325 sets base_pred_weight_table_flag equal to true and passes control to function block 330. Function block 330 weights the reference picture with the obtained weights and passes control to function block 335. Function block 335 writes base_pred_weight_table_flag in the slice header and passes control to decision block 340. Decision block 340 determines whether or not base_pred_weight_table_flag is equal to true. If so, control is passed to function block 345. Otherwise, control is passed to function block 360.

Function block 350 calculates the weights for the EL picture and passes control to function block 355. Function block 355 sets base_pred_weight_table_flag equal to false and passes control to function block 330.

Function block 345 encodes the EL picture using the weighted reference picture and passes control to end block 365.

Function block 360 writes the weights in the slice header and passes control to function block 345.
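The figure-3 flow reduces to one decision: inherit the base-layer weights and signal base_pred_weight_table_flag, or estimate explicit weights and transmit them in the slice header. A minimal sketch of this encoder-side selection, assuming a hypothetical dictionary-based slice header and a caller-supplied weight estimator (neither is defined by the standard):

```python
def build_el_slice_weights(bl_weights, estimate_weights):
    """Return (slice_header_fields, weights) for the current EL slice.

    bl_weights: base-layer pred_weight_table() contents, or None when no
    base-layer picture exists for the current EL picture (decision 310).
    estimate_weights: callable performing weight estimation (block 350).
    """
    header = {}
    if bl_weights is not None:
        weights = dict(bl_weights)                 # blocks 315/320: inherit
        header["base_pred_weight_table_flag"] = 1  # block 325
    else:
        weights = estimate_weights()               # block 350
        header["base_pred_weight_table_flag"] = 0  # block 355
        header["pred_weight_table"] = weights      # block 360: send explicitly
    return header, weights
```

Note that when the flag is set, no weights are written to the slice header at all; skipping both the estimation and the transmission is precisely the saving the method targets.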

Referring to figure 4, an exemplary method for scalable decoding of an image block of a video signal using weighted prediction is indicated generally by the reference numeral 400.

Start block 405 begins decoding the current enhancement layer (EL) picture and passes control to function block 410. Function block 410 parses base_pred_weight_table_flag in the slice header and passes control to decision block 415. Decision block 415 determines whether or not base_pred_weight_table_flag is equal to one. If so, control is passed to function block 420. Otherwise, control is passed to function block 435.

Function block 420 copies the weights of the corresponding base layer (BL) picture to the EL picture and passes control to function block 425. Function block 425 remaps the BL picture's pred_weight_table() to the EL picture's pred_weight_table() and passes control to function block 430. Function block 430 decodes the EL picture with the obtained weights and passes control to end block 440.

Function block 435 parses the weighting parameters and passes control to function block 430.
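On the decoder side, the parsed flag drives the inverse choice: copy the base-layer weights or use the explicitly transmitted ones. A minimal sketch, again assuming a hypothetical dictionary-based slice header (an illustrative data shape, not the standard's bitstream syntax):

```python
def recover_el_weights(slice_header, bl_weights):
    """Select the EL weighting parameters per figure 4.

    slice_header: parsed slice-header fields (block 410).
    bl_weights: the corresponding base-layer picture's weights, if any.
    """
    if slice_header["base_pred_weight_table_flag"] == 1:
        # Blocks 420/425: copy and remap the base-layer weights.
        return dict(bl_weights)
    # Block 435: the weights were transmitted explicitly in the slice header.
    return slice_header["pred_weight_table"]
```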

Referring to figure 5, an exemplary method for decoding the level_idc and profile_idc syntax elements is indicated generally by the reference numeral 500.

Start block 505 passes control to function block 510. Function block 510 parses the level_idc and profile_idc syntax elements and passes control to function block 515. Function block 515 determines the weighted prediction constraint for the enhancement layer based on the parsing performed by function block 510, and passes control to end block 520.

Referring to figure 6, an exemplary method for decoding a weighted prediction constraint for an enhancement layer is indicated generally by the reference numeral 600.

Start block 605 passes control to function block 610. Function block 610 parses the syntax for weighted prediction for the enhancement layer and passes control to end block 615.

A description will now be given of some of the many attendant advantages/features of the present invention, some of which have been mentioned above. For example, one advantage/feature is a scalable video encoder that includes an encoder for encoding a block in an enhancement layer of a picture by applying to an enhancement layer reference picture the same weighting parameter as that applied to a particular lower layer reference picture used to encode a block in a lower layer of the picture, the block in the enhancement layer corresponding to the block in the lower layer, and the enhancement layer reference picture corresponding to the particular lower layer reference picture. Another advantage/feature is the scalable video encoder as described above, wherein the encoder encodes the block in the enhancement layer by selecting between an explicit weighting parameter mode and an implicit weighting parameter mode. Yet another advantage/feature is the scalable video encoder as described above, wherein the encoder imposes a constraint that the same weighting parameter is always applied to the enhancement layer reference picture as is applied to the particular lower layer reference picture, when the block in the enhancement layer corresponds to the block in the lower layer and the enhancement layer reference picture corresponds to the particular lower layer reference picture. Moreover, another advantage/feature is the scalable video encoder with the constraint as described above, wherein the constraint is defined as a profile or level constraint or is indicated in a sequence parameter set. Additionally, another advantage/feature is the scalable video encoder as described above, wherein the encoder adds a syntax element in the slice header, for a slice in the enhancement layer, to selectively apply to the enhancement layer reference picture either the same weighting parameter or a different weighting parameter.
Moreover, another advantage/feature is the scalable video encoder as described above, wherein the encoder remaps the pred_weight_table() syntax from the lower layer to the pred_weight_table() syntax for the enhancement layer. Additionally, another advantage/feature is the scalable video encoder with remapping as described above, wherein the encoder uses the picture order count to remap the weighting parameters of the lower layer to the corresponding reference picture index of the enhancement layer. Moreover, another advantage/feature is the scalable video encoder with remapping using the picture order count as described above, wherein the weights with the lowest reference picture index are remapped first. Additionally, another advantage/feature is the scalable video encoder with remapping as described above, wherein the encoder sets the weighted_prediction_flag field to zero for a reference picture used in the enhancement layer that is unavailable in the lower layer. Moreover, another advantage/feature is the scalable video encoder with remapping as described above, wherein the encoder sends, in the slice header, the weighting parameters for a reference picture index corresponding to a reference picture used in the enhancement layer, when the reference picture used in the enhancement layer has no match in the lower layer. Moreover, another advantage/feature is the scalable video encoder with remapping as described above, wherein the encoder performs the remapping on a slice basis when the picture has the same slice partitioning in both the enhancement layer and the lower layer, and performs the remapping on a macroblock basis when the picture has a different slice partitioning in the enhancement layer than in the lower layer.
Additionally, another advantage/feature is the scalable video encoder as described above, wherein the encoder remaps the pred_weight_table() syntax from the lower layer to the pred_weight_table() syntax for the enhancement layer when the encoder applies to the enhancement layer reference picture the same weighting parameter as that applied to the particular lower layer reference picture. Also, another advantage/feature is the scalable video encoder as described above, wherein the encoder skips performing weighting parameter estimation when it applies to the enhancement layer reference picture the same weighting parameter as that applied to the particular lower layer reference picture. Additionally, another advantage/feature is the scalable video encoder as described above, wherein the encoder stores only one set of weighting parameters for each reference picture index when it applies to the enhancement layer reference picture the same weighting parameter as that applied to the particular lower layer reference picture. Moreover, another advantage/feature is the scalable video encoder as described above, wherein the encoder estimates the weights when it applies a different weighting parameter or when the enhancement layer does not have a lower layer.

These and other features and advantages of the present invention may be readily ascertained by one of ordinary skill in the pertinent art based on the teachings herein. It is to be understood that the teachings of the present invention may be implemented in various forms of hardware, software, firmware, special purpose processors, or combinations thereof.

Most preferably, the teachings of the present invention are implemented as a combination of hardware and software. Moreover, the software may be implemented as an application program tangibly embodied on a program storage device. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units ("CPU"), a random access memory ("RAM"), and input/output ("I/O") interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU. In addition, various other peripheral devices may be connected to the computer platform, such as an additional data storage device and a printing device.

In addition, it is to be understood that, because some of the constituent system components and methods depicted in the accompanying drawings are preferably implemented in software, the actual connections between the system components or the process function blocks may differ depending upon the manner in which the present invention is programmed. Given the teachings herein, one of ordinary skill in the pertinent art will be able to contemplate these and similar implementations or configurations of the present invention.

Although illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the present invention is not limited to those precise embodiments, and that various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.

1. A device for scalable decoding of a video signal, comprising: a decoder (200) for decoding a block in an enhancement layer of a picture by applying to an enhancement layer reference picture the same weighting parameter as that applied to a lower layer reference picture used for decoding a block in a lower layer of the picture, wherein the decoder (200) remaps the syntax of the reference picture weight table from the lower layer to the syntax of the reference picture weight table for the enhancement layer, the block in the enhancement layer corresponding to the block in the lower layer, when the enhancement layer reference picture does not match the lower layer reference picture.

2. The device according to claim 1, wherein the decoder (200) decodes the block in the enhancement layer by determining whether to use an explicit weighting parameter mode or an implicit weighting parameter mode.

3. The device according to claim 1, wherein the decoder (200) complies with a constraint of a corresponding encoder, namely that the same weighting parameter is always applied to an enhancement layer reference picture as is applied to a lower layer reference picture, when the block in the enhancement layer corresponds to the block in the lower layer and the enhancement layer reference picture corresponds to the lower layer reference picture.

4. The device according to claim 3, wherein the constraint is defined as a profile and/or level constraint and/or is indicated in a sequence parameter set.

5. The device according to claim 1, wherein the decoder (200) evaluates a syntax element in the slice header, for a slice in the enhancement layer, to determine whether to apply to the enhancement layer reference picture the same weighting parameter or a different weighting parameter.

6. The device according to claim 1, wherein the decoder (200) remaps the pred_weight_table() syntax from the lower layer to the pred_weight_table() syntax for the enhancement layer.

7. The device according to claim 6, wherein the decoder (200) uses the picture order count to remap the weighting parameters from the lower layer to the corresponding reference picture index in the enhancement layer.

8. The device according to claim 7, wherein the weights with the lowest reference picture index are remapped first.

9. The device according to claim 6, wherein the decoder (200) reads a weighted_prediction_flag field set to zero for a reference picture used in the enhancement layer that is unavailable in the lower layer.

10. The device according to claim 6, wherein the decoder (200) receives, in the slice header, the weighting parameters for a reference picture index corresponding to a reference picture used in the enhancement layer, when the reference picture used in the enhancement layer has no match in the lower layer.

11. The device according to claim 6, wherein the decoder (200) performs the remapping on a slice basis when the picture has the same slice partitioning in both the enhancement layer and the lower layer, and performs the remapping on a macroblock basis when the picture has a different slice partitioning in the enhancement layer than in the lower layer.

12. The device according to claim 1, wherein the decoder (200) remaps the pred_weight_table() syntax from the lower layer to the pred_weight_table() syntax for the enhancement layer when the decoder applies to the enhancement layer reference picture the same weighting parameter as that applied to the lower layer reference picture.

13. The device according to claim 1, wherein the decoder (200) stores only one set of weighting parameters for each reference picture index when the decoder applies to the enhancement layer reference picture the same weighting parameter as that applied to the lower layer reference picture.

14. The device according to claim 1, wherein the decoder (200) parses the weighting parameters from a slice header when the decoder applies to the enhancement layer reference picture a different weighting parameter than that applied to the lower layer reference picture.

15. The device according to claim 1, wherein the reference picture weight table is pred_weight_table().

16. A method for scalable decoding of a video signal, comprising the steps of: decoding (420) a block in an enhancement layer of a picture by applying to an enhancement layer reference picture the same weighting parameter as that applied to a lower layer reference picture used for decoding a block in a lower layer of the picture, and remapping the syntax of the reference picture weight table from the lower layer to the syntax of the reference picture weight table for the enhancement layer, the block in the enhancement layer corresponding to the block in the lower layer, when the enhancement layer reference picture does not match the lower layer reference picture.

17. The method according to claim 16, wherein the decoding step (420) decodes the block in the enhancement layer by determining whether to use an explicit weighting parameter mode or an implicit weighting parameter mode.

18. The method according to claim 16, wherein the decoding step includes complying (420) with a constraint of a corresponding encoder, namely that the same weighting parameter is always applied to an enhancement layer reference picture as is applied to a lower layer reference picture, when the block in the enhancement layer corresponds to the block in the lower layer and the enhancement layer reference picture corresponds to the lower layer reference picture.

19. The method according to claim 18, wherein the constraint is defined as a profile and/or level constraint and/or is indicated in a sequence parameter set (510).

20. The method according to claim 16, wherein the decoding step includes evaluating (410) a syntax element in the slice header, for a slice in the enhancement layer, to determine whether to apply to the enhancement layer reference picture the same weighting parameter or a different weighting parameter.

21. The method according to claim 16, wherein the decoding step includes remapping the pred_weight_table() syntax from the lower layer to the pred_weight_table() syntax for the enhancement layer.

22. The method according to claim 21, wherein the remapping uses the picture order count to remap the weighting parameters from the lower layer to the corresponding reference picture index in the enhancement layer.

23. The method according to claim 22, wherein the weights with the lowest reference picture index are remapped first.

24. The method according to claim 21, wherein the decoding step includes reading a weighted_prediction_flag field set to zero for a reference picture used in the enhancement layer that is unavailable in the lower layer.

25. The method according to claim 21, wherein the decoding step includes receiving (435), in the slice header, the weighting parameters for a reference picture index corresponding to a reference picture used in the enhancement layer, when the reference picture used in the enhancement layer has no match in the lower layer.

26. The method according to claim 21, wherein the remapping is performed on a slice basis when the picture has the same slice partitioning in both the enhancement layer and the lower layer, and the remapping is performed on a macroblock basis when the picture has a different slice partitioning in the enhancement layer than in the lower layer.

27. The method according to claim 16, wherein the decoding step includes remapping (425) the pred_weight_table() syntax from the lower layer to the pred_weight_table() syntax for the enhancement layer when the decoding applies to the enhancement layer reference picture the same weighting parameter as that applied to the lower layer reference picture.

28. The method according to claim 16, wherein the decoding step includes storing only one set of weighting parameters for each reference picture index when the decoding applies to the enhancement layer reference picture the same weighting parameter as that applied to the lower layer reference picture.

29. The method according to claim 16, wherein the decoding step includes parsing (435) the weighting parameters from a slice header when the decoding applies to the enhancement layer reference picture a different weighting parameter than that applied to the lower layer reference picture.

30. The method according to claim 16, wherein the reference picture weight table is pred_weight_table().

31. A storage medium having scalable video signal data encoded thereupon, comprising: a block encoded in an enhancement layer of a picture, generated by applying to an enhancement layer reference picture the same weighting parameter as that applied to a lower layer reference picture used to encode a block in a lower layer of the picture, wherein the block in the enhancement layer corresponds to the block in the lower layer when the enhancement layer reference picture does not match the lower layer reference picture.



 

Same patents:

FIELD: information technologies.

SUBSTANCE: device and method are proposed to process multimedia data, such as video data, audio data, or video and audio data for coding, using certain classification of content. Processing of multimedia data includes determination of multimedia data complexity, classification of multimedia data on the basis of certain complexity, and determination of transfer speed in bits for coding of multimedia data on the basis of their classification. Complexity may include a component of spatial complexity and component of time complexity of multimedia data. Multimedia data is classified, using classifications of content, which are based on value of visual quality for viewing of multimedia data, using spatial complexity, time complexity or both spatial and time complexity.

EFFECT: development of improved method of images classification.

111 cl, 12 dwg

FIELD: information technologies.

SUBSTANCE: method for decoding of compressed video sequence, at the same time image frames are introduced into buffer memory related to decoding. Video sequence includes indication related to at least one gap in numbering of image frames, besides this indication is decoded from video sequence. Further, in response to this indication, buffer memory is configured so that it provides for number of images frames corresponding to gap in numbering of image frames, and images frames in buffer memory are used in process of decoding. Preferentially, specified indication informs about the fact that at least one gap in numbering of image frames in video sequence is deliberate, and specified number of image frames is used in buffer memory instead of image frames, which are not available in decoder.

EFFECT: provision of the possibility for decoder to account for image frames, which were deleted deliberately by coder.

31 cl, 14 dwg

FIELD: information technology.

SUBSTANCE: subsets are determined (step 29), each containing one or more coding units, where at least one image puts at least one coding unit into two or more subsets, the list of requirements (LOR) is established (step 30) containing at least one element associated with each subset. Significance values are use in order to select quality increments for generating an allowable code stream which satisfies the LOR for subsets (steps 34, 36). Quality increments can be selected so as to attain high quality for different subsets depending on size requirements in the LOR. For certain requirements, the code stream will exhibit an approximately constant quality of the reconstructed image. Quality increments can be selected so as to achieve small sizes of a compressed image for different subsets depending on quality requirements in the LOR.

EFFECT: high quality of the reconstructed image.

27 cl, 7 dwg

FIELD: information technology.

SUBSTANCE: coding device has definition apparatus for determining image area data meant for processing in order to counter reconstruction implied by granular noise arising in image data coded based on said image data and apparatus for countering reconstruction, designed for processing in order to counter reconstruction for image area data, defined using definition apparatus when coding image data, where when the said image data are coded in data unit data modules, the said definition apparatus determines unit data which form the said image data as the said image area data, and apparatus for countering reconstruction forcibly sets the orthogonal transformation coefficient to zero, which becomes equal to zero when quantisation is carried out using the said unit data, among orthogonal transformation coefficients of unit data defined using the said definition apparatus.

EFFECT: improved quality of the decoded image.

14 cl, 18 dwg

FIELD: information technologies.

SUBSTANCE: method includes the following: operation of coding for formation of coded images in coder, and also operation of specified coded images transfer in decoder in the form of transmission units, operation of buffering for buffering of transmission units sent to decoder, in buffer and operation of decoding for decoding of coded images with production of decoded images. Buffer size is specified by means of determining overall size of at least two transmission units and setting maximum size of buffer on the basis of this overall size.

EFFECT: improved efficiency of coded images buffering.

22 cl, 16 dwg

FIELD: physics; video technology.

SUBSTANCE: invention relates to devices for re-encoding video data for real time streaming, and particularly to re-encoding video data for real time streaming in a mobile broadcast application. Proposed is a device for using content information to encode multimedia data, which includes a content classification module, which is configured to classify content multimedia data and provide content classification data, and an encoder which is configured to encode multimedia data in a first data group and a second data group based on content classification, wherein the first data group contains a coefficient, and the second group of data contains a first differential refinement associated with the coefficient of the first group of data.

EFFECT: design of a transcoder which provides for highly efficient processing and compression of multimedia data, which uses information defined from the said multimedia, and is scalable and error-tolerant for use in several multimedia data applications.

49 cl, 45 dwg

FIELD: physics; image processing.

SUBSTANCE: invention relates to the technology of simulating granularity of a film in an image. A method is proposed for simulating a film granularity unit for adding to an image unit through a first establishment of at least one image parametre in accordance with at least one unit attribute. The film granularity unit is established in accordance with the image parametre. Deblocking filtration can also be applied to the film granularity unit.

EFFECT: easier film granularity simulation.

27 cl, 4 dwg

FIELD: physics; communications.

SUBSTANCE: invention relates to an encoder/decoder of a scaled data stream, which includes at least two scalability levels. Proposed are a method and a device for encoding, decoding, storing and transmitting a scaled data stream, which includes levels with various encoding characteristics. The method involves creating one or more levels of a scaled data stream. At least one level has an encoding characteristic which includes at least one of the following: information on fine granular scalability (FGS); information on region of interest (ROI); information the scaled sub-sample level; information on relationships of decoding and the set of initial parameters. The method also involves signalling levels using encoding characteristics such that, the characteristics can be read by the decoder without need for decoding an entire level.

EFFECT: increased efficiency of a scaled data stream and possibility of direct transmission in a bit stream in a file format or through a transfer protocol of information scalability for a specific level of a scaled bit stream.

49 cl, 5 dwg, 8 tbl

FIELD: physics; image processing.

SUBSTANCE: invention relates to encoding and decoding video data, and more specifically to scaled video processing. A method is proposed for scaled video data encoding, which involves receiving said video data, after which, based on the received video data, a basic level is generated, which includes at least one image and at least one enhancement level which includes at least one image. For each of the said basic levels and enhancement level, a characteristic identifier is generated, which is associated with a reference number; the corresponding sequence parametre set (SPS) is determined for each of the basic levels and at least one enhancement level, with different values of characteristic identifier. For a basic level and enhancement level with same SPS parametres, one sequence parametre set is used; and said basic level and said at least one enhancement level are encoded by using sequence parametre sets which determined.

EFFECT: increased encoding or decoding efficiency, avoiding redundancy.

12 cl, 10 dwg

FIELD: physics; image processing.

SUBSTANCE: the invention relates to methods of simulating film grain in an image. The stated result is achieved in that film grain in a video image is simulated by first creating a block, i.e. a matrix of transformed coefficients, for a set of cutoff frequencies fHL, fVL, fHH and fVH related to the desired grain structure. The cutoff frequencies fHL, fVL, fHH and fVH represent the cutoff frequency, in two dimensions, of a filter which sets the characteristics of the desired film grain structure. The block of transformed coefficients undergoes an inverse transformation, yielding a bit-accurate film grain sample, and said sample is scaled in order to be mixed with a video signal to simulate film grain in that signal.

EFFECT: easier simulation of film grain in an image.

20 cl, 4 dwg
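A minimal sketch of the grain-synthesis steps described above (band-limited random coefficients, inverse transform, scaling). The naive 2-D inverse DCT and the band convention for fHL, fVL, fHH and fVH are assumptions for illustration, not the patented procedure:

```python
import math
import random

N = 8  # block size

def idct2(C):
    """Naive 2-D inverse DCT-II of an N x N coefficient block."""
    def a(k):
        return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
    out = [[0.0] * N for _ in range(N)]
    for x in range(N):
        for y in range(N):
            s = 0.0
            for u in range(N):
                for v in range(N):
                    s += (a(u) * a(v) * C[u][v]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[x][y] = s
    return out

def grain_block(f_hl, f_vl, f_hh, f_vh, rng):
    """Random coefficients only inside the band set by the cutoff
    frequencies (band convention assumed for illustration)."""
    C = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            if f_vl <= u < f_vh and f_hl <= v < f_hh:
                C[u][v] = rng.gauss(0.0, 1.0)
    return idct2(C)

rng = random.Random(0)
grain = grain_block(1, 1, 6, 6, rng)                 # band-limited grain sample
scaled = [[16.0 * g for g in row] for row in grain]  # scale before mixing in
```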

FIELD: coding wavelet data by means of zerotrees.

SUBSTANCE: the proposed method includes generating wavelet coefficients representing an image. The bits of each wavelet coefficient are associated with different bit positions, so that each bit position is associated with exactly one bit of each coefficient, and for each bit position the associated bits are coded to indicate zerotree roots. Computer system 100 for zerotree coding of wavelet coefficients has processor 112 and memory 118 storing a program that enables processor 112 to generate wavelet coefficients representing an image. Processor 112 codes the bits of each bit position to indicate the zerotree roots associated with that position.

EFFECT: enhanced data compression speed.

18 cl, 7 dwg
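The zerotree-root idea can be illustrated on a toy coefficient quadtree; the symbol set (significant / isolated zero / zerotree root) follows the classic zerotree scheme, and the node layout is invented for the example:

```python
def is_insignificant(node, T):
    """True when a coefficient and all its descendants are below T."""
    value, children = node
    if abs(value) >= T:
        return False
    return all(is_insignificant(c, T) for c in children)

def symbols(node, T):
    """Emit one symbol per coded node for a single threshold pass:
    'P' significant, 'Z' isolated zero, 'R' zerotree root (the whole
    subtree is represented by this single symbol)."""
    value, children = node
    if abs(value) >= T:
        out = ['P']
    elif all(is_insignificant(c, T) for c in children):
        return ['R']            # subtree pruned: one symbol codes it all
    else:
        out = ['Z']
    for c in children:
        out += symbols(c, T)
    return out

# Toy tree: (coefficient, [children]); values chosen for illustration.
tree = (34, [(3, []), (-20, [(1, []), (2, [])]), (5, []), (4, [])])
print(symbols(tree, 16))  # ['P', 'R', 'P', 'R', 'R', 'R', 'R']
```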

FIELD: transcoding a received video sequence using motion data extrapolated from the video sequence.

SUBSTANCE: the proposed transcoding method involves receiving a first bit stream of compressed picture data having certain coding parameters. These parameters may relate to the GOP structure of the picture frames, the picture frame size, a parameter indicating whether the frames in the input bit stream are picture fields or frames, and/or whether the picture frames in the bit stream form a progressive or interlaced sequence. First and second motion vectors are obtained from the input bit stream and used together with weighting coefficients to extrapolate a third motion vector for the output bit stream of compressed picture data. The output bit stream, which differs from the input one in one or more parameters, is output as the transcoded signal.

EFFECT: minimization or elimination of motion estimation in the transcoding process.

22 cl, 4 dwg, 1 tbl
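The weighted extrapolation of a third motion vector from the first two can be sketched in a few lines; the 50/50 weights are illustrative only:

```python
def extrapolate_mv(mv1, mv2, w1, w2):
    """Weighted linear combination of two input-stream motion vectors,
    estimating the vector for the output (transcoded) stream."""
    return (w1 * mv1[0] + w2 * mv2[0],
            w1 * mv1[1] + w2 * mv2[1])

# Two vectors recovered from the input bit stream, equal weights.
mv3 = extrapolate_mv((4, -2), (6, 0), 0.5, 0.5)
print(mv3)  # (5.0, -1.0)
```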

FIELD: protection of video information against unauthorized copying.

SUBSTANCE: the proposed method uses watermarks to protect video information against unauthorized copying that changes the scale of the picture during copying, and involves embedding a watermark in the original video signal at different scales. The watermark is maintained at each scale for a preset time interval sufficient for the detector circuit in a digital video recorder to detect, extract and process the information contained in the watermark. The watermark scale is changed at the end of the preset interval, preferably on a pseudorandom basis, so that every scale in the predetermined scale variation range appears a predetermined number of times. In this way a definite scale capable of restoring the watermark to its initial position and size can be identified and used for watermark detection.

EFFECT: enhanced reliability, facilitated procedure.

24 cl, 7 dwg
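One way to realize the pseudorandom scale variation in which every scale appears a predetermined number of times is a seeded shuffle of a repeated scale list; this construction is an assumption for illustration, not the patented procedure:

```python
import random

def scale_schedule(scales, repeats, seed=0):
    """Pseudorandom schedule in which every scale in the variation
    range appears exactly `repeats` times."""
    order = [s for s in scales for _ in range(repeats)]
    random.Random(seed).shuffle(order)  # deterministic for a fixed seed
    return order

sched = scale_schedule([0.9, 1.0, 1.1], repeats=2)
```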

FIELD: multimedia technologies.

SUBSTANCE: the method includes at least the following stages: determining whether the current value of the processed discrete cosine transform coefficient is equal to or less than the corresponding threshold value currently used for quantizing the discrete cosine transform coefficients of image blocks of common intermediate format; if so, the coefficient value is set to zero and the currently used threshold is increased for use as the threshold when processing the next coefficient; otherwise the currently used threshold is restored to the given original threshold value, which is used as the threshold when processing the next coefficient. It is further determined whether the increased threshold exceeds a given upper limit of the threshold value, and if so, the increased threshold is replaced with that upper limit.

EFFECT: higher quality.

8 cl, 4 dwg
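The thresholding stages above amount to a simple state machine over the coefficient stream; a minimal sketch with invented parameter names:

```python
def adaptive_zeroing(coeffs, t0, step, t_max):
    """Zero each DCT coefficient whose magnitude is <= the current
    threshold; grow the threshold after each zeroed coefficient
    (clamped at t_max), and reset it to t0 after a kept one."""
    t = t0
    out = []
    for c in coeffs:
        if abs(c) <= t:
            out.append(0)
            t = min(t + step, t_max)   # increase, never past the limit
        else:
            out.append(c)
            t = t0                     # restore the original threshold
    return out

print(adaptive_zeroing([3, 1, 2, 9, 1], t0=2, step=2, t_max=5))
# [3, 0, 0, 9, 0]
```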

FIELD: methods and devices for storing and processing information containing successive video images.

SUBSTANCE: from each image recorded prior to the current time, at least one image area is selected, and the video information of that area is recorded together with placement information. From this video information at least one mixed image is generated taking the corresponding placement information into account. The mixed image is used for frames displayed in accordance with motion estimation, motion compensation or error concealment techniques.

EFFECT: decreased memory requirements for storing multiple previously received images.

3 cl, 4 dwg
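Generating a mixed image from stored areas and their placement information can be sketched as pasting patches onto a canvas; the data layout is an assumption for illustration:

```python
def mix_image(shape, regions):
    """Compose one mixed image from stored areas of earlier frames.
    regions: list of (patch, top, left), where patch is a 2-D list of
    pixel values and (top, left) is its placement information."""
    h, w = shape
    canvas = [[0] * w for _ in range(h)]
    for patch, top, left in regions:
        for i, row in enumerate(patch):
            for j, p in enumerate(row):
                canvas[top + i][left + j] = p   # place area at its offset
    return canvas

mixed = mix_image((4, 4), [([[1, 1], [1, 1]], 0, 0),
                           ([[2]], 3, 3)])
```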

FIELD: devices for transforming a packet stream of information signals.

SUBSTANCE: the information signals represent information arranged in separate, serial packets of digital data. These are transformed into a stream of information signals with time stamps. After the time stamps, which are related to the arrival time of each data packet, are set, the time stamps of several data packets are grouped into a time-stamp packet; in one embodiment, the size of the time-stamp packet equals the size of a data block.

EFFECT: improved addition of time-stamp data to data packets of fixed size.

6 cl, 29 dwg
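Grouping per-packet time stamps into fixed-size time-stamp packets can be sketched as simple chunking; the group size standing in for the data-block size is illustrative:

```python
def pack_timestamps(arrivals, group_size):
    """Group per-packet arrival time stamps into time-stamp packets of
    a fixed size (the variant where the time-stamp packet matches the
    data-block size)."""
    return [arrivals[i:i + group_size]
            for i in range(0, len(arrivals), group_size)]

print(pack_timestamps([10, 12, 15, 21, 30], 2))
# [[10, 12], [15, 21], [30]]
```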

FIELD: circuits for compressing image signals using blocks and sub-blocks of adaptively determined sizes of discrete cosine transform coefficients.

SUBSTANCE: a block size-setting element in the encoder selects a block or sub-block of the processed input pixel block. The selection is based on the variance of the pixel values: blocks with variance greater than a threshold are divided, while blocks with variance less than the threshold are not. A transform element converts the pixel values of the selected blocks to the frequency domain. The frequency-domain values may then be quantized, serialized and variable-length encoded in preparation for transmission.

EFFECT: improved computational efficiency of the image compression stages without loss of video quality.

4 cl, 5 dwg
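The variance-based block/sub-block selection can be sketched as a recursive quadtree split; the threshold and block contents are invented for the example:

```python
def variance(block):
    """Population variance of all pixel values in a 2-D block."""
    pixels = [p for row in block for p in row]
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def split_if_busy(block, threshold):
    """Subdivide a square block into four sub-blocks when its pixel
    variance exceeds the threshold; otherwise keep it whole for the
    transform stage. Returns the list of blocks to transform."""
    n = len(block)
    if n == 1 or variance(block) <= threshold:
        return [block]                 # coded as a single block
    h = n // 2
    quads = []
    for r in (0, h):
        for c in (0, h):
            sub = [row[c:c + h] for row in block[r:r + h]]
            quads += split_if_busy(sub, threshold)
    return quads

busy = [[10, 10, 10, 10],
        [10, 10, 10, 10],
        [10, 10, 200, 200],
        [10, 10, 200, 200]]
print(len(split_if_busy(busy, 100)))   # 4: the busy block is subdivided
```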

FIELD: technology for compressing and decompressing video data.

SUBSTANCE: for each pixel of the matrix a priority value is determined and a pixel difference value is calculated; the pixels whose values are used for calculating a pixel priority value are combined into one pixel group; the pixel groups are sorted and then saved and/or transferred according to their priority in a priority matrix. These operations are repeated continuously, the priorities of the pixel groups being determined anew each time, so that at any given moment the priority matrix contains the pixel groups sorted by their current priorities; preferably, the pixel groups with the highest priority that have not yet been transferred are stored and transferred first.

EFFECT: simple and flexible synchronization across different transfer speeds, transfer bandwidths, resolutions and display sizes.

2 cl, 8 dwg, 1 tbl

FIELD: moving image encoding systems, namely methods of encoding moving images directed at increasing encoding efficiency through the use of temporally distant reference frames.

SUBSTANCE: the method includes receiving a reference frame index denoting the reference frame pointed to by another block, which provides a motion vector for determining the motion vector of the current block, and determining the motion vector of the current block using the reference frame index denoting the reference frame.

EFFECT: increased encoding efficiency in direct prediction mode; decreased number of information bits for a frame in which a scene change occurs.

3 cl, 6 dwg

FIELD: moving image encoding systems, namely methods of encoding moving images directed at increasing encoding efficiency through the use of temporally distant reference frames.

SUBSTANCE: in the method, when encoding/decoding each block of a B-frame in direct prediction mode, the motion vectors are determined using the motion vector of the co-located block in a given frame used for encoding/decoding the B-frame; if the given frame is a temporally distant reference frame, one of the motion vectors to be determined is taken equal to the motion vector of the co-located block, while the other is taken equal to 0.

EFFECT: increased encoding efficiency in direct prediction mode; decreased number of information bits for a frame in which a scene change occurs.

2 cl, 6 dwg
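The direct-mode rule for temporally distant reference frames can be sketched as follows; the scaling used in the ordinary (non-distant) branch is an illustrative assumption:

```python
def direct_mode_mvs(colocated_mv, ref_is_temporally_distant):
    """B-frame direct mode per the abstract: both vectors derive from
    the co-located block's vector; if the reference frame is temporally
    distant, one vector copies it and the other is set to zero."""
    if ref_is_temporally_distant:
        return colocated_mv, (0, 0)
    # Otherwise split by temporal distance (50/50 assumed here).
    mv_fwd = (colocated_mv[0] // 2, colocated_mv[1] // 2)
    mv_bwd = (mv_fwd[0] - colocated_mv[0], mv_fwd[1] - colocated_mv[1])
    return mv_fwd, mv_bwd

print(direct_mode_mvs((8, -4), True))   # ((8, -4), (0, 0))
```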
