Method of low-noise encoding and decoding
(57) Abstract: The invention relates to image compression for reducing the bandwidth requirements of a digital video decoder. The technical result is improved image recovery for high-definition playback, which makes it possible to transmit one high-definition program together with multiple standard-definition programs over a 6 MHz terrestrial broadcast channel. The technical result is achieved by an adaptive digital image processor that precedes the MPEG2 encoder, receives high-definition signals (1920×1080 pixels per image) intended for transmission or storage, and adaptively low-pass filters them. The signal is subjected to a two-dimensional low-pass filter to eliminate coding artifacts and associated noise, then horizontally decimated to create a hybrid lower-resolution signal (1280×1080 pixels per image), which the receiver decompresses and decodes, after which the sampling rate of the hybrid signal is increased back to its native resolution. 6 independent and 24 dependent claims, 5 figures, 1 table.

The technical field to which the invention relates
The Federal Communications Commission (FCC) of the USA approved the digital high-definition television (HDTV) standard proposed by the Grand Alliance (GA), paving the way for terrestrial digital television broadcasting in the United States. The GA HDTV system uses the international video compression and transmission standard of the Moving Picture Experts Group, MPEG2. For more details, see "Information technology - Generic coding of moving pictures and associated audio information: Video", ISO/IEC 13818-2: 1996 (E). Using modern, sophisticated video compression techniques such as motion estimation and compensation, transform coding and statistical coding, an MPEG compression system can reduce the bit rate by a factor of 50 or more. Before compression, one second of a full high-definition signal requires approximately one billion bits. As proposed in the GA specification, images of 1920×1080 pixels (picture elements) at 60 fields per second are compressed to 18 megabits per second for digital broadcast. The GA video compression system usually contains two main subsystems: a preprocessor and an encoder. The preprocessor receives analog video in RGB (red-green-blue) format, converts the input signals into digital form and performs gamma correction on each color component to compensate for the nonlinear characteristics of the imaging camera. Gamma correction reduces the visibility of quantization noise in the compressed image, especially in dark areas. The preprocessor then linearly converts the digitized, gamma-corrected, sampled RGB signals into the SMPTE (Society of Motion Picture and Television Engineers) 240M YC1C2 color space. Finally, the resulting chroma components are subsampled to create a 4:2:0 digital video input signal. In addition to the tasks just described, the preprocessor can perform image conversion.
For example, in a digital satellite broadcast system, video is horizontally decimated from 720 pixels per line to 544 pixels per line to further reduce bandwidth requirements. This signal is sent to the MPEG2 video encoder. The MPEG2 video encoder compresses the input digital video signal by removing some of the temporal redundancy between frames and some of the spatial redundancy within frames. By regulating the quantization accuracy, the encoder can generate a compressed bit stream at any rate specified by a particular application. Quantization in MPEG2 systems is performed on the discrete cosine transform (DCT) coefficients of a block of data, which can represent original image information or difference information from motion estimation. Using a quantization matrix together with a scalable quantization step size, the quantizer selects and quantizes only a small fraction of the DCT coefficients of each DCT block intended for transmission, which considerably reduces the data volume. The quantization matrix can be changed on a frame basis according to the statistical distribution of the DCT coefficients and the video content. For different zones within a frame, quantization can be fine-tuned from macroblock to macroblock by scaling the quantization step size according to macroblock complexity. For a given output bit rate, the output buffer provides a control signal that maintains high-resolution quantization within the available bandwidth. Ideally, the video compression system removes high-frequency components that viewers will not perceive as missing when the image is restored and reproduced. The remaining low-frequency components are quantized to fit within the available bandwidth.
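The quantization just described can be sketched in a few lines. The helper below is purely illustrative (the function name, the flat matrix and the scale value are assumptions, not part of the GA specification); it shows how dividing an 8×8 block of DCT coefficients by a scaled quantization matrix zeroes out most small high-frequency terms:

```python
import numpy as np

def quantize_dct_block(coeffs, quant_matrix, quantizer_scale):
    """Quantize an 8x8 block of DCT coefficients.

    Dividing each coefficient by its matrix entry, scaled by the
    quantizer step size, rounds most small high-frequency terms to
    zero, which is the main source of MPEG2 bit-rate reduction.
    """
    step = quant_matrix * quantizer_scale
    return np.round(coeffs / step).astype(int)

# A block with one strong DC term and one weak high-frequency term.
block = np.zeros((8, 8))
block[0, 0] = 1024.0   # DC coefficient
block[7, 7] = 30.0     # weak high-frequency coefficient
flat_matrix = np.full((8, 8), 16.0)  # hypothetical flat quantization matrix

q = quantize_dct_block(block, flat_matrix, quantizer_scale=2.0)
print(q[0, 0])  # 32  (1024 / 32)
print(q[7, 7])  # 1   (30 / 32 rounds to 1)
```

With a larger quantizer scale the weak coefficient would round to zero, illustrating the trade-off between step size and retained detail discussed below.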
The quantization noise introduced into the signal must also be invisible to the audience when the image is restored. In a real system, however, an optimal trade-off is chosen between the transmitted information and the quantization step size for the available bandwidth. If the system does not provide a sufficient number of coefficients for quantization, it increases the quantization step size, which leads to block distortion in the reconstructed image. If the image loses too much high-frequency information during compression, the reconstructed image will contain other significant contouring distortion. Moreover, differences in quantization between frames cause the frames in a group of pictures to contain different frequency components. An I frame, for example, may lose a significant amount of high-frequency coefficients during coding, so that the remaining frames of the group of pictures will contain distortions, since the high-frequency information varies between the frames used to reconstruct one another. These problems occur in the GA system as it exists at present. Further compressing the high-definition image signal only reduces the quality of the reproduced picture. Satellite broadcast providers do not wish to transmit high-definition signals, since only one program at a time can be transmitted through a transponder. Compressing high-definition programming enough to accommodate two programs on one satellite channel (e.g., a 24 MHz channel with quadrature phase-shift keying (QPSK)) simultaneously leads to picture quality unacceptable to the viewer. Consequently, satellite broadcast providers hesitate to broadcast HDTV because of inefficient use of the channel.
Similarly, terrestrial broadcast service providers are reluctant to undertake full high-definition programming, in which one program completely occupies a channel that could accommodate several standard-definition (SD) programs.

The essence of the invention

A system according to the present invention receives a source signal and selectively converts the format of the source signal into another format as necessary. The converted signal is filtered and converted back to the original format as necessary. The filtered signal is converted to a lower resolution and compressed to a planned bit rate. Finally, the compressed signal is supplied to the output data channel.

Brief description of the drawings
Fig.1 shows one configuration of a video compression device according to the present invention.
Fig.2 illustrates in greater detail the block 22 shown in Fig.1.
Fig.3 shows one possible characteristic of the adaptive filter included in block 22.
Fig.4 is a block diagram of an exemplary transmission system using the present invention.
Fig.5 is a block diagram of an exemplary receiving system using the present invention.

Description of the preferred embodiments of the invention
An MPEG2 encoder including a device according to the principles of the present invention contains a two-dimensional (e.g., vertical and horizontal) filter in front of the encoder. The encoder, the output buffer and the filter each generate information that can be used by the other modules: the image type, the choice of quantization matrix, the choice of scaling factor, the bit rate at the output of each block, the image texture. This information is exchanged between the blocks by a controller that monitors the encoding process, or by individual controllers placed in each block. The controller evaluates the incoming information and determines commonality within the group of pictures, frame or portion of a frame that can be useful for modifying the filter and/or encoder so as to efficiently encode the group, frame or portion of a frame at the planned bit rate. Usually the filter is adjusted, because adjusting the filter generates less noise than adjusting the encoder. Moreover, the filter is actually a group of filters, which affords the greatest possible flexibility in adjusting the coefficients of the individual filters as needed. These filters comprise a horizontal low-pass filter for eliminating aliasing effects, a vertical low-pass filter and a two-dimensional low-pass filter, usually arranged in just that order. The controller adjusts one or more filters and/or the encoder in accordance with one or more dominant commonalities. The end result is that the input signal is low-pass filtered in a manner that basically allows the encoder to encode the image uniformly across the group of pictures, frame or portion of a frame with respect to the dominant commonality. The encoded signal can be transmitted in the available bandwidth and then restored and reproduced without the distortions that would otherwise be present.
For high-definition signals with an image frame size of 1920×1080 pixels, after filtering and before encoding the horizontal resolution is reduced to 1280 pixels per line to further reduce the bandwidth of the transmitted signal. The result is a hybrid-resolution image that a high-definition receiver can receive, decode and reproduce with small changes to its software tools. An exemplary configuration of a video compression system according to the present invention is shown in Fig.1. The input signal is received by the image detector 20, which determines whether the signal is a film-originated (telecined) image signal, in which case the image is sent to the appropriate section of the adaptive image processor 22, as will be described later. If the input signal is not a telecined image signal, the signal passes into another section of the adaptive image processor 22. Image signals that have undergone telecine conversion are identified using known methods. The processor 22 receives control information from the output buffer 26 and from the MPEG2 encoder 24 via the controller 28, and filters the image frames so that the encoder 24 can encode each frame efficiently, within the available bit rate and largely free of noticeable distortion. The processor 22 filters the signal in two dimensions (2-D) (e.g., horizontal and vertical), which improves the quality of the reconstructed image obtained from the encoded MPEG2 bit stream compressed to a moderate bit rate. The task is to modify the local two-dimensional frequency content of the source so as to improve the efficiency of MPEG2 coding in the manner least damaging to the image recovered from the MPEG2 stream in terms of sharpness and coding distortion. The two-dimensional filter low-pass filters the image. Ideally, the high-frequency information that is removed is either redundant or invisible to the viewer.
In practice, some high-frequency information that is visible to the viewer may have to be removed to achieve the desired bit rate. However, a system that includes the processor 22 in front of MPEG2 encoding produces a better image than a system without the processor 22, as will be discussed later. The filtered signal is encoded by the MPEG2 encoder 24, which receives image parameters from the processor 22 and the output buffer 26 via the controller 28 and adjusts the MPEG2 compression to match the available bit rate. Compression occurs in the same manner as described in the GA specification. The encoder 24 sends the compressed data to the output buffer 26. The buffer 26 provides compressed data at a predetermined rate for transport encoding, modulation and transmission over the transmission channel using known signal-processing technologies. Before modulation, the compressed signal can be sent to a statistical multiplexer for multiplexing with multiple other programs; these elements are not shown in Fig.1 to simplify the drawing. The video compression system can be configured to accept any type of video. The system of Fig.1 is configured to accept both television (camera) and film sources conforming to well-known industry standards. For example, one common configuration would be for the system of Fig.1 to receive the output signal of the preprocessor described earlier in the "Prior art" section.
The system can be configured for other types of video signals by adding appropriate hardware and/or software. These configurations are not shown, to simplify Fig.1. The image detector 20 detects the presence of certain correlations in the input signal that can be used to improve coding efficiency: (Type 1) an interlaced source at 60 fields per second; (Type 2) a 30 frame-per-second film image interlaced to 60 fields per second; (Type 3) a 24 frame-per-second film image interlaced to 60 fields per second; (Type 4) a progressive-scan source; (Type 5) a 30 frame-per-second film image with progressive scan at 60 frames per second; and (Type 6) a further progressive-scan film format. The signal type is detected based on an external control signal (not shown) or using well-known technologies, such as those used in modern standard-definition MPEG2 encoders. Information about the video signal format accompanies the signal into the adaptive image processor 22, as described below. The image detector 20 also detects whether the signal is of the interlaced or the progressive type and sends this information to the processor 22. These scan types determine the parameters by which signals are routed through the processor 22. Other field and frame rates can also be used in other implementations. The adaptive image processor 22 performs several programmable functions that reduce the amount of data to be compressed by the encoder 24. The processor 22 mainly processes each frame so that the processed frame can be encoded optimally, eliminating or greatly reducing noise noticeable to the viewer. The processor 22 can basically be regarded as a spatially varying two-dimensional low-pass filter, since it thins each image frame by spatially and adaptively filtering out selected two-dimensional high-frequency components of the signal. The processor 22 can facilitate encoding for any type of signal.
However, in this embodiment of the present invention, the processor 22 is programmed to work with high-definition data as specified by the GA specification. This can be either 1920×1080 pixels per image or 1280×720 pixels per image. According to the GA specification, each HD broadcast requires approximately 18 megabits per second. To simplify the discussion, only the 1920×1080 format will be considered in detail hereinafter; the discussion applies equally to the 1280×720 format or any other format. Fig.2 illustrates the adaptive image processor 22 in greater detail. Depending on the signal format information received from the detector 20 via the controller 28 (Fig.1), the image signal is sent to the interlaced-to-progressive converter 221 (Type 1), to the inverse telecine block 222 (Types 2, 3), or passes unmodified (Types 4-6) into the band-limited spatial low-pass filter 223. Filter 223 receives the output of blocks 221 and 222 after those blocks have processed the signal. Converter 221 receives the signal if the format contains interlaced fields; a progressive frame includes all of the image information in each frame. Filtering a progressive-scan signal typically does not introduce the distortion that can occur when filtering the field information of an interlaced signal. Converter 221 applies known methods for converting interlaced fields into progressive frames. The inverse telecine block 222 removes redundant fields from a 60 Hz interlaced film-originated image and restores the original progressive-scan film image. The progressive format makes subsequent vertical low-pass filtering possible without motion distortion. If a film-originated input signal (Type 2 or Type 3) is treated as a Type 1 source, vertical low-pass filtering will reduce the ability of the MPEG2 encoder to detect and properly handle the film-originated material. Coding efficiency will suffer.
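The 3:2 pulldown removal performed by block 222 can be sketched as follows. This is a toy model in which fields are represented as labels and the cadence is assumed known and locked; a real inverse telecine block detects repeated fields by comparing their contents, which is not shown here:

```python
def telecine_32(frames):
    """Apply 3:2 pulldown: 4 film frames -> 10 fields (3, 2, 3, 2)."""
    fields = []
    pattern = [3, 2]
    for i, frame in enumerate(frames):
        top, bottom = frame                 # (top field, bottom field)
        fields.extend([top, bottom, top][:pattern[i % 2]])
    return fields

def inverse_telecine(fields):
    """Remove duplicate fields and reassemble progressive film frames.

    Assumes a locked 3:2 cadence (a simplification for illustration).
    """
    frames = []
    i, k = 0, 0
    pattern = [3, 2]
    while i < len(fields):
        frames.append((fields[i], fields[i + 1]))  # top + bottom field
        i += pattern[k % 2]                        # skip the repeated field
        k += 1
    return frames

film = [("A_t", "A_b"), ("B_t", "B_b"), ("C_t", "C_b"), ("D_t", "D_b")]
fields = telecine_32(film)          # 10 fields carry 4 film frames
print(inverse_telecine(fields) == film)  # True
```

Removing the redundant fields before filtering, as the text explains, guarantees that filtering cannot make the once-identical copies diverge.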
Block 222 converts the signal into progressive format and removes the redundant fields/frames before filtering, since the filter could otherwise filter this redundant information in different ways. If the redundant information is not deleted before filtering, the copies may no longer be identical after filtering, and the encoder may not respond predictably. The structure of the processor 22 is simplified by providing a single output clock signal from block 222. If block 222 provided progressive image output at both 24 and 30 frames per second, two output clock signals and supporting circuitry would be required. Signals originally created in progressive format at 30 frames per second pass directly into the filter 223. Filter 223 expects video presented as complete image frames. The spatial low-pass filter 223 is actually a group of filters. For example, the first filter is a horizontal low-pass filter for eliminating aliasing effects. The second filter is a vertical low-pass filter. The last filter is a two-dimensional low-pass filter, as described previously. The coefficients of each adaptive filter are set in accordance with control information from the encoder 24 and the buffer 26, as shown in Fig.1. The progressive signal is horizontally low-pass filtered to avoid aliasing in the subsequent decimation performed by the sampling-rate converter 226, which will be discussed later. To remove aliasing noise from the output signal, the low-pass filter 223 has a cut-off frequency of 640 cycles per line. The horizontal anti-aliasing filter included in block 223 may be a 17-tap finite impulse response (FIR) filter with the following tap coefficients:
[f0, f1, ..., f15, f16] = [-4, 10, 0, -30, 48, 0, -128, 276, 680, 276, -128, 0, 48, -30, 0, 10, -4]/1024. Encoding high-definition video signals at a reduced bit rate usually requires an additional vertical low-pass filter to further reduce the bandwidth of the video signals. Removing vertical high-frequency energy before MPEG encoding is necessary to achieve acceptable overall image quality. The vertical frequency region of highest sensitivity is attenuated. The vertical cut-off frequency is set equal to some fraction of the Nyquist frequency. For example, a cut-off frequency equal to approximately half the frequency of the HD input signal may be suitable for video. For a high-definition signal with 1080 lines per picture height this would correspond to 540 cycles per picture height. The cut-off is determined by the controller 28 from parameters available from the encoder 24 and the buffer 26 of Fig.1 (i.e., the desired bit rate, quantization matrices, and so on). The vertical low-pass filter included in block 223 may be a 17-tap FIR filter with the following tap coefficients:
[f0, f1, ..., f15, f16] = [-4, -7, 14, 28, -27, -81, 37, 316, 472, 316, 37, -81, -27, 28, 14, -7, -4]/1024. Alternatively, the cut-off frequency may be equal to twice the line frequency of a standard-definition signal. Usually the vertical low-pass filter follows the horizontal anti-aliasing filter. The processor 22 performs vertical filtering, not vertical decimation, so the vertical line resolution remains constant. At present, filtering is preferable to decimation for interlaced video signals. Converting the vertical line resolution of an interlaced image sequence requires sophisticated hardware and software, resulting in a high receiver cost. Vertical sampling-rate conversion also impairs the vertical high-frequency characteristic, and the added receiver complexity currently makes it unpromising to reduce the vertical resolution in order to reduce distortion and the resulting encoding bit rate. The reproduced image deteriorates considerably when existing vertical sampling-rate converter technology is used instead of the vertical low-pass filter described above. However, efficient and cost-effective vertical sampling-rate converters could replace the vertical filter described here without departing from the principles of the present invention. The coefficients of both the horizontal and the vertical low-pass filters can be modified programmatically and, if necessary, can be applied at the pixel level to achieve the planned bit rate without creating distortions in the restored image. Usually it is sufficient to modify the coefficients on a per-frame basis. An alternative for a slower processor is to preprogram a number of different coefficient sets and select the set most suitable for the image information being processed.
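The two 17-tap filters given above can be applied with ordinary 1-D convolutions. The sketch below (the helper names are our own) also checks a useful property of both tap sets: each sums to 1024/1024, i.e. unity DC gain, so flat image areas pass through unchanged away from the borders:

```python
import numpy as np

# 17-tap horizontal anti-aliasing and vertical low-pass coefficients
# from the description, normalized by 1024.
H_COEF = np.array([-4, 10, 0, -30, 48, 0, -128, 276, 680,
                   276, -128, 0, 48, -30, 0, 10, -4]) / 1024.0
V_COEF = np.array([-4, -7, 14, 28, -27, -81, 37, 316, 472,
                   316, 37, -81, -27, 28, 14, -7, -4]) / 1024.0

def filter_rows(image, taps):
    """Apply a 1-D FIR filter along each line (horizontal filtering)."""
    return np.apply_along_axis(
        lambda row: np.convolve(row, taps, mode="same"), 1, image)

def filter_cols(image, taps):
    """Apply a 1-D FIR filter along each column (vertical filtering)."""
    return np.apply_along_axis(
        lambda col: np.convolve(col, taps, mode="same"), 0, image)

# Both tap sets sum to exactly 1024/1024, i.e. unity DC gain.
print(H_COEF.sum())  # 1.0
print(V_COEF.sum())  # 1.0
```

A real encoder would implement these as fixed-point hardware filters; the floating-point convolution here only demonstrates the separable horizontal-then-vertical structure the text describes.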
The great flexibility of the adaptive filters allows the whole system to create a data stream with less distortion than less flexible systems. After the signal has been filtered in the horizontal and vertical directions by block 223, the controller 28 determines whether the signal can be uniformly encoded by the encoder 24 on a frame-by-frame basis without introducing considerable noise. If so, the video signal passes to block 224, 225 or 226, depending on its format, as will be discussed next. If, however, the encoding process would in all probability introduce noise and/or distortion into the signal, the signal is directed to the two-dimensional low-pass filter in block 223 for further adaptive filtering. The control parameters from the processor 22, the encoder 24 and the output buffer 26 (Fig.1) allow the controller 28 (or an individual controller in block 223) to determine whether further filtering is needed. The control parameters used for this determination are, for example, motion and contrast measurements, the available quantization tables, the coding efficiency and the current planned bit rate. The two-dimensional filter in block 223 reduces the amount of high-frequency information in the image frame mainly in the diagonal direction rather than in the horizontal or vertical directions separately. The human eye is very sensitive to high-frequency noise in the vertical and horizontal directions. Diagonal filtering, which makes uniform quantization by the encoder 24 possible, usually produces a higher-quality output signal with less observable noise. The diagonal filter, like all the preceding filtering, works on the entire image frame and is programmable. The diagonal filter can be made compatible with the quantization matrix in the encoder. A diamond-shaped matrix is often used for quantization of I frames. However, these matrices often produce noise, since B and P frames use other types of quantization matrices that preserve the high-frequency components during the compression and motion compensation that occur in the encoder 24.
The filters of the processor 22 remove high-frequency information from each image frame before the MPEG2 encoder 24 generates the I, P and B frame data in the motion-estimation path. Thus, high-frequency components are largely removed from P and B frames as well as from I frames. On restoration, the image is then largely free of the distortions created by MPEG2 encoding. In practice, the controller 28 of Fig.1 estimates the signal parameters (e.g., motion, contrast and so on) before a particular frame is filtered and sets the coefficients for the diagonal filtering accordingly. During the filtering of a particular frame, the controller 28 monitors the signal parameters from the processor 22, the encoder 24 and the buffer 26 and changes the coefficients as necessary to maintain the planned bit rate with minimal distortion/noise. Each frame is filtered based on the most recent signal parameters and enters the encoder 24 for compression and then the buffer 26, while subsequent information enters the filter 223. If the signal originated as a film image signal at 24 frames per second, the filtered signal passes to block 224, which performs a 3:2 stretch. Block 224 duplicates selected frames to provide an output signal at 30 frames per second. This is done using known methods. From block 224 the signal is then supplied to the horizontal decimation converter 226. The field-subsampling block 225 converts the progressive signals from the filter 223 from progressive-scan format to interlaced format. This conversion is carried out by known methods. Without the inverse conversion to interlaced format, the signal would contain twice the amount of data, since the progressive frame rate is double that of the interlaced signal. The sampling-rate converter 226 receives progressive signals at 30 frames per second directly from the filter 223. In addition, as described above, blocks 224 and 225 also supply signals to the converter 226.
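The 3:2 stretch of block 224 amounts to repeating one frame in every group of four, turning 24 frames per second into 30. A minimal sketch (which frame of the group is duplicated is an assumption made for illustration):

```python
def stretch_4_to_5(frames):
    """Repeat one frame in every group of four to raise 24 fps to 30 fps.

    Here the last frame of each group of four is duplicated; which
    frame is repeated is an implementation choice, not specified by
    the description.
    """
    out = []
    for i, f in enumerate(frames):
        out.append(f)
        if i % 4 == 3:        # after every fourth frame...
            out.append(f)     # ...emit it a second time
    return out

one_second = list(range(24))          # 24 film frames
print(len(stretch_4_to_5(one_second)))  # 30
```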
Converter 226 decimates the high-definition signals to the selected transmission format. This format need not be standard; it can be any image size and frame rate desired. However, a non-standard format will require modification of the receiver. When the converter 226 receives HDTV signals of the GA 1920×1080 standard, it decimates the horizontal information and outputs a hybrid frame format of 1280×1080 pixels. Receivers compatible with GA HDTV are capable of receiving image frames containing 1920×1080 pixels and 1280×720 pixels. Therefore, GA-compatible receivers can be modified to support 1280 pixels of horizontal resolution together with 1080 pixels of vertical resolution. The hardware of a compatible receiver can increase the sampling rate from 1280 horizontal pixels to 1920 horizontal pixels simultaneously with increasing the vertical resolution. However, standard GA receivers are not required and are not programmed to receive image frames with a resolution of 1280×1080 pixels. The hardware is available, but software must be added to decode the hybrid format and increase only the horizontal resolution. Adding software is simpler and cheaper than changing the design and adding the new hardware required for other non-standard formats. The processor 22 provides the hybrid 1280×1080 format because modern playback technology is not able to reproduce a resolution of 1920 pixels per line. Currently the best television monitors can reproduce only a resolution of about 1200-1300 pixels per line. Therefore, restricting the output resolution to 1280 pixels per line in the horizontal direction has little adverse effect on picture quality, if any. Providing a playback resolution of 1280×1080, which existing receiver hardware can decode and decompress, presents receiver manufacturers with a minimum of problems, since only software needs to be changed.
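The 2/3 horizontal decimation performed by converter 226 can be sketched as a resampling of each 1920-pixel line to 1280 pixels. A broadcast encoder would use a polyphase FIR after the anti-aliasing filter already described; plain linear interpolation is used here only to show the geometry:

```python
import numpy as np

def decimate_line_2_3(line):
    """Resample one 1920-pixel line to 1280 pixels (ratio 2/3).

    Linear interpolation stands in for a proper polyphase FIR
    resampler; this is a sketch of the sampling geometry only.
    """
    n_in = len(line)
    n_out = n_in * 2 // 3
    x_out = np.linspace(0, n_in - 1, n_out)
    return np.interp(x_out, np.arange(n_in), line)

line = np.arange(1920, dtype=float)   # one HD line as a ramp
out = decimate_line_2_3(line)
print(len(out))  # 1280
```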
For certain receivers, such as satellite broadcast receivers, the software modification can be downloaded and installed remotely. The hybrid format has advantages, since providers of terrestrial and satellite programs do not wish to transmit full high-definition programming. A satellite transponder transmits a bit stream at about 24 megabits per second (Mbit/s). A terrestrial HDTV broadcast can be transmitted at up to 19 Mbit/s, including 18 Mbit/s of high-definition (HD) video and other information (for example, the audio channel, program guide, access mechanisms, and so on). Each modern satellite transponder can carry at most one HDTV program, which satellite program providers consider insufficiently profitable. Simply reducing the horizontal frame resolution from 1920 to 1280 is enough to make possible the simultaneous transfer of two HD programs over a single satellite transponder. The filtering provided by the processor 22 successfully enables such dual transmission of high-definition video over a single channel. The characteristic of the filtering provided by the processor 22 may take various forms, including diamond, cross and hyperbolic shapes along the coordinate axes, each of which filters diagonally. One possible form, a two-dimensional hyperbola, is shown in Fig.3. The cut-off frequency of the adjustable filter is basically set to allow uniform compression of the selected group of pictures, frame or portion of a frame by the encoder 24. If necessary, additional horizontal and vertical high-frequency information can be filtered out, but this is usually not done. As the complexity of the image decreases or the available bit rate increases, the amount of data removed by the diagonal filter and the other preceding filters is reduced.
The two-dimensional filter can be implemented, for example, as a two-dimensional FIR filter with 13 taps in each direction (13×13) or as a two-dimensional infinite impulse response (IIR) filter. The two-dimensional FIR filter included in block 223 may be a filter with 13×13 taps, with the tap coefficients given in the table (see the end of the description). For these coefficients, the DC (zero-frequency) gain normalization factor is 1024. The coefficients exhibit octant symmetry, which gives 28 independent coefficients. The symmetric coefficient regions make possible more rapid adjustment of the adjustable filter; the characteristic can, however, be varied in one part of the image. The response of the filter in processor 22 can vary continuously from one set of coefficients to another on a pixel-by-pixel basis. Thus, the processor 22 can set various operating parameters to maintain good image quality while limiting the bit rate, as will be discussed later. As mentioned previously, the processor 22 may be adaptively modified for adaptive filtering depending on the parameter(s) used to specify the adaptation of the filter. For example, variation within the image frame can be used to segment the image into regions for different treatment. Contours are an important feature of the image: dominant contours mask coding errors in closely adjacent areas, and they can also delineate regions of the image. Colorimetry may be used to identify areas of low complexity, such as skin or sky. Textures can also be identified and treated as regions. Texture is usually less important than contours; consequently, texture regions may be filtered more strongly than other areas. Cinematic composition can also be used to identify important regions. The background is usually already softened by the depth of field of the camera optics and can be filtered more strongly.
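The octant symmetry mentioned above can be verified by construction: for a 13×13 kernel, the tap at offset (i, j) from the center depends only on the unordered pair (|i|, |j|), which yields 7·8/2 = 28 free values. The sketch below (our own helper, fed with dummy values rather than the table from the description) expands 28 values into the full kernel:

```python
import numpy as np

def build_octant_symmetric_kernel(coeffs):
    """Expand 28 independent values into a 13x13 octant-symmetric kernel.

    With octant symmetry the tap at offset (i, j) from the center
    depends only on the unordered pair (|i|, |j|); for offsets 0..6
    that gives 7*8/2 = 28 free coefficients.
    """
    # Index the 28 values by (a, b) with 0 <= b <= a <= 6.
    lut = {}
    k = 0
    for a in range(7):
        for b in range(a + 1):
            lut[(a, b)] = coeffs[k]
            k += 1
    kernel = np.empty((13, 13))
    for i in range(13):
        for j in range(13):
            a, b = sorted((abs(i - 6), abs(j - 6)), reverse=True)
            kernel[i, j] = lut[(a, b)]
    return kernel

kernel = build_octant_symmetric_kernel(list(range(28)))  # dummy values
# The kernel is symmetric under transposition and both reflections.
print(np.array_equal(kernel, kernel.T))        # True
print(np.array_equal(kernel, kernel[::-1]))    # True
```

As the text notes, this symmetry means the adjustable filter can be retuned by updating only 28 numbers instead of all 169 taps.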
Pan and zoom information may be used to determine the center of interest in the image for differential treatment by the processor 22. The operation of the encoder 24 complies with MPEG2. The encoder 24 may provide, via the controller 28, information that the processor 22 can use to improve performance. Such information may include bit-rate information, for example: the average bit rate for a group of pictures, the frame bit rate and the macroblock or block bit rate. Other information that can improve the performance of the processor 22 includes the discrete cosine transform complexity, the quantization matrix type and the quantization matrix step size. The processor 22 may likewise provide information to the encoder 24 via the controller 28 to regulate its operation and improve encoding performance. The data stream is then formed into transport packets, as described in the Grand Alliance specification. Except for the required increase of the sampling rate to full high-definition pixel resolution in the receiver, the signal processing provided by the processor 22 is transparent to a Grand Alliance-compliant decoder in the receiver. In the receiver, the data stream is demodulated, and the transport stream is processed to extract the data packets and program information using known technologies. For HD programs in the hybrid format described above, the sampling rate of the signal is increased in the horizontal direction in the playback processor if playback requires the full high-definition signal. The number of vertical lines in the image signal remains unchanged. This restores the full 1920×1080 high-definition video signal, and the playback device recreates the high-definition image.
If the playback device does not require the full high-definition signal, the signal is appropriately filtered using known methods during image recovery before playback. Existing receivers compatible with the Grand Alliance standard require modification of the means that allow the horizontal and vertical hardware and software processing algorithms to be chosen independently of the standardized Grand Alliance modes, as required for the input signal. Fig. 4 is a flowchart of the passage of video images through the coding system. At step 30 the signal format is identified, and the identification information is supplied together with the video signal. The format information may indicate, for example, whether the original signal comes from a 24-frame-per-second film format and whether it is interlaced or progressive. If the video is interlaced, at step 31 it is converted into a progressive signal at 60 frames per second. If the signal has undergone a 3:2 pulldown, the redundant frames are removed at step 32. If the video is already in a progressive camera format, it goes directly to filtering step 34. At step 34 the video signal is spatially low-pass filtered; this includes vertical, horizontal and diagonal filtering, as described above. At step 35 the progressive signal that was formed from an interlaced signal at step 31 is converted back into an interlaced signal. At step 36 the signal is subjected to a 3:2 pulldown to restore the redundant frames. Video that was already progressive proceeds from step 34 directly to step 38. At step 38 the video signal from one of steps 34, 35 and 36 is subsampled to the hybrid high-definition resolution of 1280×1080 defined above, or to another output format as necessary. At step 40 the hybrid signal is encoded in MPEG2 format, as described previously.
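The horizontal subsampling of step 38 (1920 → 1280 samples per line, a 2:3 ratio) can be sketched as a prefilter followed by fractional resampling. This toy version uses a crude [1 2 1]/4 low-pass and linear interpolation in place of the long symmetric kernel described earlier; all names are illustrative, not from the patent.

```python
# Minimal sketch of the 1920 -> 1280 horizontal down-conversion: a crude
# 3-tap low-pass prefilter followed by 2:3 resampling with linear
# interpolation. A real system would use the 13x13 kernel and a proper
# polyphase resampler.

def downconvert_row(row):                  # len(row) == 1920
    # simple [1 2 1]/4 low-pass to limit aliasing before decimation
    n = len(row)
    lp = [(row[max(i - 1, 0)] + 2 * row[i] + row[min(i + 1, n - 1)]) / 4
          for i in range(n)]
    out = []
    for j in range(n * 2 // 3):            # 1280 output samples
        pos = j * 1.5                      # source position of output j
        i = int(pos)
        frac = pos - i
        nxt = lp[min(i + 1, n - 1)]
        out.append((1 - frac) * lp[i] + frac * nxt)
    return out

row = [100.0] * 1920                       # a flat grey line
assert len(downconvert_row(row)) == 1280   # hybrid horizontal resolution
```

The vertical line count (1080) is untouched, matching the hybrid 1280×1080 format defined above.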
Step 42 processes the coded signal for transport, and step 44 modulates the transport signal as required for transmission over the output channel, such as a radio-frequency terrestrial broadcast channel. Finally, step 46 transmits the modulated signal. Steps 42 through 46 are carried out by known methods. Fig. 5 is a flowchart of the passage of the transmitted image signal through the receiver. This flowchart assumes that the received signal is the hybrid 1280×1080 high-definition video signal defined above. At step 50 the transmitted signal is received by the tuner and demodulated. The demodulated signal is decoded and extracted from the MPEG2 format at step 52. Step 54 identifies the resolution of the video signal as 1280×1080 pixels per image, using control information sent along with the signal. In the MPEG2 protocol the hybrid resolution is identified by a unique code, like any other specified resolution. The hybrid signal can also be described in another way, for example by using information in the user data contained in the transmitted data. At step 56, at playback time, the sampling frequency of the hybrid video signal is increased in the horizontal direction up to the full 1920×1080 high-definition signal. Increasing the sampling rate of the hybrid signal uses new software tools in conjunction with existing hardware and software available in the receiver, as described previously. Finally, the full high-definition video is reproduced on a 1920×1080 display at step 58. Steps 50, 52 and 58 use known methods. The apparatus and methods described above can be applied in a number of configurations to achieve improved image recovery for high-definition playback. Depending on the requirements of a particular system, both adaptive and non-adaptive approaches may be used.
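The horizontal up-conversion of step 56 (1280 → 1920 samples per line, a 3:2 ratio) can be sketched in the same style, here with plain linear interpolation; a real receiver would use a proper polyphase interpolation filter. The function name is illustrative.

```python
# Minimal sketch of the receiver-side 1280 -> 1920 horizontal
# up-conversion (step 56): 3:2 resampling with linear interpolation.
# The vertical line count (1080) is left unchanged.

def upconvert_row(row):                    # len(row) == 1280
    n_out = len(row) * 3 // 2              # 1920 output samples
    out = []
    for j in range(n_out):
        # map output index j onto the source row, endpoints aligned
        pos = j * (len(row) - 1) / (n_out - 1)
        i = int(pos)
        frac = pos - i
        nxt = row[min(i + 1, len(row) - 1)]
        out.append((1 - frac) * row[i] + frac * nxt)
    return out

hybrid_row = [50.0] * 1280
full_row = upconvert_row(hybrid_row)
assert len(full_row) == 1920               # full high-definition width
```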
Some of these options are discussed below. A non-adaptive strategy would be to set the frame filtering performed by processor 22 for a planned bit rate, on the assumption that the center of the reproduced image is the most interesting area. It also assumes that the periphery of the image is less interesting and therefore less important to the viewer. The filter coefficients of processor 22 are set by controller 28 using parameters that are functions of the spatial position of the pixel, and all image information is processed uniformly. An adaptive option is segmentation of the image into regions using texture model parameters, local variance, dimensions, color or other measures of image complexity derived from the original image. The filtering characteristics of processor 22 are then adaptively modified for the different regions. Another approach is adaptive modification of the filtering performance of processor 22 as a function of the difference between the actual bit rate and the expected bit rate. In this case a single parameter controls the change of the filter coefficients of the two-dimensional frequency response. Yet another strategy is to develop a two-dimensional frequency characteristic for the filtering provided by processor 22 that is matched to the quantization, which also has a two-dimensional form. For this strategy the filter coefficients must be a function of the quantization step size. Since the step size changes in accordance with the known operating mode of the encoder, a corresponding change would occur in the corresponding filter coefficients. The above scenarios illustrate the flexibility of a system using the principles of the present invention. Such a system preferably operates in the context of MPEG2 rate control, extending MPEG2 compression capability by reducing coding distortion and other noise. The versatility and economy of introducing HDTV using the present invention are thereby increased.
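The single-parameter strategy described above, driving the filter coefficients from the gap between actual and expected bit rate, can be sketched as a blend between two coefficient sets. The tap values and the linear mapping below are illustrative assumptions, not the patent's coefficients.

```python
# Sketch of the single-parameter adaptive strategy: blend between a mild
# and a strong low-pass coefficient set as a function of the difference
# between actual and expected bit rate. Tap values are illustrative.

MILD   = [0.05, 0.20, 0.50, 0.20, 0.05]   # light smoothing
STRONG = [0.15, 0.20, 0.30, 0.20, 0.15]   # heavier smoothing

def blend_taps(actual_bits, expected_bits, full_scale):
    # alpha in [0, 1]: 0 when on budget, 1 when overshooting by
    # full_scale bits or more; this is the single control parameter.
    alpha = min(max((actual_bits - expected_bits) / full_scale, 0.0), 1.0)
    return [(1 - alpha) * m + alpha * s for m, s in zip(MILD, STRONG)]

taps = blend_taps(actual_bits=1.2e6, expected_bits=1.0e6, full_scale=0.4e6)
# alpha = 0.5 here, so each tap sits midway between the two sets
```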
The number of HD programs transmitted by a single transponder in a direct-broadcast satellite system (i.e., 24 MHz quadrature phase-shift keying) increases from one to two, or to one high-definition program together with several standard-definition programs. In accordance with the principles of the present invention it becomes possible to transmit one high-definition program together with several standard-definition programs on a terrestrial broadcast channel, or several standard-definition programs on one channel. Although the present invention has been described in the context of systems transmitting and receiving a high-definition signal, its principles are applicable to other apparatus, such as data-storage systems. In systems such as digital video disc (DVD), video data are encoded and stored for later playback. The carrier has a limited amount of available storage space. If an encoded program, film, image or other sequence exceeds the space available on the medium, further encoding/compression to fit the program can create unacceptable distortion. The invention described above can be used to encode the program efficiently at a lower bit rate, allowing the program to be placed on the disc, or allowing several programs to fit on one disc. Digital storage on film may provide similar advantages.

1. A method of processing first and second video signals representing, respectively, first and second different image formats, comprising detecting the presence of the first or second video signal, according to which: A.
upon detection of the first video signal, (a) convert the first signal to generate a converted signal, (b) filter the converted signal to create a filtered signal, (c) re-convert the filtered signal into the original format of the first signal to create a re-converted signal, (d) convert the re-converted signal to a lower resolution to create a lower-resolution signal, (e) encode the lower-resolution signal to create an encoded signal, and (f) feed the encoded signal to an output channel; and B. upon detection of the second video signal, (g) filter the second signal to generate a filtered signal, (h) convert the filtered signal to a lower resolution to create a lower-resolution signal, (i) encode the lower-resolution signal to create an encoded signal, and (j) feed the encoded signal to the output channel.

2. The method according to claim 1, in which the first video signal is interlaced and said interlaced signal is converted into a progressive-scan signal in step (a).

3. The method according to claim 1, in which the first video signal is a telecine-converted film signal and said signal is subjected to inverse telecine conversion in step (a).

4. The method according to claim 1, in which the second video signal is progressive.

5. The method according to claim 1, in which the filtering steps provide low-pass filtering.

6. The method according to claim 5, in which the filtering steps provide two-dimensional filtering.

7. The method according to claim 1, in which the filtering steps provide adaptive filtering, adjusted for the frames of a group of pictures, or for a separate frame, or for part of a frame.

8. The method according to claim 1, in which the filtering steps provide temporal low-pass filtering and adaptive change of the filtering characteristics in response to characteristics of the signal.

9. The method according to claim
1, in which the filtering steps provide spatial low-pass filtering and adaptive change of the filtering characteristics in response to characteristics of the signal.

10. The method according to claim 1, in which the encoding steps are compatible with the international standard for the compression and transmission of video of the Moving Picture Experts Group, MPEG2.

11. The method according to claim 1, in which the lower-resolution signal has a resolution of 1280×1080 discrete data elements per frame.

12. The method according to claim 1, in which … 1920×1080 discrete data elements per frame.

13. A method of processing one of the following: an interlaced video signal and a telecine-converted film signal, according to which: detect the presence of one of the following: the interlaced video signal and the telecine-converted film signal; convert the detected signal into one of the following: a progressive-scan signal and a signal in the format preceding the telecine conversion, respectively, to generate a converted signal; filter the converted signal to create a filtered signal; re-convert the filtered signal into one of the following: an interlaced signal and a telecine-converted film signal, respectively, to create a re-converted signal; convert the re-converted signal to a lower resolution to create a lower-resolution signal; encode the lower-resolution signal to create an encoded signal; and feed the encoded signal to an output channel.

14. The method according to claim 13, in which the filtering step is low-pass filtering and the encoding step is encoding according to the MPEG2 standard of the Moving Picture Experts Group.

15. The method according to claim 13, in which the lower-resolution signal has a resolution of 1280×1080 discrete data elements per frame.

16.
A method of processing a progressive-scan video signal that has not undergone telecine conversion, according to which: adaptively filter the detected signal to generate a filtered signal; convert the filtered signal to a lower resolution to create a lower-resolution signal; encode the lower-resolution signal according to the international standard for the compression and transmission of video of the Moving Picture Experts Group, MPEG2, to generate an encoded signal; and feed the encoded signal to an output channel.

17. The method according to claim 16, in which the filtering step is low-pass filtering and the encoding step is encoding according to the MPEG2 international standard.

18. A method of processing a progressive-scan video signal that has not undergone telecine conversion, according to which: filter the detected signal to generate a filtered signal; convert the filtered signal to a lower resolution to create a lower-resolution signal with a resolution of 1280×1080 discrete data elements per frame; encode the signal; and feed the encoded signal to an output channel.

19. A method of processing a received digital video signal in a high-definition video processing system that can have more than one image resolution, including a resolution of 1280×1080 discrete data elements per frame, according to which: decode the signal to create a decoded signal; determine the image resolution of the decoded signal; convert the horizontal information of the decoded signal to a different resolution, if the decoded signal has a horizontal resolution of 1280 discrete elements per line, to generate a converted signal; and feed the converted signal to an output device.

20. The method according to claim 19, in which the conversion is an up-conversion and the different resolution is 1920 horizontal discrete elements per line.

21. The method according to claim 19, in which the conversion is a down-conversion and the different resolution is a lower resolution.

22. The method according to claim
19, in which the received digital video signal is compatible with the international standard for the compression and transmission of video of the Moving Picture Experts Group, MPEG2.

23. The method according to claim 19, in which … during the conversion.

24. The method according to claim 16, in which the adaptive filtering is a function of parameters of the image signal before filtering.

25. The method according to claim 16, in which, from the adaptive filtering step through the feeding step, processing is performed relative to a fixed image frame format.

26. The method according to claim 16, in which the adaptive filtering is adaptive within a picture frame.

27. The method according to claim 26, in which the adaptive filtering is performed on a pixel-by-pixel basis.

28. A method of supplying a video signal format, said format being set to 1280 picture elements by 1080 picture elements.

29. The method according to claim 28, in which said 1280 picture elements carry horizontal information and said 1080 picture elements carry vertical information.

30. The method according to claim 29, in which the video information is satellite broadcast information.