Structure of a decoder for optimised handling of errors in multimedia data streaming

FIELD: information technologies.

SUBSTANCE: a method and a device for multilevel integration used for error recovery are proposed. An error in the multimedia data is detected on the basis of a first-level protocol, and the error detected in the multimedia data is then concealed on the basis of a second-level protocol. In one aspect, an error in the multimedia data is detected on the basis of a communication-layer protocol and controlled on the basis of a transport-layer protocol. The distribution of the controlled error is then determined on the basis of a synchronization-layer protocol, after which the error detected in the multimedia data is concealed on the basis of an application-layer protocol. An error-recovery stage and a scaling stage are further provided.

EFFECT: increased efficiency of multimedia data stream processing by receiving multiple streams of encoded multimedia data, performing error recovery on the erroneous portion of a stream and recovering the multimedia data from the multiple streams.

40 cl, 10 dwg

 

Description

This patent application claims priority to provisional patent application No. 60/660,681, entitled "Method and apparatus for error recovery in video communications", filed March 10, 2005, provisional patent application No. 60/660,923, entitled "Method and apparatus for video decoding", filed March 10, 2005, and provisional patent application No. 60/660,867, entitled "Method of error recovery for a decoder", filed March 10, 2005, each of which is assigned to the assignee of the present application and is incorporated herein by reference.

The technical field to which the invention relates

The present invention relates to a method and a device for decoding streaming multimedia data in real time on a portable device.

State of the art

Due to the rapid growth and great success of the Internet and wireless communications, as well as the increasing demand for multimedia services, streaming media over the Internet and over mobile/wireless channels has attracted great attention. In heterogeneous IP networks, video content is provided by a server and can be streamed to one or more clients. Wired connections include dial-up telephone lines, integrated services digital network (ISDN), cable, digital subscriber line protocols (collectively referred to as xDSL), fiber-optic cable, local area networks (LAN), wide area networks (WAN) and others. The transmission mode can be either unicast or multicast (group).

Mobile/wireless channels are another form of communication in heterogeneous IP networks. Transporting multimedia content over mobile/wireless channels is quite difficult because the quality of these channels often degrades severely due to fading associated with multipath propagation, shadowing, inter-symbol interference and noise. Other causes, such as mobility and competing traffic, also lead to bandwidth variations and losses. The channel noise and the number of supported users determine the time-varying properties of the communication channel.

The demand for higher data rates and higher quality of service in heterogeneous IP networks and mobile communication systems is growing rapidly. However, factors such as limited delay (latency), limited transmit power, limited bandwidth and multipath fading continue to constrain the data rates handled by practical systems. In multimedia transmission, particularly in error-prone environments, error resilience of the transmitted multimedia data is critical to providing the desired quality of service, because errors in even a few decoded values can lead to decoding artifacts (distortions) that propagate spatially and temporally. Various coding measures that support the desired data rate are used to minimize errors; however, all of these methods suffer from problems associated with errors reaching the decoder.

Data is compressed using a source encoder, which conveys the maximum amount of information with the expenditure of a minimum number of bits, followed by a channel encoder, which tends to maximize the channel capacity for a given probability of error in receiving those bits.

Channel coding, such as Reed-Solomon coding, is used to improve the robustness of the data encoded by the source encoder. Joint source-channel coding methods are used to provide varying levels of error protection to the source-encoded data according to its varying levels of importance, or to enable adaptation of the coded video data rate to the available network bandwidth by partitioning and dropping packets. This is because conventional transport protocols do not deliver corrupted data to the source decoder.

Source coding methods, such as reversible variable-length coding (e.g., in MPEG-4), have been used for error recovery by decoding a packet in the reverse direction when corrupted packets are in fact received. These source coding methods come at a cost in coding efficiency, that is, in the quality of the decoded video data achievable at a given data rate.

Hybrid coding standards, such as MPEG-1, MPEG-2, MPEG-4 (collectively referred to as MPEG-x), H.261, H.262, H.263 and H.264 (collectively referred to as H.26x), use bit-stream resynchronization points as the primary method of handling errors in the decoder.

Another cause of data loss beyond the initial corruption is the emulation of an incorrect codeword. Identifying the initial position of a bit error is not a trivial task and, as a rule, is impossible without a special structure that supports identification of bit-error positions at the MAC layer or in the physical-layer components. Therefore, upon discovering that the bit stream is corrupted, the decoder may have to stop decoding and move forward in the bit stream to find the next resynchronization point, inevitably discarding a potentially large amount of useful data in the process. Emulation of another codeword of similar length to the original, i.e. correct, codeword may appear to be less of a problem than the sequence of events described above, but in fact it is not. There are many ways in which this type of error can cause the decoder to decode even a correct bit stream erroneously. For example, in most modern codecs (coder-decoders) the bit stream contains objects (parameters associated with compression) whose values affect the parsing of the following portion of the bit stream. Therefore, an incorrect value of such an object can lead to incorrect decoding of the bit stream.

Because conventional transport protocols do not deliver corrupted data to the decoder (for example, to a video decoder or an audio decoder), the decoder has a limited ability to handle bit errors, with packet dropping and resynchronization being the most common solutions. A better way of handling bit errors is needed, since such errors lead to error propagation and data loss due to problems such as loss of synchronization and emulation of incorrect codewords.

The invention

In one aspect, a method and apparatus for multilevel integration used for error recovery comprise a method or means for detecting an error in multimedia data based on a first-level protocol and concealing the error detected in the multimedia data based on a second-level protocol. In another aspect, a device for multilevel integration used for error recovery comprises detecting means for detecting an error in multimedia data based on a first-level protocol, and concealing means for concealing the error detected in the multimedia data based on a second-level protocol. In the method and device for multilevel integration, the first level may comprise a communication layer. The communication layer may comprise one element of the set consisting of a physical layer, a MAC layer and a transport layer, or a combination of these elements. In addition, the method and apparatus may further comprise a method or means for controlling the detected error based on a transport-layer protocol. The step of controlling the detected error may comprise a step of limiting error propagation. The method and apparatus may further comprise a method or means for determining the distribution of the detected error based on a synchronization-layer protocol. The second level may comprise an application layer.

In another aspect, a method and apparatus for multilevel integration used for error recovery comprise a method or means for detecting an error in multimedia data based on a communication-layer protocol, controlling the detected error based on a transport-layer protocol, determining the distribution of the controlled error based on a synchronization-layer protocol, and concealing the error detected in the multimedia data based on an application-layer protocol. In another aspect, a method and apparatus used for processing multimedia data comprise a method or means for performing error recovery on encoded multimedia data and supporting scalability of the encoded multimedia data. In another aspect, a device used for processing multimedia data comprises an error-recovery component for performing error recovery on encoded multimedia data and a scaling component for supporting scalability of the encoded multimedia data. In the method and device used for multimedia processing, the scalability may be spatial, temporal, or both. The error-recovery step may comprise a temporal error-concealment step, a spatial error-concealment step, a frame-rate conversion step, or a combination of these steps.

In another aspect, a method and apparatus used for processing a multimedia stream comprise a method or means for receiving multiple streams of encoded multimedia data, performing error recovery on an erroneous portion of a stream, and recovering the multimedia data from the multiple streams. In another aspect, a device used for processing a multimedia stream comprises a receiver for receiving multiple streams of encoded multimedia data, an error-recovery component for performing error recovery on an erroneous portion of a stream, and a recovery unit for recovering the multimedia data from the multiple streams. In the method and device used for processing a multimedia stream, the error-recovery step may comprise a temporal error-concealment step, a spatial error-concealment step, a frame-rate conversion step, or a combination of these steps.

It should be noted that the above method and apparatus may be implemented using a machine-readable medium and/or a processor configured to perform the method or the operations of the device.

Brief description of drawings

FIG. 1A depicts a block diagram of an example communication system for delivering streaming multimedia data.

FIG. 1B depicts a block diagram of an example of a layered communication system for delivering streaming multimedia data.

FIG. 1C depicts a block diagram of another example of a layered communication system for delivering streaming multimedia data.

FIG. 2A depicts a block diagram of an example structure of a decoding device for decoding streaming multimedia data.

FIG. 2B depicts a protocol stack diagram of an integrated multilayer control system including the transmitter, together with a protocol-stack view of the decoding-device structure depicted in FIG. 2A.

FIG. 3 depicts an example of multimedia symbols arranged for concatenated coding with Reed-Solomon erasure correction and turbo coding.

FIG. 4 depicts a flow diagram of an example method of decoding streaming multimedia data.

FIG. 5 depicts the structure of turbo information packets for video data.

FIG. 6 depicts a block diagram of illustrative system components that may be part of a multimedia receiver such as the receiver 24 shown in FIG. 1A.

FIG. 7 depicts a flowchart of an error-recovery process.

Detailed description

A method and apparatus for providing enhanced error recovery in a multimedia decoder are described. Integrated error handling is provided, in which an error is detected in the multimedia data stream at a lower layer (for example, at the communication layer) and error recovery is performed with respect to the detected error at the application layer (for example, in a video decoder or an audio decoder). In one example of the decoder structure, information marking corrupted bits is provided to the application-layer components and used to make informed decisions when performing different types of error-recovery methods. The recovery methods are used to replace corrupted symbols with estimated symbols obtained from information available to the application-layer component, such as previously decoded video data, audio data, and textual and graphical information. Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be clear to those skilled in the art that the embodiments may be practiced without these details. For example, electrical components may be shown in block diagrams in order not to obscure the embodiments with unnecessary detail. In other cases, such components, other structures and methods may be shown in detail to further explain the embodiments. Those skilled in the art will understand that electrical components depicted as separate blocks may be rearranged and/or combined into a single component.

It should also be noted that some embodiments may be described as a process depicted as a flowchart, a flow diagram, a structure diagram or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently, and the process can be repeated. In addition, the order of the operations may be rearranged. A process terminates when its operations are completed. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination corresponds to a return of the function to the calling function or to the main function.

FIG. 1A depicts a block diagram of an example communication system for delivering streaming multimedia data. The system 20 includes a transmitter 22 and a multimedia decoder 24 of a receiver. The transmitter 22 contains compressed multimedia data of various forms including, among other things, video data, audio data, graphical information, textual information and images. The data may be compressed video and audio data, as in the MPEG-x and H.26x standards, compressed audio data, as in the MPEG-4 AAC, MP3, AMR and G.723 voice-compression standards, or practically any other form of digital data.

The transmitter 22 receives data from various sources, including external memory, the Internet, and live audio and/or video feeds. The transmitter 22 also performs transmission (Tx) of the received data over a network. The network may be a wired network 28, such as a telephone, cable or fiber-optic network, or a wireless network 26. In the case of wireless communication systems, the network 26 may comprise, for example, a code division multiple access (CDMA or CDMA2000) communication system or, alternatively, the system may be a frequency division multiple access (FDMA) system, an orthogonal frequency division multiple access (OFDMA) system, a time division multiple access (TDMA) system such as GSM/GPRS (General Packet Radio Service)/EDGE (Enhanced Data rates for GSM Evolution) or TETRA (Terrestrial Trunked Radio) mobile communication technology for the service industry, a wideband code division multiple access (WCDMA) system, a high data rate system (1xEV-DO or 1xEV-DO Gold Multicast), or any other wireless communication system using a combination of these techniques.

The decoder 24 includes means, such as a radio-frequency antenna or a network connection, for receiving data over the wireless network 26 and/or the wired network 28. The decoder 24 may include multiple processors comprising various combinations of pre-processors (for example, any type of central processing unit (CPU), for example an ARM), a digital signal processor (DSP), software, firmware and hardware, such as a video core multimedia processor, for distributing the demodulation and decoding tasks associated with the received data. The decoder 24 also contains memory components for storing the received data and intermediate data at the various stages of the demodulation/decoding process. In some embodiments, the ARM pre-processor performs the less complex tasks, including unpacking (removal of side information such as headers and signaling messages) and demultiplexing of a multiplicity of bit streams comprising audio data, video data and others. The ARM pre-processor also performs bit-stream parsing, error detection, concealment and variable-length entropy decoding. In some such embodiments, the digital signal processor (DSP) performs expansion of VLC (variable-length coding) codewords, inverse zig-zag scanning of the image data to spatially locate the pixel parameters, inverse AC/DC prediction of the pixel parameters for MPEG-4 video data (not a feature of the H.264 standard because of its context-adaptive entropy coding), and decoding of the audio data (for example, MPEG-4 AAC, MP3, AMR or G.723). The video core multimedia processor may perform the more computationally intensive video-decoding tasks, comprising dequantization, inverse transform, motion-compensated prediction and deblocking (a form of filtering to reduce edge artifacts between pixel-block edges). In the communication system 20, one or more elements can be added, rearranged or combined. In the case of wired communications, the network 28 may comprise, for example, part of an Internet Protocol (IP) based communication system with transport protocols such as the Real-time Transport Protocol (RTP) or the User Datagram Protocol (UDP).

FIG. 1B depicts a block diagram of a layered protocol stack used to divide the tasks performed in the transmitter 22 and the decoder 24. The upper-layer components 205 and 210, located in the transmitter 22 and the decoder 24 respectively, can include multiple applications, such as, for example, video or audio coders and/or decoders. Some embodiments may include multiple streams of information that are intended for simultaneous decoding. In these cases, the tasks of synchronizing the multiple streams can also be performed in the upper-layer components 205 and 210. The upper-layer component 205 can provide encoded timing information in the bit stream transmitted over the wireless network 26 and/or the wired network 28. The upper-layer component 210 can parse the multiple data streams such that the associated applications decode them at approximately the same time.

The lower-layer components 215, located in the transmitter 22, may include various schemes for providing error resilience. Error-prone channels, such as the wireless network 26 and/or the wired network 28, can introduce errors into the bit stream received by the decoder 24. Such resilience schemes provided in the lower-layer components 215 may include one or more error-correcting coding (error-control) schemes, interleaving schemes and other schemes known to those skilled in the art. The lower-layer components 220, located in the decoder 24, may include the corresponding error-decoding components that enable the detection and correction of errors. Some of the errors introduced by the wireless network 26 and/or the wired network 28 may not be correctable by the lower-layer components 220. For such uncorrectable errors, solutions such as the lower-layer components 220 requesting retransmission of the corrupted components from the lower-layer components 215 of the transmitter 22 may be impossible in some situations, for example in real-time multimedia transmission such as streaming applications. In some embodiments, the lower-layer components 215 and 220 are communication-layer components. One or more elements may be added, rearranged or combined in the transmitter 22 or the decoder 24 shown in FIG. 1B.

FIG. 1C depicts a block diagram of a more detailed example of a layered protocol stack used to divide the tasks of the transmitter 22 and the decoder 24. The upper-layer components 205, located in the transmitter 22, are distributed either over one layer of the set comprising an application layer 206 and a synchronization layer 207, or over multiple layers. The lower-layer components 215, located in the transmitter 22, are distributed either over one layer of the set comprising a transport layer 216, a medium access control (MAC)/stream layer 217 and a physical layer 218, or over multiple layers. Similarly, the upper-layer components 210, located in the decoder 24, are distributed either over one layer of the set comprising an application layer 211 and a synchronization layer, or over multiple layers. The lower-layer components 220, located in the decoder 24, are distributed either over one layer of the set comprising a transport layer 221, a medium access control (MAC)/stream layer 222 and a physical layer 223, or over multiple layers. Those skilled in the art are aware of these layers and are familiar with the distribution of the various tasks among them. An example structure that combines the different layers of the decoding device 24, as discussed above, to take advantage of the error resilience provided by the transmitter 22, is discussed below. One or more elements may be added, rearranged or combined in the transmitter 22 or the decoder 24 shown in FIG. 1C.

FIG. 2A depicts a block diagram of an example structure of a decoding device for decoding streaming multimedia data. FIG. 2B depicts a protocol stack diagram of an example integrated multilayer control system containing the transmitter 22, together with a protocol-stack representation of the decoding-device structure depicted in FIG. 2A. As shown in FIGS. 2A and 2B, the multimedia decoder 30 contains a physical-layer component 32, a MAC-layer component 34, a transport and sync layer parsing (TSP) unit 39 and an application-layer component 50. The multimedia decoder 30 receives an input bit stream that carries a concatenated error-correction scheme, for example a concatenated turbo coding/Reed-Solomon scheme. The physical-layer component 32 can perform demodulation tasks, which include, among other things, receive error-correcting decoding, for example turbo decoding, and interaction with the MAC layer (medium access control layer). The MAC-layer component 34 can perform error-correcting decoding, for example Reed-Solomon error detection and correction, and the marking of uncorrectable corrupted data, for example groups consisting of one or more bits. The MAC-layer component 34 communicates with the transport and sync layer parsing (TSP) component 39.

The TSP component 39 may further comprise a transport-layer demultiplexing component 36 and a sync-layer parsing component 38. The transport-layer demultiplexing component 36 may receive the bit stream passed from the MAC-layer component 34, containing both correct and corrupted bits, together with the information marking the corrupted groups of bits. The corrupted groups of bits and the corresponding marking information include information corresponding to cyclic redundancy check (CRC) errors 33 from turbo decoding and to Reed-Solomon errors 35. (In some protocol stacks the transport-layer demultiplexing component 36 is also known as the stream sublayer, where the MAC sublayer and the stream sublayer are sublayers of the transport layer.) The transport-layer demultiplexing component 36 can demultiplex (de-mux) or parse the received bit stream into multiple bit streams. The parsed bit streams can contain bit streams intended for different applications, such as a video decoder, an audio decoder, and various combinations of text, graphics and display applications. The transport-layer demultiplexing component can also parse a single bit stream intended for a specific application, for example a video bit stream, into two or more separate layers (for example, using scalable coding), for example into a base layer and an enhancement layer. These layers can then be used to provide scalability, such as temporal and/or SNR scalability. One example of scalable coding divides intra-coded frames (such as I-frames) and inter-coded frames (for example, P-frames or B-frames obtained using, for example, motion-compensated prediction) into different layers of the bit stream. I-frames may be encoded in the base layer, while P-frames and/or B-frames may be encoded in the enhancement layer. Scalable coding is useful in dynamic channels, where scalable bit streams can be adapted to fluctuations in the network bandwidth. In error-prone channels, scalable coding can improve robustness through unequal error protection of the base layer and the enhancement layer, with better error protection applied to the more important layer. The sketch below illustrates such a layer partition.
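
The following is a minimal sketch (not taken from the embodiments themselves; the frame types and the partitioning rule are assumptions) in which intra-coded frames are routed to the base layer and inter-coded frames to the enhancement layer:

```c
/* Minimal sketch: partitioning the frames of one video bit stream into a base
 * layer and an enhancement layer by frame type, as the transport-layer
 * demultiplexer described above might do for temporal scalability.
 * Frame types and the layer_of() rule are assumptions. */
#include <stdio.h>

typedef enum { FRAME_I, FRAME_P, FRAME_B } frame_type_t;
typedef enum { LAYER_BASE, LAYER_ENHANCEMENT } layer_t;

/* Assumed rule: intra-coded frames go to the base layer, inter-coded
 * (predicted) frames go to the enhancement layer. */
static layer_t layer_of(frame_type_t t)
{
    return (t == FRAME_I) ? LAYER_BASE : LAYER_ENHANCEMENT;
}

int main(void)
{
    frame_type_t gop[] = { FRAME_I, FRAME_B, FRAME_P, FRAME_B, FRAME_P };
    for (size_t i = 0; i < sizeof gop / sizeof gop[0]; ++i)
        printf("frame %zu -> %s layer\n", i,
               layer_of(gop[i]) == LAYER_BASE ? "base" : "enhancement");
    return 0;
}
```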

The sync-layer parsing component 38 further parses the bit streams into sub-bit-streams that are related to each other in time. A multimedia bit stream can be parsed into a video bit stream, an audio bit stream and a bit stream of the corresponding caption text. The sync-layer parsing component 38 passes the parsed bit streams to the associated decoder applications together with time-synchronization information. This allows the associated audio data, video data and text information to be displayed and reproduced at the proper time.

In addition to the parsing discussed above, the transport-layer demultiplexing component 36 can parse and forward the information marking the corruption (for example, the CRC error information 33 and the Reed-Solomon error information 35), which it received from the MAC-layer component 34 and the physical-layer component 32, to the sync-layer parsing component 38 and/or to the relevant application layer (for example, a video decoder or an audio decoder). The sync-layer parsing component 38 can convey the information describing the corruption in the form of error-distribution information 37. The sync-layer parsing component 38 may also provide recommended error-control strategy information 41. The application-layer processes may then use the corruption-marking information 37 and the error-control strategy information 41.

The application-layer component 50 may contain one or more components, such as, for example, an error-recovery component 40, a scaling component 42, a frame rate up conversion (FRUC) component 44, a base-application decoding component 46 and a post-processing component 48. The application-layer component 50 uses the corruption-marking information 37 and the error-control strategy information 41 to make decisions about how to use the error-recovery, scaling and FRUC components to manage the corrupted data, thereby offering higher-quality decoding by the base-application decoding component 46. For example, in the case of temporal scalability, where some of the multimedia data is received in one layer containing the important information and the rest of the multimedia data is received in another layer, the FRUC component can be used to restore the missing multimedia data if the second layer was not received, was lost or was corrupted. After decoding, the post-processing component 48 performs any modifications required by the hardware to enable displaying, playing back or rendering the video and audio output on a display device or a loudspeaker, respectively. The post-processing component 48 may also perform enhancement or restoration operations before the multimedia data is reproduced or presented.

The base-application decoding component 46 may contain video decoder(s), audio decoder(s), and text and graphics applications. Through the error-recovery, scaling and FRUC processes applied to the bit streams of the various applications before or during decoding by the base-application decoding component 46, the quality of a low-quality bit stream (one encoded at lower quality, or received at low quality because of errors) can be improved. For example, the components 40, 42 and 44 can offer improvements over a standard source compatible with an H.264 baseline-profile video bit stream (the baseline profile is a very simple profile that was designed for low-power devices) and provide certain elements of other profiles of the H.264 standard, such as B-frames and data partitioning, which are required for performing the scaling, error detection and error resilience for a data stream. Elements of the processes that use the components of the multimedia decoder 30 are described below. One or more elements may be added, rearranged or combined in the transmitter 22 or the decoder 30 shown in FIGS. 2A and 2B.

A brief discussion of the error detection and correction process follows. One example of an arrangement for the detection and correction of errors uses a concatenated code containing both an inner (channel) code and an outer (channel) code. The concatenated channel codes consist of a turbo (inner) code at the physical layer and a Reed-Solomon erasure-correcting (outer) code implemented at the MAC layer. FIG. 3 depicts an example of multimedia symbols arranged for concatenated coding with Reed-Solomon erasure correction and turbo coding. On the encoding side, the binary output codewords of the source encoder are grouped into bytes 102. Each byte 102 is treated as a symbol in a finite field, known as a Galois field GF(256), for the purposes of an outer (N, K) Reed-Solomon (RS) code over GF(256). N and K respectively denote the sizes of the complete RS codeword 104 and of the source data 106, i.e. the number of symbols in the systematic part. Thus, N minus K is the number of parity symbols 108 included in each codeword 104. An RS (N, K) code is capable of correcting N minus K erasures.

The top K rows 106 essentially contain the symbols derived from the information source, and these symbols can be read out of the K rows either row-first or column-first. Interleaving is achieved by column-first scanning, which results in smaller groups of corrupted bits in the event that a given turbo information packet row 112 is corrupted. The length of each group of corrupted bits caused by the erasure of a turbo information packet can be equal to 1 byte for column-first scanning, as opposed to a length of (L-1) bytes for row-first scanning. For the decoder marking of these corrupted groups of bits, as discussed below, it may be necessary to identify the size and the position (within the bit stream) of these groups of bits. After this initial arrangement of the source data, each of the L columns 104 (of K information bytes) is RS-encoded into N bytes by appending N-K parity bytes, thereby forming the rows K+1, ..., N shown as rows 108 in FIG. 3. The top K rows consist of the source data 106, referred to as the RS information block, and the set of N rows is referred to as the RS-encoded block, or simply the coded block 110.

A cyclic redundancy check (CRC) and some tail bits required for proper operation of the turbo coder are appended to each row 112. By appending a checksum to each row 112, rows that fail their respective checksum after turbo decoding can be declared erased. Each coded block 110 is fed into the turbo coder one row 112 at a time, and each row is therefore referred to as a turbo information packet.

The turbo decoding process is followed by the Reed-Solomon decoding process, which further reduces the residual error rate. The ability to successfully correct the erasures depends on the total number of erasures inside a coded block and on the number (N minus K) of parity symbols used in the RS codeword.
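
A minimal sketch of the bookkeeping implied by this scheme is given below, assuming per-row CRC results are already available from the turbo decoder; the constants, structure and function names are illustrative assumptions, not the coding structure itself:

```c
/* Minimal sketch (assumed names and values): after turbo decoding, each of the
 * N rows of a Reed-Solomon coded block carries a CRC.  Rows whose CRC fails
 * are marked as erasures; the RS(N, K) code over GF(256) can repair the block
 * only if the number of erased rows is at most N - K. */
#include <stdbool.h>
#include <stdio.h>

#define RS_N 16   /* total rows per coded block (example value)  */
#define RS_K 12   /* information rows per coded block (example)  */

typedef struct {
    bool crc_ok[RS_N];   /* per-row CRC result from the turbo decoder */
} coded_block_t;

/* Count erased rows and report whether RS erasure decoding can succeed. */
static bool block_correctable(const coded_block_t *blk, int *erasures_out)
{
    int erasures = 0;
    for (int row = 0; row < RS_N; ++row)
        if (!blk->crc_ok[row])
            ++erasures;
    *erasures_out = erasures;
    return erasures <= RS_N - RS_K;   /* erasure-correcting capability */
}

int main(void)
{
    coded_block_t blk;
    for (int row = 0; row < RS_N; ++row)
        blk.crc_ok[row] = (row != 3 && row != 7);   /* rows 3 and 7 damaged */

    int erasures;
    if (block_correctable(&blk, &erasures))
        printf("%d erased rows: correctable, run RS decoding\n", erasures);
    else
        printf("%d erased rows: pass rows upward with damage markers\n", erasures);
    return 0;
}
```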

In the channel-coding structure intended for the multimedia decoder 30 shown in FIG. 2, if a Reed-Solomon (RS) coded block has more erasures than its correcting capability, the corresponding RS information block (FIG. 3) may be passed to the components at the transport layer, the sync layer or the application layer together with a notification marking which of the K turbo information packets 112 (FIG. 3) are corrupted. The systematic structure of the outer (N, K) Reed-Solomon code over GF(256) allows the correctly received (uncorrupted) turbo information packets to be used directly.

FIG. 4 depicts a flow diagram of an example method of decoding streaming multimedia data. Modulated erroneous data (IN) is received, enters the decoding process 60 and is demodulated (step 62). The data can be received over wired or wireless networks, for example the wireless network 26 and the wired network 28 depicted in FIG. 1. At step 62, the physical-layer component 32 shown in FIG. 2 performs demodulation of the received erroneous data. The demodulated data is then passed to step 64, where errors can be detected and corrected. At step 64, the physical-layer component 32 shown in FIG. 2 can perform turbo decoding, while the MAC-layer component 34 shown in FIG. 2 can perform Reed-Solomon error correction.

Once the turbo decoding and Reed-Solomon decoding at step 64 have detected and corrected all correctable errors, the corrupted turbo information packets and/or corrupted bytes are identified at step 66, for example by marking. Before the bit stream is sent to the parsing step 68, the cyclic redundancy check (CRC) and guard bits in each turbo information packet, as well as the parity rows 108 (FIG. 3), are discarded. The correctly received data, the data received with errors but corrected, and the marked corrupted data, together with identification information identifying the corrupted data, are passed on together to the bit-stream parsing step 68. At step 66, the physical-layer component 32 and/or the MAC-layer component 34 (FIG. 2) may identify (e.g., mark) the corrupted data.

As discussed above, the transport-layer demultiplexing component 36 and the sync-layer parsing component 38 parse the bit stream into multiple bit streams intended for the various application-layer processes. In the example depicted in FIG. 4, the bit stream is parsed (step 68) into a video bit stream 70, an audio bit stream 72 and a bit stream 74 of textual and/or graphical information. Each bit stream 70, 72 and 74 may contain data identifying the corrupted groups of bits in the individual bit streams. In addition, if the individual bit streams are to be synchronized in time, the bit streams can contain timing information.

After the individual bit streams have been parsed, error handling takes place at the application layer, where at steps 78, 82 and 86 the corrupted bits are replaced using one of several error-recovery or concealment methods. The error-recovery component 40, the scaling component 42 and the FRUC component 44 of the application-layer component 50 (FIG. 2) can perform the replacement at steps 78, 82 and 86. After the corrupted symbols have been replaced, the individual streams of video data, audio data and text/graphics information can be decoded at steps 80, 84 and 88, respectively. The base-application decoding component 46 of the application-layer component 50 (FIG. 2) can perform the decoding at steps 80, 84 and 88. The base-application decoding component 46 of the application-layer component 50 (FIG. 2) can also replace, at steps 78, 82 and/or 86, the corrupted bits that may have been marked by a lower-layer component. One or more elements may be added, rearranged or combined in the process 60 depicted in FIG. 4.

An example of identifying the corrupted bits (step 66) by marking the corrupted data, as shown in FIG. 4, is now discussed in more detail. FIG. 5 depicts the structure of turbo information packets for video data. Block 140 represents a set of turbo information packets, such as the rows of turbo information packets 106 making up the RS information block shown in FIG. 3. A frame of video data may occupy from part of one turbo information packet to several turbo information packets of data. For example, the first frame (F1) starts in row 142A of information block 148A. The remaining data of frame F1 is arranged in rows 142B, 142C, 142D and in the first part of row 142E. Row 142A also contains a sync-layer header 146A (SH), which contains information such as the stream identification, time synchronization, frame identification (the frame number and the number of available layers in the presence of a base layer and enhancement layers) and other information. The sync-layer header 146A is used by the sync-layer parsing unit (reference number 38 in FIG. 2) to identify the application to which the data contained in the following information blocks representing frame F1 is sent. At the end of each row there is a cyclic redundancy check (CRC), as discussed above, which is used in conjunction with the Reed-Solomon decoding to identify erased or corrupted packets.

At the beginning of each turbo information packet row there is a transport-layer header 144 (TH). Each transport-layer header 144 (TH) contains "Last_Flag" and "Offset_Pointer" fields. The marking of a corrupted turbo information packet, performed at step 66 shown in FIG. 4, may be accomplished directly using the value of the "Offset_Pointer" field: setting an invalid value in the "Offset_Pointer" field can designate the packet as corrupted. Alternatively, an "Error_Flag" field can be used in the transport-layer header (TH). Marking can be achieved by setting the value of the "Error_Flag" field equal to one (Error_Flag=1) in the corresponding transport-layer header (TH). For example, if row 142C is corrupted, the "Error_Flag" value in the transport-layer header (TH) of row 142C may be set to one. The "Last_Flag" field is used to indicate that the current row is the last row of the frame (for example, by setting its value to one), and if it is the last row, the "Offset_Pointer" field is used to specify where in the turbo information packet row the next frame begins (in number of bytes). For example, in row 142E the value of the "Last_Flag" field in the transport-layer header 144B (TH) can be equal to one, and the value of the "Offset_Pointer" field may be equal to the number of bytes contained in the row before the next transport-layer header (TH) (the start of frame F3). The sync-layer header 146B may contain data indicating the "Frame_ID" field and the bit stream, as discussed above. Information block 148C contains video data representing frame F3. If row 142E were determined to be corrupted, the decoder might not know where frame F1 ends and where frame F3 begins.
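
The following sketch illustrates the transport-layer header fields named above ("Last_Flag", "Offset_Pointer", "Error_Flag") and the marking step; the field widths and this C representation are assumptions, not the actual packet format:

```c
/* Minimal sketch of the transport-layer (TH) fields named above and of the
 * marking step; the field widths and this C layout are assumptions, not the
 * actual wire format. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    bool     last_flag;       /* 1: last turbo information packet row of the frame */
    bool     error_flag;      /* 1: this row is marked as corrupted                */
    uint16_t offset_pointer;  /* if last_flag: byte offset of the next frame's
                                 start within this row                             */
} transport_header_t;

/* Mark a row as corrupted for the upper layers (Error_Flag = 1). */
static void mark_row_damaged(transport_header_t *th)
{
    th->error_flag = true;
}

int main(void)
{
    transport_header_t th = { .last_flag = true, .error_flag = false,
                              .offset_pointer = 57 };   /* next frame 57 bytes in */
    mark_row_damaged(&th);
    printf("last=%d error=%d next frame at byte %u\n",
           th.last_flag, th.error_flag, (unsigned)th.offset_pointer);
    return 0;
}
```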

In addition to (or instead of) including the "Error_Flag" field in the transport-layer header, as discussed above, an error table can be created and passed to the application layer with the information listed in Table 1 for each video frame.

Table 1. Example of the information contained in an error table for a video bit stream

- Frame number: an integer limited to a set of frames, for example 30 frames, after which the numbering restarts from one.
- B/E: indicates whether the frame belongs to the base layer or to an enhancement layer when multiple scalability layers are present.
- Frame_Length: length of the frame in bytes.
- RAP_Flag: indicates whether the frame is a random access point, for example an entirely intra-coded I-frame. This indicator can also serve as a resynchronization point when errors are encountered.
- Presentation time stamp (PTS): the time in the frame sequence at which the frame should be displayed.
- Frames per second (FPS).
- Number of turbo information packets occupied by the frame (fully and partially).
- Error_Bit_Pattern: a variable indicating which turbo information packets are corrupted (the five bits "00100" would indicate that the third of five turbo information packets is corrupted).
- Error_Ratio: a variable indicating the proportion of corrupted turbo information packets among the received packets (1/5 would indicate that one of five packets is corrupted).

Other tables describing errors in the bit stream, similar to Table 1, can also be formed. The error-marking information contained in the transport-layer headers and/or in error tables similar to Table 1 can be used by application-layer components, such as the components 40, 42 and 44 (FIG. 2), to identify the corrupted symbols in the parsed bit streams 70, 72 and 74 and to replace them at steps 78, 82 and 86 (FIG. 4), respectively. As discussed above, packets for which Reed-Solomon (RS) erasure decoding fails are not discarded; they are passed to the application layer, for example to the video decoder. This saves whole packets, potentially 122 bytes long or more for video packets, from being lost.
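
A minimal sketch of one per-frame error-table entry with fields patterned on Table 1, and of deriving "Error_Bit_Pattern" and "Error_Ratio" from per-packet damage flags, is shown below; the types and helper names are assumptions:

```c
/* Minimal sketch of one per-frame entry of an error table such as Table 1,
 * and of deriving Error_Bit_Pattern and Error_Ratio from per-packet damage
 * flags; the field types and helper names are assumptions. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint8_t  frame_number;       /* wraps after the largest value, e.g. 30      */
    bool     enhancement_layer;  /* B/E: base (false) or enhancement (true)     */
    uint32_t frame_length;       /* frame length in bytes                       */
    bool     rap_flag;           /* random access point (e.g. whole I-frame)    */
    uint32_t pts;                /* presentation time stamp                     */
    uint8_t  fps;                /* frames per second                           */
    uint8_t  num_turbo_packets;  /* turbo packets occupied (fully or partially) */
    uint32_t error_bit_pattern;  /* bit i set => i-th turbo packet corrupted    */
    double   error_ratio;        /* corrupted packets / total packets           */
} frame_error_entry_t;

/* Fill the damage fields from per-packet flags delivered by the lower layers. */
static void fill_damage(frame_error_entry_t *e, const bool *damaged, uint8_t n)
{
    uint32_t pattern = 0, count = 0;
    for (uint8_t i = 0; i < n; ++i)
        if (damaged[i]) { pattern |= 1u << i; ++count; }
    e->num_turbo_packets = n;
    e->error_bit_pattern = pattern;
    e->error_ratio       = n ? (double)count / n : 0.0;
}

int main(void)
{
    bool damaged[5] = { false, false, true, false, false };   /* "00100" example */
    frame_error_entry_t e = { .frame_number = 1 };
    fill_damage(&e, damaged, 5);
    printf("pattern=0x%02x ratio=%.2f\n", (unsigned)e.error_bit_pattern, e.error_ratio);
    return 0;
}
```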

The replacement of corrupted symbols at steps 78, 82 and 86 (FIG. 4) can take two basic forms, namely error correction and error concealment. Correction and concealment of errors at the bit-stream level are performed for erasures and packet errors. Decoding errors (due to errors in bytes) are corrected, to the extent possible, by applying the MAP (maximum a posteriori probability) criterion. Errors that cannot be corrected are concealed using spatial and/or temporal information from neighbouring macroblocks (a macroblock is a 16×16 pixel region commonly operated on by video compression standards; smaller sub-macroblocks, for example 8×8, 8×16, etc., can also be used). Temporal concealment can be used, for example, if the image has been static for more than one frame. If the corrupted macroblock to be concealed lies in a region that has remained relatively unchanged in the previous frame, it is likely to be similar in the corrupted frame, and the previous frame can be used as an estimate for the corrupted region. Spatial concealment can take advantage of edges or objects that exist in the neighbouring macroblocks of the same frame.
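
A minimal sketch of one way such a concealment decision could be made per corrupted macroblock is shown below; the static-region metric and the threshold are assumptions, not values from the embodiments:

```c
/* Minimal sketch (metric and threshold are assumptions) of choosing a
 * concealment mode for a corrupted 16x16 macroblock: copy the co-located
 * macroblock from the previous frame when that region has been nearly static,
 * otherwise fall back to spatial interpolation from neighbouring macroblocks. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define MB 16

typedef enum { CONCEAL_TEMPORAL, CONCEAL_SPATIAL } conceal_mode_t;

/* Mean absolute difference between the co-located macroblocks of two
 * previously decoded frames; a small value suggests the region is static. */
static double colocated_mad(uint8_t prev1[MB][MB], uint8_t prev2[MB][MB])
{
    long sum = 0;
    for (int y = 0; y < MB; ++y)
        for (int x = 0; x < MB; ++x)
            sum += labs((long)prev1[y][x] - (long)prev2[y][x]);
    return (double)sum / (MB * MB);
}

static conceal_mode_t choose_mode(uint8_t prev1[MB][MB], uint8_t prev2[MB][MB])
{
    const double STATIC_THRESHOLD = 2.0;   /* assumed tuning value */
    return colocated_mad(prev1, prev2) < STATIC_THRESHOLD
               ? CONCEAL_TEMPORAL : CONCEAL_SPATIAL;
}

int main(void)
{
    uint8_t prev1[MB][MB] = { { 0 } }, prev2[MB][MB] = { { 0 } };
    prev2[0][0] = 200;   /* one changed pixel: region still essentially static */
    printf("%s concealment\n",
           choose_mode(prev1, prev2) == CONCEAL_TEMPORAL ? "temporal" : "spatial");
    return 0;
}
```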

As discussed above, multiple layers of encoded data representing a single bit stream can be received. The layers may include a base layer and one or more enhancement layers, where the enhancement layer(s) can provide additional frames that are not present in the base layer (for example, bidirectionally predicted B-frames), or can provide a differential refinement of higher quality for the pixel parameters of the base layer. With regard to differential refinements of the base-layer parameters, if the base layer is corrupted, the enhancement layer would be useless, because it was derived from the pixel values of the base layer. Thus, the enhancement layer may be excluded from decoding if the base-layer data is corrupted. This process is called selective decoding, and it can also be used in low-power scenarios. For example, if the device containing the decoder is operating in a low-power mode or on battery power, only the base layer may be decoded, skipping the enhancement layer and thereby saving computation cycles and, in turn, power consumption. Spatial and/or temporal concealment methods can be used to replace data of the combined layers (base layer and enhancement layer).
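
The selective-decoding decision described above can be sketched as follows; the error-ratio threshold is an assumed tuning value:

```c
/* Minimal sketch of the selective-decoding decision: the enhancement layer is
 * skipped when the base layer it differentially refines is corrupted, or when
 * the device runs in a low-power mode.  The threshold is an assumption. */
#include <stdbool.h>
#include <stdio.h>

static bool decode_enhancement(double base_error_ratio, bool low_power_mode)
{
    const double MAX_BASE_ERROR_RATIO = 0.2;   /* assumed tuning value */
    if (low_power_mode)
        return false;   /* save computation cycles and battery power          */
    if (base_error_ratio > MAX_BASE_ERROR_RATIO)
        return false;   /* refinement of a corrupted base layer is useless    */
    return true;
}

int main(void)
{
    printf("clean base, mains power : %d\n", decode_enhancement(0.0, false));
    printf("damaged base            : %d\n", decode_enhancement(0.4, false));
    printf("low-power mode          : %d\n", decode_enhancement(0.0, true));
    return 0;
}
```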

The enhancement layer (sometimes treated as lower priority) can be transmitted at lower power than the base layer (the highest priority). This increases the probability of errors in the enhancement layer relative to the base layer. Therefore, if the received base layer contains a high percentage of errors, the enhancement layer could potentially be skipped.

Bidirectionally predicted frames (B-frames) are predicted both from a previous frame and from a subsequent frame using motion-compensated prediction. B-frames offer a very high degree of compression and are highly desirable for saving bandwidth. B-frames are also desirable for their temporal-scalability characteristics. Standard-compliant B-frames are not used to predict any other frames, and they can therefore be dropped without affecting other frames. B-frames are also the most susceptible to error propagation, owing to errors in the frames on which they depend. For this reason, B-frames are the most likely to be placed in the enhancement (or lower-priority) layer. If the enhancement layer is transmitted at lower power, the B-frames are more susceptible to errors.

If a B-frame (or any other type of frame) is completely corrupted, or the percentage of errors is above a threshold making temporal or spatial error concealment impractical, frame rate up conversion can be used (the FRUC component 44 shown in FIG. 2). Frame rate up conversion (FRUC) is used to reconstruct the lost frame. FRUC can also be used to reconstruct a frame that was excluded from decoding, for example to save power. FRUC methods use the motion vectors and pixel information of the previous and subsequent frames to interpolate motion vectors and pixels. The specific FRUC methods are beyond the scope of this discussion. FRUC methods include decoder-only and encoder-assisted forms. A decoder-only FRUC method performs pseudo-interpolation of the frame data without any side information. An encoder-assisted FRUC method uses side information transmitted in supplemental enhancement information (SEI) messages of the H.264 standard or in user-data messages of the MPEG-2 standard.
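
A minimal decoder-only FRUC sketch is shown below; a practical implementation would interpolate motion vectors and pixels, whereas plain pixel averaging between the previous and following frames is used here only as a placeholder under that assumption:

```c
/* Minimal decoder-only FRUC sketch: the missing frame is interpolated from the
 * previous and following decoded frames.  A real implementation would use
 * motion-vector and pixel interpolation; plain pixel averaging is shown here
 * only as a placeholder. */
#include <stdint.h>
#include <stdio.h>

#define W 4
#define H 4

static void fruc_interpolate(uint8_t prev[H][W], uint8_t next[H][W],
                             uint8_t out[H][W])
{
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            out[y][x] = (uint8_t)(((int)prev[y][x] + (int)next[y][x] + 1) / 2);
}

int main(void)
{
    uint8_t prev[H][W], next[H][W], missing[H][W];
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) { prev[y][x] = 100; next[y][x] = 120; }

    fruc_interpolate(prev, next, missing);               /* replaces the lost frame */
    printf("interpolated pixel (0,0) = %u\n", missing[0][0]);   /* 110 */
    return 0;
}
```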

FIG. 6 depicts a block diagram of example system components that may be part of a multimedia receiver, such as the receiver 24 shown in FIG. 1. The system 600 is a multilevel-integration system used for error recovery. FIG. 7 depicts a flowchart of an error-recovery process that can be used by the system 600. Referring to FIGS. 6 and 7, the process 700 receives one or more multimedia data streams at step 705. The data streams may include video data, audio data and/or closed-caption text information, among others. The received stream or streams may contain encoded data. The encoded data may be transformed data, quantized data, compressed data, or combinations thereof. Receiving means, such as the receiver 24 shown in FIG. 1A, may receive the one or more streams at step 705. In addition to multiple multimedia data streams being received, each received stream can contain multiple layers, such as a base layer and an enhancement layer. The receiver 24 may be a wired receiver, a wireless receiver, or a combination thereof.

Errors remaining in the bit stream are detected at step 710. The detected errors may comprise errors remaining after the lower-layer error correction and detection protocols, including some uncorrected errors that were introduced by the transmission channel, such as the wireless channel 26 or the wired channel 28 (FIG. 1A). As discussed above, not all errors are corrected, and any of the lower-layer protocols may mark the corrupted data and/or the groups of data that contain errors. The lower-layer protocols used for error detection at step 710 can be implemented at the communication layer, as discussed above. The communication layer can be one element of the set comprising the physical layer, the MAC layer (or stream layer) and the transport layer, or a combination of these elements. Detecting means, such as the error-detection block 605 shown in FIG. 6, may perform the error detection at step 710. The error-detection block 605 may use various detection schemes known to those skilled in the art, for example Reed-Solomon and/or turbo-code schemes, as discussed above. Error detection may be triggered by a failed cyclic redundancy check (CRC) during turbo decoding. Error detection may also be triggered by a failure of the Reed-Solomon decoder.

The detected errors can then be controlled (step 715) by several methods. The control step 715 may include a step of marking (or flagging) the corrupted data, as discussed above. The control step 715 may include a step of limiting the propagation of errors by identifying the groups of data, for example packets, blocks, portions, frames, macroblocks or sub-macroblocks, that contain the errors. The control step 715 can be based on a transport-layer protocol. Such a protocol can flag the errors remaining after decoding by means of one or more bits in the transport-layer header for use by the upper layers (FIGS. 1B and 1C). The upper layers can use the error indicators of the transport-layer header to further identify and/or bound an upper-layer packet consisting of one or more corrupted transport-layer packets, thereby further limiting the propagation of the error in the upper-layer bit streams. Controlling means, such as the error-control block 610 shown in FIG. 6, can perform the error-control tasks.

At step 720, the error distribution is determined. In one aspect, the error distribution is based on a synchronization-layer protocol. One or more of the bit streams received at step 705 can be parsed at the synchronization layer. If the synchronization layer receives, from one of the lower layers, for example from the communication layer, information marking the corrupted data, it can identify the corrupted portions of the bit streams. The availability of this information can give the synchronization-layer protocol the ability to plan error-concealment and/or error-recovery strategies for the upper layer (e.g., the application layer). Different strategies can be applied depending on the size of the corrupted data. Transport-layer packets, which may be marked as corrupted, can be combined into synchronization-layer packets, which are forwarded to the various application-layer components depending on which part of the bit stream they belong to. The transport-layer packets may have a constant length, while the synchronization-layer packets are of variable length. The synchronization layer can identify the error distribution by inserting data identifying which part of the variable-length synchronization packet contains the corrupted transport-layer packets, as sketched below. In addition to using the error information of step 715, the synchronization-layer protocol may include additional error-detection methods. These error-detection methods may include relying on a cyclic redundancy check (CRC) of the synchronization-layer packet. The determined error distribution may additionally be given to the application-layer components by inserting error markings into the parsed data packets. Determining means, such as the error-distribution determination block 615, can determine the error distribution (step 720).
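
A minimal sketch of this mapping is shown below, assuming fixed-length transport packets (the 122-byte length and the reporting format are illustrative assumptions):

```c
/* Minimal sketch of determining the error distribution at the sync layer:
 * transport-layer packets have a fixed length, so the flags of corrupted
 * transport packets map directly to byte ranges inside the variable-length
 * sync-layer packet handed to the application layer. */
#include <stdbool.h>
#include <stdio.h>

#define TRANSPORT_PKT_LEN 122   /* example fixed transport packet size, bytes */

static void report_error_distribution(const bool *pkt_damaged, int num_pkts)
{
    for (int i = 0; i < num_pkts; ++i)
        if (pkt_damaged[i])
            printf("bytes %d..%d of the sync packet are corrupted\n",
                   i * TRANSPORT_PKT_LEN, (i + 1) * TRANSPORT_PKT_LEN - 1);
}

int main(void)
{
    /* Five transport packets make up one sync-layer packet; the third failed. */
    bool damaged[5] = { false, false, true, false, false };
    report_error_distribution(damaged, 5);
    return 0;
}
```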

At step 725, error recovery may be performed on portions of the one or more encoded multimedia data streams. The error recovery may be based on application-layer protocols. The application-layer components can perform the error recovery, and can determine what type of error recovery to use on the basis of the information received from the synchronization layer, as discussed above. The error recovery may include one element of the set comprising temporal error concealment, spatial error concealment and frame-rate conversion (FRUC), or a combination of these elements, as discussed above, as well as other methods. The concealment of the errors detected in the one or more multimedia data streams may be based on an application-layer protocol. The error recovery can also support scalability of the one or more data streams. The scalability can include spatial scalability, temporal scalability, or both, as discussed above. Error-recovery means, such as the error-recovery component 40 shown in FIG. 2A, can perform the error recovery of step 725. Concealing means, such as the error-concealment block 620, may perform the error concealment. Scaling means, such as the scaling component 42 shown in FIG. 2A and the corresponding component shown in FIG. 6, can support the scalability when performing the error recovery at step 725.

The process 700 may include a step 730 of recovering the one or more bit streams. Step 730 may also include a step of combining the correctly received data with the concealed data. The recovery step may include a step of reducing the frame rate (a form of temporal scalability) when the errors in a frame exceed a threshold. The recovery step may include a decision not to decode, to conceal or to modify an enhancement layer (a form of SNR scalability). The application-layer protocols may be the basis for the recovery step 730. Recovering means, such as the data-stream recovery block 630, can perform the recovery at step 730. One or more elements may be added, rearranged or combined in the system 600. One or more elements may be added, rearranged or combined in the process 700.

Examples of the above-described methods and devices include the following.

A method of decoding multimedia data, comprising the steps of receiving a bit stream and performing error-control decoding of the received bit stream, where the error-control decoding step includes a step of identifying corrupted bits that were not corrected, passing the error-control-decoded bit stream containing the identified corrupted bits to a decoder, replacing at least one of the identified bits, and decoding the error-control-decoded bit stream containing the replaced bits. In one aspect, the method further includes a step of parsing the error-control-decoded bit stream into one or more symbols, and a step of parsing the information identifying the corrupted bits such that any symbols containing the identified corrupted bits are identified as corrupted. In another aspect, the method further includes a step of building an error table containing the parsed information identifying the corrupted bits, where the error table contains information characterizing the identified bits by their positions in a sequence of video frames. In another aspect, the method further includes a step of parsing the error-control-decoded bit stream into a first parsed bit stream and a second parsed bit stream, where the first parsed bit stream is a base-layer bit stream and the second parsed bit stream is an enhancement-layer bit stream.

A method of decoding multimedia data, comprising the steps of receiving a bit stream and performing error-control decoding of the received bit stream, where the error-control decoding step includes a step of identifying corrupted bits that were not corrected, passing the error-control-decoded bit stream containing the identified corrupted bits to a decoder, replacing at least one of the identified bits by performing frame-rate conversion between a first frame and a second frame, and decoding the error-control-decoded bit stream.

Those skilled in the art should understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, the data, instructions, commands, information, signals, bits, symbols and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

Those skilled in the art should also understand that the various illustrative logical blocks, modules and algorithm steps described in connection with the examples disclosed herein may be implemented as electronic hardware, firmware, computer software, middleware, microcode, or combinations thereof. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and on the design constraints imposed on the overall system. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed methods.

The various illustrative logical blocks, components, modules and circuits described in connection with the examples disclosed herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative the processor may be any conventional processor, controller, microcontroller or state machine. A processor may also be implemented as a combination of computing devices, for example a combination of a digital signal processor (DSP) and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

The steps of a method or algorithm described in connection with the examples disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable memory (EPROM), electrically erasable programmable memory (EEPROM), registers, a hard disk, a removable disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). The ASIC may reside in a wireless modem. In the alternative, the processor and the storage medium may reside as discrete components in a wireless modem.

The previous description of the disclosed examples is provided to enable any person skilled in the art to make or use the disclosed methods and apparatus. Various modifications to these examples will be readily apparent to those skilled in the art; moreover, the principles defined herein may be applied to other examples, and additional elements may be added.

Thus, methods and apparatus have been described for decoding streaming multimedia data in real time, using information that marks the damaged bits and invalid data in the application-layer decoder to perform intelligent error concealment and error correction of the corrupted data.
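
For illustration only, the following sketch traces how such marking information could move through the layers: a lower layer flags the symbols whose errors were not corrected, a synchronization-layer step summarizes the resulting error distribution for a frame, and the application-layer decoder selects a concealment strategy from that summary. The interfaces, strategy names, and one-half threshold are illustrative assumptions rather than the claimed implementation.

// Minimal sketch of a layered error-handling flow; all names, structures, and
// thresholds are assumptions made only for this example.
#include <cstdint>
#include <vector>

// Marking produced by error-control decoding at a lower (communication) layer:
// one flag per parsed symbol of a frame, true if the symbol contains damaged bits.
using SymbolFlags = std::vector<bool>;

struct ErrorDistribution {
    uint32_t frameIndex = 0;
    uint32_t damagedSymbols = 0;
    uint32_t totalSymbols = 0;
};

// Synchronization layer: summarize which portion of the frame's bit stream is corrupted.
ErrorDistribution determineDistribution(uint32_t frameIndex, const SymbolFlags& flags) {
    ErrorDistribution d;
    d.frameIndex = frameIndex;
    d.totalSymbols = static_cast<uint32_t>(flags.size());
    for (bool damaged : flags) {
        if (damaged) ++d.damagedSymbols;
    }
    return d;
}

enum class Strategy { DecodeNormally, SpatialConcealment, TemporalReplacement };

// Application layer: pick a concealment strategy from the distribution.
// The one-half threshold is a hypothetical policy chosen only for illustration.
Strategy chooseStrategy(const ErrorDistribution& d) {
    if (d.damagedSymbols == 0) return Strategy::DecodeNormally;
    if (2 * d.damagedSymbols > d.totalSymbols) return Strategy::TemporalReplacement;
    return Strategy::SpatialConcealment;
}

Under these assumptions, a frame with only a few damaged symbols would be concealed from neighbouring data, while a heavily damaged frame would instead be replaced using the kind of frame interpolation sketched above.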

1. A method of handling errors in multimedia data using multi-layer integration, comprising the steps of: detecting errors in the multimedia data at a first layer based on a protocol for the first layer; determining a distribution of the detected errors at a second layer based on a protocol for the second layer; and concealing the detected errors in the multimedia data at a third layer based on a protocol for the third layer and the determined error distribution.

2. The method according to claim 1, wherein the first layer is a communication layer.

3. The method according to claim 2, wherein the communication layer is one of, or a combination of, a physical layer, a MAC layer (or stream layer) and a transport layer.

4. The method according to claim 1, further comprising the step of controlling the detected errors at a transport layer based on a protocol for the transport layer.

5. The method according to claim 4, wherein the step of controlling the detected errors comprises the step of limiting the propagation of the errors.

6. The method according to claim 1, wherein the second layer comprises a synchronization layer, and wherein the error distribution identifies which portion of a bit stream is corrupted and provides one or more error concealment and error correction strategies to the third layer.

7. The method according to claim 1, wherein the third layer comprises an application layer.

8. The method according to claim 1, wherein the first layer comprises a link layer and a transport layer, the second layer comprises a synchronization layer, and the third layer comprises an application layer, the method further comprising the step of controlling the detected errors at the transport layer based on a protocol for the transport layer.

9. A device for handling errors in multimedia data using multi-layer integration, comprising: means for detecting errors in the multimedia data based on a protocol for a first layer; means for determining a distribution of the detected errors based on a protocol for a second layer; and means for concealing the detected errors in the multimedia data based on a protocol for a third layer and the determined error distribution.

10. The device according to claim 9, wherein the first layer is a communication layer.

11. The device according to claim 10, wherein the communication layer is one of, or a combination of, a physical layer, a MAC layer (or stream layer) and a transport layer.

12. The device according to claim 9, further comprising means for controlling the detected errors based on a protocol for a transport layer.

13. The device according to claim 12, wherein the means for controlling comprises means for limiting the propagation of the errors.

14. The device according to claim 9, wherein the second layer comprises a synchronization layer, and wherein the error distribution identifies which portion of a bit stream is corrupted and provides one or more error concealment and error correction strategies to the third layer.

15. The device according to claim 9, wherein the third layer comprises an application layer.

16. The device according to claim 9, wherein the first layer comprises a link layer and a transport layer, the second layer comprises a synchronization layer, and the third layer comprises an application layer, the device further comprising means for controlling the detected errors at the transport layer based on a protocol for the transport layer.

17. A device for handling errors in multimedia data using multi-layer integration, comprising: a detector at a first layer, which detects errors in the multimedia data based on a protocol for the first layer; an error distribution determiner at a second layer, which determines a distribution of the detected errors at the second layer based on a protocol for the second layer; and a concealment unit at a third layer, which conceals the detected errors in the multimedia data based on a protocol for the third layer and the determined error distribution.

18. The device according to claim 17, wherein the first layer is a communication layer.

19. The device according to claim 18, wherein the communication layer is one of, or a combination of, a physical layer, a MAC layer (or stream layer) and a transport layer.

20. The device according to claim 17, further comprising a control unit at a transport layer, which controls the detected errors based on a protocol for the transport layer.

21. The device according to claim 20, wherein the control unit limits the propagation of the errors.

22. The device according to claim 17, wherein the second layer comprises a synchronization layer, and wherein the error distribution identifies which portion of a bit stream is corrupted and provides one or more error concealment and error correction strategies to the third layer.

23. The device according to claim 17, wherein the third layer comprises an application layer.

24. The device according to claim 17, wherein the first layer comprises a link layer and a transport layer, the second layer comprises a synchronization layer, and the third layer comprises an application layer, the device further comprising a control unit at the transport layer for controlling the detected errors based on a protocol for the transport layer.

25. A processor for handling errors in multimedia data using multi-layer integration, configured to: detect errors in the multimedia data at a first layer based on a protocol for the first layer; determine a distribution of the detected errors at a second layer based on a protocol for the second layer; and conceal the detected errors in the multimedia data at a third layer based on a protocol for the third layer and the determined error distribution.

26. The processor according to claim 25, wherein the first layer is a communication layer.

27. The processor according to claim 26, wherein the communication layer is one of, or a combination of, a physical layer, a MAC layer (or stream layer) and a transport layer.

28. The processor according to claim 25, further configured to control the detected errors at a transport layer based on a protocol for the transport layer.

29. The processor according to claim 28, wherein controlling the detected errors comprises limiting the propagation of the errors.

30. The processor according to claim 25, wherein the second layer comprises a synchronization layer, and wherein the error distribution identifies which portion of a bit stream is corrupted and provides one or more error concealment and error correction strategies to the third layer.

31. The processor according to claim 25, wherein the third layer comprises an application layer.

32. The processor according to claim 25, wherein the first layer comprises a link layer and a transport layer, the second layer comprises a synchronization layer, and the third layer comprises an application layer, the processor being further configured to control the detected errors based on a protocol for the transport layer.

33. A computer-readable medium for performing error handling in multimedia data using multi-layer integration, embodying a method comprising the steps of: detecting errors in the multimedia data at a first layer based on a protocol for the first layer; determining a distribution of the detected errors at a second layer based on a protocol for the second layer; and concealing the detected errors in the multimedia data at a third layer based on a protocol for the third layer and the determined error distribution.

34. The medium according to claim 33, wherein the first layer is a communication layer.

35. The medium according to claim 34, wherein the communication layer is one of, or a combination of, a physical layer, a MAC layer (or stream layer) and a transport layer.

36. The medium according to claim 33, wherein the method further comprises the step of controlling the detected errors at a transport layer based on a protocol for the transport layer.

37. The medium according to claim 36, wherein the step of controlling the detected errors comprises the step of limiting the propagation of the errors.

38. The medium according to claim 33, wherein the second layer comprises a synchronization layer, and wherein the error distribution identifies which portion of a bit stream is corrupted and provides one or more error concealment and error correction strategies to the third layer.

39. The medium according to claim 33, wherein the third layer comprises an application layer.

40. The medium according to claim 33, wherein the first layer comprises a link layer and a transport layer, the second layer comprises a synchronization layer, and the third layer comprises an application layer, the medium further comprising instructions for controlling the detected errors based on a protocol for the transport layer.



 
