Data storage medium and data playback device (variants)

 

(57) Abstract:

A data storage medium and a data playback device are designed to store and synchronously reproduce multiplexed video data, audio data and superimposed dialogue data compressed at a variable rate, and to perform various functions. Data are reproduced from the data storage medium using sector numbers, and data are also reproduced from the sectors to which negative sector numbers have been assigned. Multiplexing information, indicating whether each of the video data, audio data and superimposed dialogue data is multiplexed, and positional information for the access points used for data search and random access, are stored at the positions represented by these negative sector numbers and are reproduced, so as to ensure simultaneous reproduction of the multiplexed video, audio and superimposed dialogue data compressed at a variable rate and to allow various functions to be performed. The technical result is to facilitate error correction, synchronization, search, stop and frame advance operations for the data and the storage medium, which can preferably be used, for example, when the storage medium holds a moving image in digital form.

A known data playback device that reproduces data from a disc used as a data storage medium holding a moving image in digital form is described, as a data playback device responsive to changes in the data transfer rate, in Japanese Patent Application No. 6-124168 of the present applicant (published May 6, 1994); its configuration is shown in Fig. 35. This data playback device reproduces the data stored on an optical disc 101 using a reading device 102. The reading device irradiates the optical disc with a laser beam and uses the beam reflected from the optical disc 101 to reproduce the data stored on it. The signals reproduced by the reading device are sent to a demodulator 103. The demodulator 103 demodulates the reproduced signals output by the optical reading device 102 and passes them to a sector detector 104.

The sector detector 104 detects the address loaded in each sector of the data transferred to it and passes the data on to the following stage while maintaining sector synchronization. The sector detector 104 outputs a sector number error signal to the track jump determination circuit 118, which is located downstream of the ring buffer control circuit 106, if it cannot determine the address or if the detected addresses are not continuous.

The ECC circuit 105 detects errors in the data received from the sector detector 104, uses the redundancy bits attached to the data to correct the errors, and then outputs the corrected data to the ring buffer memory (FIFO: first in, first out) 107 used for track jumping. If the ECC circuit 105 cannot correct an error in the data, it outputs an error occurrence signal to the track jump determination circuit 118.

The ring buffer control circuit 106 controls writing to and reading from the ring buffer memory 107 and monitors the code request signal output by the multiplexed data separation circuit 108.

The track jump determination circuit 118 monitors the output signal of the ring buffer control circuit 106 and issues a track jump signal to the tracking servo circuit 117 as required to change the playback position. The track jump determination circuit 118 also monitors the sector number error signal received from the sector detector 104 and the error occurrence signal received from the ECC circuit 105, and issues a track jump signal to the tracking servo circuit 117 to change the playback position of the reading device 102.

The output of the ring buffer memory 107 is transferred to the multiplexed data separation circuit 108. A header separation circuit 109 in the multiplexed data separation circuit 108 separates the pack headers and packet headers from the data supplied by the ring buffer memory 107 and forwards them to the separation control circuit 111, and sends the time-division multiplexed data to the input terminal G of the switching circuit 110. The switched terminals H1 and H2 of the switching circuit 110 are connected to the video code buffer 113 and to the audio code buffer 115, respectively. The output of the video code buffer 113 is connected to the input of the video decoder 114, and the output of the audio code buffer 115 is connected to the input of the audio decoder 116.

In addition, code request signals generated by the video decoder 114 are input to the video code buffer 113, and code request signals generated by the video code buffer 113 are input to the multiplexed data separation circuit 108; likewise, code request signals generated by the audio decoder 116 are input to the audio code buffer 115, and code request signals generated by the audio code buffer 115 are input to the multiplexed data separation circuit 108.

The operation of each component of the data playback device is described below. The reading device 102 irradiates the optical disc 101 with a laser beam and uses the light beam reflected from the optical disc to reproduce the data recorded there. The reproduced signals output by the reading device 102 are input to the demodulator 103 for demodulation. The data demodulated by the demodulator 103 are supplied, via the sector detector 104, to the ECC circuit 105 for error detection and correction.

A sector number error signal is issued to the track jump determination circuit 118 if the sector detector 104 cannot correctly determine the sector number (the address assigned to the sectors of the optical disc 101). The ECC circuit 105 outputs an error occurrence signal to the track jump determination circuit 118 if the data contain an uncorrectable error. The corrected data are transferred from the ECC circuit 105 to the ring buffer memory 107 for storage.

The ring buffer control circuit 106 reads the address of each sector from the output of the sector detector 104 and assigns a corresponding write address (write point, WP) for storing the data in the ring buffer memory 107. The ring buffer control circuit 106 also assigns a read address (read point, RP) for the data stored in the ring buffer memory 107 on the basis of the code request signal from the multiplexed data separation circuit 108, which follows the ring buffer control circuit. It then reads the data from the read position (RP) and transfers them to the multiplexed data separation circuit 108.

The header separation circuit 109 in the multiplexed data separation circuit 108 separates the pack headers and packet headers from the data received from the ring buffer memory 107 and transfers them to the separation control circuit 111. The separation control circuit 111 sequentially connects the input terminal G of the switching circuit 110 to the switched terminal H1 or H2 in accordance with the stream ID information in the packet header received from the header separation circuit 109, so as to correctly separate the time-division multiplexed data. The switching circuit then transfers the data to the appropriate code buffer 113 or 115.

The video code buffer 113 outputs a code request signal to the multiplexed data separation circuit 108 according to the free space in its internal code buffer, and then stores the received data. In addition, it receives code request signals from the video decoder 114 and outputs the data it holds. The video decoder 114 reproduces video signals from the received data and outputs them through the output terminal.

The audio code buffer 115 outputs a code request signal to the multiplexed data separation circuit 108 according to the free space in its internal code buffer, and then stores the received data. In addition, it receives code request signals from the audio decoder 116 and outputs the data it holds. The audio decoder 116 reproduces audio signals from the received data and outputs them through the output terminal.

Thus, the video decoder 114 requests data from the video code buffer 113, and the video code buffer 113 requests data from the multiplexed data separation circuit 108. Finally, the multiplexed data separation circuit 108 requests data from the ring buffer control circuit 106. Data are therefore transferred from the ring buffer memory 107 in the direction opposite to that of the requests.

For example, the amount of data read from the ring buffer memory 107 decreases as the amount of data consumed by the video decoder 114 per unit time decreases, which happens when a series of simple pictures is being processed. In this case the amount of data accumulated in the ring buffer memory 107 may increase, and overflow may result. The track jump determination circuit 118 therefore determines the amount of data currently stored in the ring buffer memory 107 and, if this amount exceeds a predetermined value, determines that the ring buffer memory may overflow and issues a track jump command to the tracking servo circuit 117.

If the track jump determination circuit 118 detects the sector number error signal received from the sector detector 104 or the error occurrence signal received from the ECC circuit 105, it refers to the write address (WP) and the read address (RP) to calculate the amount of data remaining in the ring buffer memory 107 and the amount of data that is guaranteed to be read from the ring buffer memory 107 into the multiplexed data separation circuit 108 during the time the optical disc 101 takes to make one revolution from the current track position (i.e. while waiting one revolution of the optical disc).
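
As an illustration of the track-jump decision just described, the following minimal Python sketch compares the data remaining in the ring buffer (derived from the write point WP and the read point RP) with the amount that must be delivered while the disc makes one revolution. The names and figures are illustrative; they are not taken from the patent.

```python
# Minimal sketch of the track-jump decision described above (illustrative names).
# A track jump back by one track is attempted only if the ring buffer still holds
# enough data to keep feeding the separation circuit during one disc revolution.

def should_jump_track(write_point: int, read_point: int, buffer_size: int,
                      drain_rate_bytes_per_s: float, revolution_time_s: float) -> bool:
    """Return True if a track jump (re-read) can be attempted without underflow."""
    # Amount of data currently remaining in the ring buffer (WP - RP, modulo size).
    remaining = (write_point - read_point) % buffer_size
    # Data that must still be delivered while the disc revolves back to the error position.
    needed_during_revolution = drain_rate_bytes_per_s * revolution_time_s
    return remaining > needed_during_revolution

# Example: 2 MB still buffered, decoder drains ~1.2 MB/s, one revolution takes 0.2 s.
print(should_jump_track(write_point=3_500_000, read_point=1_500_000,
                        buffer_size=4_000_000,
                        drain_rate_bytes_per_s=1_200_000, revolution_time_s=0.2))  # True
```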

If a large amount of data remains in the ring buffer memory 107, the track jump determination circuit 118 determines that the error can be recovered by making the reading device 102 repeat the reproduction of the data from the state in which the error was detected, and issues a track jump command to the tracking servo circuit 117, since this hardly interrupts the flow of information.

When the track jump command is output from the track jump determination circuit 118, the tracking servo circuit 117, for example, makes the reading device 102 jump from position A to position B, one track toward the inner circumference from position A, as shown in Fig. 36. The ring buffer control circuit 106 then prevents new data from being written into the ring buffer memory 107, while the data already loaded in the ring buffer memory 107 continue to be transferred to the multiplexed data separation circuit 108 as required until the optical disc rotates from position B back to position A, that is, until the sector number obtained from the sector detector 104 becomes equal to the number obtained before the track jump.

Even when the sector number obtained from the sector detector 104 becomes equal to the number obtained before the track jump, writing into the ring buffer memory 107 is not resumed and another track jump is performed if the amount of data loaded in the ring buffer memory 107 still exceeds the predetermined value, i.e. if the ring buffer memory 107 could still overflow. Thus the data playback device can use the ring buffer memory 107 to absorb variations in the data playback rate. Such variations can be significantly greater during synchronous reproduction of multiplexed video, audio and superimposed dialogue data compressed at different rates in accordance with ISO 11172 (MPEG1) or ISO 13818 (MPEG2), and during error correction, synchronization, search, stop or frame advance operations in case of errors.

In view of the above, the present invention provides a data playback device which synchronously reproduces video, audio and superimposed dialogue data compressed at different rates and performs various functions, and also provides a data storage medium associated with that device.

To achieve these objectives, the present invention assigns negative sector numbers to some of the sectors of the data storage medium on which the data are stored and reproduced in sectors.

The data playback device for reproducing data in accordance with the present invention reproduces data from a data storage medium on which the data are stored in sectors, using the sector numbers and the negative sector numbers assigned to some of the sectors.

The present invention stores, at predetermined positions of a data storage medium holding multiplexed data, multiplexing information that indicates whether these data have been multiplexed.

The present invention reads, from predetermined positions of a data storage medium holding multiplexed data that include video data, audio data, superimposed dialogue data and/or other data loaded onto it, the multiplexing information indicating whether the data are multiplexed or not.

In addition, the present invention allows positional information for access points, which can be used for data search and for random access, to be stored at predetermined positions of a storage medium holding multiplexed data including video data, audio data, superimposed dialogue data and/or other data stored on it.

In addition, the present invention stores positional information for access points, which can be used for data search and for random access, at predetermined positions of a storage medium holding multiplexed data that include video data, audio data, superimposed dialogue data and/or other data loaded onto it.

In addition, the present invention reads the positional information for access points, which can be used for data search and for random access, from predetermined positions of a data storage medium holding multiplexed data that contain video data, audio data, superimposed dialogue data and/or other data loaded onto it.

The data playback device in accordance with the present invention, which has error correction means, a ring buffer, a video code buffer, an audio code buffer and/or a superimposed dialogue code buffer, checks the working memory located in one or more of the above-mentioned means at power-on or at any other time.

The data playback device in accordance with the present invention, which uses two types of error correction codes associated with different interleaving directions to correct errors in the reproduced data, has error correction means capable of changing the number of error correction repetitions.

The data playback device in accordance with the present invention, which re-reads data from the position at which an error was encountered when the error cannot be corrected, automatically changes the number of repeated readings depending on the operating conditions or on the type of data to be reproduced; a device that can change both the number of error correction repetitions and the number of data re-readings automatically changes the number of repetitions and the sequence of error correction and data re-reading depending on the operating conditions or the type of data to be reproduced.

The data playback device in accordance with the present invention, which contains a buffer for variable-rate reading or a buffer for re-reading data when reading of data from the storage medium fails, stores in that buffer memory information about the contents of the data on the data storage medium.

The data playback device in accordance with the present invention, which contains a buffer for variable-rate reading or a buffer for re-reading data when reading of data from the data storage medium fails, stores in that buffer memory the positional information for access points that can be used for searching and for random access.

The data playback device in accordance with the present invention, for reproducing data from a data storage medium holding video, audio or superimposed dialogue data or other data, automatically reproduces all or part of the video, audio and/or superimposed dialogue data loaded onto the storage medium when the device is switched on or, in the case of a removable medium, when the storage medium is installed.

The storage medium in accordance with the present invention, holding video, audio or superimposed dialogue data or other data loaded onto it, has recorded in certain positions the data that are reproduced when all or part of the loaded video, audio and/or superimposed dialogue data is played automatically upon switching on the device or, in the case of a removable medium, upon installation of the storage medium.

The data playback device for reproducing data from a data storage medium holding video, audio or superimposed dialogue data or other data loaded onto it automatically reproduces all or part of the video, audio and/or superimposed dialogue data loaded on the storage medium when the reproduction of part or all of the loaded video, audio and/or superimposed dialogue data is completed, interrupted or suspended.

The storage medium in accordance with the present invention, holding video, audio or superimposed dialogue data or other data loaded onto it, holds the video, audio and/or superimposed dialogue data that are automatically reproduced when the reproduction of part or all of the loaded video, audio and/or superimposed dialogue data is completed, interrupted or suspended.

The data playback device in accordance with the present invention, for reproducing multiplexed data containing video, audio and/or superimposed dialogue data, has multiplexing determination means for determining whether video, audio and superimposed dialogue data are multiplexed into the multiplexed data.

In the data playback device in accordance with the present invention, which includes an audio code buffer and/or a superimposed dialogue code buffer, the buffer memory stores the video, audio and/or superimposed dialogue data together with information about the decoding start time of the stored audio or superimposed dialogue data.

This invention has a reference oscillator for detecting errors in the playback timing or decoding timing of the video, audio and superimposed dialogue data, and for measuring the magnitude of such errors.

Furthermore, if a comparison of the decoding start time of the video data with the decoding start time of the audio data shows that the latter should come earlier, the invention skips the audio data instead of decoding it, or clears all or part of the buffer storing the audio data, so that audio data whose decoding start time precedes the decoding start time of the video data are discarded and decoding of the audio data can begin together with decoding of the video data.
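
The following sketch illustrates, under assumed names and units, how audio data whose decoding start time (DTSA) precedes the video decoding start time (DTSV) could be skipped or flushed so that both decoders start together; it is not the patent's literal procedure.

```python
# Illustrative sketch: audio access units with a decoding time stamp earlier than the
# first video decoding time stamp are skipped (or flushed from the buffer).

def align_audio_to_video(audio_units, dtsv: int):
    """audio_units: list of (dtsa, payload) tuples in decoding order."""
    aligned = []
    for dtsa, payload in audio_units:
        if dtsa < dtsv:
            continue          # skip (or flush) audio that would start too early
        aligned.append((dtsa, payload))
    return aligned

units = [(900, b"a0"), (1800, b"a1"), (2700, b"a2")]
print(align_audio_to_video(units, dtsv=1800))   # keeps the units from 1800 onward
```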

The present invention also starts the playback of video, audio or superimposed dialogue data in synchronism with the vertical synchronization signal.

The data playback device in accordance with the present invention, which can reproduce multiplexed data containing video data and audio data, starts playback of the audio data synchronously with the video data already being reproduced and starts the reference oscillator when audio data is detected for the first time after reproduction of the video data has already begun.

The data playback device in accordance with the present invention, which can reproduce multiplexed data containing video data and audio data, starts playback of the video data synchronously with the audio data already being reproduced and starts the reference oscillator when video data is detected for the first time after playback of the audio data has begun.

The data playback device in accordance with the present invention, which can reproduce multiplexed data containing video data and superimposed dialogue data, starts playback of the video data synchronously with the superimposed dialogue data being reproduced and starts the reference oscillator when video data is detected for the first time after playback of the superimposed dialogue data has begun.

The data playback device in accordance with the present invention, which can reproduce multiplexed data containing audio data and superimposed dialogue data, starts playback of the audio data synchronously with the superimposed dialogue data being reproduced and starts the reference oscillator when audio data is detected for the first time after playback of the superimposed dialogue data has begun.

The storage medium in accordance with the present invention stores multiplexed data containing video data in accordance with ISO 11172 (MPEG1) or ISO 13818 (MPEG2), or multiplexed data including such video data, together with the information required for decoding. The data playback device for video playback in accordance with ISO 11172 (MPEG1) or ISO 13818 (MPEG2), having means for detecting picture headers and picture types, performs quick preview playback by reproducing I-pictures and P-pictures without reproducing B-pictures.

The data playback device in accordance with the present invention for reproducing video and audio data, which contains error correction means for correcting errors in the data read from the storage medium, temporarily suspends the image output, reduces the screen brightness, displays a blue screen or a screen of another colour, pauses the audio output or reduces the audio output level when data in which an error was detected but could not be corrected by the error correction means are reproduced.

The data playback device containing means for correcting errors in data read from the storage medium and a mechanism for counting the number of cases in which errors could not be corrected ignores the data that should be reproduced, or stops playback, depending on the number or frequency of the errors encountered during operation. The data playback device for video playback in accordance with ISO 11172 (MPEG1) or ISO 13818 (MPEG2), having means for detecting picture headers and picture types, carries out a search operation by repeated track jumps, so that search operations can be performed both forward and in reverse, when a P- or B-picture is detected immediately after a single I-picture has been selected and reproduced.
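
A minimal sketch of the quick-preview idea follows: only I- and P-pictures are passed on for decoding while B-pictures are dropped. Picture-type detection is assumed to be provided by the decoder's header parser; the class and field names are illustrative.

```python
# Hedged sketch of quick-preview playback: reproduce only I- and P-pictures,
# skip B-pictures entirely.

from dataclasses import dataclass

@dataclass
class Picture:
    pic_type: str   # 'I', 'P' or 'B', as reported by the picture-header detector
    data: bytes

def quick_preview(pictures):
    """Yield only the pictures that are decoded during quick-preview playback."""
    for pic in pictures:
        if pic.pic_type in ('I', 'P'):
            yield pic          # decode and display
        # B-pictures are skipped

stream = [Picture('I', b''), Picture('B', b''), Picture('B', b''), Picture('P', b'')]
print([p.pic_type for p in quick_preview(stream)])   # ['I', 'P']
```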

The data playback device in accordance with the present invention for reproducing multiplexed data containing video, audio and/or superimposed dialogue data suspends the loading of the audio data and/or superimposed dialogue data into the code buffers, periodically clears the code buffers, or issues a command to search for the information about the decoding start time of the superimposed dialogue data during a video data search or frame advance operation.

The data are reproduced in accordance with the sector numbers and with the negative sector numbers assigned to some of the sectors. Multiplexed data containing video, audio and superimposed dialogue data compressed at different rates can be reproduced synchronously, and various functions can be performed, by storing, at the positions represented by these negative sector numbers, the multiplexing information and the positional information for the access points used for data search and random access, and by reproducing this information.

The invention is illustrated by reference to the accompanying drawings, in which:

Fig. 1 is a block diagram illustrating the configuration of a data playback device in accordance with the present invention;

Fig. 2 is a schematic drawing describing the format of the sectors reproduced by the data playback device;

Fig. 3 is a schematic drawing describing the configuration of the DSM from which the data playback device reproduces data;

Fig. 4 is a schematic drawing describing the configuration of a DSM, other than the DSM of Fig. 3, from which the data playback device reproduces data;

Fig. 5 is a schematic drawing describing the structure of the TOC data in the DSM;

Fig. 6 is a schematic drawing describing a structure of the TOC data in the DSM other than the TOC data of Fig. 5;

Fig. 7A - 7D are schematic drawings describing the structure of the multiplexed bit stream input to the demultiplexer and the structure of the bit streams output to each buffer;

Fig. 9 is a schematic drawing describing the structure of the video, audio and superimposed dialogue headers in the bit streams of Fig. 7A - 7D;

Fig. 10 is a schematic drawing describing the subcode data format;

Fig. 11 is a diagram describing the state transitions of the controller, for explaining the operation of the data playback device;

Fig. 12 is a block diagram showing the configuration of the error correction means 3;

Fig. 13 is a flow chart illustrating the operation of the controller 16 in its initialization state;

Fig. 14 is a flow chart illustrating the operation of the controller 16 in its TOC reading state;

Fig. 15 is a flow chart illustrating the operation of the controller 16 in its stop state;

Fig. 16 is a flow chart illustrating the operation of the controller 16 in its playback ready state;

Fig. 17 is a flow chart illustrating the operation of the controller 16 in its synchronized start method determination state;

Fig. 18 and Fig. 19 are flow charts illustrating the processing performed by the controller 16 in its audio-and-video synchronized start state;

Fig. 20 is a flow chart describing the operation of the controller 16 in its video-only synchronized start state;

Fig. 21 is a flow chart describing the processing of the controller 16 in its video-only synchronized start state;

Fig. 22 is a flow chart describing the processing performed by the controller 16 in its superimposed-dialogue-only synchronized start state;

Fig. 23 is a flow chart describing the operation of the controller 16 when an error is detected in the synchronization of the video part;

Fig. 24 is a flow chart describing the processing performed by the controller 16 to detect errors in the synchronization of the audio part;

Fig. 25 is a flow chart describing other processing performed by the controller 16 to determine an error in the synchronization of the video part;

Fig. 26 is a flow chart describing the processing performed by the controller 16 to correct an error in the synchronization of the video part;

Fig. 27 is a flow chart describing the processing performed by the controller 16 to correct an error in the synchronization of the audio part;

Fig. 28 is a flow chart describing the operation of the controller 16 in determining an error;

Fig. 29 is a flow chart describing other processing performed by the controller 16 to determine errors;

Fig. 30 is a flow chart describing other processing performed by the controller 16 to determine errors;

Fig. 31 is a flow chart describing the processing of superimposed dialogue by the controller 16;

Fig. 32 is a flow chart describing the operation of the controller 16 in its search state;

Fig. 33 is a flow chart describing the operation of the controller 16 in its stop state;

Fig. 34 is a flow chart describing the operation of the controller 16 in its frame advance state;

Fig. 35 is a block diagram describing the configuration of a known data playback device; and

Fig. 36 is a schematic drawing describing the tracks in the data playback device of Fig. 35.

(1) Configuration of the data playback device.

Fig. 1 shows, in general terms, the data playback device in accordance with the present invention, in which the data storage medium (DSM) 1 consists of an optical disc, removable from the drive unit 2, that holds digital data such as video, audio and superimposed dialogue data and table of contents (TOC) information. However, DSM 1 may also be a removable or non-removable optical memory medium, magnetic memory medium, magneto-optical medium, semiconductor memory element, or any other storage medium for digital data.

The drive unit 2 has a mechanical part for mechanically loading and unloading DSM 1 and a driver for driving the reading mechanism, which contains an optical head for reading the reproduced signals from DSM 1. The reading device corresponds to DSM 1 and may be a magnetic or optical head. The reading device acts as an address pointer if DSM 1 is a semiconductor element. The drive unit 2 also has a demodulator that demodulates the reproduced signals to obtain the subcode data, the multiplexed data, the error correction data (C1) and the error correction data (C2), and sends them to the error correction means 3.

The error correction means 3 receives the subcode data, multiplexed data, error correction data (C1) and error correction data (C2) sent from the drive unit 2 in the format shown in Fig. 2 and uses the error correction data to detect and correct errors. It also analyzes the error-corrected subcode data to obtain the sector number data. In addition, it adds the sector number data and an error flag derived from the subcode data to the error-corrected multiplexed data and sends the multiplexed data to the ring buffer 4 in the format shown in Fig. 7A.

Fig. 12 shows the configuration of the error correction means 3. A RAM 30 stores the data supplied by the drive unit 2. A switch 31 connects the data read from the RAM either to the error correction circuit 32 or to the data adding circuit 34. The error correction circuit 32 uses the error correction data (C1) and the error correction data (C2) for error correction. The data adding circuit 34 adds the sector number data and the error flag supplied by the controller 33 to the multiplexed data read from the RAM 30. The controller 33 controls the RAM addresses and the switch 31 and analyzes the subcode data. When the TOC is read, as described later, error correction can be repeated on the same data a number of times.

To the data, which consist of eight-bit units of multiplexed data to each of which one bit is added as required, an error flag "0" is added if the data contain no errors or if the errors in the data have been completely corrected, and an error flag "1" is added if an error could not be corrected. The error correction means 3 sends the subcode data to the subcode decoder 21 only when the data contain no errors or when the errors have been completely corrected.
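
A minimal sketch of this error-flag convention, with an illustrative pairing of data units and flags, is given below.

```python
# Minimal sketch of the error-flag convention described above: one flag accompanies
# each eight-bit unit of multiplexed data, 0 when the unit is error-free or fully
# corrected and 1 when correction failed. The pairing shown here is illustrative.

def tag_units(data: bytes, corrected_ok: list) -> list:
    """Return (byte, flag) pairs; corrected_ok is a parallel list of booleans."""
    return [(b, 0 if ok else 1) for b, ok in zip(data, corrected_ok)]

print(tag_units(b"\x12\x34", [True, False]))   # [(18, 0), (52, 1)]
```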

The subcode decoder 21 decodes the subcode data received from the error correction means 3 and transmits the decoded data to the controller 16.

The ring buffer 4 contains an internal FIFO memory and temporarily stores the multiplexed data, the sector number data and the error flag sent from the error correction means 3 in the format shown in Fig. 7A, and outputs the multiplexed data, the sector number data and the error flag in the format shown in Fig. 7A in response to the read pointer specified by the ring buffer control circuit 26.

The ring buffer 4 can operate in several modes: all the data sent from the error correction means 3 are stored unconditionally; only a limited amount of data starting from a start point determined by the controller is loaded into the buffer; only a limited amount of data up to an end point determined by the controller is loaded into the buffer; or only a limited amount of data lying in the interval between the sector number of a start point determined by the controller 16 and the sector number of an end point determined by the controller 16 is loaded into the buffer. These modes are switched by the ring buffer control circuit 26.

If a start and/or end point has been defined by the controller 16, the ring buffer control circuit 26 informs the controller 16 when the data at the defined start or end point arrive in the buffer. It also accepts a command to load the TOC data received from the error correction means 3 into a specific area for TOC data in the buffer memory, detects the end of loading and reports it to the controller 16. The ring buffer control circuit 26 transmits the TOC data loaded and stored in the ring buffer 4 in response to a request from the controller 16. In addition, in the same way as the ring buffer control circuit 106 and the track jump determination circuit 118 shown in Fig. 35, the ring buffer control circuit 26 controls track switching appropriately.
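
The following sketch illustrates the storage modes of the ring buffer described above; the mode names and the function signature are assumptions made for the example.

```python
# Sketch of the ring buffer storage modes: store everything, store from a start
# sector, store up to an end sector, or store only the window between the two.

def should_store(sector_number: int, mode: str,
                 start: int | None = None, end: int | None = None) -> bool:
    if mode == "all":
        return True
    if mode == "from_start":
        return sector_number >= start
    if mode == "until_end":
        return sector_number <= end
    if mode == "window":
        return start <= sector_number <= end
    raise ValueError(mode)

print(should_store(1_500, "window", start=1_024, end=2_047))   # True
```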

The demultiplexer 5 separates the data received from the ring buffer 4 in the format shown in Fig. 7A into a video bit stream, an audio bit stream and a superimposed dialogue bit stream, and transmits the video headers and video data, the audio headers and audio data, and the superimposed dialogue headers and superimposed dialogue data to the video code buffer 6, the audio code buffer 9 and the superimposed dialogue code buffer 12, as shown in Fig. 7B, 7C and 7D, respectively.

The demultiplexer 5 also sends the error flag corresponding to each unit of video, audio or superimposed dialogue data to the video code buffer 6, the audio code buffer 9 or the superimposed dialogue code buffer 12, respectively. However, it stops issuing code requests to the ring buffer control circuit 26 and stops sending data to the video code buffer 6, the audio code buffer 9 and the superimposed dialogue code buffer 12 if it receives a signal indicating that the video code buffer 6, the audio code buffer 9 or the superimposed dialogue code buffer 12 is full.

The demultiplexer 5 furthermore detects the sector number data, the system clock reference (SCR) of the system master oscillator loaded in the system header, the video decoding time stamp (DTSV) loaded in the video header, which specifies the time at which decoding of the video data is to begin, the audio decoding time stamp (DTSA) loaded in the audio header, which specifies the time at which decoding of the audio data is to begin, and the superimposed dialogue decoding time stamp (DTSS) loaded in the superimposed dialogue header, which specifies the time at which decoding of the superimposed dialogue data is to begin, and sends a signal to the controller 16 indicating that the sector number data, SCR, DTSV, DTSA or DTSS has been detected. It also stores the detected sector number data, SCR, DTSV, DTSA and DTSS and transmits their contents to the controller 16 upon request from the controller 16.

If, while checking the continuity of the sector numbers obtained from the ring buffer 4, the demultiplexer 5 finds sector number information that is not continuous, it fills the gap between sectors with blank data of one or more bytes carrying the error flag, and transmits these data to all of the video code buffer 6, the audio code buffer 9 and the superimposed dialogue code buffer 12 in order to inform them of the loss of data at this position or of the boundary of a discontinuous sector created by a search operation.
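
The sketch below illustrates this demultiplexing and gap-filling behaviour under assumed packet and stream-ID representations; it does not reproduce the actual MPEG system stream syntax.

```python
# Sketch: packets are routed to the video, audio or superimposed-dialogue code buffer
# by stream ID, and a gap in the sector numbering is replaced by error-flagged blank
# data sent to every buffer so each one learns about the discontinuity.

def demultiplex(packets, video_buf, audio_buf, subtitle_buf):
    """packets: iterable of dicts with 'sector', 'stream' and 'payload' keys."""
    route = {"video": video_buf, "audio": audio_buf, "subtitle": subtitle_buf}
    last_sector = None
    for pkt in packets:
        if last_sector is not None and pkt["sector"] != last_sector + 1:
            blank = {"payload": b"\x00", "error_flag": 1}   # mark the discontinuity
            for buf in route.values():
                buf.append(blank)
        route[pkt["stream"]].append({"payload": pkt["payload"], "error_flag": 0})
        last_sector = pkt["sector"]

v, a, s = [], [], []
demultiplex([{"sector": 10, "stream": "video", "payload": b"v"},
             {"sector": 12, "stream": "audio", "payload": b"a"}], v, a, s)
print(len(v), len(a), len(s))   # 2 2 1
```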

The video code buffer 6 has an internal FIFO memory and stores the video headers and video data sent from the demultiplexer 5 for transmission to the DTSV detector 7 at the request of the video decoder 8. In addition, it outputs a signal indicating overflow or underflow of the video code buffer to the demultiplexer 5 and the controller 16 if its buffer memory becomes full or runs empty.

The DTSV detector 7 passes only the video data, out of the video headers and video data sent from the video code buffer 6, to the video decoder 8. In addition, it detects DTSV in the video header, signals the controller 16 that a DTSV has been detected, and stores the detected DTSV in its internal register so that it can report the stored value to the controller 16 at the request of the controller 16.

The video decoder 8 includes an MPEG decoder in accordance with ISO 11172 (MPEG1) or ISO 13818 (MPEG2) and decodes the video data transmitted from the DTSV detector 7, sending the results to the post-processor 15. During decoding it can suspend decoding, resume decoding, search for I-picture headers and report the detection of an I-picture header to the controller 16. The MPEG decoder can detect a picture header and determine the picture type, that is, whether the header belongs to an I-, P- or B-picture, and it reports the detection of the header and the picture type to the controller 16. The video decoder 8 temporarily replaces the video data obtained by decoding with a black or blue screen when it finds that the compressed data contain a syntactically incorrect description or when it attempts to decode data carrying the error flag.

The audio code buffer 9 has an internal FIFO memory and stores the audio headers and audio data sent from the demultiplexer 5 for transmission to the DTSA detector 10 at the request of the audio decoder 11. In addition, it outputs a signal indicating overflow or underflow of the audio code buffer to the demultiplexer 5 and the controller 16 if its buffer memory becomes full or runs empty.

Like the DTSV detector 7, the DTSA detector 10 passes only the audio data, out of the audio headers and audio data sent from the audio code buffer 9, to the audio decoder 11. In addition, it detects DTSA in the audio header and transmits a signal to the controller 16 and the audio decoder 11 indicating that a DTSA has been detected. It also stores the detected DTSA in its internal register so that it can report the stored value to the controller 16 at the request of the controller 16.

The audio decoder 11 decodes compressed or uncompressed audio data transmitted from the DTSA detector 10 and transfers the results to the audio output terminal. It can also skip audio data for a specified length of time instead of decoding it, or output audio data for a specified length of time. For example, the specified times are divided into four units: 1 s, 100 ms, 10 ms and 1 ms, as well as the minimum compression unit for compressed data. The audio decoder 11 stops decoding upon receiving the signal from the DTSA detector 10 indicating that a DTSA has been detected. In addition, it has a half-volume function for temporarily reducing the volume of the decoded audio output by a certain amount and a mute function for turning the volume off completely.

The superimposed dialogue code buffer 12 has an internal FIFO memory and stores the superimposed dialogue headers and superimposed dialogue data received from the demultiplexer 5 for transmission to the DTSS detector 13. In addition, it outputs a signal indicating overflow or underflow of the superimposed dialogue code buffer to the demultiplexer 5 and the controller 16 if its buffer memory becomes full or runs empty.

The DTSS detector 13 passes only the superimposed dialogue data, out of the superimposed dialogue headers and superimposed dialogue data received from the superimposed dialogue code buffer 12, to the superimposed dialogue decoder 14. In addition, it detects the DTSS and the display duration of the superimposed dialogue data, signals the controller 16 that they have been detected, and stores the detected DTSS and display duration in its internal register so that it can report the stored values to the controller 16 at the request of the controller 16.

During a DTSS search, upon detecting a DTSS, the DTSS detector outputs a signal to the superimposed dialogue decoder 14 and the controller 16 indicating that a DTSS has been detected. The superimposed dialogue decoder 14 decodes the superimposed dialogue data received from the DTSS detector 13 and outputs the results to the post-processor 15.

During decoding, the superimposed dialogue decoder 14 can suspend decoding, resume decoding and suspend the output of the decoding results. During a DTSS search it skips the superimposed dialogue data instead of decoding it, until it receives the DTSS detection signal from the DTSS detector 13.

The post-processor 15 generates a video signal for displaying information showing the current state of the data playback device in response to a command from the controller 16, and synthesizes the video signal received from the video decoder 8, the video signal received from the superimposed dialogue decoder 14 and the video signal generated to display the current state, for output.

The controller 16 can receive information from each section, issue signals to them, and control the operation of the entire data playback device shown in Fig. 1. The external interface 17 receives commands from computer equipment or editing equipment for transmission to the controller 16. The user input means 18 receives information entered by the user from a keyboard or a remote control device and sends the corresponding commands to the controller 16.

The information display means 19 displays information showing the current state of the playback device in response to a command from the controller 16, using, for example, lamps or a liquid crystal display. The vertical sync signal generator 22 generates vertical synchronization signals for transmission to the video decoder 8, the superimposed dialogue decoder 14, the post-processor 15 and the controller 16.

The STC register 23 is incremented in response to the signal from the STC counter circuit 24 and serves as the reference clock for synchronous playback of the video, audio and superimposed dialogue data. The controller 16 can set arbitrary values in the STC register 23. In this embodiment the STC register 23 is independent of the controller 16, but in another embodiment it may be implemented in the controller 16 as software.

The STC counter circuit 24 generates signals, such as pulse signals of a certain frequency, and outputs them to the STC register 23. In addition, it suspends the output of its signal to the STC register 23 in response to a command from the controller 16. The STC counter circuit 24 and the STC register 23 act as an internal STC master oscillator. Like the STC register 23, the STC counter circuit 24 in this embodiment is independent of the controller 16, but in another embodiment it may be implemented as a software counting-signal generator.
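
As a software variant of the STC mentioned above, the following sketch models a register that a counting source increments at a fixed frequency, that the controller can preset to an arbitrary value, and that can be paused. The 90 kHz frequency is an assumption borrowed from MPEG system clocks, not a value stated in the text.

```python
# Hedged sketch of a software STC (system time clock); names are illustrative.

class SoftwareSTC:
    def __init__(self, frequency_hz: int = 90_000):   # 90 kHz is an assumed frequency
        self.frequency_hz = frequency_hz
        self.value = 0
        self.running = True

    def set(self, value: int):           # the controller presets the register
        self.value = value

    def pause(self):                     # the controller suspends counting
        self.running = False

    def resume(self):
        self.running = True

    def tick(self, elapsed_s: float):    # called by the counting signal source
        if self.running:
            self.value += int(elapsed_s * self.frequency_hz)

stc = SoftwareSTC()
stc.set(1_000)
stc.tick(0.5)
print(stc.value)   # 46000 at a 90 kHz clock
```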

(2) Configuration of the DSM.

In DSM 1 all data are loaded in sectors, and the starting position for reading data from DSM 1 is determined by the controller 16 by specifying a sector number. After the starting position has been specified, subsequent sectors are read continuously until the controller 16 specifies a new position. For example, when sector 100 is specified as the starting point, the sectors are read in the sequence 100, 101, 102, 103, ... until a new reading position is specified.

As shown in Fig. 2, each sector contains 6,208 bytes and holds four types of data: subcode data, multiplexed data, error correction data (C1) and error correction data (C2). Of these four types of data, the multiplexed data are the data to be reproduced, while the other three types, i.e. the subcode data, the error correction data (C1) and the error correction data (C2), are auxiliary data added to improve the multiplexing efficiency and the reliability of reproduction.

As shown in Fig. 10, the subcode data contain sector number information, time code information, a subcode content ID and a playback prohibition flag. The sector number information contains the sector number; the time code information contains information about the time at which the sector is reproduced; the content ID contains information showing the contents of the subcode data (for example, "01" if the data contain a playback prohibition flag); and the playback prohibition flag contains a flag (for example, "FF") indicating whether the sector lies in a lead-in area, a lead-out area, or an area storing data that is not to be reproduced, such as TOC data. The remaining 59 bytes are reserved, and other information can be loaded into these bytes as subcode data. The multiplexed data contain the multiplexed video, audio and superimposed dialogue data to be reproduced, as well as other data such as computer programs. The error correction data are used to correct errors in the subcode data and the multiplexed data, as well as in the error correction data themselves. Since the error correction data C1 and the error correction data C2 have different interleaving directions, repeating the C1 and C2 corrections improves the error correction capability.
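
The sketch below models the sector and subcode layout described above. The 6,208-byte sector size and the subcode fields come from the text; the individual field widths and byte order are assumptions made only for this example.

```python
# Illustrative model of a sector's subcode fields (field widths are assumed).

from dataclasses import dataclass

SECTOR_SIZE = 6_208   # bytes: subcode + multiplexed data + C1 + C2

@dataclass
class Subcode:
    sector_number: int
    time_code: bytes
    content_id: int          # e.g. 0x01 when a playback-prohibition flag is present
    prohibit_playback: bool  # e.g. 0xFF marks lead-in/lead-out/TOC sectors

def parse_subcode(raw: bytes) -> Subcode:
    """Assumed layout: 4-byte signed sector number, 4-byte time code, 1-byte ID, 1-byte flag."""
    return Subcode(
        sector_number=int.from_bytes(raw[0:4], "big", signed=True),  # negative numbers allowed
        time_code=raw[4:8],
        content_id=raw[8],
        prohibit_playback=raw[9] == 0xFF,
    )

raw = (-3000).to_bytes(4, "big", signed=True) + b"\x00\x00\x01\x00" + b"\x01" + b"\xff" + bytes(59)
print(parse_subcode(raw))
```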

Fig. 3 shows the type of data loaded into the multiplexed data area of each sector, with the data classified by sector number. The data stored in the multiplexed data area essentially consist of the video, audio and superimposed dialogue data multiplexed there, but in exceptional cases they contain specific data, such as the TOC data, loaded in sectors -3,000 to 1,023. The video, audio and superimposed dialogue data to be reproduced are loaded in sector 1,024 and the subsequent sectors.

An area called the TOC area occupies sectors -3,000 to -1 of DSM 1. The TOC area contains the TOC data, i.e. information about the contents of DSM 1. As shown in Fig. 3, the same TOC data are loaded into three areas, i.e. sectors -3,000 to -2,001, sectors -2,000 to -1,001 and sectors -1,000 to -1, to better guard against errors. Consequently, the size of the TOC data cannot exceed 1,000 sectors. Users can specify sector numbers in order to reproduce the desired video and audio. However, since the TOC data are control data that need not be used during normal playback, the TOC area is assigned negative sector numbers, which cannot be specified from an ordinary ten-key keypad.
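
A small sketch of the TOC area layout follows: three identical copies of the TOC at negative sector numbers, with a helper that reports which copy a given sector belongs to. The ranges are those given above; the helper itself is illustrative.

```python
# Three identical TOC copies at negative sector numbers, as described above.

TOC_COPIES = [(-3_000, -2_001), (-2_000, -1_001), (-1_000, -1)]  # inclusive ranges

def toc_copy_for_sector(sector: int) -> int | None:
    """Return the index (0..2) of the TOC copy containing `sector`, or None."""
    for i, (first, last) in enumerate(TOC_COPIES):
        if first <= sector <= last:
            return i
    return None

print(toc_copy_for_sector(-2_500), toc_copy_for_sector(1_024))   # 0 None
```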

The sectors in DSM 1 containing data with video, audio and superimposed dialogue data multiplexed into them are grouped into one or more tracks in accordance with their content. Such a group, containing a set of sequential sectors, is called a track. Fig. 5 shows the configuration of the TOC data. The TOC data contain a TOC header, the TOC size, the number of tracks, information about each track, an entry point table header, an entry point table and a TOC end mark.

The TOC header contains a special data pattern showing that the TOC starts at this position. The TOC size contains the length of the TOC data in bytes. The information for each track contains the track number, the starting sector number, the ending sector number, a header track flag, an end track flag, a playback prohibition flag, a video multiplexing flag, an audio multiplexing flag, a superimposed dialogue multiplexing flag and a multiplexing flag validity flag. The track number can be in the range from 1 to 254. The sector number of the starting point and the sector number of the end point show the range of the track in DSM 1. The header track flag and the end track flag show that the track is the header track or the end track, respectively.

The playback prohibition flag is set to prohibit playback of the track and is not set when playback of the track is not prohibited. The video, audio and superimposed dialogue multiplexing flags show whether video, audio and superimposed dialogue data, respectively, have been multiplexed into the multiplexed data of the track. Each multiplexing flag can also indicate the degree of multiplexing for the corresponding type of data in the track.

The multiplexing flag validity flag indicates whether the contents of the preceding video, audio and superimposed dialogue multiplexing flags are valid. For example, each of the preceding flags cannot be fixed at a single value if the multiplexing state of the video, audio or superimposed dialogue data changes within a single track. In that case the three flags are assigned arbitrary values, and the multiplexing flag validity flag is recorded with a value showing that they are invalid. The flags showing that a track is the header track or the end track can be added to any track from 1 to 254. However, the playback device can be simplified by reducing the size of the TOC data and ensuring that DSM 1 contains only one header track and one end track: the structure of the DSM of Fig. 3 is replaced by the structure shown in Fig. 4, the TOC structure of Fig. 5 by the structure of Fig. 6, and special track numbers 0 and 255 are provided for the header track and the end track, whose positions in DSM 1 are fixed.
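
The following data structure mirrors the TOC fields listed above; the field names are illustrative and no encoding or field width is implied.

```python
# Illustrative data structure for the TOC and per-track information described above.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TrackInfo:
    number: int                  # 1..254 for ordinary tracks
    start_sector: int
    end_sector: int
    is_header_track: bool
    is_end_track: bool
    playback_prohibited: bool
    video_multiplexed: bool
    audio_multiplexed: bool
    dialogue_multiplexed: bool
    multiplex_flags_valid: bool  # False if the multiplexing state changes within the track

@dataclass
class TOC:
    size_bytes: int
    tracks: List[TrackInfo] = field(default_factory=list)
    entry_points: List[Tuple[int, bytes]] = field(default_factory=list)  # (sector, time code)

toc = TOC(size_bytes=2_048,
          tracks=[TrackInfo(1, 1_024, 9_999, True, False, False, True, True, False, True)])
print(toc.tracks[0].video_multiplexed)   # True
```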

The entry point table header contains a special data pattern indicating that the entry point table starts at this position. The entry point table contains the number of entry points and information for the entry points. The number of entry points contains the number of entry points in DSM 1; the information for each entry point contains the position of the entry point, represented by its sector number, and the time code information loaded in the subcode data of that sector.

The entry point table is used during random access and search. The entry point table must be provided when the data are compressed at a variable rate in accordance with ISO 11172 (MPEG1) or ISO 13818 (MPEG2), since in that case the increase in sector number is not proportional to the increase in playback time.
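
The sketch below shows how such an entry point table can be consulted (here by binary search over the time codes) to find the sector from which a search should start for a given playback time; the table contents are invented for the example.

```python
# At a variable bit rate the sector number is not proportional to playback time,
# so the entry point table is consulted to find the sector nearest to a target time.

import bisect

def nearest_entry_sector(entry_points, target_time):
    """entry_points: list of (time_code_seconds, sector_number), sorted by time."""
    times = [t for t, _ in entry_points]
    i = bisect.bisect_right(times, target_time) - 1
    return entry_points[max(i, 0)][1]

table = [(0.0, 1_024), (60.0, 9_800), (120.0, 23_400), (180.0, 31_050)]
print(nearest_entry_sector(table, 130.0))   # 23400: start the search from this sector
```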

(3) Operation of the data playback device.

(3-1) Power-on.

Fig. 11 is a diagram showing the transitions between the operating states of the controller 16. The controller 16 enters the initialization state when the power supply of the data playback device shown in Fig. 1 is turned on. Fig. 13 shows the sequence of operations of the controller in its initialization state. In this state, the controller 16 instructs the information display means 19 to light a lamp indicating that the power supply is on and, in addition, instructs the post-processor 15 to drive a display device, such as a CRT (not shown), to display a message indicating that the power supply is turned on (step SP100). The controller reads the test patterns loaded in the RAM 25 and writes them into the corresponding memories installed in the error correction means 3, the ring buffer 4, the video code buffer 6, the audio code buffer 9, the superimposed dialogue code buffer 12 and the storage means 20, and then reads them back from those memories (step SP102) to test whether these memories operate correctly (memory check, step SP103).
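
A minimal sketch of this power-on memory check is given below; the memories are modelled as plain byte arrays and the test patterns are illustrative.

```python
# Sketch of the memory check (steps SP102/SP103): write a test pattern into each
# buffer memory, read it back and compare.

TEST_PATTERNS = [b"\x55" * 16, b"\xaa" * 16, b"\x00" * 16, b"\xff" * 16]

def memory_check(memories: dict) -> list:
    """memories: name -> bytearray. Returns the names of memories that failed."""
    failed = []
    for name, mem in memories.items():
        for pattern in TEST_PATTERNS:
            mem[:len(pattern)] = pattern              # write (step SP102)
            if bytes(mem[:len(pattern)]) != pattern:  # read back and compare (step SP103)
                failed.append(name)
                break
    return failed

mems = {"ring_buffer": bytearray(32), "video_code_buffer": bytearray(32)}
print(memory_check(mems))   # [] means every memory passed
```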

If an error is detected during the memory check, the controller instructs the information display means 19 to light a lamp indicating that an error has been found, and instructs the post-processor 15 to drive the display means, such as a CRT (not shown), to display a message indicating that an error was detected in the memory (step SP104). In this state, if an error has been detected in the memory, the controller 16 ignores for a certain period of time all input from the external interface 17 and from the user input means 18, with the exception of the power switch (step SP105).

If no errors are detected in the memories, the controller 16 sends a signal to the drive unit 2 asking whether DSM 1 is loaded (step SP106). Upon receiving this signal, the drive unit 2 outputs a signal to the controller 16 indicating whether or not a DSM is loaded at the current time. The check of whether DSM 1 is loaded is carried out using a switch installed in the mechanical part of the drive unit 2, or by testing whether focusing can be achieved on a predetermined part of DSM 1. If the controller 16 receives a signal indicating that DSM 1 is currently loaded, it enters the TOC reading state at step SP2 shown in Fig. 11 (step SP107). Otherwise, if the controller 16 receives a signal indicating that DSM 1 is not loaded at the current moment, it instructs the information display means 19 to light a lamp indicating that DSM 1 is not loaded and, in addition, instructs the post-processor 15 to display a message indicating that DSM 1 is not loaded (step SP108). The controller 16 then waits for a signal from the drive unit 2 indicating that DSM 1 has been loaded.

The drive unit 2 detects that DSM 1 has been inserted and loads it so that the reading device of the drive unit can read signals from it. When loading is complete, the drive unit 2 sends a signal to the controller 16 indicating that DSM 1 has been loaded. The controller 16, which has been waiting for the signal from the drive unit 2 indicating that DSM 1 is loaded, enters the TOC reading state at step SP2 of Fig. 11 after receiving the signal indicating that loading is complete.

(3-2) Reading the TOC.

Fig. 14 shows the sequence of operations of the controller 16 in its TOC reading state. After entering the TOC reading state, the controller 16 instructs the error correction means 3 to enter the TOC reading mode (step SP200). The controller 16 also instructs the drive unit 2 to search for the sector where the first TOC data are recorded, i.e. sector -3,000 (steps SP202, SP203).

The drive unit 2 reads data from DSM 1 and transmits them to the error correction means 3. The error correction means detects and corrects any errors found in the data sent from the drive unit 2 and transmits the multiplexed data to the ring buffer 4 and the subcode data to the subcode decoder 21. In the TOC reading mode, however, the number of possible C1 and C2 correction cycles is set higher than during normal playback. The C1 and C2 error corrections performed by the error correction means 3 are carried out only once each during normal data playback in order to reduce the time from loading the data from DSM 1 to outputting the video signal from the post-processor 15 or the audio signal from the audio decoder 11 to the audio output terminal. However, the error correction capability can be improved by repeating the C1 and C2 corrections a larger number of times if the time from loading the data to reproducing it need not be reduced. Therefore, for reading the TOC data, which are not needed immediately but require high reliability, the error correction means 3 repeats the error correction operation if the errors could not be corrected in the first attempt using one C1 correction and one C2 correction. The error correction means 3 can, for example, repeat the C1 and C2 corrections a set number of times, for example four times.

Although the number of corrections is increased for the TOC data in order to improve error correction, a large error in DSM 1, i.e. a loss of data over a large interval, cannot be completely corrected even by repeated correction. The controller 16 therefore instructs the drive unit 2 to search for the position where the error was detected and reads the data from DSM 1 again, in order to try to detect and correct the error in the re-loaded data. This re-reading process is not performed during normal playback because it requires a large amount of time; in the TOC reading state, however, the controller 16 carries it out.

If the error cannot be corrected after a predetermined number of readings of the data from DSM 1, the controller 16 instructs the drive unit to search for the second set of TOC data, of the three loaded at different positions in DSM 1, to read it, and then tries to load this information into the ring buffer 4 just as when the first TOC data were loaded. The controller 16 performs the same operation for the third set of TOC data if it could not read the second set. Such reading from different positions is possible because the same TOC data are loaded at three positions; it is not performed during normal playback, but in the TOC reading state the controller 16 carries out this operation (steps SP202, SP203, SP204, SP205, SP206).
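
The following sketch summarizes the TOC reading strategy just described: repeat the C1/C2 correction, re-read the same copy a limited number of times, and fall back to the next of the three TOC copies if errors persist. The read and correction routines are placeholders, and the retry limit for re-reading is illustrative.

```python
# Hedged sketch of the TOC reading strategy with fallback to the redundant copies.

TOC_COPY_START_SECTORS = [-3_000, -2_000, -1_000]
MAX_CORRECTION_PASSES = 4      # "for example four times" in the text
MAX_REREADS = 3                # illustrative limit for re-reading the same copy

def read_toc(read_sectors, correct_errors):
    """read_sectors(start) -> raw bytes; correct_errors(raw, passes) -> bytes or None."""
    for start in TOC_COPY_START_SECTORS:          # try each of the three copies in turn
        for _ in range(MAX_REREADS):              # re-read the same copy if needed
            raw = read_sectors(start)
            toc = correct_errors(raw, MAX_CORRECTION_PASSES)
            if toc is not None:                   # all errors corrected
                return toc
    return None                                   # TOC read error (step SP207)

# Stub usage: both routines succeed immediately here.
print(read_toc(lambda s: b"raw", lambda raw, n: raw) is not None)   # True
```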

If the controller will not be able to read all TOC data loaded in these three positions, it tells the renderer infoor 15 to display a message on the screen, showing error reading TOC (step SP207). In addition, the controller 16 instructs the unit 2 drivers to unload the disc (step SP208) and enters the initialization state. Block drivers 2 unloads the disk upon receiving the command of unloading from the controller 16.

The controller 16 instructs the control circuit of the ring buffer 26 to start downloading TOC after error correction (TOC step SP209). The control circuit of the ring buffer controls the record pointer to load the TOC data in a specific area for TOC data loaded in the memory installed in the ring buffer 4. Ring buffer 4 has an entry in the TOC data area in the memory, data playback, funds received from the error correction 3. In this case, all data TOC shown in Fig. 5, is loaded into memory, if the ring buffer 4 has sufficient memory to store this number, otherwise data is loaded TOC, except for the table header of the input points and tables of input points.

Ring buffer 4 can detect the loading of the end mark TOC to detect the end of data loading TOC: after detection of the end of the load ring buffer 4 informs the controller 16 about this condition. The controller 16 is 210).

(3-3) the stop State (the title track/end of the playing track).

In Fig. 15 shows the sequence of operation of the controller 16 in its stop state. After you enter the shutdown status of the controller 16 determines whether the loaded TOC (step the sp300). The controller 16 reproduces the header track, if the TOC were loaded. Otherwise, for example, if the reproduction of all or part of the data from the DSM 1 has been completed, the controller gives a command to play the final track.

To reproduce the header track, the controller 16 specifies the data TOC (step SP301) and, if there is a track with a flag indicating that this header track, plays this track regardless of playback commands from the user (step SP302). To play the final track, as played by the header track, the controller 16 specifies the data TOC (step SP303) and, if there is a track with a flag indicating that it is the final track, plays this track regardless of playback commands received from the user (step SP304).

In the stop state, the controller 16 sends the stop command the stop command error correction, command stop loading the buffer 4 and the demultiplexer 5, respectively, if it cannot find the header or the destination track for playback or when the playback of the header or end of a track is finished (step SP305). In addition, it clears the buffer VideoCAD 6, the buffer audiocode 9 and the buffer superimposed dialogue 12 (step SP306).

In the stop state, the controller 16 waits for the command to start playback, sent by the user through the user tools to enter 18 or the external interface 17 (step SP307). In addition, he gives a command to the vehicle information display 19 and the post-processor 15 to light the light that indicates the stop state, and to specify an appropriate message on the screen (step SP308).

The user input means 18 sends a playback start signal to the controller 16 when the user operates the keys to enter the information needed to start playback. In this case, if the user has specified the tracks to be played, the track numbers are also transferred to the controller 16. The external interface 17 outputs a playback start signal to the controller 16 after receiving the corresponding command from external equipment (not shown); in this case, too, if the external equipment has specified track numbers, they are transferred to the controller 16.

The controller 16 enters the playback ready state at step SP4 in Fig. 11 after receiving the playback start signal from the user input device 18 or the external interface 17. The controller 16 starts playback from the track with track number "1" if the user input device 18 or the external interface 17 has not specified a track number for playback.

(3-4) The playback ready state.

Fig. 16 shows the sequence of operations of the controller 16 in its playback ready state. After entering the playback ready state, the controller 16 instructs the visual display device 19 and the post-processor 15 to light an indicator showing that playback is being prepared and to display an appropriate message on the screen (step SP400). The controller 16 then initializes the ring buffer 4, the demultiplexer 5, the video code buffer 6, the video decoder 8, the audio code buffer 9, the audio decoder 11, the superimposed dialogue code buffer 12, the superimposed dialogue decoder 14, the post-processor 15 and the storage device 20 (step SP401). However, it does not initialize the TOC data loaded and held in the ring buffer 4.

The controller 16 instructs the error correction means 3 to perform both C1 and C2 error correction immediately after error detection. The controller 16 then accesses the TOC data to obtain the sector number at the beginning of the track to be played, and issues a search command with that sector number to the driver block 2 (step SP403).
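
As a minimal illustration of this step, the controller can be thought of as looking up the first sector number of the requested track in the loaded TOC and issuing a seek to the driver block. The track_info structure and the helper driver_block_seek() below are assumptions made for the sketch, not the interface of the original device.

#include <stdint.h>
#include <stddef.h>

/* Assumed per-track TOC record (cf. the track information of Fig. 5). */
struct track_info {
    uint32_t track_number;
    uint32_t first_sector;
    uint32_t end_sector;
};

extern void driver_block_seek(uint32_t sector);   /* hypothetical command to driver block 2 */

/* Look up the track in the loaded TOC and start the search at its first sector (step SP403). */
int start_track_playback(const struct track_info *toc, size_t n_tracks, uint32_t track_number)
{
    for (size_t i = 0; i < n_tracks; ++i) {
        if (toc[i].track_number == track_number) {
            driver_block_seek(toc[i].first_sector);
            return 0;
        }
    }
    return -1;   /* track not present in the TOC */
}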

The controller 16 sends a demultiplexing start command to the demultiplexer 5 (step SP404). The demultiplexer 5 demultiplexes the compressed bit stream received from the ring buffer 4 in the format shown in Fig. 7A and transfers it to the video code buffer 6, the audio code buffer 9 and the superimposed dialogue code buffer 12, as shown in Fig. 7B, 7C and 7D, respectively. In addition, it finds the SCR loaded in the system header and stores it in its internal register.

The video code buffer 6 loads the data sent from the demultiplexer 5 into its buffer memory and then passes it to the DTSV detector 7. Similarly, the audio code buffer 9 and the superimposed dialogue code buffer 12 load the data transferred from the demultiplexer 5 into their respective buffer memories and then transfer it to the DTSA detector 10 and the DTSS detector 13.

The DTSV detector 7 selects only the video data from the data transmitted by the video code buffer 6 for transfer to the video decoder 8; it also tries to find a DTSV in the video data header shown in Fig. 9, and on detecting a DTSV it reports the detection to the controller 16 and stores the DTSV value. Similarly, the DTSA detector 10 and the DTSS detector 13 select only the audio data and the superimposed dialogue data received from the audio code buffer 9 and the superimposed dialogue code buffer 12 for transfer to the audio decoder 11 and the superimposed dialogue decoder 14, respectively. In addition, they try to find a DTSA in the audio data header shown in Fig. 9 and a DTSS in the superimposed dialogue data header, also shown in Fig. 9, and after detecting the DTSA and DTSS they report the detection to the controller 16 and store their values, respectively. Once this processing is completed, the controller 16 enters the state of determining how to synchronize the start, at step SP5 of Fig. 11.

(3-5) The state of determining how to synchronize the start.

Fig. 17 shows the sequence of operations of the controller 16 in the state of determining how to synchronize the start. After entering this state, the controller 16 performs the processing necessary to start playback of the image, audio and/or superimposed dialogue data. It chooses the procedure used at the initial stage of data playback using the information contained in the TOC for the data to be played.

The controller 16 refers to the image, audio and superimposed dialogue compression flags for each track in the TOC data shown in Fig. 5 to determine whether image, audio and superimposed dialogue data are present in the data to be reproduced. The controller 16 first loads, from the TOC loaded in the ring buffer 4, the track information pertaining to the tracks to be played (step SP500). It then determines whether each compression flag is set, based on the flag indicating whether the compression flags in the retrieved track information are valid (step SP501). If a negative result is obtained because that flag contains a value indicating that the compression flags are invalid, the controller makes the same determination from the signals indicating detection of a DTSV, DTSA or DTSS sent from the DTSV detector 7, the DTSA detector 10 or the DTSS detector 13 within a certain period of time after demultiplexing mode has been specified.

The controller 16 enters the synchronized start of audio and image if it determines, from the compression flags in the TOC information, that both video and audio data are present in the tracks to be played, or if both a DTSV and a DTSA are detected within the given period of time. It enters the synchronized start of image only if it determines from the compression flags in the TOC information that the tracks to be played contain video data but no audio data, or if a DTSV is detected within the given period of time but no DTSA is detected in that same period. It enters the synchronized start of audio only if it determines from the compression flags in the TOC information that audio data is present in the tracks to be reproduced while video data is not, or if a DTSA is detected within the given period of time but no DTSV is detected in that same period.

Finally, when the controller 16 determines from the compression flags in the TOC information that the tracks to be reproduced contain neither audio nor video data, or when neither a DTSV nor a DTSA was detected within the given period of time, it enters the synchronized start of superimposed dialogue only, provided a DTSS was detected within that time. Moreover, the controller 16 enters the stop state if it determines from the TOC information that no usable data is present, or if none of DTSV, DTSA and DTSS was detected within the given period of time (steps SP502 to SP510).
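
The decision of steps SP502 to SP510 amounts to selecting one of the start modes from the compression flags (or, when the flags are marked invalid, from which of DTSV, DTSA and DTSS were detected within the allotted time). The enumeration and function below are a hedged sketch of that selection, not the literal implementation.

#include <stdbool.h>

enum start_mode {
    START_VIDEO_AND_AUDIO,
    START_VIDEO_ONLY,
    START_AUDIO_ONLY,
    START_SUBTITLE_ONLY,   /* superimposed dialogue only */
    START_STOP             /* nothing usable was found: enter the stop state */
};

/* has_video / has_audio / has_subtitle come either from the TOC compression flags
 * or from whether a DTSV / DTSA / DTSS was detected within the waiting period. */
enum start_mode choose_start_mode(bool has_video, bool has_audio, bool has_subtitle)
{
    if (has_video && has_audio)  return START_VIDEO_AND_AUDIO;
    if (has_video)               return START_VIDEO_ONLY;
    if (has_audio)               return START_AUDIO_ONLY;
    if (has_subtitle)            return START_SUBTITLE_ONLY;
    return START_STOP;
}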

(3-6) Synchronized start of audio and image.

Fig. 18 shows the sequence of processing operations for the video data performed by the controller 16 in its synchronized start of audio and image. After entering the synchronized start of audio and image, the controller 16 instructs the video decoder 8 to pause decoding and to search for an I-picture header (step SP600). Because the I-picture header search is requested while decoding is paused, the video decoder 8 does not start decoding after detecting the I-picture header, but waits for a pause release command from the controller 16. The I-picture header is a concrete example of data placed at the head of intra-coded picture data within video data, such as a video bit stream defined by ISO 11172 (MPEG1) or ISO 13818 (MPEG2).

As a rule, a DTSV must be loaded in the video data header of video data that contains an I-picture header, using the encoding shown under "DTSV coding flag = 1" in Fig. 9, when data is loaded into a DSM that stores compressed bit streams in accordance with ISO 11172 (MPEG1) or ISO 13818 (MPEG2); such a DTSV is detected by the DTSV detector 7. Decoding starts from an I-picture because pictures other than I-pictures, that is, P- and B-pictures, are predictively encoded using pictures located temporally before and/or after them, so decoding cannot start from a P- or B-picture.

The controller 16 then determines whether bit underflow has occurred in the video code buffer 6 (step SP601). If the video code buffer 6 has underflowed, the buffer has no data to be read, so the controller 16 stops reading video data from the video code buffer 6. Then, after receiving a signal from the video decoder 8 indicating that an I-picture header has been read, the controller 16 loads the DTSV value from the DTSV detector 7 (step SP602). The controller 16 then determines whether the STC counter circuit 24 is counting (step SP603).

If automatic counting by the STC counter circuit 24 is already enabled, the image must be played back synchronously with the STC, the system time clock, that is, synchronously with the STC register that is already counting. If automatic counting of the STC is disabled, decoding of the image and audio must be started together with the start of the STC. The controller performs the following processing for the video decoder 8 when automatic counting of the STC is enabled. First, the controller 16 compares the STC loaded from the STC register 23 with the DTSV found by the DTSV detector 7 (step SP604). If DTSV <= STC, it determines that the decoding start time has been missed, sends the video decoder 8 a command to search again for an I-picture header (step SP605), and loads from the DTSV detector 7 the DTSV corresponding to the next I-picture header in the video bit stream (step SP602).

Since the STC continues its automatic counting, the controller 16 again loads the most recent STC value from the STC register 23. It then compares the newly loaded DTSV with this STC (step SP604) and repeats this processing until DTSV > STC. If the loaded DTSV value is greater than the STC value, the controller 16 waits until DTSV = STC (steps SP615, SP616). It then issues a pause release command to the video decoder 8 synchronously with the next vertical synchronization signal sent from the vertical synchronization signal generator circuit 22 (steps SP617, SP618). The controller 16 sets the STC value equal to the DTSV value, since the STC continues counting automatically while waiting for the vertical synchronization signal (step SP619).
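
When the STC is already counting, the processing of steps SP604, SP605 and SP615 to SP618 essentially discards I-picture headers whose DTSV has already passed and releases the video decoder on the vertical synchronization signal once the STC reaches the DTSV. A minimal C sketch, assuming hypothetical helpers for the STC register, the DTSV detector and the decoder commands:

#include <stdint.h>

typedef uint64_t timestamp_t;   /* decode/system clock values (representation assumed) */

extern timestamp_t read_stc(void);                 /* STC register 23 */
extern timestamp_t read_dtsv(void);                /* DTSV detector 7 */
extern void video_decoder_search_next_i_picture(void);
extern void wait_vertical_sync(void);              /* circuit 22 */
extern void video_decoder_release_pause(void);

/* Start video decoding against an STC that is already counting. */
void start_video_with_running_stc(void)
{
    timestamp_t dtsv = read_dtsv();

    /* If the decode time has already passed, look for the next I-picture (SP604, SP605). */
    while (dtsv <= read_stc()) {
        video_decoder_search_next_i_picture();
        dtsv = read_dtsv();
    }

    /* DTSV is ahead of the STC: wait until they meet, then release the decoder
     * on the next vertical synchronization signal (SP615..SP618). */
    while (read_stc() < dtsv)
        ;                               /* in practice this would not be a busy wait */
    wait_vertical_sync();
    video_decoder_release_pause();
}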

The same applies to the processing of the audio code buffer 9. In the synchronized start of audio and image, however, the controller 16 performs no special error processing even if it receives a bit underflow error signal from the video code buffer 6 after the video decoder 8 has been given the I-picture header search command and before the I-picture header is found; the buffer simply waits to receive data from the demultiplexer 5 so that the underflow condition is cancelled.

When the video decoder 8 detects an I-picture header, the controller 16 must wait until the video code buffer 6 has loaded enough data. If automatic counting of the STC has not yet been started, the device according to the present invention fills the code buffers as follows, in order to obtain the predetermined code buffer occupancy defined in ISO 11172 (MPEG1) or ISO 13818 (MPEG2).

After the video decoder 8 detects an I-picture header, data can still be received from the demultiplexer 5 and loaded into the video code buffer 6 until that buffer overflows, because the video decoder 8 has already paused decoding. Each time data is loaded, the demultiplexer 5 attempts to find a new SCR.

The controller 16 loads the new SCR detected by the demultiplexer 5 (step SP606). It then compares this SCR with the DTSV loaded from the DTSV detector 7 (step SP607). At this point, if DTSV <= SCR, it determines that enough data has been loaded into the code buffers. If DTSV > SCR, it waits until the demultiplexer detects a new SCR. In addition, it determines that enough data has been loaded into the code buffers if it receives an overflow signal from the video code buffer 6, the audio code buffer 9 or the superimposed dialogue code buffer 12 while waiting for a new SCR to be detected (step SP608).

The STC, which is the system time clock, must be started synchronously with the vertical synchronization signal when automatic counting of the STC has been disabled. The DTSV is encoded in synchronism with the vertical synchronization signal, whereas the DTSA is encoded independently of it. Thus, the STC is started synchronously with the vertical synchronization signal using the DTSV as its initial value. After the STC is started and decoding of the video data starts at the same time, decoding of the audio data is started using the DTSA. The controller performs the following processing for the video decoder when automatic counting of the STC has been disabled: the controller 16 compares the DTSA read from the DTSA detector 10 with the DTSV read from the DTSV detector 7 (step SP610). If DTSA <= DTSV, decoding of the audio data would have to be performed before decoding of the video data; the STC therefore could not be started in synchronism with the vertical synchronization signal. The controller 16 thus repeats the DTSA search command to the audio decoder 11 until DTSA > DTSV. Control of the audio decoder 11 is described below in detail.

If the DTSV and DTSA have been loaded and DTSA > DTSV, the controller 16 waits for a vertical synchronization signal from the vertical synchronization signal generator circuit 22 and starts the STC counter circuit 24 in synchronism with the vertical synchronization signal, enabling automatic counting of the STC (step SP612). The controller then sends a pause release command to the video decoder 8 to start decoding the video data once the STC counter circuit 24 has been started (step SP613).
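
When the STC counter is not yet running, the essential operation of steps SP610 to SP613 is to make sure the DTSA lies after the DTSV, load the DTSV into the STC register in time with the vertical synchronization signal, start the counter and release the video decoder. A sketch under assumed helper names:

#include <stdint.h>

typedef uint64_t timestamp_t;

extern timestamp_t read_dtsv(void);                 /* DTSV detector 7 */
extern timestamp_t read_dtsa(void);                 /* DTSA detector 10 */
extern void audio_decoder_search_next_dtsa(void);   /* repeat the DTSA search command */
extern void wait_vertical_sync(void);               /* circuit 22 */
extern void stc_load_and_start(timestamp_t value);  /* STC register 23 / counter circuit 24 */
extern void video_decoder_release_pause(void);

void start_stc_from_video(void)
{
    timestamp_t dtsv = read_dtsv();

    /* Audio must not need to start before video, otherwise the STC could not be
     * started in synchronism with the vertical synchronization signal (SP610). */
    while (read_dtsa() <= dtsv)
        audio_decoder_search_next_dtsa();

    wait_vertical_sync();                 /* SP612: start counting on the vertical sync */
    stc_load_and_start(dtsv);
    video_decoder_release_pause();        /* SP613: video decoding begins */
}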

Fig. 19 shows the sequence of operations for processing the audio data, performed by the controller 16 in its synchronized start of audio and image. After entering the synchronized start of audio and image, the controller 16 issues an output mute command and a DTSA search command to the audio decoder 11. Upon receiving the DTSA search command, the audio decoder 11 sends a code request to the audio code buffer 9, starts decoding and waits for a signal from the DTSA detector 10 indicating that a DTSA has been detected. In this case, however, the audio decoder 11 does not actually output the decoded data, in accordance with the mute command it has received. The controller 16 checks the audio code buffer 9 for bit underflow (step SP701). Underflow of the audio code buffer 9 indicates that it has no data to send; after detecting it, the controller 16 therefore stops data transfer from the audio code buffer 9 and allows the transfer again once the underflow condition is resolved. The audio decoder 11 stops decoding after receiving the signal from the DTSA detector 10 indicating that a DTSA has been detected. At this point, the controller 16 can load the detected DTSA from the DTSA detector 10 (step SP702). The paused state of the audio decoder 11 can be cancelled by the controller 16, as described below.

The controller 16 then determines the operating state of the STC (step SP703). If automatic counting of the STC is enabled, the controller performs the same processing for the audio decoder 11 as for the video decoder 8. That is, the controller 16 compares the most recently loaded STC with the later loaded DTSA and repeats the DTSA search command to the audio decoder 11 until DTSA > STC (step SP705). When a DTSA greater than the STC value has been loaded, the controller 16 loads a new STC (step SP710), waits until DTSA = STC (step SP711), and issues a pause release command for decoding to the audio decoder 11 (step SP712).

The controller 16 performs the following processing for the audio decoder if automatic counting of the STC has been disabled. That is, it determines whether a DTSV has already been loaded by the synchronized start processing for the video decoder 8 in Fig. 18 (step SP706). If a DTSV has been loaded, the controller 16 loads it for the synchronized start processing of the audio decoder 11 (step SP707). The controller then compares the loaded DTSV with the DTSA (step SP708) and repeats the DTSA search command to the audio decoder 11 until DTSA > DTSV (step SP709). Once DTSA > DTSV holds, the STC value can be loaded for the synchronized start processing of the audio decoder 11 at this point, because the STC counter circuit 24 has been set to automatic counting by the synchronized start processing for the video decoder 8 in Fig. 18, as described above. The controller 16 then waits for STC = DTSA (step SP711) and issues a pause release command to the audio decoder 11 (step SP712). On completion of the above processing, the controller 16 enters the steady playback state.

(3-7) Synchronized start of image only.

Fig. 20 shows the sequence of operations performed by the controller 16 in its synchronized start of image only. After entering the synchronized start of image only, the controller 16 carries out the processing necessary to start only the video data in synchronism with the vertical synchronization signal. The processing in the synchronized start of image only is basically the same as in the synchronized start of audio and image, except that there is no comparison of DTSA with DTSV, i.e. step SP610 of Fig. 18, so a detailed description is omitted here. As in the case of the synchronized start of audio and image, the controller 16 instructs the video decoder 8 to pause decoding and to search for an I-picture header (step SP800).

When the video decoder 8 detects an I-picture header, the controller 16 loads the DTSV (step SP802), and, if automatic counting of the STC has been disabled, waits for enough data in the video code buffer 6. That is, as in the synchronized start of audio and image, the controller 16 compares the detected DTSV with the most recently received SCR and waits until DTSV <= SCR or until it receives an overflow signal from the video code buffer 6, the audio code buffer 9 or the superimposed dialogue code buffer 12 (steps SP806, SP807, SP808).

For the audio data, the controller 16 performs no processing if the audio decoder 11 has already started decoding; otherwise it sends an output mute command and a DTSA search command to the audio decoder 11 to make the decoder wait for the audio data to be transferred from the demultiplexer 5 to the audio code buffer 9.

For the video data, the controller 16 further performs the following processing. If automatic counting of the STC has been enabled, it performs the same processing as in the synchronized start of audio and image with automatic counting of the STC enabled (steps SP804, SP805, SP814, SP815, SP816, SP817, SP818). At this point, the controller 16 performs no processing for the audio data.

If automatic counting of the STC has been disabled, the controller performs the same processing as in the synchronized start of audio and image with automatic counting of the STC disabled. In this case, however, the controller does not process the audio data; that is, it does not repeat the pause release command for the audio decoder 11 until DTSA = STC after the video decoder has started decoding. It then enters the steady playback state. The controller 16 enters the synchronized start of audio only at step SP904 and the subsequent steps shown in Fig. 21 if it receives a signal from the DTSA detector 10 indicating that a DTSA has been detected after the start of playback in the synchronized start of image only, and then enters the steady playback state.

(3-8) Synchronized start of audio only.

Fig. 21 shows the sequence of processing operations performed by the controller 16 in its synchronized start of audio only. After entering the synchronized start of audio only, the controller 16 performs the processing necessary to start only the audio data in synchronism with the STC. For the video data, the controller performs no processing if the video decoder 8 has already started decoding; otherwise it sends an I-picture header search command to the video decoder 8.

After entering the synchronized start of audio only, the controller 16 sends an output mute command and a DTSA search command to the audio decoder 11 (step SP900). Upon receiving the DTSA search command, the audio decoder 11 sends a code request to the audio code buffer 9, starts decoding and waits for a signal from the DTSA detector 10 indicating that a DTSA has been detected; in this case, however, it does not actually output the decoded data, in accordance with the mute command it has received. The controller 16 checks the audio code buffer 9 for bit underflow (step SP901). Underflow of the audio code buffer 9 indicates that it has no data to transmit; after detecting such a situation, the controller 16 therefore stops data transfer from the audio code buffer 9 and allows it again once the underflow condition is resolved. The audio decoder 11 stops decoding after receiving the signal from the DTSA detector 10 indicating that a DTSA has been detected. At this point, the controller 16 can load the detected DTSA from the DTSA detector 10 (step SP902). The paused state of the audio decoder 11 can be cancelled by the controller 16, as described below.

The controller 16 then determines the operating state of the STC (step SP903). If automatic counting of the STC is enabled, the controller performs the following processing: the controller 16 compares the STC most recently loaded from the STC register 23 with the DTSA most recently loaded from the DTSA detector 10 (step SP904) and repeats the DTSA search command to the audio decoder 11 until DTSA > STC (step SP905). When a DTSA greater than the STC value has been loaded, the controller 16 loads a new STC (step SP913), waits until DTSA = STC, and then issues a pause release command for decoding to the audio decoder 11 (step SP911).

If automatic counting of the STC has been disabled, the controller 16 starts waiting for enough data to be loaded into the audio code buffer 9 after the DTSA detector 10 detects a DTSA. That is, in the same way as in the aforementioned wait for enough data in the video code buffer 6, the controller 16 reads the most recently received SCR from the demultiplexer 5 (step SP906), compares it with the read DTSA (step SP907), and then waits for DTSA <= SCR or for an overflow signal from the video code buffer 6, the audio code buffer 9 or the superimposed dialogue code buffer 12 (step SP908). When automatic counting of the STC is disabled, the controller 16 starts automatic counting of the STC at the same time as the audio decoder starts decoding. That is, after detecting that enough data has been loaded into the audio code buffer 9, the controller 16 transfers the DTSA value detected by the DTSA detector 10 into the STC register 23 (step SP909) and starts the STC counter circuit 24, enabling automatic counting of the STC (step SP910). After starting the STC counter circuit 24, the controller 16 issues a pause release command to the audio decoder 11 to begin decoding of the audio data (step SP911).
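
For the audio-only start with the STC counter off, the key operation of steps SP909 to SP911 is to transfer the detected DTSA into the STC register, start the counter and release the audio decoder. A sketch with the same kind of assumed helpers:

#include <stdint.h>

typedef uint64_t timestamp_t;

extern timestamp_t read_dtsa(void);                   /* DTSA detector 10 */
extern void wait_audio_buffer_filled(void);           /* SP906..SP908: enough data in buffer 9, or overflow */
extern void stc_load_and_start(timestamp_t value);    /* STC register 23 / counter circuit 24 */
extern void audio_decoder_release_pause(void);        /* cancel the decoding stop */

void start_audio_only_with_stc_off(void)
{
    timestamp_t dtsa = read_dtsa();
    wait_audio_buffer_filled();
    stc_load_and_start(dtsa);        /* SP909, SP910: STC starts counting from DTSA */
    audio_decoder_release_pause();   /* SP911: audio decoding begins */
}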

(3-9) Synchronized start of superimposed dialogue only.

Fig. 22 shows the sequence of operations performed by the controller 16 in its synchronized start of superimposed dialogue only. After entering the synchronized start of superimposed dialogue only, the controller 16 performs the processing necessary to start only the superimposed dialogue data in synchronism with the STC.

Superimposed dialogue data is derived from video. However, whereas ordinary television picture signals, or video data encoded in accordance with ISO 11172 (MPEG1) or ISO 13818 (MPEG2) and handled by the video decoder 8 of the present device, have a duration of about 1/25 to 1/30 second per picture, the superimposed dialogue data used in the present device lasts about one second or more for a single picture, as with subtitles synthesized or blended into a movie or television programme.

Because the superimposed dialogue data has the aforementioned characteristics, the amount of superimposed dialogue data for one screen is loaded into the DSM 1 at a lower transfer rate than the video and audio data also loaded into the DSM 1. The present device, which reproduces data loaded in this way, loads the superimposed dialogue data, transmitted at this low transfer rate, via the superimposed dialogue code buffer 12 and the DTSS detector 13, and after decoding by the superimposed dialogue decoder 14 outputs it to the post-processor 15.

During the synchronized start of superimposed dialogue only, the controller performs no processing for the video data if the video decoder 8 has already started decoding; otherwise it sends an I-picture header search command to the video decoder 8 to make the decoder wait for the video data transferred from the demultiplexer 5 to the video code buffer 6.

For the audio data, the controller performs no processing if the audio decoder 11 has already started decoding; otherwise it sends an output mute command and a DTSA search command to the audio decoder 11 to make the decoder wait for the audio data transferred from the demultiplexer 5 to the audio code buffer 9.

For the superimposed dialogue data, if automatic counting of the STC has been enabled, the controller displays the superimposed dialogue using the same processing procedure as in the steady playback state, described below. During the synchronized start of superimposed dialogue, the controller 16 first determines whether counting of the STC is enabled (step SP1000). If automatic counting of the STC has been disabled, the controller 16 performs the following processing. It sends a DTSS search command to the superimposed dialogue decoder 14 (step SP1001) and waits until the DTSS detector 13 detects a DTSS (step SP1002). The controller then loads the detected DTSS (step SP1003). At this point the superimposed dialogue code buffer 12 may become full, since the STC is not running and the decoding start command therefore cannot yet be issued to the superimposed dialogue decoder 14. Therefore, after receiving an overflow signal from the superimposed dialogue code buffer 12 (step SP1004), the controller 16 places the DTSS read from the DTSS detector into the STC register 23 in synchronism with the vertical synchronization signal from the vertical synchronization signal generator circuit 22 (step SP1006), starts the STC counter circuit 24 (step SP1007) and starts decoding of the superimposed dialogue (step SP1008). After the above processing, the controller 16 enters the steady playback state.

The controller 16 enters the synchronized start of image only at step SP804 if it receives a signal from the DTSV detector 7 indicating that a DTSV has been detected after the start of playback in the synchronized start of superimposed dialogue only, and then enters the steady playback state. The controller 16 enters the synchronized start of audio only at step SP904 if it receives a signal from the DTSA detector 10 indicating that a DTSA has been detected after the start of playback in the synchronized start of superimposed dialogue only, and then enters the steady playback state. Finally, the controller 16 enters the synchronized start of audio and image at steps SP604 and SP704 if it receives signals from the DTSV detector 7 and the DTSA detector 10 indicating that a DTSV and a DTSA have been detected after the start of playback in the synchronized start of superimposed dialogue only, and then enters the steady playback state.

(3-10) The steady playback state.

In the steady playback state, the controller 16 detects errors in video synchronization, detects and corrects errors in audio synchronization, detects other errors, controls the superimposed dialogue decoder and identifies the track currently being reproduced.

(3-11) Detection of synchronization errors.

While data is being decoded by the video decoder 8 and the audio decoder 11, means are needed to detect and correct the difference between the decoding start time of the video data and the decoding start time of the audio data, that is, errors in the synchronization of the reproduced images with the output sound, commonly called "lip synchronization".

Possible synchronization errors include the difference between the system time clock STC and the decoding start time DTSV of the image, and the difference between the system time clock STC and the decoding start time DTSA of the audio signal. There are two ways to detect synchronization errors. One way is to detect both difference signals and take corrective action so that both difference signals are substantially eliminated. The other way is to treat one of the difference signals as the reference when the other difference signal is detected, and to measure all difference signals against the given STC reference in order to correct errors in the synchronization of the video data with the audio data. In addition, if the difference signal between the system time clock STC and the video decoding start time DTSV is taken as the reference, the second method initializes the STC to DTSV periodically or at a specified time interval, so that this difference signal is eliminated in a mathematical sense.

In the second method, the difference signal between the system STC and the audio decoding start time DTSA is represented as its original value plus the value of the difference signal between STC and DTSV. Errors in the synchronization of the video, audio and superimposed dialogue data can thus be corrected relatively, by removing only the difference signal associated with DTSA.

In the first method, the difference signal between STC and DTSV and the difference signal between STC and DTSA are found as follows. Fig. 23 shows the flow of processing produced by the controller 16 in the first method for detecting synchronization errors of the video data. That is, upon receiving a signal from the video decoder 8 indicating that an I-picture header has been found (step SP2000), the controller 16 loads the most recent DTSV from the DTSV detector 7 and the STC from the STC register 23 (steps SP2001 and SP2002), calculates the difference signal between DTSV and STC, that is, (DTSV - STC), and stores the result in the storage device 20. Fig. 24 shows the flow of processing produced by the controller 16 in the first method for detecting synchronization errors of the audio data. Upon receiving a signal from the DTSA detector 10 indicating that a DTSA has been detected (step SP3000), the controller 16 loads the most recent DTSA from the DTSA detector 10 and the STC from the STC register 23 (steps SP3001, SP3002), calculates the difference signal between DTSA and STC, that is, (DTSA - STC) (step SP3003), and stores the result in the storage device 20 (step SP3004).
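
In the first method the controller simply records the signed differences each time a decode time stamp is detected; the following sketch illustrates this, with the storage layout in the storage device 20 assumed for the example:

#include <stdint.h>

typedef int64_t timestamp_diff_t;

extern uint64_t read_stc(void);    /* STC register 23 */
extern uint64_t read_dtsv(void);   /* DTSV detector 7 */
extern uint64_t read_dtsa(void);   /* DTSA detector 10 */

/* Difference signals kept in the storage device 20 (layout assumed for the sketch). */
static timestamp_diff_t video_sync_error;   /* (DTSV - STC) */
static timestamp_diff_t audio_sync_error;   /* (DTSA - STC) */

void on_i_picture_header_found(void)         /* steps SP2000..SP2002 and the store */
{
    video_sync_error = (timestamp_diff_t)read_dtsv() - (timestamp_diff_t)read_stc();
}

void on_dtsa_found(void)                     /* steps SP3000..SP3004 */
{
    audio_sync_error = (timestamp_diff_t)read_dtsa() - (timestamp_diff_t)read_stc();
}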

Fig. 25 shows the flow of processing produced by the controller 16 in the second method for detecting synchronization errors of the video data. Upon receiving a signal from the video decoder 8 indicating that an I-picture header has been found (step SP4000), the controller 16 loads the most recent DTSV from the DTSV detector 7 and the STC from the STC register 23 (steps SP4001, SP4002) and calculates the absolute value of the difference signal between DTSV and STC, that is |DTSV - STC| (step SP4003). The controller then compares this absolute value with a specified value (step SP4004) and, if it is equal to or less than the specified value, sets the DTSV value into the STC register 23 (step SP4005), which in effect enters the value "0" as (DTSV - STC). If it exceeds the specified value, the controller determines that there is a serious synchronization error and that DTSV cannot be used as the reference, and clears the video code buffer 6 and the audio code buffer 9 (step SP4006).

Fig. 24 also shows the flow of processing produced by the controller 16 in the second method for detecting synchronization errors of the audio signal. That is, upon receiving a signal from the DTSA detector 10 indicating that a DTSA has been detected, the controller 16 loads the most recent DTSA from the DTSA detector 10 and the STC from the STC register 23. It then calculates the difference signal between DTSA and STC, that is, (DTSA - STC), and stores the result in the storage device 20.

Hardware such as an adder, a subtractor and a comparator can also be used to operate on STC, DTSV and DTSA and let the controller 16 read the calculation results, if the controller would otherwise have to spend too much time calculating (DTSV - STC) and (DTSA - STC) in software.

(3-12) Correction of synchronization errors.

The correction of the synchronization errors associated with DTSV and DTSA, which is common to both methods of detecting synchronization errors, is described below. Fig. 26 shows the flow of processing by the controller when correcting the synchronization error associated with DTSV. When a new difference signal (DTSV - STC) is stored in the storage device 20, the controller 16 loads the specified value (step SP5001). If (DTSV - STC) = 0, the controller takes no corrective action toward the video decoder 8 (step SP5002). The controller 16 then compares the absolute value of (DTSV - STC) with the specified value (step SP5003). If the absolute value of (DTSV - STC) is large and exceeds the specified value, the controller 16 determines that there is a serious synchronization error and clears the video code buffer 6 and the audio code buffer 9 (step SP5004) in order to enter the synchronized start of audio and video data. If the absolute value of (DTSV - STC) does not exceed the specified value, it determines whether (DTSV - STC) is positive or negative (step SP5006). If (DTSV - STC) > 0, decoding of the video data has moved ahead of the STC; the controller 16 therefore sends the video decoder 8 a command to pause decoding for the number of pictures corresponding to the difference value and to repeat the display of the displayed picture (step SP5007). If (DTSV - STC) < 0, decoding of the video data lags behind the STC, so the controller sends the video decoder 8 a command to skip the number of pictures corresponding to the difference value (step SP5008).

In this case, if I- or P-pictures were skipped, subsequent picture data could not be decoded by the predictive decoding method in accordance with ISO 11172 (MPEG1) and ISO 13818 (MPEG2). The controller therefore sends the video decoder 8 a command to skip only B-pictures, which are not used as references for decoding subsequent pictures and may therefore be omitted.
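
The correction of the DTSV-related error therefore reduces to: do nothing when the difference is zero, resynchronize when it is too large, repeat pictures when video is ahead of the STC, and skip only B-pictures when it lags. The sketch below assumes a frame period expressed in STC ticks and hypothetical decoder commands; the threshold and tick values are illustrative, not taken from the disclosure.

#include <stdint.h>
#include <stdlib.h>

#define SERIOUS_ERROR_THRESHOLD  45000LL   /* assumed threshold in STC ticks (e.g. 0.5 s at 90 kHz) */
#define TICKS_PER_FRAME          3003LL    /* assumed display time of one frame in STC ticks */

extern void resynchronize_audio_and_video(void);        /* clear buffers, re-enter synchronized start */
extern void video_decoder_repeat_pictures(long count);  /* pause decoding, repeat the displayed picture */
extern void video_decoder_skip_b_pictures(long count);  /* skip only B-pictures */

void correct_video_sync_error(int64_t dtsv_minus_stc)    /* steps SP5001..SP5008 */
{
    if (dtsv_minus_stc == 0)
        return;                                          /* SP5002: no correction needed */

    if (llabs(dtsv_minus_stc) > SERIOUS_ERROR_THRESHOLD) {
        resynchronize_audio_and_video();                 /* SP5004: serious synchronization error */
        return;
    }

    long frames = (long)(llabs(dtsv_minus_stc) / TICKS_PER_FRAME);
    if (dtsv_minus_stc > 0)
        video_decoder_repeat_pictures(frames);           /* SP5007: video is ahead of the STC */
    else
        video_decoder_skip_b_pictures(frames);           /* SP5008: video lags behind the STC */
}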

Fig. 27 shows the flow of processing produced by the controller to correct the synchronization error associated with DTSA. When a new difference signal (DTSA - STC) is stored in the storage device 20 (step SP6000), the controller 16 loads the specified value (step SP6001). If (DTSA - STC) = 0, the controller takes no corrective action toward the audio decoder 11 (step SP6002). The controller 16 then compares the absolute value of (DTSA - STC) with the specified value (step SP6003). If the absolute value of (DTSA - STC) is large and exceeds the specified value, the controller 16 determines that there is a serious synchronization error and clears the video code buffer 6 and the audio code buffer 9 (step SP6004) in order to enter the synchronized start of audio and video data. If the absolute value of (DTSA - STC) does not exceed the specified value, it determines whether (DTSA - STC) is positive or negative (step SP6006). If (DTSA - STC) > 0, decoding of the audio data has moved ahead of the STC; the controller 16 therefore sends the audio decoder 11 a command to pause decoding for the period of time corresponding to the difference value and to repeat decoding of the audio data (step SP6007). If (DTSA - STC) < 0, decoding of the audio data lags behind the STC, so the controller sends the audio decoder 11 a command to skip the audio data over the period of time corresponding to the difference value (step SP6008).

In the above process of detecting and correcting synchronization errors, the controller 16 may send the visual display device 19 and the post-processor 15 a command to light an indicator showing that a significant amount of data may have been lost, and to show this condition on the display screen, if it determines that a serious synchronization error has occurred (steps SP5006, SP6005).

(3-13) Error detection.

Although errors in the data read from the DSM 1 are corrected by the error correction means 3 when present, data containing a large number of errors may be transferred to the video decoder 8, the audio decoder 11 or the superimposed dialogue decoder 14 via the demultiplexer 5 without complete error correction. In this case, the error flags contained in the erroneous data allow the video decoder 8, the audio decoder 11 and the superimposed dialogue decoder 14 to detect the errors.

In addition, since the data follows the syntax defined in ISO 11172 (MPEG1) or ISO 13818 (MPEG2), the decoders can detect errors by finding data that does not conform to this syntax. In either case, when an error is detected, the video decoder 8, the audio decoder 11 and the superimposed dialogue decoder 14 send the controller 16 a signal informing it of the error.

If the video decoder 8 or the audio decoder 11 detects a decoding error, video or audio data may have been lost, and the synchronization of the displayed images with the output audio signals may therefore fail if playback continues. Such a synchronization error can be corrected by the above method of detecting and correcting synchronization errors. In addition to correcting synchronization errors, the controller 16 can calculate the frequency of occurrence of errors in order to understand the conditions generating errors on the disc. This makes it possible to change the error correction performed by the error correction means 3 or to notify the user that errors are occurring.

The controller 16 calculates the frequency of occurrence of errors per disc, per track, or within the last specified length of time by counting the number of error signals received. For this purpose there are a per-disc error rate storage area, a per-track error rate storage area and a three-second error rate storage area, and these areas act as counters. Figs. 28, 29 and 30 show the flows of processing produced by the controller for error detection using each of the counters. The per-disc error rate area is reset to its initial state when the stop state switches to the playback ready state; the per-track error rate area is also reset to its initial state when the stop state switches to the playback ready state and when playback of a new track begins; and the three-second error rate area is also reset to its initial state when the stop state switches to the playback ready state and every three seconds thereafter (steps SP7000, SP7003, SP8000, SP8003, SP8004, SP9000, SP9003, SP9004).

When the controller 16 receives an error signal from the video decoder 8, the audio decoder 11 or the superimposed dialogue decoder 14 (steps SP7001, SP8001, SP9001), it adds 1 to each of the values stored in the per-disc, per-track and three-second error rate storage areas (steps SP7002, SP8002, SP9002). After the addition, if the value in the per-disc error rate storage area exceeds a predefined threshold, the controller 16 determines that the DSM 1 has a large number of defects (step SP7004) and enters the stop state.

If the value in the per-track error rate storage area exceeds a predefined threshold (step SP8005), the controller 16 determines that the track has many defects and temporarily interrupts playback in order to start playback of the next track (steps SP8006, SP8007). However, it temporarily interrupts playback and enters the stop state if it finds from the TOC data that no next track exists. If the value in the three-second error rate storage area exceeds a predefined threshold (step SP9005), the controller 16 sends the video decoder 8 and the superimposed dialogue decoder 14 a command to suppress output to the display screen, and the audio decoder 11 a command to suppress its output, for the next three seconds (step SP9006).
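
The three error-rate areas can be modelled as plain counters with thresholds, reset and incremented as described above. A minimal sketch; the threshold values and helper names are assumptions, since the disclosure only calls the thresholds predefined:

/* Assumed thresholds; the disclosure only says "predefined". */
#define DISC_ERROR_LIMIT      1000
#define TRACK_ERROR_LIMIT      100
#define WINDOW_ERROR_LIMIT      30     /* errors within the last three seconds */

static int disc_errors, track_errors, window_errors;

extern void enter_stop_state(void);
extern void play_next_track_or_stop(void);
extern void mute_outputs_for_three_seconds(void);

void on_playback_ready(void)    { disc_errors = 0; }     /* SP7000, SP8000, SP9000 */
void on_new_track(void)         { track_errors = 0; }    /* SP8003, SP8004 */
void on_three_second_tick(void) { window_errors = 0; }   /* SP9003, SP9004 */

/* Called whenever a decoder reports an error (SP7001, SP8001, SP9001). */
void on_decoder_error(void)
{
    ++disc_errors; ++track_errors; ++window_errors;      /* SP7002, SP8002, SP9002 */

    if (disc_errors > DISC_ERROR_LIMIT)   { enter_stop_state(); return; }        /* SP7004 */
    if (track_errors > TRACK_ERROR_LIMIT) { play_next_track_or_stop(); return; } /* SP8005..SP8007 */
    if (window_errors > WINDOW_ERROR_LIMIT)
        mute_outputs_for_three_seconds();                                        /* SP9005, SP9006 */
}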

(3-14) Identification of the track currently being played.

In the steady playback state, the controller 16 loads the sector number data from the demultiplexer 5 when it receives from the demultiplexer 5 a signal indicating that a sector number has been detected. The controller compares the loaded sector number data with the first and end sector numbers of each track in the TOC data shown in Fig. 5 to identify the track currently being reproduced. If the identified track differs from the track reproduced so far, the controller 16 sends the visual display device 19 and the post-processor 15 a command to light an indicator showing that the playing track has changed and/or that the current track number has changed, and to show this on the display screen.
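
Identifying the track being reproduced amounts to finding the TOC entry whose first and end sector numbers bracket the sector number reported by the demultiplexer, as in the following sketch (the track_info record is an assumption modelled on the track information of Fig. 5):

#include <stdint.h>
#include <stddef.h>

struct track_info {                 /* assumed per-track TOC record, cf. Fig. 5 */
    uint32_t track_number;
    uint32_t first_sector;
    uint32_t end_sector;
};

/* Return the number of the track containing the given sector, or 0 if none does. */
uint32_t track_for_sector(const struct track_info *toc, size_t n_tracks, uint32_t sector)
{
    for (size_t i = 0; i < n_tracks; ++i) {
        if (sector >= toc[i].first_sector && sector <= toc[i].end_sector)
            return toc[i].track_number;
    }
    return 0;
}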

In addition, the controller 16 sends the demultiplexer 5 a demultiplexing stop command if it finds that playback of the last track has been completed. The controller 16 then waits for the underflow error signals indicating that the video code buffer 6, the audio code buffer 9 and the superimposed dialogue code buffer 12 have all become empty, and then enters the stop state.

In the steady playback state, the controller 16 also loads subcode data from the subcode decoder 21, just as it loads the sector numbers from the demultiplexer 5. As with the sector number data read from the demultiplexer 5, the controller 16 compares the sector number in the loaded subcode data with the first and end sector numbers of each track in the TOC data shown in Fig. 5 to identify the track number of the data currently being input to the error correction means 3. If the identified track differs from the track being reproduced, the controller enters the playback ready state in order to play the next track to be played in that order.

In the steady playback state, the controller 16 enters the stop state if it receives a stop command from the user input device 18 or the external interface 17. In the steady playback state, the controller 16 enters the search state if it receives a search command from the user input device 18 or the external interface 17. In the steady playback state, the controller 16 enters the paused state if it receives a pause command from the user input device 18 or the external interface 17.

(3-15) Control of the superimposed dialogue decoder.

Superimposed dialogue data is encoded screen by screen. The superimposed dialogue data header contained in the leading data of a superimposed dialogue screen stores a DTSS indicating the decoding start time of that screen. The leading data of each superimposed dialogue screen also stores a time duration indicating how long the superimposed dialogue screen is to be displayed. A DTSS is not stored in superimposed dialogue data headers other than the leading data of each screen, which is why the DTSS search described below is performed for each screen.

Fig. 31 shows the flow of processing produced by the controller 16 to control the superimposed dialogue decoder in the steady playback state. In the steady playback state, the controller 16 checks the decoding start time whenever it receives a DTSS detection signal from the DTSS detector 25. First, it reads the detected DTSS from the DTSS detector 25 and the current STC value from the STC register 23 (steps SP33, SP34). It then compares the read DTSS with this STC (step SP35). If DTSS < STC, it determines that the decoding start time has been missed and clears the superimposed dialogue buffer (step SP43). The controller then issues a DTSS search command to the DTSS detector 25 and the superimposed dialogue decoder 14 (step SP30). After that it waits for a DTSS detection signal from the DTSS detector 25 (step SP31), and when a DTSS is detected it checks the decoding start time for the next superimposed dialogue screen.

If DTSS = STC, the controller determines that decoding should begin and issues a command to decode the data for one screen. Furthermore, if DTSS > STC, it determines that it is too early to start decoding, waits, and then performs the same operation as when DTSS = STC (steps SP36, SP37, SP38, SP39). Upon receiving the decode command, the superimposed dialogue decoder 14 decodes the data for one screen, which it receives from the superimposed dialogue code buffer 12 through the DTSS detector 25, and stores the result in its internal frame memory. It then starts outputting the data to the post-processor 15.

Next, the controller 16 waits until the STC reaches DTSS + duration (steps SP40, SP41). During this wait, the visual display of the superimposed dialogue screen continues. When the STC reaches DTSS + duration, the controller issues a display stop command to the superimposed dialogue decoder 14 (step SP42) to stop the visual display of the superimposed dialogue screen. The DTSS corresponding to the leading data of the next superimposed dialogue screen may be detected while the controller 16 is waiting for the STC to reach DTSS + duration. In this case, the controller performs no processing until the STC reaches DTSS + duration and the visual display of the current superimposed dialogue screen is stopped.
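
The timing check for one superimposed dialogue screen can be summarised as comparing the detected DTSS with the running STC and keeping the screen displayed until the STC reaches DTSS plus the duration. A hedged sketch of that loop, with assumed helper names and with the waits shown as placeholders rather than real busy loops:

#include <stdint.h>

typedef uint64_t timestamp_t;

extern timestamp_t read_stc(void);                      /* STC register 23 */
extern void subtitle_decoder_decode_one_screen(void);   /* superimposed dialogue decoder 14 */
extern void subtitle_decoder_stop_display(void);
extern void subtitle_search_next_dtss(void);            /* re-issue the DTSS search command (SP30) */

/* Handle one detected superimposed dialogue screen with decode time dtss and display time duration. */
void handle_subtitle_screen(timestamp_t dtss, timestamp_t duration)
{
    if (dtss < read_stc()) {            /* SP35, SP43: decoding start time already missed */
        subtitle_search_next_dtss();
        return;
    }

    while (read_stc() < dtss)           /* SP36..SP39: too early, wait for DTSS = STC */
        ;                               /* placeholder wait */

    subtitle_decoder_decode_one_screen();

    while (read_stc() < dtss + duration)   /* SP40, SP41: keep the screen displayed */
        ;                                  /* placeholder wait */
    subtitle_decoder_stop_display();       /* SP42 */
}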

After the visual display of the superimposed dialogue screen ends, the controller reads the DTSS of the next superimposed dialogue screen from the DTSS detector 25 in order to check its decoding start time, if a DTSS corresponding to the leading data of the next screen has been detected. While the controller is waiting for DTSS = STC, after loading the DTSS and determining that DTSS > STC, an I-picture detection signal may be received from the video decoder 8, and the DTSV corresponding to that I-picture may cause the STC register to be reset. The STC count may then jump so that DTSS < STC, with the result that DTSS = STC is never reached no matter how long the controller waits.

Thus, if DTSS < STC occurs (step SP37) while the controller is waiting for DTSS = STC after determining that DTSS > STC, and if (STC - DTSS) is less than a threshold such as the time duration, this superimposed dialogue screen still needs to be displayed, and the superimposed dialogue decoder 14 is made to start decoding the one screen. However, if (STC - DTSS) is larger, the controller 16 determines that there is a serious synchronization error and issues a DTSS search command to the superimposed dialogue decoder 14 and the DTSS detector 25 (step SP30). When a DTSS is detected, it checks the decoding start time for that superimposed dialogue screen.

(3-16) The search state.

The search state is an operation for reproducing only the I-pictures that appear at intervals in the video data stored in the DSM 1, within a shorter period of time than normal playback. Selective display of only the I-pictures in the same direction as normal playback is called forward search, and selective display of I-pictures in the direction opposite to normal playback, that is, in the direction in which the playback time becomes successively earlier, is called reverse search.

Fig. 32 shows the flow of processing by the controller 16 in its search state. When the search state is entered, the controller 16 sends the video decoder 8 a signal indicating that the search state has been entered (step SP50). Upon receiving this signal, the video decoder 8 decodes only the I-picture data among the data loaded from the DTSV detector 7, and passes over the remaining data, that is, the P- and B-picture data, without decoding it. Each decoded I-picture is displayed immediately after decoding.

The controller also sends the audio decoder 11 a command to stop decoding and to suppress its sound output, and sends the superimposed dialogue decoder 14 a command to stop decoding and a command to stop its display during the search.

For a forward search, the controller 16 sends the driver block 2 a command to perform a track jump in the forward direction of pickup movement, while for a reverse search it sends the driver block 2 a command to perform a reverse track jump (step SP53). In response to the forward or reverse track jump command, the driver block 2 moves the pickup so that, for the forward track jump command, data can be read from sectors with larger numbers relative to the current pickup position, while for the reverse track jump command data can be read from sectors with smaller numbers relative to the same position.

The amount of pickup movement during a track jump need not be set precisely. That is, unlike search commands, in which the number of sectors the pickup must move is rigidly specified, these commands do not require an exact indication of the jump size, owing to the combination of the DSM 1 and the driver block 2; they may indicate only an approximate direction and an approximate amount of movement, since the desired jump is quick and involves a very large amount of movement.

When the pickup movement is completed, the subcode in the format shown in Fig. 2 is loaded into the subcode decoder 21. The controller 16 loads the sector number data and the playback prohibit flag from the subcode loaded in the subcode decoder 21 (step SP54).

If the loaded playback prohibit flag is set (step SP55), meaning that reproduction is prohibited, the controller 16 determines that after the track jump the pickup has entered the lead-in area, the lead-out area or the TOC area, and enters the stop state. Otherwise, the multiplexed data at the sector number read after the track jump is supplied to the video decoder 8, the audio decoder 11 and the superimposed dialogue decoder 14.

Since the video decoder 8 is in the search state, it looks for I-picture headers in order to reproduce only I-pictures. Upon detecting an I-picture header, the video decoder 8 sends the controller 16 a signal informing it that an I-picture header has been found, and quickly decodes the I-picture so that it is output immediately after decoding. If it then finds a P- or B-picture header, it informs the controller 16 of that detection and begins searching for the next I-picture header instead of decoding the P- or B-picture data. The controller 16 waits for the signal indicating that an I-picture header has been found (step SP56). Upon receiving the I-picture header detection signal, it starts waiting for the next P- or B-picture header detection signal (step SP58). Upon receiving the P- or B-picture header detection signal, the controller 16 determines that decoding of the I-picture is complete. Then, for a forward search, the controller 16 again sends the driver block 2 a command for a forward track jump of the pickup, while for a reverse search it sends the driver block 2 a command for a reverse track jump of the pickup, to repeat the above-mentioned search operation (step SP53).

During a search, audio data and superimposed dialogue data are still loaded into the audio code buffer 9 and the superimposed dialogue code buffer 12, respectively. However, because the audio decoder 11 and the superimposed dialogue decoder 14 have stopped decoding, the audio code buffer 9 and/or the superimposed dialogue code buffer 12 may become full, preventing the demultiplexer 5 from transferring data to the video code buffer 6, the audio code buffer 9 and the DTSS detector 25.

Therefore, in the search state the controller 16 periodically clears the audio code buffer 9 and the superimposed dialogue code buffer 12 each time it receives an I-, P- or B-picture header detection signal from the video decoder 8 (steps SP57, SP58). In the search state, the controller 16 enters the state of determining how to synchronize the start if it receives a search release command from the user input device 18 or the external interface 17. In the search state, the controller 16 enters the stop state if it receives a stop command from the user input device 18 or the external interface 17.

(3-17) The paused state.

Fig. 33 shows the flow of processing by the controller 16 in its paused state. On entering the paused state, the controller 16 starts waiting for the vertical synchronization signal from the vertical synchronization signal generator (step SP70). On detecting the vertical synchronization signal, it sends a pause command to the video decoder 8 and a stop command to the audio decoder 11, and at the same time sends the STC counter circuit a command to stop automatic counting of the STC (steps SP71, SP72, SP73).

Upon receiving the pause command, the video decoder 8 stops decoding and continues to display the last decoded screen. In this case, if the decoded image is an interlaced image formed of two fields, one selected field is displayed even at the times when the other field would be displayed, thereby limiting flicker. Upon receiving the stop command, the audio decoder 11 immediately stops decoding.

In the paused state, if a superimposed dialogue screen was being displayed at the moment the normal playback state switched to the paused state, that screen continues to be displayed; otherwise, no superimposed dialogue screen is displayed. In the paused state, upon receiving a pause release command from the user input device 18 or the external interface 17, the controller 16 starts waiting for the vertical synchronization signal from the vertical synchronization signal generator (steps SP74, SP75). On detecting the vertical synchronization signal, it issues a pause release command to the video decoder 8 and a decoding start command to the audio decoder 11, and at the same time sends the STC counter circuit a command to resume automatic counting of the STC (steps SP76, SP77, SP78). The controller 16 then enters the normal playback state.

In the paused state, the controller 16 enters the frame feed state if it receives a frame feed command from the user input device 18 or the external interface 17.

(3-18) The frame feed state.

Fig. 34 shows the flow of processing by the controller 16 in its frame feed state. On entering the frame feed state, the controller 16 first sends a clear command to the audio code buffer 9 (step SP90). This is done to avoid underflow of the audio code buffer during the decoding of the next single frame.

The controller then causes the video decoder 8 to decode one frame. That is, the controller waits for the vertical synchronization signal from the vertical synchronization signal generator circuit 22 (step SP91), sends the video decoder 8 a decoding start command in response to the vertical synchronization signal (step SP92), and issues a stop command in response to the next vertical synchronization signal (steps SP93, SP94). The controller then advances the STC forward by one frame (step SP95). That is, the controller 16 reads the STC from the STC register 23, adds the display time of one frame to this STC, and then writes the result back into the STC register 23. Next, the controller 16 determines whether the user input device 18 or the external interface 17 has issued a frame feed release command (step SP96), and if not, repeats the above processing.
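
The frame feed operation advances the picture and the STC together: decode one frame between two vertical synchronization signals, then add the display time of one frame to the STC register. A minimal sketch; the tick value for one frame is an assumption:

#include <stdint.h>

#define TICKS_PER_FRAME 3003ULL     /* assumed display time of one frame in STC ticks */

extern void wait_vertical_sync(void);           /* circuit 22 */
extern void video_decoder_start(void);
extern void video_decoder_stop(void);
extern uint64_t stc_read(void);                 /* STC register 23 */
extern void stc_write(uint64_t value);

/* Advance playback by exactly one frame (steps SP91..SP95). */
void feed_one_frame(void)
{
    wait_vertical_sync();           /* SP91 */
    video_decoder_start();          /* SP92: decode one frame */
    wait_vertical_sync();           /* SP93 */
    video_decoder_stop();           /* SP94 */
    stc_write(stc_read() + TICKS_PER_FRAME);   /* SP95: move the STC forward by one frame */
}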

In this case, if a superimposed dialogue screen is being displayed, the controller issues a display stop command to the superimposed dialogue decoder 14 when the STC reaches DTSS + duration, thereby ending the display of that superimposed dialogue screen. Otherwise, it sends the superimposed dialogue decoder 14 a command to display the next superimposed dialogue screen when DTSS <= STC is satisfied. After completing the above processing, the controller returns from the frame feed state to the paused state.

As described above, the present invention provides a data playback device and a data storage medium for reproducing multiplexed video data, audio data and superimposed dialogue data compressed at a variable rate and for performing various functions.

Industrial applicability.

The data storage medium according to this invention is applicable to digital video discs (DVD) in which bit streams compressed using MPEG are stored. In addition, the data playback device according to this invention is applicable to a playback device for playing such DVDs.

The caption to Fig. 1:

2. Driver block

8. Video decoder

9. Audio code buffer

10. The DTSA detector

11. Audio decoder

12. Superimposed dialogue code buffer

13. The DTSS detector

14. Superimposed dialogue decoder

15. Post-processor

16. Controller

17. External interface

18. User input device

19. Visual display device

20. Storage device

21. Subcode decoder

22. Vertical synchronization signal generator circuit

23. STC register

24. STC counter circuit

25. ROM

26. Ring buffer control circuit

27. Video output

28. Audio output

The caption to Fig. 2:

1. Bytes

2. Subcode data

3. Multiplexed data

4. Error correction data

The caption to Fig. 3:

1. Sector number

2. Lead-in area

3. TOC data

4. Fixed

5. Reserved for a different format

6. Variable

7. Data of the 1st track

8. Data of the 2nd track

9. Data of the 3rd track

10. Data of the N-th track

11. Lead-out area

The caption to Fig. 4:

1. Sector number

2. Lead-in area

3. TOC data

4. Fixed

5. Reserved for a different format

6. Variable

11. Lead-out area

12. Header track

13. Final track

The caption to Fig. 5:

1. TOC header

2. TOC size

3. Number of tracks

4. Track number

5. First sector number

6. End sector number

7. Header track flag

8. Final track flag

9. Playback initialization flag

10. Image compression flag

11. Audio compression flag

12. Superimposed dialogue compression flag

13. Flag indicating that the compression flags are valid

14. Track information (first track)

15. Track information (second track)

16. Track information (N-th track)

17. Entry point table header

18. Number of entry points

19. Entry point table

20. Entry point information (1st entry point)

21. Entry point information (M-th entry point)

22. Entry point sector number

23. Entry point time code

24. TOC end mark

The caption to Fig. 6:

1. TOC header

2. TOC size

3. Number of tracks

4. Track number

5. First sector number

6. End sector number

7. Playback prohibition flag

8. Video compression flag

12. Track information (first track)

13. Track information (second track)

14. Track information (N-th track)

15. Entry point table header

16. Number of entry points

17. Entry point table

18. Entry point information (1st entry point)

19. Entry point information (M-th entry point)

20. Entry point sector number

22. Entry point time code

The caption to Fig. 7A, 7B, 7C, 7D:

1. Sector N

2. Sector number

3. System header

4. Video data header

5. Video data

6. Audio data header

7. Audio data

8. Superimposed dialogue data header

9. Superimposed dialogue data

10. Video data header

11. Error flag

The caption to Fig. 8:

1. System header start code

2. System header length

3. User data

The caption to Fig. 9:

1. Video data header format

2. Video data header start code

3. Video data length

4. DTSV coding flag

5. DTSV coding flag = 0

6. Audio data header format

7. Audio data header start code

8. Audio data length

9. DTSA coding flag

10. DTSA coding flag = 1

11. Superimposed dialogue data header format

14. DTSS coding flag

15. DTSS coding flag = 0

The caption to Fig. 10:

1. Bytes

2. 80 bytes

3. Sector sync (6 bytes)

4. Sector number (4 bytes)

5. Time code (8 bytes)

6. Subcode content ID

7. Playback prohibition flag

8. 59 bytes

The caption to Fig. 11:

1. Start of processing (power on)

2. Initialization

3. Read the TOC

4. Stop

5. Playback preparation

6. Determine the synchronization mode

7. DTSV and DTSA found

8. Start of audio and video synchronization

9. Start of video-only synchronization

10. Start of audio-only synchronization

11. Start of superimposed-dialogue-only synchronization

12. Nothing found

13. DTSV found, DTSA not found

14. DTSA found, DTSV not found

15. Only DTSS found

16. Steady playback

17. Search

18. Pause

19. Frame feed

20. Synchronization determination state or stop state

21. Playback state or pause state

22. Stop state

The caption to Fig. 12:

30. RAM

33. Controller

34. Data addition circuit

The caption to Fig. 13:

1. Beginning

2. Power on

5. Is an error detected?

6. Yes

7. Display a memory error message

8. Scan request

9. Power off

10. Loaded?

11. End

12. Issue an alarm

13. Go to the TOC read state

14. No

The caption to Fig. 14:

1. Beginning

2. Command the error correction means to enter the TOC read mode

3. Set TOC #n = 1

4. Issue a command to read TOC #n

5. Is error correction complete?

6. No

7. Yes

8. Command the ring buffer control circuit to write the TOC

9. Is the entire set of reads complete?

10. Is loading complete?

11. Return to the stop state

12. Display "TOC read error"

13. Issue an eject command

14. Return to the stop state

The caption to Fig. 15:

1. Beginning

2. Is the TOC loaded?

3. No

4. Yes

5. Is the final track detected?

6. Is the header track detected?

7. Issue a command to play the final track

8. Issue a command to play the header track

9. Issue a command to perform a stop operation

10. Clear the buffer

11. Has a playback start command been issued?

12. Issue a command to display an indication of the ready state

The caption to Fig. 16:

3. Initialization

4. Command the error correction means to enter normal playback mode

5. Command the drive unit to search

6. Issue a command to start decompression

7. Go to the synchronization start

The caption to Fig. 17:

1. Beginning

2. Read the track information from the TOC

3. Is the compression flag valid flag set?

4. No

5. Yes

6. Load DTSV, DTSA and DTSS

7. Is the video compressed?

8. Is the audio compressed?

9. Is DTSV present?

10. Is DTSA present?

11. Is DTSS present?

12. Is the superimposed dialogue compressed?

13. Return to the stop state

14. Go to the start of video and audio synchronization

15. Go to the start of superimposed-dialogue-only synchronization

16. Go to the start of audio-only synchronization

17. Go to the start of video-only synchronization

The caption to Fig. 18:

1. Beginning

2. Issue a command to stop decoding and to search for an I-picture header

3. Load the SCR from the demultiplexer

4. Is the video code buffer full?

5. No

6. Yes

7. Has video synchronization been lost?

12. Start the STC count

13. Load the STC

14. Cancel the video decoding stop

15. Skip to the next I-picture

16. Issue a command to decode the superimposed dialogue

17. Is video synchronization detected?

18. Go to the playback ready state

19. Cancel the video decoding stop

20. Set DTSV into the STC

The caption to Fig. 19:

1. Beginning

2. Issue a command to mute the output and search for DTSA

3. Has DTSV been loaded into the video decoder?

4. Has the audio code buffer run out of data?

5. Load DTSV

6. Is DTSA present?

7. Is the STC counting?

8. Issue a command to search for DTSA

9. Load the STC

10. Issue a command to search for DTSA

11. Cancel the video decoding stop

12. Go to the steady playback state

The caption to Fig. 20:

1. Beginning

2. Issue a command to stop decoding and to search for an I-picture

3. Load the SCR from the demultiplexer

4. Has the code buffer overflowed?

5. No

6. Yes

7. Is the video code buffer full?

8. Is DTSV present?

9. Set DTSV into the STC

10. Is video synchronization detected?

11. Is the STC counting?

16. Issue a command to decode the superimposed dialogue

17. Is video synchronization detected?

18. Go to the steady playback state

19. Cancel the video decoding stop

20. Set DTSV into the STC

The caption to Fig. 21:

1. Beginning

2. SCR method

3. MRFB method

4. Issue a command to mute the output and search for DTSA

5. Load the SCR from the demultiplexer

6. Has the code buffer overflowed?

7. No

8. Yes

9. Has the audio code buffer run out of data?

10. Is DTSA present?

11. Set DTSA into the STC

12. Is the STC counting?

13. Start the STC count

14. Cancel the audio decoding stop

15. Issue a command to start decoding the superimposed dialogue

16. Load the STC

17. Go to the steady playback state

18. Issue a command to search for DTSA

The caption to Fig. 22:

1. Beginning

2. Is the STC counting?

3. No

4. Yes

5. Control the superimposed dialogue display in the steady playback state

6. Issue a command to search for DTSS

7. Is DTSS detected?

8. Load DTSS

9. Is the superimposed dialogue code buffer full?

10. Set DTSS into the STC

11. Is synchronization detected? Display the superimposed dialogue in the steady playback state

The caption to Fig. 23:

1. Beginning

2. Is an I-picture detected?

3. No

4. Yes

5. Load DTSV

6. Load the STC

7. Calculate DTSV - STC

8. Load DTSV - STC

9. End

The caption to Fig. 24:

1. Beginning

2. Is DTSA detected?

3. No

4. Yes

5. Load DTSA

6. Load the STC

7. Calculate DTSA - STC

8. Load DTSA - STC

9. End

The caption to Fig. 25:

1. Beginning

2. Is an I-picture detected?

3. No

4. Yes

5. Load DTSV

6. Load the STC

7. Calculate |DTSV - STC|

8. Clear the video and audio buffers

9. Set DTSV into the STC

10. Go to the audio and video synchronization state

11. Load DTSV - STC = 0

The caption to Fig. 26:

1. Beginning

2. Has a new DTSV - STC been loaded?

3. No

4. Yes

5. Load DTSV - STC

7. Clear the video and audio buffers

8. Issue a command to stop decoding

9. Issue a skip command

10. Issue an alarm command

11. Go to the start of video and audio synchronization

The caption to Fig. 27:

1. Beginning

2. Has a new DTSA - STC been loaded?

3. No

4. Yes

5. Load DTSA - STC

6. Clear the audio buffer

10. Go to the start of video and audio synchronization

The caption to Fig. 28:

1. Beginning

2. Initialize the counter

3. Has an error been received?

4. No

5. Yes

6. Add 1 to the counter value

7. Has a transition from the stop state to the playback ready state occurred?

8. Count > Th?

9. Return to the stop state

The caption to Fig. 29:

1. Beginning

2. Initialize the counter

3. Has an error been received?

4. No

5. Yes

6. Add 1 to the counter value

7. Has a transition from the stop state to the playback ready state occurred?

8. Has a transition to a new track occurred?

9. Count > Th?

10. Is the next track detected?

11. Issue a command to play the next track

12. Return to the stop state

The caption to Fig. 30:

1. Beginning

2. Initialize the counter

3. Has an error been received?

4. No

5. Yes

6. Add 1 to the counter value

7. Has a transition from the stop state to the playback ready state occurred?

8. Have 3 s elapsed?

9. Count > Th?

10. Stop issuing commands

The caption to Fig. 31:

3. Is DTSS detected?

4. No

5. Yes

6. Load DTSS

7. Load the STC

8. Load the STC

9. Clear the superimposed dialogue buffer

10. Issue a command to start decoding

11. Load the STC

12. DTSS + duration > STC?

13. Issue a command to stop decoding

The caption to Fig. 32:

1. Beginning

2. Command the video decoder to enter search mode

3. Command the audio decoder to stop decoding

4. Command the superimposed dialogue decoder to stop decoding

5. Issue a track jump command

6. Read the sector number and the playback prohibition flag

7. Is playback prohibited?

8. No

9. Yes

11. Return to the stop state

12. Is an I-picture header detected?

13. Clear the audio code and superimposed dialogue code buffers

14. Is a P- or B-picture header detected?

15. Clear the audio code and superimposed dialogue code buffers

The caption to Fig. 33:

1. Beginning

2. Is video synchronization detected?

3. No

4. Yes

5. Issue a command to stop the video decoder

6. Issue a command to stop the audio decoder

7. Stop the STC count

8. Is a stop cancel command detected?

11. Command the audio decoder to start decoding

12. Issue a command to start the STC count

13. Go to the steady playback state

The caption to Fig. 34:

1. Beginning

2. Clear the audio code buffer

3. Is video synchronization detected?

4. No

5. Yes

6. Command the decoder to start decoding

7. Is video synchronization detected?

8. Command the decoder to stop decoding

9. Update the STC register

10. Is a frame feed cancel command detected?

11. Return to the pause state

The caption to Fig. 35:

1. Data input

2. Code request

3. Code request signal

4. Code

5. Output

6. Separation of multiplexed data

7. Ring buffer control

101. Optical disc

102. Reading device

103. Demodulator

104. Sector detector

107. Ring buffer memory

109. Header separation circuit

111. Separation circuit control circuit

114. Video decoder

116. Audio decoder

117. Tracking circuit

118. Track jump determination circuit

Description of reference positions

1 - DSM, 2 - drive unit, 3 - error correction means, 4 - ring ..., 8 - video code decoder, 9 - audio code buffer, 10 - audio decoding start time (DTSA) detector, 11 - audio code decoder, 12 - superimposed dialogue code buffer, 13 - superimposed dialogue decoding start time (DTSS) detector, 14 - superimposed dialogue code decoder, 15 - postprocessor, 16 - controller, 17 - external interface, 18 - user input device, 19 - information display means, 20 - storage device, 21 - subcode decoder, 22 - vertical synchronization signal generating circuit, 23 - system time clock (STC) register, 24 - system time clock (STC) counting circuit.

1. A data storage medium in which data is read in sectors, having a first region with negative sector numbers and a second region with positive sector numbers, wherein information on the contents of the stored data is stored in the first region, and multiplexed data with one or more types of data multiplexed therein is stored in the second region.
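Claim 1 describes an address split in which the content information occupies negatively numbered sectors and the multiplexed program data occupies positively numbered sectors. The following sketch only illustrates a reader that dispatches on the sign of the sector number; the sector size, the function names, the TOC sector range and the idea of modelling the medium as a dict are assumptions introduced here.

```python
# Hypothetical model of the layout in claim 1: content information (TOC) lives
# at negative sector numbers, multiplexed data at positive sector numbers.

SECTOR_SIZE = 2048  # assumed sector payload size in bytes


def read_sector(medium: dict, sector_number: int) -> bytes:
    """Return the payload of one sector from a medium modelled as a dict."""
    return medium[sector_number]


def read_toc(medium: dict, first_toc_sector: int = -1, toc_sectors: int = 4) -> bytes:
    """Collect the content information from the negatively numbered region."""
    assert first_toc_sector < 0, "content information is stored at negative sector numbers"
    return b"".join(read_sector(medium, n)
                    for n in range(first_toc_sector, first_toc_sector - toc_sectors, -1))


def read_program_data(medium: dict, start: int, count: int) -> bytes:
    """Read multiplexed data from the positively numbered region."""
    assert start > 0, "multiplexed data is stored at positive sector numbers"
    return b"".join(read_sector(medium, n) for n in range(start, start + count))
```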

2. A data playback device for a data storage medium in which data is stored in sectors, including reading means for reading data stored in regions with positive sector numbers of the data storage medium on the basis of the content information of the stored data reproduced from regions with negative sector numbers of the data storage medium.

3. A data storage medium which stores multiplexed data with one or more types of data multiplexed therein, wherein multiplexing information indicating the multiplexing condition in a predetermined block of data is stored at a predetermined position.

4. A data playback device for a data storage medium which stores multiplexed data with one or more types of data multiplexed therein, including means for reading data from the data storage medium, a set of decoding means for decoding the multiplexed data, with one or more types of data multiplexed therein, read by the reading means, and control means which determines the multiplexing state of the data in a predetermined block of data on the basis of the multiplexing information stored for that block and controls the set of decoding means in accordance with the determined multiplexing state.

6. A data playback device according to claim 4, wherein the control means determines the multiplexing state depending on whether decoding start time information for each type of data is detected within a specified duration.

7. A data playback device according to claim 4, wherein the control means selects the playback start procedure for a predetermined block of data depending on whether video data, audio data and superimposed dialogue data are multiplexed within the predetermined block of data.

8. A data playback device according to claim 7, characterized in that it further includes a reference clock generator for counting a predetermined clock, the set of decoding means includes a video decoder for decoding video data and an audio decoder for decoding audio data, and, if the predetermined block of data contains only video data, the control means passes to the reference clock generator a command to operate and only to the video decoder a command to start decoding, and, if audio data is detected after the video decoder has started decoding, passes to the audio decoder a command to start decoding the audio data in synchronism with the reference clock generator.

9. A data playback device according to claim 7, wherein, if the predetermined block of data contains only audio data, the control means passes to the reference clock generator a command to operate and only to the audio decoder a command to start decoding, and, if video data is detected after the audio decoder has started decoding, passes to the video decoder a command to start decoding the video data in synchronism with the reference clock generator.

10. A data playback device according to claim 7, characterized in that the set of decoding means includes a video decoder for decoding video data and a superimposed dialogue decoder for decoding superimposed dialogue data, and, if the predetermined block of data contains only superimposed dialogue data, the control means passes to the reference clock generator a command to operate and only to the superimposed dialogue decoder a command to start decoding, and, if video data is detected after the superimposed dialogue decoder has started decoding, passes to the video decoder a command to start decoding the video data in synchronism with the reference clock generator.

11. A data playback device according to claim 7, characterized in that the set of decoding means includes an audio decoder for decoding audio data and a superimposed dialogue decoder for decoding superimposed dialogue data, and, if the predetermined block of data contains only superimposed dialogue data, the control means passes to the reference clock generator a command to operate and only to the superimposed dialogue decoder a command to start decoding, and, if audio data is detected after the superimposed dialogue decoder has started decoding, passes to the audio decoder a command to start decoding the audio data in synchronism with the reference clock generator.

12. A data storage medium which stores multiplexed data with one or more types of data multiplexed therein, characterized in that a set of pieces of positional information indicating the storage positions of a set of entry points in the data storage medium, and a set of pieces of time information for that set of entry points, are stored at predetermined positions so that the positional information corresponds to the time information.
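Claim 12 pairs each entry point's storage position with its time information so that a target time can be mapped to a sector for random access. The sketch below is only an illustration of that lookup; the 90 kHz time unit, the field names and the binary-search helper are assumptions, not the patented format.

```python
# Hypothetical sketch of the entry point table of claim 12: each entry pairs a
# storage position (sector number) with a time code, so a requested time can be
# resolved to a sector for random access or search.
import bisect
from dataclasses import dataclass


@dataclass
class EntryPoint:
    sector_number: int   # positional information of the entry point
    time_code: int       # time information, e.g. in 90 kHz ticks (assumed unit)


def find_entry_point(table: list[EntryPoint], target_time: int) -> EntryPoint:
    """Return the last entry point whose time code does not exceed target_time."""
    times = [e.time_code for e in table]          # table is assumed sorted by time
    i = bisect.bisect_right(times, target_time) - 1
    return table[max(i, 0)]


# Example: jump to roughly 10 seconds (900000 ticks at 90 kHz).
table = [EntryPoint(1000, 0), EntryPoint(4000, 450000), EntryPoint(7500, 900000)]
print(find_entry_point(table, 900000).sector_number)   # -> 7500
```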

13. A data storage medium according to claim 12, characterized in that it stores multiplexed data with one or more types of data multiplexed therein, and that the stored data includes data compressed at a variable rate.

15. A data playback device according to claim 14, characterized in that it has means for informing the user of an error when an error is encountered during testing of the storage device.

16. A data playback device according to claim 14, characterized in that the control means does not accept further commands from the user and/or does not play back data if an error is detected during testing of the storage device.

17. A device for reproducing data from a data storage medium having two error correction codes with different interleaving directions stored therein, including means for reproducing data from the data storage medium and error correction means which uses the two error correction codes with different interleaving directions to correct errors in the data reproduced by the reproducing means a variable number of times depending on the operating state or the type of the reproduced data.

18. A data playback device according to claim 17, characterized in that the data storage medium contains content information for the stored data, and the error correction means performs error correction processing for the content information a greater number of times than for other data.

19. A data playback device according to claim 17, characterized in that the data storage medium contains content information for the stored data, and the error correction means repeats the error correction processing for the content information a specified number of times until the errors are completely corrected.

20. A data playback device according to claim 18 or 19, characterized in that it further contains control means for passing to the reproducing means a command to re-read data from the position where the error occurred, if the error is not corrected after the specified number of error correction passes.

21. A data playback device according to claim 17, characterized in that the data storage medium stores a plurality of pieces of content information, and the device contains control means for passing to the reproducing means, if an error in one piece of content information cannot be corrected, a command to read another piece of content information.

22. A data playback device according to claim 17, characterized in that the error correction means includes means for attaching an error flag to data whose errors cannot be corrected, the playback device has a counter for counting the error flags generated within a given duration, and it includes control means for skipping data or terminating playback prematurely depending on the count value of said counter.
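Claim 22 counts uncorrectable-error flags within a window and chooses between skipping the flagged data and ending playback early. The sketch below is only an illustration of that decision; the class name, the window handling and the threshold value are assumptions.

```python
# Hypothetical sketch of the error monitoring of claim 22: error flags raised
# within an observation window are counted, and playback either continues
# (skipping the flagged data) or terminates once the count exceeds a threshold.

ERROR_THRESHOLD_TH = 5  # illustrative threshold "Th"


class ErrorMonitor:
    def __init__(self, threshold: int = ERROR_THRESHOLD_TH):
        self.threshold = threshold
        self.count = 0

    def start_window(self) -> None:
        """Reset the counter at the start of an observation window."""
        self.count = 0

    def on_uncorrectable_error(self) -> str:
        """Register one error flag and decide what the controller should do."""
        self.count += 1
        if self.count > self.threshold:
            return "stop_playback"      # too many errors: terminate prematurely
        return "skip_data"              # otherwise skip the flagged data and continue


monitor = ErrorMonitor()
monitor.start_window()
print([monitor.on_uncorrectable_error() for _ in range(7)])
```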

23. A data playback device for a data storage medium in which multiplexed data of at least one type or a combination of types of data encoded at a variable rate is stored, comprising means for reading from the data storage medium the multiplexed data of at least one type or combination of types of data encoded therein at a variable rate, and a buffer storage device for temporarily storing the data read by the reading means, characterized in that it contains a control device, and the buffer storage device also holds the content information of the data stored in the data storage medium.

24. A playback device for a data storage medium with video data, audio data, superimposed dialogue data and/or other data stored therein, comprising means for reading, from the data storage medium, the data and the content information of the data storage medium, and means for decoding the video data, audio data, superimposed dialogue data and/or other data, characterized in that it contains control means for automatically passing, when the device is powered on, to the reading means a command to read a predetermined block of data specified by the content information stored in the data storage medium, and for passing to the decoding means a command to decode the video data, audio data, superimposed dialogue data and/or other data stored in the predetermined block of data.

25. A data storage medium with video data, audio data, superimposed dialogue data and/or other data stored therein, wherein the content information of the stored data is stored in a first region and the video data, audio data, superimposed dialogue data and/or other data are stored in a second region, characterized in that the content information includes information specifying a predetermined block of data for automatic playback when the device is powered on.

26. A data playback device for a data storage medium with video data, audio data, superimposed dialogue data and/or other data stored therein, comprising means for reading, from the data storage medium, the data and the content information of the data storage medium, and means for decoding the video data, audio data, superimposed dialogue data and/or other data, characterized in that it contains control means for automatically passing, before the device is placed in the "stop" state, to the reading means a command to read a predetermined block of data specified by the content information stored in the data storage medium, and for passing to the decoding means a command to decode the video data, audio data, superimposed dialogue data and/or other data stored in the predetermined block of data.

27. A data storage medium with video data, audio data, superimposed dialogue data and/or other data stored therein, wherein the content information is stored in a first region and the video data, audio data, superimposed dialogue data and/or other data are stored in a second region, characterized in that the content information includes information specifying a predetermined block of data for automatic playback before the device is placed in the "stop" state.

28. A data playback device for a data storage medium which stores multiplexed data with one or more types of data, including at least video data, multiplexed therein, including means for reproducing data from the data storage medium, at least one decoding means including at least a video decoder for decoding the video data, at least one detection means for detecting at least decoding start time information for the video data, and a reference clock generator for counting a predetermined clock, characterized in that it contains control means for initializing the reference clock generator with the decoding start time information of the video data when automatic counting of the reference clock starts, and for comparing the decoding start time detected by the detection means with the time indicated by the reference clock generator in order to control the decoding means.

29. A data playback device according to claim 28, characterized in that it further has at least one code buffer for temporarily storing at least the video data, the detection means is located between the code buffer and the decoding means, and the detection means detects the decoding start time information stored in the code buffer immediately before decoding and outputs it.

30. A data playback device according to claim 28, characterized in that, before playback is started, the control means selects the way of starting the decoding means depending on whether automatic counting of the reference clock generator has been started.

31. A data playback device according to claim 28, characterized in that it further comprises means for generating vertical synchronization signals, and the control means automatically starts the counting of the reference clock in synchronism with the vertical synchronization signal.

32. A data playback device according to claim 28, characterized in that it further comprises means for generating vertical synchronization signals, and the control means passes to the decoding means a command to start decoding in synchronism with the vertical synchronization signal.

33. A data playback device according to claim 28, wherein the video data conform to ISO 11172 (MPEG1) or ISO 13818 (MPEG2), the video decoder detects the header of an I-picture, and the control means reads from the detection means the decoding start time information corresponding to the detected I-picture in order to replace the value of the reference clock with that value.

34. A device for reproducing data from a data storage medium with video data and audio data multiplexed therein, including means for reading data from the data storage medium, means for separating the multiplexed data read by the reading means into video data and audio data, a video code buffer for temporarily storing the video data separated by the separating means, an audio code buffer for temporarily storing the audio data separated by the separating means, a video decoder for decoding the video data read from the video code buffer, an audio decoder for decoding the audio data read from the audio code buffer, first detection means for detecting decoding start time information for the video data, second detection means for detecting decoding start time information for the audio data, and control means for comparing, when playback is started, the decoding start time information of the video data detected by the first detection means with the decoding start time information of the audio data detected by the second detection means, and for controlling the video decoder and the audio decoder so that decoding of the video data starts before decoding of the audio data.
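The startup ordering described in claim 34 (and elaborated in claim 35 below) can be summarised as: initialise the clock with the video start time, start video decoding first, and release audio decoding once the clock reaches the audio start time. The sketch below is only an illustration under those assumptions; the object interfaces (`set`, `start_counting`, `value`, `tick`, `start_decoding`) are hypothetical names, and the polling loop stands in for waiting on a real hardware clock.

```python
# Hypothetical sketch of synchronized startup: STC is initialised with DTSV,
# video decoding starts first, and audio decoding is released when the running
# STC reaches DTSA.

def start_synchronized_playback(dtsv: int, dtsa: int, stc, video_decoder, audio_decoder):
    stc.set(dtsv)                       # initialise the reference clock with DTSV
    stc.start_counting()
    video_decoder.start_decoding()      # video always starts before audio

    # Poll the STC and release the audio decoder at its own start time.
    while stc.value() < dtsa:
        stc.tick()                      # stand-in for waiting on the real clock
    audio_decoder.start_decoding()
```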

35. A data playback device according to claim 34, characterized in that it further includes a reference clock generator for counting a predetermined clock, the reference clock generator is initialized with the decoding start time information of the video data when automatic counting starts, and decoding of the audio data starts when the value of the reference clock becomes equal to the decoding start time of the audio data.

36. A data playback device according to claim 34, characterized in that it further comprises means for generating vertical synchronization signals, and the control means automatically starts the counting of the reference clock in synchronism with the vertical synchronization signal.

37. A data playback device according to claim 34, characterized in that it further comprises means for generating vertical synchronization signals, and the control means passes to the decoder a command to start decoding in synchronism with the vertical synchronization signal.

38. A data playback device according to claim 34, wherein the video data conform to ISO 11172 (MPEG1) or ISO 13818 (MPEG2), the video decoder detects the header of an I-picture, and the control means reads the decoding start time information corresponding to that I-picture in order to replace the value of the reference clock with this information.

39. A data storage medium with video data conforming to ISO 11172 (MPEG1) or ISO 13818 (MPEG2) stored therein, characterized in that all I-pictures have corresponding decoding start time information.

40. A data playback device for a data storage medium with encoded data including decoding start time information stored therein, including means for reading the encoded data from the data storage medium, means for decoding the encoded data, means for detecting the decoding start time information, and a reference clock generator for counting a predetermined clock, characterized in that it contains control means for comparing the value of the reference clock with the decoding start time information in order to detect synchronization errors, and for eliminating the difference between the decoding start time information and the value of the reference clock on the basis of the comparison result.

41. A data playback device according to claim 40, wherein the encoded data includes video data and audio data multiplexed therein, and the control means sets the value of the reference clock to the decoding start time of the video data in order to substantially eliminate the difference between the decoding start time of the video data and the decoding start time of the audio data.

42. A data playback device according to claim 40, wherein the encoded data includes at least video data, and the control means passes to the decoding means a command to skip a specified number of pictures instead of decoding them, if the decoding start time of the video data is earlier than the time indicated by the reference clock.

43. A data playback device according to claim 40, wherein the encoded data includes at least audio data, and the control means passes to the decoding means a command to skip audio data instead of decoding it, if the decoding start time of the audio data is earlier than the time indicated by the reference clock.

44. A data playback device according to claim 40, wherein the encoded data includes at least superimposed dialogue data, and the control means passes to the decoder a command to skip a specified number of superimposed dialogue elements instead of decoding them, if the decoding start time of the superimposed dialogue data is earlier than the time indicated by the reference clock.

45. A data playback device according to claim 40, wherein the encoded data includes at least video data, and the control means passes to the decoder a command to stop decoding the video data for a specified duration, if the decoding start time of the video data is later than the time indicated by the reference clock.

46. A data playback device according to claim 40, wherein the encoded data includes at least audio data, and the control means passes to the decoder a command to stop decoding the audio data for a specified duration, if the decoding start time of the audio data is later than the time indicated by the reference clock.

47. A data playback device according to claim 40, wherein the encoded data includes at least superimposed dialogue data, and the control means passes to the decoding means a command to stop or delay decoding of the superimposed dialogue data for a specified duration, if the decoding start time of the superimposed dialogue data is later than the time indicated by the reference clock.

48. A data playback device according to claim 40, wherein the control means determines whether to skip a specified amount of data or to stop decoding, depending on whether the difference between the reference clock value and the decoding start time information is positive or negative.

49. A data playback device according to claim 48, wherein the control means determines the amount of data to be skipped, or the duration for which decoding is stopped, in accordance with the absolute value of the difference between the value of the reference clock and the above-mentioned decoding start time information.
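Claims 40-49 describe a correction policy in which the sign of the difference between a decoding start time and the reference clock selects skipping versus pausing, and its magnitude selects how much. The sketch below only illustrates that decision; the 90 kHz units, the frame period and the return convention are assumptions.

```python
# Hypothetical sketch of the synchronisation-error handling of claims 40-49:
# sign of (decoding start time - STC) chooses skip vs. pause, magnitude sets
# how many frames are skipped or how long the pause lasts.

FRAME_PERIOD = 3003  # assumed one-frame display time in 90 kHz ticks


def resolve_sync_error(dts: int, stc: int):
    """Return the corrective action for one detected decoding start time."""
    diff = dts - stc
    if diff == 0:
        return ("decode_now", 0)
    if diff < 0:
        # Decoding start time is earlier than the clock: the data is late,
        # so skip a number of frames proportional to the lag (rounded up).
        frames_to_skip = (-diff + FRAME_PERIOD - 1) // FRAME_PERIOD
        return ("skip", frames_to_skip)
    # Decoding start time is later than the clock: the data is early,
    # so stop decoding for the remaining duration.
    return ("pause_ticks", diff)


print(resolve_sync_error(dts=90000, stc=96006))   # -> ('skip', 2)
print(resolve_sync_error(dts=90000, stc=84000))   # -> ('pause_ticks', 6000)
```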

50. A data playback device according to claim 40, characterized in that it is capable of performing synchronized reproduction of a plurality of types of data.

51. A data playback device according to claim 40, wherein the encoded data includes video data conforming to ISO 11172 (MPEG1) or ISO 13818 (MPEG2), and the control means compares the decoding start time information of the video data with the value of the reference clock when the decoding means has detected an I-picture.

52. A data playback device for a data storage medium with encoded data including decoding start time information stored therein, comprising means for reading data from the data storage medium, a buffer for temporarily storing the encoded data read by the reading means, means for decoding the encoded data read from the buffer, detection means for detecting the decoding start time information, and a reference clock generator for counting a predetermined clock, characterized in that it contains control means for comparing the decoding start time information detected by the detection means with the time indicated by the reference clock, and for passing to the detection means a command to find the next decoding start time, so that the decoding means decodes data whose decoding start time is later than the time indicated by the reference clock.

53. A data playback device according to claim 52, wherein the encoded data includes at least video data, and the control means passes to the decoding means a command to skip the video data instead of decoding it, or clears all or part of the buffer, if the decoding start time information of the video data is earlier than the time indicated by the reference clock.

54. A data playback device according to claim 52, wherein the encoded data includes at least video data conforming at least to ISO 11172 (MPEG1) or ISO 13818 (MPEG2), and the control means receives, when the decoding means detects an I-picture, the decoding start time information corresponding to that I-picture from the detection means in order to compare it with the value of the reference clock.

55. A data playback device according to claim 52, wherein the encoded data includes at least audio data, and the control means passes to the decoding means a command to skip the audio data instead of decoding it, or clears all or part of the buffer, if the decoding start time information of the audio data is earlier than the time indicated by the reference clock.

56. A data playback device according to claim 52, wherein the encoded data includes at least superimposed dialogue data, and the control means passes to the decoding means a command to skip superimposed dialogue data elements instead of decoding them, or clears all or part of the buffer, if the decoding start time information of the superimposed dialogue data is earlier than the time indicated by the reference clock.

57. A data playback device according to claim 52, wherein the encoded data includes at least superimposed dialogue data, and the control means passes to the decoding means a command to start decoding when the value of the reference clock becomes equal to the decoding start time of the superimposed dialogue data, or when the value of the reference clock exceeds the decoding start time information of the superimposed dialogue data.

58. A data playback device for a data storage medium with video data conforming at least to ISO 11172 (MPEG1) or ISO 13818 (MPEG2) stored therein, including reading means and decoding means which detects the type of picture header in the data read by the reading means, decodes the video data using the decoding method corresponding to the picture header type, and selects only I-pictures for decoding during a search operation, characterized in that it contains control means for passing, during the search operation, to the reading means a command to perform a track jump each time the decoding means detects a P- or B-picture header.
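Claim 58 (together with the buffer clearing of claim 59 below) describes a fast-search behaviour: decode only I-pictures, and jump tracks whenever a P- or B-picture header appears. The sketch below only illustrates that loop; the object and method names (`searching`, `next_picture_header`, `track_jump`, `clear`) are assumptions introduced here.

```python
# Hypothetical sketch of the search (fast playback) behaviour of claims 58-59:
# only I-pictures are decoded; P- or B-picture headers trigger a track jump,
# and the audio / superimposed-dialogue code buffers are kept cleared while
# their decoders remain stopped.

def run_search(reader, video_decoder, aux_buffers):
    """One pass of the search loop: decode I-pictures, jump over P/B pictures."""
    while reader.searching:
        header = video_decoder.next_picture_header()   # 'I', 'P' or 'B' (assumed API)
        if header == "I":
            video_decoder.decode_current_picture()
        else:
            reader.track_jump()                        # skip ahead on P/B headers
        for buf in aux_buffers:                        # audio / subtitle code buffers
            buf.clear()
```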

59. A data playback device according to claim 58, characterized in that the data storage medium stores video data multiplexed with audio data and/or superimposed dialogue data, the device includes storage means for temporarily storing the audio data and/or superimposed dialogue data and second decoding means for decoding the audio data and/or superimposed dialogue data read from the storage means, and the control means passes to the second decoding means a command to stop decoding and periodically clears the storage means during a video search operation.

60. A data playback device according to claim 58, characterized in that

61. A data playback device for a data storage medium with encoded data including decoding start time information stored therein, including a reference clock generator for counting a predetermined clock, detection means for detecting the decoding start time information, and decoding means for decoding the encoded data on the basis of the decoding start time detected by the detection means and the reference clock, characterized in that it contains control means for passing to the decoder commands to stop decoding and to cancel the stop, and for simultaneously stopping and restarting the reference clock, when an attempt is made to stop or to cancel the stop of decoding of the encoded data.

62. A data playback device for a data storage medium with encoded video data including decoding start time information stored therein, including a reference clock generator for counting a predetermined clock, detection means for detecting the decoding start time information, and decoding means for decoding the video data on the basis of the decoding start time detected by the detection means, characterized in that it contains control means for passing to the decoder commands to start and to prematurely terminate decoding of the video data in synchronism with the vertical synchronization signal when an attempt is made to perform a frame feed operation.

63. A data playback device according to claim 62, wherein the control means adds to the value of the reference clock an amount equal to the time required to play back one frame, when an attempt is made to perform a frame feed operation.

 
