Method for analysis and synthesis of speech

FIELD: analysis and synthesis of speech information output by a computer; applicable in voice-announcement synthesizers for public transport, communications, measuring and technological complexes, and in foreign-language study.

SUBSTANCE: method includes: analog-to-digital conversion of the speech signal; segmentation of the converted signal into elementary speech fragments; determination of the vocalization of each fragment; determination, for each vocalized elementary speech fragment, of the fundamental-tone frequency and spectrum parameters; analysis and modification of the spectrum parameters; and synthesis of the speech sequence. The technical result is achieved because, before synthesis, the fundamental-tone periods of each vocalized fragment are brought to a zero initial phase by moving the digitization start point of each fundamental-tone period to the point where the envelope crosses zero amplitude; distortions arising at the joints of fundamental-tone periods are smoothed out; and, when an additional sample is formed at the end of a modified fundamental-tone period, that period is resampled while preserving its original length.

EFFECT: improved quality of the produced modulated signal, allowing more faithful reproduction of sounds during speech-signal synthesis.

2 cl, 8 dwg

 

The invention relates to the analysis and synthesis of speech output from computers and can be used in voice-announcement synthesizers for transport, communication, measuring and technological complexes, as well as in foreign-language learning and other areas of human activity.

Speech technology is one of the areas of information technology that deals with the problems of interacting with a computer (or with a person through a computer) on the basis of natural language in audio form. The rapid development of speech technologies is driven by the pressing need of modern society for solutions to practical problems.

The efficiency of solving applied problems in speech technology is determined by how fully the phonetic information obtained from studying the properties of natural human speech is used.

The naturally analog speech signal can be represented as amplitude values at given points in time, i.e., as a sequence of numbers. This form of representation makes the signal amenable to processing by computer.
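As a toy illustration of this digital representation (not from the patent; the 8 kHz sampling rate and 16-bit range are assumed for the example), one period of a 100 Hz tone becomes a short list of integers:

```python
import math

SAMPLE_RATE = 8000   # samples per second (assumed for the example)
F0 = 100             # fundamental frequency of the toy tone, Hz

period_len = SAMPLE_RATE // F0   # 80 samples per fundamental-tone period
samples = [
    int(0.5 * 32767 * math.sin(2 * math.pi * F0 * n / SAMPLE_RATE))
    for n in range(period_len)
]

print(len(samples))   # 80: one period as a sequence of numbers
print(samples[:4])    # amplitude values at successive sampling instants
```

Each number is the signal's amplitude at one sampling instant; everything the method does below operates on such sequences.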

The representation of the speech signal in digital form opens wide opportunities for its analysis and processing. Modern computer tools for sound analysis display the waveform or sonogram of the audio signal on the monitor as a static image; the signal can be browsed from beginning to end and back, listened to repeatedly in whole or in part, and subjected to various kinds of modification (for example, filtering or normalization).

A known method of analysis and synthesis of speech (A.S. No. 1434487) includes segmentation of the speech signal and determination of the vocalization of each segment. For vocalized segments, a periodic sequence of excitation pulses with the fundamental-tone period is formed; the spectrum of the original speech signal and the complex-conjugate spectrum of the excitation signal are formed, and their products are averaged. For non-vocalized segments, a set of pseudo-random excitation pulse sequences is formed; complex-conjugate spectra are formed for the generated pseudo-random sequences, and the parameters of the spectral envelope of the original signal are matched by normalizing the averaged products of the spectrum of the original speech signal and the complex-conjugate spectra of the pseudo-random sequences by the averaged spectrum of the excitation signals. During analysis of non-vocalized segments, the best pseudo-random sequence is determined by the criterion of the maximum sum of powers of all spectral-envelope parameters, and the spectral-envelope parameters are transmitted together with the number of the best pseudo-random sequence. Upon reception, an excitation signal repeating the best pseudo-random sequence is formed from the transmitted parameters, and the synthesized speech signal is formed by filtering the excitation signal in accordance with the received parameters.

Some modifications of this method are outlined in A.S. No. 1501138, in which, additionally, when determining the decomposition coordinates, averaging is performed with weights coinciding with the basis functions; when determining the spectral envelope, the decomposition signal is summed with weights equal to the elements of the inverse correlation matrix of the basis functions, whose coefficients are the accepted parameters of the spectral envelope of the original speech signal; the basis functions are B-splines.

Despite the complex processing of the speech signal, both methods are unable to provide high-quality restoration of the speech signal, since information about the phonetic structure of the signal is not used.

The analysis and synthesis of speech information is adequately described in the doctoral dissertation in philology by P.A. Skrelin, "Phonetic Aspects of Speech Technology" (St. Petersburg State University, St. Petersburg, 1999), which was adopted as the prototype of the claimed invention.

The algorithm of the method of analysis and synthesis of speech outlined in Skrelin's dissertation is as follows:

- the speech signal arrives at the input of the computer's sound card, which converts it into digital form;

- the speech stream is segmented in order to extract elementary speech fragments and determine their parameters: vocalization, fundamental-tone period markup for vocalized fragments, and spectrum parameters. The size and structure of the fragments depend on the tasks to be solved by the synthesis;

- the elementary speech fragments are combined into a sound database;

- in accordance with the structure of the synthesized speech sequence, fragments are chosen from the database and their prosodic characteristics are modified, producing a sound signal;

- the generated digital speech signal is reproduced by the computer's sound card or saved to a file for further storage and/or processing.

Two approaches to speech synthesis are considered in the thesis. The first is used when synthesis is based on building a working model of the human vocal tract; in the second, the acoustic signal as such is modeled. The first approach, known as articulatory synthesis, is at present almost never used because of the complexity of its implementation. The second approach is divided into two main areas: synthesis by rules and compilative synthesis.

Synthesis by rules uses rules for forming the physical characteristics of speech sounds from their mathematical descriptions. Thus, formant synthesizers use an excitation signal that passes through a digital filter built on multiple resonances imitating the cavities of the vocal tract (the LPC model). The separation of the excitation signal and the transfer function of the vocal tract is the basis of the classical acoustic theory of speech production.

In compilative synthesis, segments are cut from natural speech sequences and glued together into a new speech sequence. Depending on the task, the segments may be of different sizes: from a phrase fragment to an allophone. In systems that synthesize speech from arbitrary text, segments equal to allophones, diphones, or syllables are usually used.

Many systems have been built on compilative synthesis, using different types of sound fragments and various methods of preparing the sound database. In such systems, signal processing must be applied to bring the fundamental frequency, energy, and duration of the units to the values that should characterize the synthesized speech.

Many ways of changing the fundamental tone are known. The author examines several but settles on the TD-PSOLA algorithm (Time-Domain Pitch-Synchronous Overlap-and-Add), which uses windowed signal processing and relies on exact marking of the fundamental-tone periods. PSOLA provides fairly high-quality modification of the fundamental tone and allows control not only of the fundamental tone but also of the duration of vocalized sounds, by removing/duplicating fundamental-tone periods.
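The overlap-add idea can be sketched as follows (our minimal illustration, not code from the dissertation or the patent; the function and its parameters are invented). Two-period, Hann-windowed grains are taken at the fundamental-tone marks and re-placed at the new period spacing; the removal or repetition of grains that PSOLA uses for duration control is omitted here:

```python
import math

def psola_change_pitch(signal, period, factor):
    """Crude TD-PSOLA-style pitch change: overlap-add Hann-windowed grains
    (two periods wide) at a spacing of period / factor samples."""
    new_period = int(round(period / factor))
    # analysis pitch marks, one per fundamental-tone period
    marks = list(range(period, len(signal) - period, period))
    out = [0.0] * len(signal)
    t = period                      # first synthesis mark
    for m in marks:
        for k in range(-period, period):
            w = 0.5 * (1 + math.cos(math.pi * k / period))  # Hann window
            j = t + k
            if 0 <= j < len(out):
                out[j] += w * signal[m + k]
        t += new_period             # synthesis marks use the new period
    return out                      # tail beyond the last grain stays zero
```

On a pulse-train-like signal with an 80-sample period, factor = 2 produces an output that repeats every 40 samples in the fully overlapped region, which is the intended doubling of the perceived fundamental frequency.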

However, the quality of the synthesized speech signal is poor: modification of the fundamental frequency by the PSOLA algorithm distorts the individual characteristics of the voice. In addition, the modified voice sounds unnatural and contains high-frequency distortion.

PSOLA was selected by the author because the other methods have major limitations.

Changing the fundamental frequency in the frequency domain using the Fourier transform introduces phase distortion into the signal, which manifests itself not only in unnaturalness but often in perceptual distortion of the voice characteristics. In addition, accurate signal modification requires a significant amount of sound data; with an insufficient amount of sound data, additional distortions are introduced into the signal.

Changing the fundamental frequency in the time domain, by adding zero-amplitude samples to a period to lower the fundamental frequency or removing part of the period to raise it, leads to significant distortion and noise when the fundamental frequency is changed by more than 10-15%, and to a discrepancy between the physical duration of the periods and the perceived fundamental frequency.

Our experiments showed that the quality of the synthesis can be significantly increased by pre-processing the fragments extracted from the original signal.

The technical object of the present invention is to develop a method of analysis and synthesis of the digitized speech signal, including a processing method that requires no special devices, in order to improve the quality of the resulting modulated signal and obtain more accurate reproduction of sounds during speech-signal synthesis.

The technical result is achieved by introducing changes and additions into the known method, which includes analog-to-digital conversion of the speech signal, segmentation of the speech signal into elementary speech fragments, determination of the vocalization of each segment, determination of the fundamental frequency and spectral parameters of each selected speech fragment, and analysis and modification of those parameters to obtain the synthesized speech sequence, which is then reproduced. Namely, before synthesis:

- the vocalized segments are subjected to additional processing, which consists in bringing the fundamental-tone periods of the speech signal to a zero initial phase by moving the start of digitization in each period to the point where the envelope crosses zero amplitude;

- the distortions that occur at the joints of the above periods are then smoothed by recomputing the values of the samples at the beginning and end of each period through interpolation over certain samples of the current and adjacent periods;

- next, if necessary, the processed period is resampled so as to preserve a number of samples equal to the number of samples in the initial period; during resampling, the values of the first and last samples of the period are preserved.

The need for additional preprocessing of the digital audio signal is caused by the following. When a speech signal is digitized, the first sample of each period of the signal differs from zero, i.e., corresponds to some initial phase. Subsequent periods also have some initial phase, and successive periods do not necessarily have the same initial phase. There are three main causes of nonzero initial phases:

- the start of digitization is not tied to the beginning of the sound fragment to be included in the database;

- the sampling frequency is not a multiple of the period length;

- the periods are of different durations.

The presence of initial phases creates problems in the analysis and modification both of individual periods and of sounds in general. The length of one period is usually insufficient for a full spectral analysis, due to lack of data. A signal obtained by replicating the period under study contains high-frequency noise caused by the phase difference between the first and last samples of the period.

Frequency analysis over several consecutive periods averages their spectra and loses the unique properties of each individual period. When synthesizing Russian speech, which contains a large number of soft sounds, averaging the properties of adjacent periods is unacceptable.

When periods are modified in the time domain, for example by the PSOLA algorithm, artifacts also arise at the joints of periods, associated with the first and last samples differing from zero.

Artificially pulling the end samples to zero, or adding dummy zero samples, also introduces unwanted distortion into the signal.

The duration of vocalized sound fragments is changed by discarding or duplicating fundamental-tone periods; accordingly, distortions arise at the boundaries of the discarded/inserted periods.

To reduce distortion, it is proposed to bring the fundamental-tone periods to a zero initial phase. To do this, the initial samples of the signal periods must be aligned with the zero phase; the final samples of the periods must be derived from the fact that the first sample of the following period will also be brought to a zero initial phase, i.e., ideally the values of the last samples of all periods must match.

When constructing the reduced signal, linear interpolation is used (1):

y'(i) = y(i-1) + (y(i) - y(i-1))·x,  i = 0, 1, ..., len-1,   (1)

where len is the length of the period;

x is the offset of the samples required to bring the first sample to a zero value.

The offset is defined as the coordinate of the zero crossing of the line segment connecting the last sample of the previous period, y(-1), and the first sample of the current period, y(0) (2):

x = -y(-1) / (y(0) - y(-1)).   (2)

According to the rules of extraction and markup of sound fragments, the last sample of any period must be negative and the first sample non-negative, i.e., the condition 0 ≤ x ≤ 1 must be satisfied.
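Read literally, the zero crossing and the shift described above can be sketched as follows (our reconstruction of the two formulas; the function and variable names are ours). `zero_phase_offset` returns x, and `shift_period` interpolates each sample between its left neighbour and itself, so the new first sample lands exactly on the zero crossing:

```python
def zero_phase_offset(prev_last, first):
    """Offset x: zero crossing of the segment joining the previous period's
    last sample (negative) and the current period's first sample (>= 0)."""
    return -prev_last / (first - prev_last)   # 0 <= x <= 1 by the markup rule

def shift_period(period, prev_last, x):
    """Shift the period's samples using linear interpolation:
    y'(i) = y(i-1) + (y(i) - y(i-1)) * x, with y(-1) = prev_last."""
    ext = [prev_last] + list(period)          # prepend the previous last sample
    return [ext[i] + (ext[i + 1] - ext[i]) * x for i in range(len(period))]

x = zero_phase_offset(prev_last=-0.2, first=0.6)
print(x)                                       # 0.25
print(shift_period([0.6, 1.0, 0.2], -0.2, x))  # first sample becomes 0.0
```

With these toy values the segment from -0.2 to 0.6 crosses zero a quarter of the way along, and after the shift the period starts at exactly zero amplitude.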

For the first period of each sound there is no information about the value of the last sample of the previous period; it is therefore assumed equal to the average of the last samples of all periods of the fragment (3):

y(-1) = (1/N) · Σ y(i, last),  i = 1, ..., N,   (3)

where y(-1) is the last sample of the last period of the previous sound;

y(i, last) is the last sample of period i;

N is the number of periods in the sound fragment.

Similarly, for the last period of a sound fragment there is no information about the value of the first sample of the subsequent period. It is assumed equal to the average of the first samples of all periods of the fragment (4):

y(+0) = (1/N) · Σ y(i, 0),  i = 1, ..., N,   (4)

where y(+0) is the first sample of the first period of the subsequent sound;

y(i, 0) is the first sample of period i.

When a period is brought to zero phase, the samples shift to the left, which can cause an additional sample to appear at the end of the period. If the markup of the sound fragments can be adjusted, the corresponding changes should be made to it. If the period length cannot be changed, the signal must be resampled while preserving the values of the first and last samples.
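The exact resampling expressions used by the method (formulas (5)-(7) below) are not legible in the available text; the following is only a generic linear-resampling sketch that, as the method requires, brings a period back to its original length while preserving the first and last sample values (the function and variable names are ours):

```python
def resample_preserve_ends(samples, new_len):
    """Linearly resample `samples` to `new_len` points (new_len >= 2),
    preserving the first and last values exactly."""
    old_len = len(samples)
    out = []
    for i in range(new_len):
        pos = i * (old_len - 1) / (new_len - 1)  # position on the old grid
        k = int(pos)         # (int): discard the fractional part
        f = pos - k          # fractional part used for interpolation
        if k + 1 < old_len:
            out.append(samples[k] + (samples[k + 1] - samples[k]) * f)
        else:
            out.append(samples[k])
    return out

# A period that grew to 5 samples is squeezed back to its original 4,
# keeping its first and last values.
print(resample_preserve_ends([0.0, 1.0, 2.0, 3.0, 4.0], 4))
```

The endpoints are pinned by construction: position 0 maps to the old first sample and position new_len-1 to the old last sample.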

Resampling uses formula (5), in which the offset value is defined by formula (6), where (int) denotes discarding the fractional part. The value of the last samples of the period is determined from expression (7), where y(p) is the new value of the last sample, common to all periods of the sound fragment.

Expression (7) was determined experimentally, based on minimizing the deviation of the integral of the spectrum of the reduced signal. When forming the reduced signal, distortions at the joints of the periods are smoothed by recomputing the values of the samples at the beginning and end of each period. The recomputation uses interpolation by a Lagrange polynomial (8):

y'(t) = Σ y(j) · Π (t - t(k)) / (t(j) - t(k)),  j = 1, ..., l;  k ≠ j,   (8)

where y'(t) is the smoothed sample value;

l is the number of points involved in the interpolation.

In each period, four samples are recomputed: two at the beginning and two at the end of the period.
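A small sketch of the joint smoothing just described (our illustration; the sample values and point layout are invented, and the polynomial is evaluated in the standard Lagrange form):

```python
def lagrange_interp(xs, ys, t):
    """Evaluate the Lagrange interpolation polynomial through the points
    (xs[j], ys[j]) at position t."""
    total = 0.0
    for j in range(len(xs)):
        term = ys[j]
        for k in range(len(xs)):
            if k != j:
                term *= (t - xs[k]) / (xs[j] - xs[k])
        total += term
    return total

# Toy joint: three samples on each side of a period boundary.  The two
# samples adjacent to the joint (positions 2 and 3) are recomputed from
# the four outer points (l = 4).
samples = [0.9, 0.4, -0.1, 0.0, 0.5, 0.8]
xs = [0, 1, 4, 5]
ys = [samples[0], samples[1], samples[4], samples[5]]
samples[2] = lagrange_interp(xs, ys, 2)
samples[3] = lagrange_interp(xs, ys, 3)
print(samples)   # the two joint samples are replaced by smoothed values
```

Because the recomputed samples lie on a cubic through their neighbours, the abrupt step at the period boundary is replaced by a smooth transition.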

Comparing the spectra of the initial and reduced periods shows a significant weakening of the high-frequency component, i.e., of the noise and distortion at the joints, so the subsequent synthesis of the speech signal comes closer to natural sound.

The proposed method is illustrated by the following drawings, where:

Figure 1 - an elementary sound with fundamental-tone period markup;

Figure 2 - an example of a non-vocalized fragment;

Figure 3 - determination of the offset value and the shifting of the samples;

Figure 4 - shifting of the samples with formation of an additional sample at the end of the period;

Figure 5 - scheme of smoothing at the joints of periods;

Figure 6 - the initial and reduced periods of the signal;

Figure 7 - spectrum of the initial period;

Figure 8 - spectrum of the reduced period.

The proposed method of analysis and synthesis of speech is implemented on a personal computer using well-known software and software developed by the inventors.

The speech signal arrives in analog form at the input of the computer's sound card, which converts it into digital form. In the signal, elementary sounds (diphones, syllables, or allophones) are identified (see figures 1, 2).

Next, the selected audio fragments are segmented: the elementary sound fragments are divided into vocalized and non-vocalized (see figure 1), and the fundamental-tone periods are marked for the vocalized sound fragments.

Next, the obtained vocalized fragments are preprocessed as described above. Figure 3 shows the determination of the offset value (formula 2) and the direction in which the samples are shifted along the envelope. It can be seen that the beginning of the period is moved to the point where the envelope crosses zero.

The samples are shifted using the linear law (formula 1). The shape of the signal remains unchanged.

Figure 4 shows why an additional sample forms at the end of the period and how this happens. If an additional sample forms and the markup of the segments cannot be changed, the period is resampled, thus keeping the period length equal to the original. During resampling, the values of the first and last samples of the period are preserved.

Figure 5 shows the general scheme of smoothing the signal at the joints of periods. The smoothing uses interpolation by a Lagrange polynomial (formula 8). For smoothing the initial periods of a sound, four points are used (l = 4); for internal periods, six points (l = 6); for the final periods, four points (l = 4). The coordinates of the points used are shown in figure 5.

The initial and reduced periods are shown in figure 6.

To evaluate the result of the proposed signal processing, the spectra of the initial (figure 7) and reduced (figure 8) periods are compared. The comparison shows that the effect of the transformation is most noticeable at frequencies above 4 kHz.

This example described the processing of one vocalized fragment. All vocalized fragments of the database are handled similarly. Further modification of the prosodic characteristics of the elementary sound fragments (both vocalized and non-vocalized) follows the prototype, as described above. The PSOLA algorithm is applied to vocalized fragments; for non-vocalized fragments, the duration is changed by inserting (or cutting) random groups of samples in the central part of the segment. These methods are widely used in speech synthesis.
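The duration change for non-vocalized fragments by cutting random sample groups from the central part can be sketched as follows (our illustration; the function name, the group size, and the exact definition of the "central segment" are assumptions):

```python
import random

def shorten_unvoiced(samples, n_groups, group=5, seed=0):
    """Shorten a non-vocalized fragment by deleting `n_groups` random
    groups of `group` samples from its central part."""
    rng = random.Random(seed)
    out = list(samples)
    lo, hi = len(out) // 4, 3 * len(out) // 4   # central segment (assumed)
    for _ in range(n_groups):
        start = rng.randrange(lo, max(lo + 1, hi - group))
        del out[start:start + group]            # cut one group of samples
        hi -= group                             # the central part shrank
    return out

frag = list(range(100))          # stand-in for noise-like unvoiced samples
short = shorten_unvoiced(frag, 2)
print(len(short))                # 90: two groups of 5 samples removed
print(short[:25] == frag[:25])   # True: the fragment edges are untouched
```

Because non-vocalized sounds are noise-like, removing or duplicating small sample groups away from the edges changes duration without audible joints, which is why the prototype restricts the cuts to the central segment.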

Next, the synthesized signal is saved to a file or played through the computer's sound card.

The advantage of the proposed method of analysis and synthesis is that, owing to the preprocessing, the distortion introduced into the signal at the synthesis stage is significantly reduced. As a result, the synthesized voice sounds clearer and more natural.

Currently, the proposed method has been tested and implemented in a text-to-speech synthesis server designed for operation in telecommunication systems (notification servers, entertainment services, etc.).

1. A method of analysis and synthesis of speech, including analog-to-digital conversion of the speech signal, segmentation of the digitized speech signal into elementary speech fragments, determination of the vocalization of each of the elementary speech fragments, determination, for each vocalized elementary speech fragment, of the fundamental frequency and spectrum parameters, analysis and modification of the spectrum parameters to obtain a synthesized speech sequence, and synthesis of the speech sequence, characterized in that before synthesis the vocalized elementary speech fragments are subjected to additional processing in which the fundamental-tone periods of each vocalized elementary speech fragment are brought to a zero initial phase by moving the start of digitization in each fundamental-tone period to the point where the envelope crosses zero amplitude, then the distortions occurring at the joints of the reduced fundamental-tone periods are smoothed, and, in the case of formation of an additional sample at the end of a reduced fundamental-tone period, that period is resampled while preserving its original length.

2. The method according to claim 1, characterized in that the smoothing of the distortions occurring at the joints of the above periods is performed by recomputing the values of the samples at the beginning and at the end of each such period through interpolation over certain samples of the current and adjacent periods.



 
