EP0285276A2 - Coding of acoustic waveforms - Google Patents
- Publication number
- EP0285276A2 (application EP88302063A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- phase
- coding
- frequency components
- frequency
- frame
- Prior art date
- Legal status
- Granted
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/02—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
Definitions
- DPCM coding was used with 3 bits/channel for channels 2 to 9 and 2 bits/channel for channels 10 to 22. It is not necessary to explicitly code channel 1, since its level is chosen to obtain the desired energy level.
- the expansion factor ⁇ is chosen such that F N is close to the 4000 Hz band edge. If the pitch is at or below 93 Hz, then the fixed 93 Hz linear/logarithmic design can be used, and if it is above 93 Hz, then the pitch-adaptive linear/log design can be used. Furthermore, if the pitch is above 174 Hz, then a strictly linear design can be used. In addition, the bit allocation per channel can be pitch-adaptive to make efficient use of all of the available bits.
- the DPCM encoder is then applied to the logarithm of the envelope samples at the pitch-adaptive channel frequencies. Since the quantization noise has an essentially flat spectrum in the quefrency domain (the Fourier transform of the log magnitudes) and since the speech envelope spectrum varies as 1/n² in this domain, optimal reduction of the quantization noise is possible by designing a Wiener filter. This can be approximated by an appropriately designed cepstral low-pass filter.
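The DPCM-across-frequency step can be sketched as follows. This is a minimal illustration, not the patent's exact coder: the 3-bit uniform residual quantizer and the 0.5 step size in log amplitude are assumed values, and the cepstral post-filter is omitted.

```python
def dpcm_encode(log_amps, bits=3, step=0.5):
    """DPCM across frequency on log channel amplitudes.

    Each channel's log amplitude is predicted by the previously decoded
    channel; the prediction residual is uniformly quantized. The bit
    count and step size here are illustrative assumptions.
    """
    levels = 2 ** bits
    codes, decoded = [], []
    prev = 0.0  # predictor state: decoded value of the previous channel
    for a in log_amps:
        resid = a - prev
        q = max(-levels // 2, min(levels // 2 - 1, round(resid / step)))
        codes.append(q)
        prev = prev + q * step  # track the decoder's reconstruction
        decoded.append(prev)
    return codes, decoded

def dpcm_decode(codes, step=0.5):
    """Invert dpcm_encode by accumulating the quantized residuals."""
    out, prev = [], 0.0
    for q in codes:
        prev += q * step
        out.append(prev)
    return out
```

Because the encoder predicts from its own decoded values, quantization error does not accumulate across channels as long as the residual stays within the quantizer's range.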
- This amplitude encoding algorithm was implemented on a real-time facility and evaluated using the Diagnostic Rhyme Test. For 3 male speakers, the average scores were 95.2 in the quiet, 92.5 in airborne-command-post noise and 92.2 in office noise. For females, the scores were about 2 DRT points lower in each case.
- Although the pitch-adaptive 22-channel amplitude encoder is designed for operation at 4.8 kbps, it can operate at any rate from 1.8 kbps to 8.0 kbps simply by changing the bit allocations for the amplitudes and phases. Operation at rates below 4.8 kbps was most easily obtained by eliminating the phase coding. This effectively defaulted the coder into a "magnitude-only" analysis/synthesis system whereby the phase tracks are obtained simply by integrating the instantaneous frequencies associated with each of the sine waves. In this way, operation at 3.1 kbps was achieved without any modification to the amplitude encoder. By further reducing the bit allocations for each channel, operation at rates down to 1.8 kbps was possible.
- the goal of phase modeling is to develop a parametric model to describe the phase measurements in (4).
- the intuition behind the new phase model stems from the fact that during steady voicing the excitation waveform will consist of a sequence of pitch pulses.
- a pitch pulse occurs when all of the sine waves add coherently (i.e., are in phase).
- n o is the onset time of the pitch pulse measured with respect to the center of the analysis frame.
- the phase model depends on the two parameters, n o and β, which should be chosen to make the modeled excitation ê(n) "close to" the measured excitation e(n).
- n o denotes the value of the onset time that maximizes the likelihood function l(n o )
- the function l(n o ) is highly non-linear in n o , and it is not possible to find a simple analytical solution for the optimum value.
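Because l(n o ) admits no closed-form maximizer, a direct grid search over candidate onset times is the natural numerical approach. The sketch below assumes a particular coherence-style likelihood, the sum of a k cos(φ k + ω k n o ), since this excerpt does not reproduce the exact form of l(n o ); only the grid-search strategy is taken from the text.

```python
import numpy as np

def estimate_onset(amps, freqs, phases, half_window):
    """Grid-search the pitch-onset time n_o.

    The likelihood used here -- sum_k a_k * cos(phi_k + w_k * n_o),
    i.e. the coherence of the excitation sine waves -- is an assumed
    form; the text states only that l(n_o) is highly non-linear and
    must be maximized numerically over the analysis frame.
    """
    amps = np.asarray(amps)
    freqs = np.asarray(freqs)
    phases = np.asarray(phases)
    candidates = np.arange(-half_window, half_window + 1)
    scores = [np.sum(amps * np.cos(phases + freqs * n)) for n in candidates]
    return candidates[int(np.argmax(scores))]
```

When the measured phases are consistent with a pulse at some true onset time, every term peaks simultaneously there, so the search recovers it.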
- the first step used in coding the sine wave parameters is to assign one sine wave to each harmonic frequency bin. Since it is this set of sine waves which will ultimately be reconstructed at the receiver, it is to this reduced set of sine waves that the new phase model will be applied.
- an amplitude envelope is created by applying linear interpolation to the amplitudes of the reduced set of sine waves. This is used to flatten the amplitudes and then homomorphic methods are used to estimate and remove the system phase to create the sine wave representation of the glottal excitation waveform. The onset time and the system phase ambiguity are then estimated and used to form a set of residual phases. If the model were perfect, then these phase residuals would be zero.
- the model is not perfect; hence, for good synthetic speech it is necessary to code the residuals.
- An example of such a set of residuals is shown in FIG. 4 for the same data illustrated in FIG. 2. Since only the sine waves in the baseband (up to 1000 Hz) will be coded, the model is actually fitted to the sine-wave phase data only in the baseband region. The main point is that whereas the original phase measurements had values that were uniformly distributed over the [-π, π) region, the dynamic range of the phase residuals is much less than π; hence, coding efficiencies can be obtained.
- the final step in coding the sine wave parameters is to quantize the frequencies. This is done by quantizing the residual frequency obtained by replacing the measured frequency by the center frequency of the harmonic bin in which the sine wave lies. Because of the close relationship between the measured excitation phase of a sine wave and its frequency, it is desirable to compensate the phase should the quantized frequency be significantly different from the measured value. Since the final decoded excitation phase is the phase predicted by the model plus the coded phase residual, some phase compensation is inherent in the process since the phase model will be evaluated at the coded frequency and, hence, will better preserve the pitch structure in the synthetic waveform.
- the glottal excitation can be thought of as a sequence of periodic impulses which can be decomposed into a set of harmonic sine waves that add coherently at the time of occurrence of each pitch pulse.
- A(ω) is the amplitude envelope
- n o is the pitch onset time
- ω o is the pitch frequency
- Φ(ω) is the system phase
- ε(mω o ) is the residual phase at the m-th harmonic
- ω = 2πf/f s is the angular frequency in radians, relative to the sampling frequency f s. Since under a minimum-phase assumption the system phase can be determined from the coded log-amplitude using homomorphic techniques, the fidelity of the harmonic reconstruction depends only on the number of bits that can be assigned to the coding of the phase residuals.
- During voiced speech the phase predictions were accurate, resulting in phase residuals that were essentially zero, while during unvoiced speech the phase predictions were poor, resulting in phase residuals that appeared to be random values within [-π, π].
- for mixed excitation, the behavior of the phase residuals was somewhere between these two extremes.
- the same sort of behavior can be simulated by replacing each residual phase by a uniformly-distributed random variable whose standard deviation is proportional to the degree to which the analyzed speech is unvoiced.
- phase dispersion: since the system phase Φ(ω) is derived from the coded log-magnitude, it is minimum-phase, which causes the synthetic waveform to be "spiky" and, in turn, leads to the perceived "buzziness".
- the flexibility of the STC system allows for a pitch-adaptive speaker-dependent design.
- phase model: the phase comprises a rapidly-varying component that changes with every sample and a slowly-varying component that changes with every frame.
- the phase-locked synthesizer has been implemented on the real-time system and found to dramatically improve the quality of the synthetic speech. Although the improvements are most noticeable at the lower rates below 3 kbps where no phase coding is possible, the phase-locking technique can also be used for high-frequency regeneration in those cases where not all of the baseband phases are coded. In fact, very good quality can be obtained at 4.8 kbps while coding fewer phases than were used in the earlier designs. Furthermore, since Eqs. (16-20) depend only on the measured pitch frequency, ω o, and a voicing probability, P v, reduction in the data rate below 4.8 kbps is possible with little loss in quality even though no explicit phase information is coded.
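The structure of the phase-locked synthetic phase model can be sketched as below. Eqs. (16-20) are not reproduced in this excerpt, so the quadratic dispersion term and the scaling of the voicing-dependent random phase are illustrative assumptions; only the structure — harmonic m locked to m times the fundamental phase, plus a pitch-dependent dispersion and a voicing-dependent random phase — follows the text.

```python
import numpy as np

def synthetic_phases(phi0, num_harmonics, pitch_period, voicing_prob, rng=None):
    """Per-harmonic phases for the phase-locked synthesis model.

    phi0 is the phase of the fundamental (obtained by integrating the
    instantaneous pitch frequency). Harmonic m is locked to m * phi0,
    then given a quadratic phase dispersion scaled by the pitch period
    and a random phase whose spread grows as voicing_prob falls. The
    quadratic form and both scaling constants are assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    m = np.arange(1, num_harmonics + 1)
    locked = m * phi0                          # phase-locked to the fundamental
    dispersion = -np.pi * m**2 / pitch_period  # assumed quadratic dispersion
    sigma = np.pi * (1.0 - voicing_prob)       # more randomness when unvoiced
    random_phase = rng.uniform(-1, 1, num_harmonics) * sigma
    return locked + dispersion + random_phase
```

With voicing_prob = 1 (fully voiced) the random term vanishes and the phases are fully deterministic functions of the fundamental phase and the pitch period.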
Description
- The field of this invention is speech technology generally and, in particular, methods and devices for analyzing, digitally-encoding, modifying and synthesizing speech or other acoustic waveforms.
- Digital speech coding methods and devices are the subject of considerable present interest, particularly at rates compatible with conventional transmission lines (i.e., 2.4 - 9.6 kilobits per second). At such rates, the typical approaches to speech modelling, such as the so-called "binary excitation models", are ill-suited for coding applications and, even with linear predictive coding or other state of the art coding techniques, yield poor quality speech transmissions.
- In the binary excitation models, speech is viewed as the result of passing a glottal excitation waveform through a time-varying linear filter that models the resonant characteristics of the vocal tract. It is assumed that the glottal excitation can be in one of two possible states corresponding to voiced or unvoiced speech. In the voiced speech state the excitation is periodic with a period which varies slowly over time. In the unvoiced speech state, the glottal excitation is modeled as random noise with a flat spectrum.
- U.S. parent application, Serial No. 712,866 discloses an alternative to the binary excitation model in which speech analysis and synthesis as well as coding can be accomplished simply and effectively by employing a time-frequency representation of the speech waveform which is independent of the speech state. Specifically, a sinusoidal model for the speech waveform is used to develop a new analysis-synthesis technique.
- The basic method of U.S. Serial No. 712,866 includes the steps of: (a) selecting frames (i.e. windows of about 20 - 40 milliseconds) of samples from the waveform; (b) analyzing each frame of samples to extract a set of frequency components; (c) tracking the components from one frame to the next; and (d) interpolating the values of the components from one frame to the next to obtain a parametric representation of the waveform. A synthetic waveform can then be constructed by generating a set of sine waves corresponding to the parametric representation. The disclosures of U.S. Serial No. 712,866 are incorporated herein by reference.
- In one illustrated embodiment described in detail in U.S. Serial No. 712,866, the method is employed to choose amplitudes, frequencies, and phases corresponding to the largest peaks in a periodogram of the measured signal, independently of the speech state. In order to reconstruct the speech waveform, the amplitudes, frequencies, and phases of the sine waves estimated on one frame are matched and allowed to continuously evolve into the corresponding parameter set on the successive frame. Because the number of estimated peaks is not constant and is slowly varying, the matching process is not straightforward. Rapidly varying regions of speech such as unvoiced/voiced transitions can result in large changes in both the location and number of peaks. To account for such rapid movements in spectral energy, the concept of "birth" and "death" of sinusoidal components is employed in a nearest-neighbor matching method based on the frequencies estimated on each frame. If a new peak appears, a "birth" is said to occur and a new track is initiated. If an old peak is not matched, a "death" is said to occur and the corresponding track is allowed to decay to zero. Once the parameters on successive frames have been matched, phase continuity of each sinusoidal component is ensured by unwrapping the phase. In one preferred embodiment the phase is unwrapped using a cubic phase interpolation function having parameter values that are chosen to satisfy the measured phase and frequency constraints at the frame boundaries while maintaining maximal smoothness over the frame duration. Finally, the corresponding sinusoidal amplitudes are simply interpolated in a linear manner across each frame.
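The nearest-neighbor matching with "birth" and "death" of tracks described above can be sketched as follows; the 20 Hz matching threshold is an illustrative assumption, not a value from the patent.

```python
def match_tracks(prev_freqs, cur_freqs, max_delta=20.0):
    """Nearest-neighbor frequency matching with birth/death of tracks.

    Returns (matches, deaths, births): matches pairs indices of peaks in
    consecutive frames whose frequencies lie within max_delta (an assumed
    threshold, in Hz); unmatched old peaks "die" and their tracks decay,
    unmatched new peaks are "born" as new tracks.
    """
    matches, used = [], set()
    for i, f in enumerate(prev_freqs):
        best, best_d = None, max_delta
        for j, g in enumerate(cur_freqs):
            if j in used:
                continue
            d = abs(f - g)
            if d <= best_d:
                best, best_d = j, d
        if best is None:
            continue  # no candidate close enough: this track dies
        matches.append((i, best))
        used.add(best)
    matched_old = {m[0] for m in matches}
    deaths = [i for i in range(len(prev_freqs)) if i not in matched_old]
    births = [j for j in range(len(cur_freqs)) if j not in used]
    return matches, deaths, births
```

A greedy pass like this handles the common case; ties and crossings can be refined with the frame-to-frame bookkeeping described in the parent application.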
- In speech coding applications, U.S. Serial No. 712,866 teaches that pitch estimates can be used to establish a set of harmonic frequency bins to which the frequency components are assigned. (Pitch is used herein to mean the fundamental rate at which a speaker's vocal cords are vibrating). The amplitudes of the components are coded directly using adaptive differential pulse code modulation (ADPCM) across frequency or indirectly using linear predictive coding. In each harmonic frequency bin, the peak having the largest amplitude is selected and assigned to the frequency at the center of the bin. This results in a harmonic series based upon the coded pitch period. The phases are then coded by using the frequencies to predict phase at the end of the frame, unwrapping the measured phase with respect to this prediction and then coding the phase residual using 4-5 bits per phase peak.
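The harmonic-bin assignment described above can be sketched as below; the convention that bin m is centered on the m-th harmonic of the coded pitch is an assumption consistent with the text.

```python
def assign_to_bins(peaks, pitch, num_bins):
    """Assign one measured peak to each harmonic frequency bin.

    peaks is a list of (amplitude, frequency) pairs and pitch the coded
    fundamental, in the same frequency units. Bin m is taken to span
    [(m - 0.5) * pitch, (m + 0.5) * pitch); within each bin the
    largest-amplitude peak wins and is moved to the bin center
    m * pitch, yielding a harmonic series based on the coded pitch.
    """
    series = []
    for m in range(1, num_bins + 1):
        lo, hi = (m - 0.5) * pitch, (m + 0.5) * pitch
        in_bin = [(a, f) for a, f in peaks if lo <= f < hi]
        if in_bin:
            amp, _ = max(in_bin)  # largest amplitude in the bin
            series.append((amp, m * pitch))
        else:
            series.append((0.0, m * pitch))  # empty bin: zero-amplitude harmonic
    return series
```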
- At low data rates (i.e., 4.8 kilobits per second or less), there can sometimes be insufficient bits to code amplitude information, especially for low-pitched speakers using the above-described techniques. Similarly, at low data rates, there can be insufficient bits available to code all the phase information. There exists a need for better methods and devices for coding acoustic waveforms, particularly for coding speech at low data rates.
- New encoding techniques based on a sinusoidal speech representation model are disclosed. In one aspect of the invention, a pitch-adaptive channel encoding technique for amplitude coding is disclosed in which the channel spacing is varied in accordance with the pitch of the speaker's voice. In another aspect of the invention, a phase synthesis technique is disclosed which locks rapidly-varying phases into synchrony with the phase of the fundamental.
- Since the parameters of the sinusoidal model are the amplitudes, frequencies and phases of the underlying sine waves, and since for a typical low-pitched speaker there can be as many as 80 sine waves in a 4 kHz speech bandwidth, it is not possible to code all of the parameters directly and achieve transmission rates below 9.6 kbps.
- The first step in reducing the size of the parameter set to be coded is to employ a pitch extraction algorithm which leads to a harmonic set of sine waves that are a "perceptual" best fit to the measured sine waves. With this strategy, coding of individual sine-wave frequencies is avoided. A new set of sine-wave amplitudes and phases is then obtained by sampling an amplitude and phase envelope at the pitch harmonics. Efficiencies are gained in coding the amplitudes by exploiting the correlation that exists between the amplitudes of neighboring sine waves. A predictive model for the phases of the sine waves is also developed, which not only leads to a set of residual phases whose dynamic ranges are a fraction of the [-π,π] extent of the measured phases, but also leads to a model from which the phases of the high-frequency sine waves can be regenerated from the set of coded baseband phases. Depending on the number of bits allowed for the amplitudes and the number of baseband phases that are coded, very natural and intelligible coded speech is obtained at 8.0 kbps.
- Techniques are also disclosed herein for encoding the amplitudes and phases that allow the Sinusoidal Transform Coder (STC) to operate at rates down to 1.8 kbps. The notable features of the resulting class of coders are the intelligibility and naturalness of the synthetic speech, the preservation of speaker-identification qualities so that talkers are easily recognizable, and robustness in a background of high ambient noise.
- In addition to using differential pulse code modulation (DPCM) to exploit the amplitude correlation between neighboring channels, further efficiencies are gained by allowing the channel separation to increase logarithmically with frequency (at least for low-pitched speakers), thereby exploiting the critical band properties of the ear. In one preferred embodiment, a set of linearly-spaced frequencies in the baseband and a further set of logarithmically-spaced frequencies in the higher frequency region are employed in the transmitter to code amplitudes. At the receiver, another amplitude envelope is constructed by linearly interpolating between the channel amplitudes. This is then sampled at the pitch harmonics to produce the set of sine-wave amplitudes to be used for synthesis.
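The receiver-side reconstruction just described — linearly interpolate an envelope through the decoded channel amplitudes, then sample it at the pitch harmonics — can be sketched as:

```python
import numpy as np

def harmonic_amplitudes(channel_freqs, channel_amps, pitch, bandwidth=4000.0):
    """Rebuild sine-wave amplitudes at the receiver.

    Linearly interpolates an envelope through the decoded channel
    amplitudes, then samples it at the pitch harmonics up to the band
    edge. The 4000 Hz default bandwidth matches the patent's examples.
    """
    harmonics = np.arange(pitch, bandwidth + 1e-9, pitch)
    envelope = np.interp(harmonics, channel_freqs, channel_amps)
    return harmonics, envelope
```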
- For steadily voiced speech, the system phase can be predicted from the coded log-amplitude using homomorphic techniques which, when combined with a prediction of the excitation phase, can restore complete fidelity during synthesis by merely coding phase residuals. During unvoiced speech, transitions and mixed excitation, phase predictions are poor, but the same sort of behavior can be simulated by replacing each residual phase by a uniformly-distributed random variable whose standard deviation is proportional to the degree to which the analyzed speech is unvoiced.
- Moreover, for very low data rate transmission lines (i.e., below 4.8 kbps), a coding scheme has been devised that essentially eliminates the need to code phase information. In order to avoid the loss in quality and naturalness which would otherwise occur in a "magnitude-only" analysis/synthesis system, systems are disclosed herein for maintaining phase coherence and introducing an artificial phase dispersion. A synthetic phase model is disclosed which phase-locks all the sine waves to the fundamental and adds a pitch-dependent quadratic phase dispersion and a voicing-dependent random phase to each phase track.
- Speech is analyzed herein as having two components to the phase: a rapidly-varying component that changes with every sample and a slowly varying component that changes with every frame. The rapidly-varying phases are locked into synchrony with the phase of the fundamental and, furthermore, the pitch onset time simply establishes the time at which all the excitation sine waves come into phase. Since the sine waves are phase-locked, this onset time represents a delay which is not perceptible by the ear and, hence, can be ignored. Therefore, the phase of the fundamental can be generated by integrating the instantaneous pitch frequency and the rapidly-varying phases will be multiples of the phase of the fundamental.
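The generation of the fundamental's phase by integrating the instantaneous pitch frequency, with each harmonic's rapidly-varying phase as a multiple of it, can be sketched as:

```python
import numpy as np

def fundamental_phase(inst_pitch_freq):
    """Phase track of the fundamental, obtained by integrating the
    instantaneous pitch frequency (given here in radians/sample)."""
    return np.cumsum(inst_pitch_freq)

def harmonic_phase(inst_pitch_freq, m):
    """Rapidly-varying phase of harmonic m: phase-locked to the
    fundamental, i.e. simply m times the fundamental's phase track."""
    return m * fundamental_phase(inst_pitch_freq)
```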
- The invention will next be described in connection with certain illustrated embodiments. However, it should be clear that various changes and modifications can be made by those skilled in the art without departing from the spirit and scope of the invention. For example, although the description that follows is particularly adapted to speech coding, it should be clear that various other acoustic waveforms can be processed in a similar fashion.
- FIG. 1 is a schematic block diagram of the invention.
- FIG. 2 is a plot of a pitch onset likelihood function according to the invention for a frame of male speech.
- FIG. 3 is a plot of a pitch onset likelihood function according to the invention for a frame of female speech.
- FIG. 4 is an illustration of the phase residuals suitable for coding for the sampled speech data of FIG. 2.
- In the present invention, the speech waveform is modeled as a sum of sine waves. Accordingly, the first step in coding speech is to express the input speech waveform, s(n), in terms of the sinusoidal model
s(n) = Σk Ak cos(nωk + ϑk),
where Ak, ωk and ϑk are the amplitudes, frequencies and phases corresponding to the peaks of the magnitude of the high-resolution short-time Fourier transform. It should be noted that the measured frequencies will not in general be harmonic.
ak = Ak/|H(ωk)| (3a)
φk = ϑk - arg H(ωk). (3b)
In order to calculate the excitation phase in (3b), it is necessary to compute the amplitude and phase of the vocal tract filter. This can be done either by using homomorphic techniques or by fitting an all-pole model to the measured sine-wave amplitudes. These techniques are discussed in U.S. Serial No. 712,866. Both of these methods yield an estimate of the vocal tract phase that is inherently ambiguous since the same transfer characteristic is obtained for the waveform -s(n) as is obtained for s(n). This essential ambiguity is accounted for in the excitation model by writing
φk = ϑk - arg H(ωk) - βπ (4)
where β is either 0 or 1, a decision that must be accounted for in the analysis procedure.
- FIG. 1 is a block diagram showing the basic analysis/synthesis system of the present invention. The peaks of the magnitude of the discrete Fourier transform (DFT) of a windowed waveform are found simply by determining the locations of a change in slope (concave down). Phase measurements are derived from the discrete Fourier transform by computing the arctangents at the estimated frequency peaks.
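The peak-picking and phase-measurement step can be sketched as below; the Hamming analysis window is an assumption (the text specifies only a windowed waveform).

```python
import numpy as np

def pick_peaks(frame, fs):
    """Estimate sine-wave frequencies, amplitudes and phases from a frame.

    Peaks of the DFT magnitude are located where the slope changes from
    positive to negative (concave down); phases come from the arctangent
    (np.angle) of the DFT evaluated at the peak bins. The Hamming window
    is an assumed choice.
    """
    win = np.hamming(len(frame))
    spec = np.fft.rfft(frame * win)
    mag = np.abs(spec)
    # interior bins strictly larger than both neighbors (slope change)
    idx = np.flatnonzero((mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:])) + 1
    freqs = idx * fs / len(frame)
    return freqs, mag[idx], np.angle(spec[idx])
```

Window sidelobes produce many small local maxima, so a coder would keep only the largest peaks; the dominant peak still lands on the underlying sinusoid's frequency.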
- In a simple embodiment, the speech waveform can be digitized at a 10 kHz sampling rate, low-pass filtered at 5 kHz, and analyzed at 10-20 msec frame intervals employing an analysis window of variable duration in which the width of the analysis window is pitch-adaptive, being set, for example, at 2.5 times the average pitch period with a minimum width of 20 msec.
- The earlier versions of the sinusoidal transform coder (STC) exploited the correlation that exists between neighboring sine waves by using PCM to encode the differential log-amplitudes. Since a fixed number of bits was allocated to the amplitude coding, the number of bits per amplitude was allowed to change as the pitch changed. For low-pitched speakers there can be as many as 80 sine waves in a 4000 Hz speech bandwidth, so at 8.0 kbps at least 1 bit can be allocated to each differential amplitude while leaving 4000 bits/sec for coding the pitch, energy, and about 12 baseband phases. At 4.8 kbps, assigning 1 bit/amplitude immediately exhausts the coding budget, so that no phases can be coded. Therefore, a more efficient amplitude encoder is needed for operation at the lower rates.
- It has been discovered that natural speech of good quality can be obtained if about 7 baseband phases are coded. Using the predictive phase model, it has also been determined that 4 bits/phase is sufficient, provided a non-linear quantization rule is used in which the quantum step size increases as the residual phase approaches the ±π boundaries. After allowing for coding of the pitch, energy and the parameters of the phase model, 50 bits remain for coding the amplitudes (when a 50 Hz frame rate is used).
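The non-linear quantization rule is not given in closed form; one hypothetical 4-bit companding quantizer with the stated property (uniform steps in sin(r/2), so the effective step size grows as the residual nears the ±π boundaries) might look like:

```python
import numpy as np

def quantize_phase(residual, bits=4):
    """Hypothetical non-uniform quantizer for a phase residual in [-pi, pi):
    quantize uniformly in t = sin(r/2), so the step size in r, which is
    2*dt/cos(r/2), widens toward the +/-pi boundaries."""
    levels = 2 ** bits
    t = np.sin(np.asarray(residual) / 2.0)          # compress to [-1, 1]
    idx = np.clip(np.round((t + 1) * (levels - 1) / 2), 0, levels - 1)
    t_hat = idx * 2.0 / (levels - 1) - 1.0          # dequantize uniformly
    return 2.0 * np.arcsin(t_hat)                   # expand back to a phase
```

The reconstruction error stays small near zero, where the model predicts accurately, and is allowed to grow near ±π.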
- One way to encode amplitude information at low rates is to exploit a perception-based strategy. In addition to using the DPCM technique to exploit the amplitude correlation between neighboring channels, further efficiencies are gained by allowing the channel separation to increase logarithmically with frequency, thereby exploiting the critical band properties of the ear. This can be done by constructing an envelope of the sine-wave amplitudes by linearly interpolating between sine-wave peaks. This envelope is then sampled at predefined frequencies. A 22-channel design was developed which allowed for 9 linearly-spaced frequencies at 93 Hz/channel in the baseband and 11 logarithmically-spaced frequencies in the higher-frequency region. DPCM coding was used with 3 bits/channel for channels 2 to 9 and 2 bits/channel for channels 10 to 22. It is not necessary to explicitly code channel 1 since its level is chosen to obtain the desired energy level. At the receiver, another amplitude envelope is constructed by linearly interpolating between the channel amplitudes. This is then sampled at the pitch harmonics to produce the set of sine-wave amplitudes to be used for synthesis.
- While this strategy may be a reasonable design technique for speakers whose pitch is below 93 Hz, it is obviously inefficient for high-pitched speakers. For example, if the pitch is above 174 Hz, then there are at most 22 sine waves, and these could have been coded directly. Based on this idea, the design was modified to allow for increased channel spacing whenever the pitch was above 93 Hz. If F0 is the pitch and there are to be M linearly-spaced channels out of a total of N channels, then the linear baseband ends at frequency FM = M·F0. The spacing of the (N-M) remaining channels increases logarithmically such that
Fn = (1 + α) Fn-1 n = M+1, M+2, ..., N (5)
The expansion factor α is chosen such that FN is close to the 4000 Hz band edge. If the pitch is at or below 93 Hz, then the fixed 93 Hz linear/logarithmic design can be used; if it is above 93 Hz, then the pitch-adaptive linear/log design can be used. Furthermore, if the pitch is above 174 Hz, then a strictly linear design can be used. In addition, the bit allocation per channel can be pitch-adaptive to make efficient use of all of the available bits.
- The DPCM encoder is then applied to the logarithm of the envelope samples at the pitch-adaptive channel frequencies. Since the quantization noise has an essentially flat spectrum in the quefrency domain (the Fourier transform of the log magnitudes) and the speech envelope spectrum varies as 1/n² in this domain, optimal reduction of the quantization noise is possible by designing a Wiener filter. This can be approximated by an appropriately designed cepstral low-pass filter.
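The pitch-adaptive linear/log channel grid of Eq. (5) can be sketched as follows; solving for α so that FN lands exactly on the band edge is one assumed reading of "close to the 4000 Hz band edge":

```python
import numpy as np

def channel_frequencies(f0, n_channels=22, n_linear=9, band_edge=4000.0):
    """Pitch-adaptive linear/log channel grid: n_linear harmonically
    spaced channels, then logarithmic spacing (Eq. 5) out to the band
    edge. Assumes n_linear * f0 is below the band edge."""
    fm = n_linear * f0                         # end of the linear baseband
    # Solve band_edge = fm * (1 + alpha)**(n_channels - n_linear) for alpha.
    alpha = (band_edge / fm) ** (1.0 / (n_channels - n_linear)) - 1.0
    linear = f0 * np.arange(1, n_linear + 1)
    log = fm * (1.0 + alpha) ** np.arange(1, n_channels - n_linear + 1)
    return np.concatenate([linear, log])
```

For the 93 Hz design the linear baseband ends at 9 × 93 = 837 Hz and the last logarithmic channel falls on the 4000 Hz edge.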
- This amplitude encoding algorithm was implemented on a real-time facility and evaluated using the Diagnostic Rhyme Test. For 3 male speakers, the average scores were 95.2 in the quiet, 92.5 in airborne-command-post noise and 92.2 in office noise. For females, the scores were about 2 DRT points lower in each case.
- Although the pitch-adaptive 22-channel amplitude encoder is designed for operation at 4.8 kbps, it can operate at any rate from 1.8 kbps to 8.0 kbps simply by changing the bit allocations for the amplitudes and phases. Operation at rates below 4.8 kbps was most easily obtained by eliminating the phase coding. This effectively defaulted the coder into a "magnitude-only" analysis/synthesis system whereby the phase tracks are obtained simply by integrating the instantaneous frequencies associated with each of the sine waves. In this way, operation at 3.1 kbps was achieved without any modification to the amplitude encoder. By further reducing the bit allocations for each channel, operation at rates down to 1.8 kbps was possible. While all of the low rate systems appear to be quite intelligible, serious artifacts could be heard in the 1.8 kbps system, since in this case only 1 bit/channel was being used. At 2.4 kbps, these artifacts were essentially removed, and at 3.1 kbps, the synthetic speech was very smooth and completely free of artifacts. However, the quality of the synthetic speech at these lower rates was judged by a number of listeners to be "reverberant," "strident," and "mechanical".
- In fact, the same loss in quality and naturalness appears to occur in the uncoded magnitude-only system. It was hypothesized that a major factor in this loss of quality was lack of phase coherence in the sine waves. Therefore, if high quality speech is desired at rates below 4.8 kbps using the STC system, then provision can be made for maintaining phase coherence between neighboring sine waves. An approach for achieving this phase coherence is discussed below.
- The goal of phase modeling is to develop a parametric model to describe the phase measurements in (4). The intuition behind the new phase model stems from the fact that during steady voicing the excitation waveform will consist of a sequence of pitch pulses. In the context of the sinewave model, a pitch pulse occurs when all of the sine waves add coherently (i.e., are in phase). This means that the glottal excitation waveform can be modeled as
- As a consequence, the optimizing value was found by evaluating ℓ(no) over a range of onset times corresponding to the largest expected pitch period (20 ms in our case). Figure 2 illustrates a plot of the pitch onset likelihood function evaluated for a frame of male speech. The positive-going peaks indicate that there is no ambiguity in the measured system phase. Figure 3, which corresponds to a frame of female speech, shows how the inherent ambiguity in the system phase manifests itself in negative-going peaks in the likelihood function. These results, which are typical of those obtained for voiced speech, show that it is possible to estimate the onset time of the pitch pulses from the phase measurements used in the sinusoidal representation.
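The closed form of ℓ(no) is not reproduced in this excerpt; a plausible sketch, assuming the likelihood simply measures how coherently the measured sine waves add at a candidate onset time no, is:

```python
import numpy as np

def onset_likelihood(amps, freqs_hz, phases, fs, max_period_ms=20.0):
    """Evaluate an assumed pitch-onset likelihood over candidate onset
    times: l(n0) = sum_k A_k cos(phi_k + w_k * n0), which peaks where
    the sine waves come into phase (a pitch pulse)."""
    n0 = np.arange(int(max_period_ms * 1e-3 * fs))   # candidate onsets, samples
    w = 2.0 * np.pi * np.asarray(freqs_hz) / fs      # rad/sample
    ell = np.cos(np.outer(n0, w) + np.asarray(phases)) @ np.asarray(amps)
    return n0, ell
```

For a synthetic harmonic excitation whose pulse occurs at sample 37, the likelihood peaks at exactly that onset.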
- The first step used in coding the sine wave parameters is to assign one sine wave to each harmonic frequency bin. Since it is this set of sine waves which will ultimately be reconstructed at the receiver, it is to this reduced set of sine waves that the new phase model will be applied. In the most recent version of the STC system, an amplitude envelope is created by applying linear interpolation to the amplitudes of the reduced set of sine waves. This is used to flatten the amplitudes, and then homomorphic methods are used to estimate and remove the system phase to create the sine wave representation of the glottal excitation waveform. The onset time and the system phase ambiguity are then estimated and used to form a set of residual phases. If the model were perfect, then these phase residuals would be zero. Of course, the model is not perfect; hence, for good synthetic speech it is necessary to code the residuals. An example of such a set of residuals is shown in FIG. 4 for the same data illustrated in FIG. 2. Since only the sine waves in the baseband (up to 1000 Hz) will be coded, the model is actually fitted to the sine wave phase data only in the baseband region. The main point is that whereas the original phase measurements had values that were uniformly distributed over the [-π, π) region, the dynamic range of the phase residuals is much less than π; hence, coding efficiencies can be obtained.
- The final step in coding the sine wave parameters is to quantize the frequencies. This is done by quantizing the residual frequency obtained by replacing the measured frequency by the center frequency of the harmonic bin in which the sine wave lies. Because of the close relationship between the measured excitation phase of a sine wave and its frequency, it is desirable to compensate the phase should the quantized frequency be significantly different from the measured value. Since the final decoded excitation phase is the phase predicted by the model plus the coded phase residual, some phase compensation is inherent in the process since the phase model will be evaluated at the coded frequency and, hence, will better preserve the pitch structure in the synthetic waveform.
- The above analysis is based on the voiced speech case. If the speech should be unvoiced, the linear model will be totally in error, and the residual phase could be expected to deviate widely about the proposed straight-line model. These deviations would be random, a property which would be captured by the phase coder, hence preserving the essential noise-like quality of the unvoiced speech.
- During steady voicing, the glottal excitation can be thought of as a sequence of periodic impulses which can be decomposed into a set of harmonic sine waves that add coherently at the time of occurrence of each pitch pulse. Based on this idea, a model for the speech waveform can be written as
ω = 2πf/fs is the angular frequency in radians, relative to the sampling frequency fs. Since under a minimum-phase assumption the system phase can be determined from the coded log-amplitude using homomorphic techniques, the fidelity of the harmonic reconstruction depends only on the number of bits that can be assigned to the coding of the phase residuals.
- Based on experiments performed during the development of the 4.8 kbps system, it was observed that during steady voicing the predictive phase model was quite accurate, resulting in phase residuals that were essentially zero, while during unvoiced speech the phase predictions were poor, resulting in phase residuals that appeared to be random values within [-π, π]. During transitions and mixed excitations, the behavior of the phase residuals was somewhere between these two extremes. The same sort of behavior can be simulated by replacing each residual phase by a uniformly-distributed random variable whose standard deviation is proportional to the degree to which the analyzed speech is unvoiced. If Pv denotes the probability that the speech is voiced, and if ϑm is a uniformly distributed random variable on [-π, π], then
ε(mωo) = ϑm(1-Pv) (11)
provides an estimate for the phase residual. An estimate of the voicing probability is obtained from the pitch extractor and is related to the degree to which the harmonic model fits the measured set of sine waves.
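Equation (11) can be sketched directly; the seed parameter is an implementation convenience for reproducibility, not part of the model:

```python
import numpy as np

def residual_phases(n_harmonics, p_voiced, seed=None):
    """Eq. (11): replace each residual phase by a uniform random variable
    on [-pi, pi) scaled by (1 - Pv) -- zero for a fully voiced frame,
    full-range random for a fully unvoiced frame."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(-np.pi, np.pi, n_harmonics)   # uniform on [-pi, pi)
    return theta * (1.0 - p_voiced)
```

A fully voiced frame (Pv = 1) yields zero residuals, preserving the coherent pitch-pulse structure; a fully unvoiced frame (Pv = 0) yields fully random residuals, preserving the noise-like character.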
- Since the system phase Φ(ω) is derived from the coded log-magnitude, it is minimum-phase, which causes the synthetic waveform to be "spiky" and, in turn, leads to the perceived "buzziness". Several approaches have been proposed for reducing this effect by introducing some sort of phase dispersion. For example, a dispersive filter having a flat amplitude and quadratic phase can be used, an approach which happens to be particularly well-suited to the sinusoidal synthesizer since it can be implemented simply by replacing the system phase in (10) by
Φ(ω) = βω² (13)
The flexibility of the STC system allows for a pitch-adaptive, speaker-dependent design. This can be done by considering the group delay associated with this phase characteristic, which is given by
T(ω) = dΦ(ω)/dω = 2βω. (14)
If Po represents the average pitch period, then setting T(π) = αPo leads to the design rule
β = αPo/(2π) (15)
where ωo = 2π/Po is the average pitch frequency and 0 < α < 1 controls the length of the chirp. The synthesis model then becomes
- For lower rate applications, it is necessary to use an even more constrained phase model. There are two components to the phase: a rapidly-varying component that changes with every sample, and a slowly-varying component that changes with every frame. The rapidly-varying component can be written as
φm(n) = (n-no)mωo = mφo(n) (17)
where
φo(n) = (n-no)ωo. (18)
- This shows that the rapidly-varying phases are locked in synchrony with the phase of the fundamental and, furthermore, that the pitch onset time simply establishes the time at which all of the excitation sine waves come into phase. But since the sine waves are phase-locked, this onset time simply represents a delay, which is not perceptible to the ear and hence can be ignored. The phase of the fundamental can therefore be generated by integrating the instantaneous pitch frequency and, as a consequence of (10), the phase relationship between neighboring sine waves will be preserved. The rapidly-varying phases are then multiples of the phase of the fundamental, which now becomes
- The resulting phase-locked synthesizer has been implemented on the real-time system and found to dramatically improve the quality of the synthetic speech. Although the improvements are most noticeable at the lower rates below 3 kbps, where no phase coding is possible, the phase-locking technique can also be used for high-frequency regeneration in those cases where not all of the baseband phases are coded. In fact, very good quality can be obtained at 4.8 kbps while coding fewer phases than were used in the earlier designs. Furthermore, since Eqs. (16-20) depend only on the measured pitch frequency, ωo, and a voicing probability, Pv, reduction in the data rate below 4.8 kbps is now possible with little loss in quality even though no explicit phase information is coded.
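A minimal sketch of the phase-locked synthesis implied by (17)-(18), assuming the onset delay is ignored as the text suggests; the optional per-harmonic system_phase argument is a hypothetical hook for a system-phase or dispersion term:

```python
import numpy as np

def phase_locked_synthesis(amps, f0_hz, fs, n_samples, system_phase=None):
    """Phase-locked sinusoidal synthesis: the phase of harmonic m is m
    times the fundamental phase, which is obtained by integrating the
    instantaneous pitch frequency, so neighboring sine waves stay coherent."""
    w0 = 2.0 * np.pi * np.asarray(f0_hz) / fs            # rad/sample (may vary)
    phi0 = np.cumsum(np.broadcast_to(w0, (n_samples,)))  # integrate the pitch
    out = np.zeros(n_samples)
    for m, a in enumerate(amps, start=1):
        phase = m * phi0                                 # harmonics locked to phi0
        if system_phase is not None:
            phase = phase + system_phase[m - 1]          # e.g. a dispersion term
        out += a * np.cos(phase)
    return out
```

With a single harmonic and constant pitch, the output reduces to a plain cosine at the fundamental.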
Claims (20)
sampling the speech to obtain a series of discrete samples and constructing therefrom a series of frames, each frame spanning a plurality of samples;
analyzing each frame of samples to extract a set of frequency components having individual amplitudes and phases;
tracking said components from one frame to the next frame;
interpolating the values of the components from the one frame to the next frame to obtain a parametric representation of the waveform whereby a synthetic speech waveform can be constructed by generating a set of sine waves corresponding to the interpolated values of the parametric representation; and
coding the frequency components for digital transmission, such that excitation contributions to the phases of the frequency components are locked into synchrony.
sampling the speech to obtain a series of discrete samples and constructing therefrom a series of frames, each frame spanning a plurality of samples;
analyzing each frame of samples to extract a set of frequency components having individual amplitudes and phases;
tracking said components from one frame to the next frame;
interpolating the values of the components from the one frame to the next frame to obtain a parametric representation of the waveform whereby a synthetic speech waveform can be constructed by generating a set of sine waves corresponding to the interpolated values of the parametric representation; and
coding the frequency components for digital transmission, such that the frequency components are limited to a set of amplitude channels defined by a plurality of harmonic frequencies.
sampling means for sampling a speech waveform to obtain a series of discrete samples and constructing therefrom a series of frames, each frame spanning a plurality of samples;
analyzing means for analyzing each frame of samples by Fourier analysis to extract a set of frequency components having individual amplitude and phase values;
tracking means for tracking the components from one frame to a next frame; and
coding means for coding the components such that excitation contributions of the phases of the frequency components are locked into synchrony.
sampling means for sampling a speech waveform to obtain a series of discrete samples and constructing therefrom a series of frames, each frame spanning a plurality of samples;
analyzing means for analyzing each frame of samples by Fourier analysis to extract a set of frequency components having individual amplitude and phase values,
tracking means for tracking the components from one frame to a next frame; and
coding means for coding the components such that the frequency components are limited to a set of channels defined by a plurality of harmonic frequencies.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AT88302063T ATE95936T1 (en) | 1987-04-02 | 1988-03-10 | ENCODING OF ACOUSTIC WAVEFORMS. |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US3409787A | 1987-04-02 | 1987-04-02 | |
US34097 | 1987-04-02 |
Publications (3)
Publication Number | Publication Date |
---|---|
EP0285276A2 true EP0285276A2 (en) | 1988-10-05 |
EP0285276A3 EP0285276A3 (en) | 1989-11-23 |
EP0285276B1 EP0285276B1 (en) | 1993-10-13 |
Family
ID=21874290
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP88302063A Expired - Lifetime EP0285276B1 (en) | 1987-04-02 | 1988-03-10 | Coding of acoustic waveforms |
Country Status (6)
Country | Link |
---|---|
EP (1) | EP0285276B1 (en) |
JP (2) | JP3191926B2 (en) |
AT (1) | ATE95936T1 (en) |
AU (2) | AU612351B2 (en) |
CA (1) | CA1332982C (en) |
DE (1) | DE3884839T2 (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH04150233A (en) * | 1990-10-09 | 1992-05-22 | Matsushita Electric Ind Co Ltd | Signal transmission method |
JP2606756B2 (en) * | 1990-10-22 | 1997-05-07 | 財団法人鉄道総合技術研究所 | Digital signal transmission equipment |
CN1193347C (en) * | 2000-06-20 | 2005-03-16 | 皇家菲利浦电子有限公司 | Sinusoidal coding |
PL376861A1 (en) * | 2002-11-29 | 2006-01-09 | Koninklijke Philips Electronics N.V. | Coding an audio signal |
WO2005024783A1 (en) * | 2003-09-05 | 2005-03-17 | Koninklijke Philips Electronics N.V. | Low bit-rate audio encoding |
KR101441474B1 (en) | 2009-02-16 | 2014-09-17 | 한국전자통신연구원 | Method and apparatus for encoding and decoding audio signal using adaptive sinusoidal pulse coding |
US8494199B2 (en) | 2010-04-08 | 2013-07-23 | Gn Resound A/S | Stability improvements in hearing aids |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1986005617A1 (en) * | 1985-03-18 | 1986-09-25 | Massachusetts Institute Of Technology | Processing of acoustic waveforms |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4856068A (en) * | 1985-03-18 | 1989-08-08 | Massachusetts Institute Of Technology | Audio pre-processing methods and apparatus |
-
1988
- 1988-03-01 CA CA000560230A patent/CA1332982C/en not_active Expired - Lifetime
- 1988-03-10 AT AT88302063T patent/ATE95936T1/en not_active IP Right Cessation
- 1988-03-10 EP EP88302063A patent/EP0285276B1/en not_active Expired - Lifetime
- 1988-03-10 DE DE88302063T patent/DE3884839T2/en not_active Expired - Lifetime
- 1988-03-16 AU AU13145/88A patent/AU612351B2/en not_active Ceased
- 1988-03-31 JP JP07665188A patent/JP3191926B2/en not_active Expired - Lifetime
-
1991
- 1991-04-12 AU AU74364/91A patent/AU643769B2/en not_active Ceased
-
2000
- 2000-12-25 JP JP2000393559A patent/JP2001228898A/en active Pending
Non-Patent Citations (3)
Title |
---|
ICASSP 85 PROCEEDINGS IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, Tampa, 26th-29th March 1985, vol. 2, pages 489-492, IEEE; T.E. QUATIERI et al.: "Speech transformations based on a sinusoidal representation" * |
ICASSP 85 PROCEEDINGS IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, Tampa, 26th-29th March 1985, vol. 3, pages 945-948, IEEE; R.J. McAULAY et al.: "Mid-rate coding based on a sinusoidal representation of speech" * |
ICASSP 87 PROCEEDINGS IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, Dallas, 6th-9th April 1987, vol. 3, pages 1645-1648, IEEE; R.J. McAULAY et al.: "Multirate sinusoidal transform coding at rates from 2.4 KBPS to 8 KBPS" * |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5029509A (en) * | 1989-05-10 | 1991-07-09 | Board Of Trustees Of The Leland Stanford Junior University | Musical synthesizer combining deterministic and stochastic waveforms |
EP0527535A2 (en) * | 1991-08-14 | 1993-02-17 | Philips Patentverwaltung GmbH | Apparatus for transmission of speech |
EP0527535A3 (en) * | 1991-08-14 | 1993-10-20 | Philips Patentverwaltung | Apparatus for transmission of speech |
US5327521A (en) * | 1992-03-02 | 1994-07-05 | The Walt Disney Company | Speech transformation system |
EP0642129A1 (en) * | 1993-08-02 | 1995-03-08 | Koninklijke Philips Electronics N.V. | Transmission system with reconstruction of missing signal samples |
BE1007428A3 (en) * | 1993-08-02 | 1995-06-13 | Philips Electronics Nv | Transmission of reconstruction of missing signal samples. |
EP0666557A2 (en) * | 1994-02-08 | 1995-08-09 | AT&T Corp. | Decomposition in noise and periodic signal waveforms in waveform interpolation |
EP0666557A3 (en) * | 1994-02-08 | 1997-08-06 | At & T Corp | Decomposition in noise and periodic signal waveforms in waveform interpolation. |
EP0780831A2 (en) * | 1995-12-23 | 1997-06-25 | Nec Corporation | Coding of a speech or music signal with quantization of harmonics components specifically and then residue components |
EP0780831A3 (en) * | 1995-12-23 | 1998-08-05 | Nec Corporation | Coding of a speech or music signal with quantization of harmonics components specifically and then residue components |
EP1008138A1 (en) * | 1996-11-07 | 2000-06-14 | Creative Technology Ltd. | System for fourier transform-based modification of audio |
EP1008138A4 (en) * | 1996-11-07 | 2002-02-20 | Creative Technoloy Ltd | System for fourier transform-based modification of audio |
US6449592B1 (en) | 1999-02-26 | 2002-09-10 | Qualcomm Incorporated | Method and apparatus for tracking the phase of a quasi-periodic signal |
WO2002003381A1 (en) * | 2000-02-29 | 2002-01-10 | Qualcomm Incorporated | Method and apparatus for tracking the phase of a quasi-periodic signal |
Also Published As
Publication number | Publication date |
---|---|
DE3884839D1 (en) | 1993-11-18 |
AU612351B2 (en) | 1991-07-11 |
EP0285276B1 (en) | 1993-10-13 |
JPH01221800A (en) | 1989-09-05 |
CA1332982C (en) | 1994-11-08 |
ATE95936T1 (en) | 1993-10-15 |
JP3191926B2 (en) | 2001-07-23 |
EP0285276A3 (en) | 1989-11-23 |
DE3884839T2 (en) | 1994-05-05 |
AU7436491A (en) | 1991-07-11 |
JP2001228898A (en) | 2001-08-24 |
AU643769B2 (en) | 1993-11-25 |
AU1314588A (en) | 1988-10-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US5054072A (en) | Coding of acoustic waveforms | |
USRE36478E (en) | Processing of acoustic waveforms | |
US4856068A (en) | Audio pre-processing methods and apparatus | |
Tribolet et al. | Frequency domain coding of speech | |
CN1838239B (en) | Apparatus for enhancing audio source decoder and method thereof | |
EP0285276B1 (en) | Coding of acoustic waveforms | |
US6377916B1 (en) | Multiband harmonic transform coder | |
US4937873A (en) | Computationally efficient sine wave synthesis for acoustic waveform processing | |
EP0243562B1 (en) | Improved voice coding process and device for implementing said process | |
CA1243122A (en) | Processing of acoustic waveforms | |
McAulay et al. | Multirate sinusoidal transform coding at rates from 2.4 kbps to 8 kbps | |
McAulay et al. | Mid-rate coding based on a sinusoidal representation of speech | |
US6052658A (en) | Method of amplitude coding for low bit rate sinusoidal transform vocoder | |
EP1497631B1 (en) | Generating lsf vectors | |
Akamine et al. | ARMA model based speech coding at 8 kb/s | |
Fette et al. | High Quality 2400 bps Vocoder Research | |
Owens et al. | Speech coding | |
Marr et al. | Two dimensional prediction and interpolation for data rate compression of LPC parameters | |
Abu-Shikhah et al. | A hybrid LP-harmonics model for low bit-rate speech compression with natural quality | |
WO1996018187A1 (en) | Method and apparatus for parameterization of speech excitation waveforms |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): AT BE CH DE ES FR GB GR IT LI LU NL SE |
|
PUAL | Search report despatched |
Free format text: ORIGINAL CODE: 0009013 |
|
AK | Designated contracting states |
Kind code of ref document: A3 Designated state(s): AT BE CH DE ES FR GB GR IT LI LU NL SE |
|
17P | Request for examination filed |
Effective date: 19900310 |
|
17Q | First examination report despatched |
Effective date: 19920326 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AT BE CH DE ES FR GB GR IT LI LU NL SE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Effective date: 19931013 Ref country code: LI Effective date: 19931013 Ref country code: ES Free format text: THE PATENT HAS BEEN ANNULLED BY A DECISION OF A NATIONAL AUTHORITY Effective date: 19931013 Ref country code: CH Effective date: 19931013 Ref country code: SE Effective date: 19931013 Ref country code: NL Effective date: 19931013 Ref country code: AT Effective date: 19931013 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 19931013 |
|
REF | Corresponds to: |
Ref document number: 95936 Country of ref document: AT Date of ref document: 19931015 Kind code of ref document: T |
|
ITF | It: translation for a ep patent filed |
Owner name: BARZANO' E ZANARDO MILA |
|
REF | Corresponds to: |
Ref document number: 3884839 Country of ref document: DE Date of ref document: 19931118 |
|
ET | Fr: translation filed | ||
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
NLV1 | Nl: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act | ||
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 19940331 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
26N | No opposition filed | ||
REG | Reference to a national code |
Ref country code: GB Ref legal event code: IF02 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20070327 Year of fee payment: 20 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20070430 Year of fee payment: 20 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: IT Payment date: 20070523 Year of fee payment: 20 |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: PE20 Expiry date: 20080309 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR Payment date: 20070319 Year of fee payment: 20 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GB Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION Effective date: 20080309 |