US4827517A - Digital speech processor using arbitrary excitation coding - Google Patents


Info

Publication number
US4827517A
US4827517A US06/810,920 US81092085A
Authority
US
United States
Prior art keywords
signal
speech
signals
representative
transform domain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
US06/810,920
Inventor
Bishnu S. Atal
Isabel M. M. Trancoso
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AMERICAN TELEPHONE AND TELEGRAPH COMPANY AT&T BELL LABORATORIES
AT&T Corp
Original Assignee
AMERICAN TELEPHONE AND TELEGRAPH COMPANY AT&T BELL LABORATORIES
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by AMERICAN TELEPHONE AND TELEGRAPH COMPANY AT&T BELL LABORATORIES filed Critical AMERICAN TELEPHONE AND TELEGRAPH COMPANY AT&T BELL LABORATORIES
Priority to US06/810,920 priority Critical patent/US4827517A/en
Assigned to BELL TELEPHONE LABORATORIES, INCORPORATED reassignment BELL TELEPHONE LABORATORIES, INCORPORATED ASSIGNMENT OF ASSIGNORS INTEREST. Assignors: TRANCOSO, ISABEL M. M., ATAL, BISHNU S.
Priority to DE8686111494T priority patent/DE3685324D1/en
Priority to EP86111494A priority patent/EP0232456B1/en
Priority to JP61198297A priority patent/JP2954588B2/en
Priority to KR1019860007063A priority patent/KR950013372B1/en
Priority to CA000517118A priority patent/CA1318976C/en
Publication of US4827517A publication Critical patent/US4827517A/en
Application granted granted Critical
Priority to US07/694,583 priority patent/USRE34247E/en
Priority to KR1019950025265A priority patent/KR950013373B1/en
Priority to KR1019950025266A priority patent/KR950013374B1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/02Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
    • G10L19/0212Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders using orthogonal transformation
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L25/12Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being prediction coefficients
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L2019/0001Codebooks
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L2019/0001Codebooks
    • G10L2019/0013Codebook search algorithms
    • G10L2019/0014Selection criteria for distances
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique

Definitions

  • Our invention relates to speech processing and more particularly to digital speech coding arrangements.
  • Digital speech communication systems including voice storage and voice response facilities utilize signal compression to reduce the bit rate needed for storage and/or transmission.
  • a speech pattern contains redundancies that are not essential to its apparent quality. Removal of redundant components of the speech pattern significantly lowers the number of digital codes required to construct a replica of the speech. The subjective quality of the speech replica, however, is dependent on the compression and coding techniques.
  • One well known digital speech coding system such as disclosed in U.S. Pat. No. 3,624,302 issued Nov. 30, 1971 includes linear prediction analysis of an input speech signal.
  • the speech signal is partitioned into successive intervals of 5 to 20 milliseconds duration and a set of parameters representative of the interval speech is generated.
  • the parameter set includes linear prediction coefficient signals representative of the spectral envelope of the speech in the interval, and pitch and voicing signals corresponding to the speech excitation. These parameter signals may be encoded at a much lower bit rate than the speech signal waveform itself.
  • a replica of the input speech signal is formed from the parameter signal codes by synthesis.
  • the synthesizer arrangement generally comprises a model of the vocal tract in which the excitation pulses of each successive interval are modified by the interval spectral envelope representative prediction coefficients in an all pole predictive filter.
  • the foregoing pitch excited linear predictive coding is very efficient and reduces the coded bit rate, e.g., from 64 kb/s to 2.4 kb/s.
  • the produced speech replica exhibits a synthetic quality that makes speech difficult to understand.
  • the low speech quality results from the lack of correspondence between the speech pattern and the linear prediction model used. Errors in the pitch code or errors in determining whether a speech interval is voiced or unvoiced cause the speech replica to sound disturbed or unnatural. Similar problems are also evident in formant coding of speech.
  • Alternative coding arrangements in which the speech excitation is obtained from the residual after prediction, e.g., APC (adaptive predictive coding), provide a marked improvement because the excitation is not dependent upon an inexact model.
  • the excitation bit rate of these systems is at least an order of magnitude higher than the linear predictive model. Attempts to lower the excitation bit rate in the residual type systems have generally resulted in a substantial loss in quality.
  • the optimum Gaussian innovation sequence is obtained by comparing a speech waveform segment, typically 5 ms. in duration, to synthetic speech waveforms derived from a plurality of random Gaussian innovation sequences.
  • the innovation sequence that minimizes a perceptual error criterion is selected to represent the segment speech waveform.
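The selection loop just described can be sketched as follows. This is only an illustrative sketch: the codebook size, impulse response, and target segment are stand-ins (the text cites 1024 Gaussian sequences over 5 ms segments), and plain squared error is used in place of the perceptually weighted criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

FRAME = 40   # 5 ms at an 8 kHz sampling rate
K = 64       # illustrative codebook size (the text cites 1024 sequences)

codebook = rng.standard_normal((K, FRAME))   # random Gaussian innovation sequences
h = 0.9 ** np.arange(FRAME)                  # stand-in synthesis-filter impulse response
target = rng.standard_normal(FRAME)          # speech waveform segment to be matched

best_k, best_err = -1, np.inf
for k in range(K):
    synth = np.convolve(codebook[k], h)[:FRAME]           # synthetic waveform for codeword k
    gain = float(target @ synth) / float(synth @ synth)   # optimal scale factor for this codeword
    err = float(np.sum((target - gain * synth) ** 2))     # squared error after optimal scaling
    if err < best_err:
        best_k, best_err = k, err
```

This exhaustive time-domain search is exactly what the transform domain processing described later is meant to accelerate.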
  • the foregoing object is realized by replacing the exhaustive search of stochastic or other arbitrary innovation sequence codes in a speech analyzer with an arrangement that converts the stochastic codes into transform domain code signals and generates a set of transform domain patterns from the transform codes for each time frame interval.
  • the transform domain code patterns are compared to the transform of the time interval speech pattern obtained from the input speech to select the best matching stochastic code, and an index signal corresponding to the best matching stochastic code is output to represent the time frame interval speech.
  • Transform domain processing reduces the complexity and the time required for code selection.
  • the index signal is applied to a decoder in which it is used to select a stochastic code stored therein.
  • the stochastic codes may represent the time frame speech pattern excitation signal whereby the code bit rate is reduced to that required for the index signals and the prediction parameters of the time frame.
  • the stochastic codes may be predetermined overlapping segments of a string of stochastic numbers to reduce storage requirements.
  • the invention is directed to an arrangement for processing a speech message in which a set of arbitrary value code signals such as random numbers, together with index signals identifying the arbitrary value code signals and signals representative of transforms of the arbitrary value codes, are formed.
  • the speech message is partitioned into time frame interval speech patterns and a first signal representative of the speech pattern of each successive time frame interval is formed responsive to the partitioned speech.
  • a plurality of second signals representative of time frame interval patterns formed from the transform domain code signals are generated.
  • One of said arbitrary code signals is selected for each time frame interval jointly responsive to the first signal and the second signals of the time frame interval, and the index signal corresponding to said selected arbitrary code signal is output.
  • forming of the first signal includes generating a third signal that is a transform domain signal corresponding to the current time frame interval speech pattern and the generation of each second signal includes producing a fourth signal that is a transform domain signal corresponding to a time frame interval pattern responsive to said transform domain code signals.
  • Arbitrary code selection comprises generating a signal representative of the similarities between said third and fourth signals and determining the index signal corresponding to the fourth signal having the maximum similarities signal.
  • the transform domain code signals are frequency domain transform codes derived from the arbitrary codes.
  • the transform domain code signals are Fourier transforms of the arbitrary codes.
  • a speech message is formed from the arbitrary codes by receiving a sequence of said outputted index signals, each identifying a predetermined arbitrary code. Each index signal corresponds to a time frame interval speech pattern.
  • the arbitrary codes are concatenated responsive to the sequence of said received index signals and the speech message is formed responsive to the concatenated codes.
  • a speech message is formed using a string of arbitrary value coded signals having predetermined segments thereof identified by index signals.
  • a sequence of signals identifying predetermined segments of said string are received.
  • Each of said signals of the sequence corresponds to speech patterns of successive time frame intervals.
  • the predetermined segments of said arbitrary valued code string are selected responsive to the sequence of received identifying signals and the selected arbitrary codes are concatenated to generate a replica of the speech message.
  • the arbitrary value signal sequences of the string are overlapping sequences.
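The overlapping-segment decoder described above can be sketched as follows. The names, shift of 4 samples, and index/gain values are assumptions for illustration; the point is that K overlapping codewords need only about K·SHIFT + FRAME stored numbers instead of K·FRAME.

```python
import numpy as np

rng = np.random.default_rng(1)

FRAME = 40
SHIFT = 4                            # consecutive codewords share FRAME - SHIFT samples
string = rng.standard_normal(1024)   # single stored string of arbitrary (random) values

def segment(index):
    """Codeword `index` is the FRAME samples starting at index * SHIFT."""
    start = index * SHIFT
    return string[start:start + FRAME]

# decoder side: concatenate the segments named by the received index sequence
indices = [3, 17, 3, 8]              # per-frame index signals (illustrative)
gains = [1.0, 0.5, 1.0, 2.0]         # per-frame scale factors (illustrative)
excitation = np.concatenate([g * segment(i) for i, g in zip(indices, gains)])
```

Each received index thus reconstructs one time frame interval of the excitation replica from the shared string.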
  • FIG. 1 depicts a speech encoder utilizing a prior art stochastic coding arrangement
  • FIGS. 2 and 3 depict a general block diagram of a digital speech encoder using arbitrary codes and transform domain processing that is illustrative of the invention;
  • FIG. 4 depicts a detailed block diagram of a digital speech encoding signal processing arrangement that performs the functions of the circuit shown in FIGS. 2 and 3;
  • FIG. 5 shows a block diagram of an error and scale factor generating circuit useful in the arrangement of FIG. 3;
  • FIGS. 6-11 show flow chart diagrams that illustrate the operation of the circuit of FIG. 4.
  • FIG. 12 shows a block diagram of a speech decoder circuit illustrative of the invention in which a string of random number codes form an overlapping sequence of stochastic codes.
  • FIG. 1 shows a prior art digital speech coder arranged to use stochastic codes for excitation signals.
  • a speech pattern applied to microphone 101 is converted therein to a speech signal which is band pass filtered and sampled in filter and sampler 105 as is well known in the art.
  • the resulting samples are converted into digital codes by analog-to-digital converter 110 to produce digitally coded speech signal s(n).
  • Signal s(n) is processed in LPC and pitch predictive analyzer 115.
  • the processing includes dividing the coded samples into successive speech frame intervals and producing a set of parameter signals corresponding to the signal s(n) in each successive frame.
  • parameter signals a(1), . . . , a(p) represent the short delay correlation or spectral related features of the interval speech pattern
  • parameter signals ⁇ (1), ⁇ (2), ⁇ (3), and m represent long delay correlation or pitch related features of the speech pattern.
  • the speech signal is partitioned in frames or blocks, e.g., 5 msec or 40 samples in duration.
  • stochastic code store 120 may contain 1024 random white Gaussian codeword sequences, each sequence comprising a series of 40 random numbers.
  • Each codeword is scaled in scaler 125, prior to filtering, by a factor ⁇ that is constant for the 5 msec block.
  • the speech adaptation is done in recursive filters 135 and 145.
  • Filter 135 uses a predictor with large memory (2 to 15 msec) to introduce voice periodicity and filter 145 uses a predictor with short memory (less than 2 msec) to introduce the spectral envelope in the synthetic speech signal.
  • Such filters are described in the article "Predictive coding of speech at low bit rates" by B. S. Atal appearing in the IEEE Transactions on Communications, Vol. COM-30, pp. 600-614, April 1982.
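The two synthesis stages can be sketched minimally in the time domain. This assumes a one-tap long-delay (pitch) predictor and a direct-form all-pole recursion; the coefficient values, lag, and signal lengths are illustrative, not taken from the patent.

```python
import numpy as np

def long_delay_filter(e, beta, m):
    """Pitch synthesis 1/(1 - beta*z^-m): introduces voice periodicity at lag m."""
    y = np.array(e, dtype=float)
    for n in range(m, len(y)):
        y[n] += beta * y[n - m]
    return y

def short_delay_filter(v, a):
    """Spectral-envelope synthesis 1/(1 - sum_k a[k]*z^-(k+1))."""
    s = np.zeros(len(v))
    for n in range(len(v)):
        s[n] = v[n] + sum(a[k] * s[n - 1 - k]
                          for k in range(len(a)) if n - 1 - k >= 0)
    return s

impulse = np.zeros(10)
impulse[0] = 1.0
periodic = long_delay_filter(impulse, beta=0.5, m=3)   # echoes at lags 3, 6, 9
shaped = short_delay_filter(periodic, a=[0.5])         # envelope imposed on the periodic signal
```

A single impulse into the long-delay stage yields a decaying pulse train, which the short-delay stage then spectrally shapes, mirroring the filter 135 / filter 145 cascade.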
  • the error representing the difference between the original speech signal s(n) applied to differencer 150 and synthetic speech signal ŝ(n) applied from filter 145 is further processed by linear filter 155 to attenuate those frequency components where the error is perceptually less important and amplify those frequency components where the error is perceptually more important.
  • the stochastic code sequence from store 120 which produces the minimum mean-squared subjective error signal E(k) and the corresponding optimum scale factor ⁇ are selected by peak picker 170 only after processing of all 1024 code word sequences in store 120.
  • synthesis filters 135 and 145 and perceptual weighting filter 155 can be combined into one linear filter.
  • the impulse response of this equivalent filter may be represented by the sequence f(n). Only a part of the equivalent filter output is determined by its input in the current 5 msec frame since, as is well known in the art, a portion of the filter output corresponds to signals carried over from preceding frames.
  • the filter memory from the previous frames plays no role in the search for the optimum innovation sequence in the present frame. The contributions of the previous memory to the filter output in the present frame can thus be subtracted from the speech signal in determining the optimum code word from stochastic code store 120.
  • the residual after subtracting the contributions of the filter memory carried over from the previous frames may be represented by the signal x(n).
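The memory-carryover subtraction can be illustrated with a toy one-pole filter. The coefficient, carried-over state, and frame values below are assumptions chosen so the arithmetic is easy to follow; the key property is that the ringing term is the same for every candidate codeword.

```python
import numpy as np

def zero_input_response(a, past, n_out):
    """All-pole filter output when the current-frame input is zero: this part is
    fixed by the memory carried over from preceding frames, so it can be
    subtracted from the speech signal once, before the codeword search."""
    hist = list(past)                       # hist[-1] = most recent past output
    out = []
    for _ in range(n_out):
        y = sum(a[k] * hist[-1 - k] for k in range(len(a)))
        out.append(y)
        hist.append(y)
    return np.array(out)

a = [0.5]                                             # toy one-pole predictor
ringing = zero_input_response(a, past=[2.0], n_out=4) # -> [1.0, 0.5, 0.25, 0.125]

weighted_speech = np.array([1.2, 0.6, 0.3, 0.2])      # illustrative frame signal
x = weighted_speech - ringing                         # target x(n) for the codeword search
```

Every candidate codeword is then matched against x rather than against the raw frame signal.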
  • the filter output contributed by the kth codeword from store 120 in the present frame is ##EQU1## where c.sup.(k) (i) is the ith sample of the kth codeword.
  • the error signal expressed in equation 11 can be processed much faster than the expression in equation 5. If Fc(k) is processed in a recursive filter of order p (typically 20), processing according to equation 11 can substantially reduce the processing time requirements for stochastic coding.
  • the reduced processing time may also be obtained by extending the operations of equation 5 from the time domain to a transform domain such as the frequency domain.
  • the filter output can be expressed in the frequency domain as
  • X.sup.(k) (i), H(i) and C.sup.(k) (i) are discrete Fourier transforms (DFTs) of x.sup.(k) (n), h(n) and c.sup.(k) (n), respectively.
  • the duration of the filter output can be considered to be limited to a 10 msec time interval and zero outside.
  • a DFT with 80 points is sufficiently accurate for expressing equation 13.
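The 80-point figure can be checked numerically: a 40-sample frame convolved with an impulse response whose significant part fits within the 10 msec window yields at most 40 + 41 − 1 = 80 output samples, so an 80-point DFT makes circular convolution agree exactly with linear convolution. The truncation length and decay rate below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, NF = 40, 80                   # 5 msec frame; 80-point DFT (10 msec)

c = rng.standard_normal(N)       # codeword, zero outside the frame
h = 0.8 ** np.arange(41)         # impulse response truncated to 41 taps

direct = np.convolve(c, h)       # linear convolution, length 40 + 41 - 1 = 80
via_dft = np.fft.ifft(np.fft.fft(c, NF) * np.fft.fft(h, NF)).real
```

Because the linear-convolution length does not exceed NF, no time-domain wraparound occurs and the two results match sample for sample.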
  • the total squared error E(k) is expressed in frequency-domain notations as ##EQU5## where X(i) is the DFT of x(n).
  • Equation 14 is then transformed to ##EQU6## Again, the scale factor γ(k) can be eliminated from equation 17 and the total error can be expressed as ##EQU7## where ν(i)* is the complex conjugate of ν(i).
  • the frequency-domain search has the advantage that the singular-value decomposition of the matrix F is replaced by discrete fast Fourier transforms whereby the overall processing complexity is significantly reduced. In the transform domain using either the singular value decomposition or the discrete Fourier transform processing, further savings in the computational load can be achieved by restricting the search to a subset of frequencies (or eigenvectors) corresponding to large values of d(i) (or b(i)).
  • the processing is substantially reduced whereby real time operation with microprocessor integrated circuits is realizable. This is accomplished by replacing the time domain processing involved in the generation of the error between the synthetic speech signal formed responsive to the innovation code and the input speech signal of FIG. 1 with transform domain processing as described hereinbefore.
  • A transform domain digital speech encoder using arbitrary codes for excitation signals illustrative of the invention is shown in FIGS. 2 and 3.
  • the arbitrary codes may take the form of random number sequences or may, for example, be varied sequences of +1 and -1 in any order. Any arrangement of varied sequences may be used with the broad restriction that the overall average of the sequences is small.
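The "small overall average" restriction is easy to illustrate with ±1 sequences, using sizes matching the 1024-codeword, 40-sample store mentioned earlier; the seed and the 0.05 tolerance are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
K, FRAME = 1024, 40

# arbitrary codes need not be Gaussian: varied random +1/-1 sequences also
# qualify, provided the overall average of the sequences stays near zero
codebook = rng.choice([-1.0, 1.0], size=(K, FRAME))
overall_mean = codebook.mean()
```

With 40,960 equiprobable ±1 samples the overall mean is tightly concentrated near zero, satisfying the stated restriction.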
  • a speech pattern such as a spoken message received by microphone transducer 201 is bandlimited and converted into a sequence of pulse samples in filter and sampler circuit 203 and supplied to linear prediction coefficient (LPC) analyzer 209 via analog-to-digital converter 205.
  • the filtering may be arranged to remove frequency components of the speech signal above 4.0 KHz, and the sampling may be at an 8.0 KHz rate as is well known in the art.
  • Each sample from circuit 203 is transformed into an amplitude representative digital code in the analog-to-digital converter.
  • the analyzer also forms a set of perceptually weighted linear predictive coefficient signals b(k)
  • the speech samples from A/D converter 205 are delayed in delay 207 to allow time for the formation of speech parameter signals a(k) and the delayed samples are supplied to the input of prediction residual generator 211.
  • the prediction residual generator, as is well known in the art, is responsive to the delayed speech samples s(n) and the prediction parameters a(k) to form a signal δ(n) corresponding to the differences between speech samples and their predicted values.
  • the formation of the predictive parameters and the prediction residual signal for each frame in predictive analyzer 209 may be performed according to the arrangement disclosed in U.S. Pat. No. 3,740,476 issued to B. S. Atal, June 19, 1973, and assigned to the same assignee, or in other arrangements well known in the art.
  • Prediction residual signal generator 211 is operative to subtract the predictable portion of the frame signal from the sample signals s(n) to form signal ⁇ (n) in accordance with ##EQU8## where p, the number of the predictive coefficients, may be 12, N the number of samples in a speech frame, may be 40, and a(k) are the predictive coefficients of the frame. Predictive residual signal ⁇ (n) corresponds to the speech signal of the frame with the short term redundancies removed.
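The short-term redundancy removal can be sketched directly. The elided relation is assumed here to be the standard LPC residual form δ(n) = s(n) − Σ_{k=1}^{p} a(k)·s(n−k), consistent with the symbols the text defines; the toy signal and single coefficient are chosen so the whitening is visible by hand.

```python
import numpy as np

def prediction_residual(s, a):
    """delta(n) = s(n) - sum_{k=1}^{p} a(k)*s(n-k): the frame signal with
    short-term (spectral envelope) redundancy removed."""
    p = len(a)
    delta = np.array(s, dtype=float)
    for n in range(len(s)):
        for k in range(1, p + 1):
            if n - k >= 0:
                delta[n] -= a[k - 1] * s[n - k]
    return delta

s = np.array([1.0, 0.5, 0.25, 0.125])   # toy "speech" samples
a = [0.5]                               # p = 1 predictor (the text uses p = 12, N = 40)
delta = prediction_residual(s, a)       # -> [1.0, 0.0, 0.0, 0.0]
```

Since the toy signal is exactly first-order predictable, the residual collapses to a single impulse, which is the redundancy-removal effect the text describes.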
  • an optimum arbitrary code ##EQU10## is selected to represent the frame excitation, and a signal K* that indexes the selected arbitrary excitation code is transmitted.
  • the speech code bit rate is minimized without adversely affecting intelligibility.
  • the arbitrary code is selected in the transform domain to reduce the selection processing so that it can be performed in real time with microprocessor components.
  • Selection of the arbitrary code for excitation includes combining the predictive residual with the perceptually weighted linear predictive parameters of the frame to generate a signal y(n).
  • Speech pattern signal y(n) corresponding to the perceptually weighted speech signal contains a component ŷ(n) due to the preceding frames. This preceding frame component ŷ(n) is removed prior to the selection processing so that the stored arbitrary codes are in effect compared to only the current frame excitation.
  • Signal y(n) is formed in predictive filter 217 responsive to the perceptually weighted predictive parameter and the predictive residual signals of the frame as per the relation ##EQU11## and is stored in y(n) store 227.
  • the preceding frame speech contribution signal ŷ(n) is generated in preceding frame contribution signal generator 222 from the perceptually weighted predictive parameter signal b(k) of the current frame, the pitch predictive parameters β(1), β(2), β(3) and m obtained from store 230, and the selected excitation code of the preceding frame.
  • Generator 222 may comprise well known processor arrangements adapted to form the signals of equations 24.
  • the past frame speech contribution signal ŷ(n) of store 240 is subtracted from the perceptually weighted signal of store 227 in subtractor circuit 247 to form the current frame speech pattern signal with past frame components removed.
  • the difference signal x(n) from subtractor 247 is then transformed into a frequency domain signal set by discrete Fourier transform (DFT) generator 250 as follows.
  • N.sub.f is the number of DFT points, e.g., 80.
  • the DFT transformation generator may operate as described in the U.S. Pat. No. 3,588,460 issued to Richard A. Smith, June 28, 1971, and assigned to the same assignee, or may comprise any of the well known discrete Fourier transform circuits.
  • the frequency domain impulse response signal H(i) and the frequency domain perceptually weighted speech signal with preceding frame contributions removed X(i) are applied to transform parameter signal converter 301 in FIG. 3 wherein the signals d(i) and ⁇ (i) are formed according to
  • Each code output C.sup.(k) (i) of transform domain code store 305 is applied to one of the K error and scale factor generators 315-1 through 315-K wherein the transformed arbitrary code is compared to the time frame speech signal represented by signals d(i) and ⁇ (i) for the time frame obtained from parameter signal converter 301.
  • FIG. 5 shows a block diagram arrangement that may be used to produce the error and scale factor signals for error and scale factor generator 315-K. Referring to FIG. 5, arbitrary code sequence C.sup.(k) (1), C.sup.(k) (2), . . . , C.sup.(k) (i), . . .
  • C.sup.(k) (N) is applied to speech pattern cross correlator 501 and speech pattern energy coefficient generator 505 which serves as a normalizer.
  • Signal d(i) from transform parameter signal converter 301 is supplied to cross correlator 501 and normalizer 505, while ⁇ (i) from converter 301 is supplied to cross correlator 501.
  • Cross correlator 501 is operative to generate the signal ##EQU20## which represents the correlation of the speech frame signal with past frame components removed, ν(i), and the frame speech signal derived from the transformed arbitrary code, d(i)C.sup.(k) (i), while squarer circuit 510 produces the signal ##EQU21##
  • the error using code sequence ##EQU22## is formed in divider circuit 515 responsive to the outputs of cross correlator 501 and normalizer 505 over the current speech time frame according to ##EQU23## and the scale factor is produced in divider 520 responsive to the outputs of squarer circuit 510 and normalizer 505 as per ##EQU24##
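Because the ##EQU20##–##EQU24## relations are not reproduced in this text, the following is only a plausible realization of the correlator / normalizer / divider structure just described, not the patent's exact formulas: the match score grows with the correlation between the weighted target spectrum ν(i) and the weighted codeword spectrum d(i)C.sup.(k) (i), normalized by the weighted codeword energy.

```python
import numpy as np

rng = np.random.default_rng(4)
NF = 80

d = np.abs(rng.standard_normal(NF))            # spectral weighting derived from H(i) (assumed)
nu = rng.standard_normal(NF) + 1j * rng.standard_normal(NF)   # weighted target spectrum nu(i)
C = rng.standard_normal(NF) + 1j * rng.standard_normal(NF)    # transformed codeword C^(k)(i)

# cross correlator: weighted target against weighted codeword spectrum
corr = float(np.real(np.sum(d**2 * np.conj(nu) * C)))
# normalizer: energy of the weighted codeword spectrum
energy = float(np.sum(d**2 * np.abs(C) ** 2))

match = corr**2 / energy   # larger match  <=>  smaller residual error for codeword k
gamma = corr / energy      # optimal scale factor for codeword k
```

Selecting the codeword with maximum match is then equivalent to selecting the minimum-error codeword, with γ(k) eliminated from the comparison as the text describes.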
  • the arbitrary code that best matches the characteristics of the current frame speech pattern is selected in code selector 320 of FIG. 3, and the index of the selected code K* as well as the scale factor for the code ⁇ (K*) are supplied to multiplexer 325.
  • the multiplexer is adapted to combine the excitation code signals K* and ⁇ (K*) with the current speech time frame LPC parameter signals a(k) and pitch parameter signals ⁇ (1), ⁇ (2), ⁇ (3) and m into a form suitable for transmission or storage.
  • Index signal K* is also applied to selector 326 so that the time domain code for the index is selected from store 330.
  • the selected time domain code ##EQU25## is fed to preceding frame contribution generator 222 in FIG. 2 where it is used in the formation of the signal ŷ(n) for the next speech time frame processing. ##EQU26##
  • FIG. 4 depicts a speech encoding arrangement according to the invention wherein the operations described with respect to FIGS. 2 and 3 are performed in a series of digital signal processors 405, 410, 415, and 420-1 through 420-K under control of control processor 435.
  • Processor 405 is adapted to perform the predictive coefficient signal processing associated with LPC analyzer 209, LPC and weighted LPC signal stores 213 and 215, prediction residual signal generator 211, and pitch predictive analyzer 220 of FIG. 2.
  • Predictive residual signal processor 410 performs the functions described with respect to predictive filter 217, preceding frame contribution signal generator 222, subtractor 247 and impulse response generator 225.
  • Transform signal processor 415 carries out the operations of DFT generators 245 and 250 of FIG. 2 and transform parameter signal converter 301 of FIG. 3.
  • Processors 420-1 through 420-K produce the error and scale factor signals as would be obtained from error and scale factor generators 315-1 through 315-K of FIG. 3.
  • Each of the digital signal processors may be the WE® DSP32 Digital Signal Processor described in the article "A 32 Bit VLSI Digital Signal Processor", by P. Hays et al, appearing in the IEEE Journal of Solid State Circuits, Vol. SC20, No. 5, pp. 998, October 1985, and the control processor may be the Motorola type 68000 microprocessor and associated circuits described in the publication "MC68000 16 Bit Microprocessor User's Manual", Second Edition, Motorola Inc., 1980.
  • Each of the digital signal processors has associated therewith a memory for storing data for its operation, e.g., data memory 408 connected to prediction coefficient signal processor 405.
  • Common data memory 450 stores signals from one digital signal processor that are needed for the operation of another signal processor.
  • Common program store 430 has therein a sequence of permanently stored instruction signals used by control processor 435 and the digital signal processors to time and carry out the encoding functions of FIG. 4.
  • Stochastic code store 440 is a read only memory that includes random codes C.sup.(k) (n) as described with respect to FIG. 3 and transform code signal store 445 is another read only memory that holds the Fourier transformed frequency domain code signals corresponding to the codes in store 440.
  • the encoder of FIG. 4 may form a part of a communication system in which speech applied to microphone 401 is encoded to a low bit rate digital signal, e.g., 4.8 kb/s, and transmitted via a communication link to a receiver adapted to decode the arbitrary code indices and frame parameter signals.
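A rough budget check shows why index coding is so economical, assuming a 10-bit index per 5 msec frame (1024 codewords implies log2(1024) = 10 bits); the split of the remaining bits among gain, LPC, and pitch parameters is not given in this excerpt.

```python
FRAME_MS = 5
FRAMES_PER_SEC = 1000 // FRAME_MS        # 200 frames per second
INDEX_BITS = 10                          # 1024 codewords -> 10 bits per index

excitation_rate = FRAMES_PER_SEC * INDEX_BITS   # 2000 b/s for the excitation index alone
budget = 4800                                   # target coder rate cited in the text
side_info = budget - excitation_rate            # bits/s left for gain, LPC and pitch parameters
```

The excitation index thus consumes well under half of the 4.8 kb/s budget, which is what makes the overall low-rate operation feasible.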
  • the output of the encoder of FIG. 4 may be stored for later decoding in a store and forward system or stored in read only memory for use in speech synthesizers of the type that will be described.
  • control processor 435 is conditioned by a manual signal ST from a switch or other device (not shown) to enable the operation of the encoder. All of the operations of the digital signal processors of FIG. 4 are carried out under control of the instruction signals in common program store 430.
  • In step 601, signal ST is produced to enable predictive coefficient processor 405 and the instructions in common program store 430 are accessed to control the operation of processor 405.
  • Speech applied to microphone 401 is filtered and sampled in filter and sampler 403 and converted to a sequence of digital signals in A/D converter 404.
  • Processor 405 receives the digitally coded sample signals from converter 404, partitions the samples into time frame segments as they are received and stores the successive frame samples in data memory 408 as indicated in step 705 of FIG. 7.
  • Short delay coefficient signals a(k) and perceptually weighted short delay signals b(k) are produced in accordance with aforementioned U.S. Pat. No. 4,133,976 and equation 19 for the current time frame as per step 710.
  • the current frame predictive residual signal δ(n) is generated in accordance with equation 20 from the current frame speech samples s(n) and the LPC coefficient signals a(k) in step 715.
  • an end of short delay analysis signal is sent to control processor 435 (step 720).
  • the STELPC signal is used to start the operations of processor 410 as per step 615 of FIG. 6.
  • Processor 405 may be adapted to form the predictive coefficient signals as described in the aforementioned U.S. Pat. No. 4,133,976.
  • the signals a(k), b(k), δ(n), and β(1), β(2), β(3) and m of the current speech frame are transferred to common data memory 450 for use in residual signal processing.
  • control processor 435 is responsive to the STELPC signal to activate prediction residual signal processor 410 by means of step 801 in FIG. 8.
  • the operations of processor 410 are done under control of common program store 430 as illustrated in the flow chart of FIG. 8.
  • the formation and storage of the current frame perceptually weighted signal y(n) is accomplished in step 805 according to equation 23.
  • Long delay predictor contribution signals ⁇ (n) are generated as per equation 24 in step 810.
  • The short delay predictor contribution signal is produced in step 815 as per equation 24.
  • the current frame speech pattern signal with preceding frame components removed, x(n), is produced in step 820 by subtracting the long and short delay predictor contribution signals from perceptually weighted signal y(n), and impulse response signal h(n) is formed from the LPC coefficient signals a(k) as described in aforementioned U.S. Pat. No. 4,133,476 (step 825). Signals x(n) and h(n) are transferred to and stored in common data memory 450 for use in transform signal processor 415.
  • Upon completion of the generation of signals x(n) and h(n) for the current time frame, control processor 435 receives signal STEPSP from processor 410. When both signals STEPSP and SEPTCA are received by control processor 435 (step 621 of FIG. 6), the operation of transform signal processor 415 is started by transmitting the STEPSP signal to processor 415 as per step 625 in FIG. 6. Processor 415 is operative to generate the frequency domain speech frame representative signals X(i) and H(i) by performing a discrete Fourier transform operation on signals x(n) and h(n). Referring to FIG. 9, upon detecting signal STEPSP (step 901), the x(n) and h(n) signals are read from common data memory 450 (step 905).
  • Signals X(i) are generated from the x(n) signals (step 910) and signals H(i) are generated from the h(n) signals (step 915) by Fourier transform operations well known in the art.
  • the DFT may be implemented in accordance with the principles described in aforementioned U.S. Pat. No. 3,588,460.
  • the conversion of signals X(i) and H(i) into the speech frame representative signals d(i) and ξ(i) implemented in processor 415 is done in step 920 as per equation 29, and signals d(i) and ξ(i) are stored in common data memory 450.
  • signal STETPS is sent to control processor 435 (step 925). Responsive to signal STETPS in step 630, the control processor enables the error and scale factor signal processors 420-1 through 420-R (step 635).
  • control signal STETPS (step 1001) permits the initial setting of parameters k identifying the stochastic code being processed, K* identifying the selected stochastic code for the current frame, P(r)* identifying the cross correlation coefficient signal of the selected code for the current frame, and Q(r)* identifying the energy coefficient signal of the selected code for the current frame.
  • the currently considered transform domain arbitrary code C^(k)(i) is read from transform code signal store 445 (step 1005) and the current frame transform domain speech pattern signal obtained from the transform domain arbitrary code C^(k)(i) is formed (step 1015) from the d(i) and C^(k)(i) signals.
  • the signal d(i)C^(k)(i) represents the speech pattern of the frame produced by the arbitrary code c^(k)(n).
  • code signal C^(k)(i) corresponds to the frame excitation
  • signal d(i) corresponds to the predictive filter representative of the human vocal apparatus.
  • Signal ξ(i) stored in common data store 450 is representative of the current frame speech pattern obtained from microphone 401.
  • the two transform domain speech pattern representative signals, d(i)C^(k)(i) and ξ(i), are cross correlated to form signal P(k) in step 1020, and an energy coefficient signal Q(k) is formed in step 1022 for normalization purposes.
  • the current deviation of the stochastic code frame speech pattern from the actual speech pattern of the frame is evaluated in step 1025. If the error between the code pattern and the actual pattern is less than the best obtained for preceding codes in the evaluation, index signal K(r)*, cross correlation signal P(r)* and energy coefficient signal Q(r)* are set to k, P(k), and Q(k) in step 1030. Step 1035 is then entered to determine if all codes have been evaluated.
  • step 1035 is entered directly from step 1025.
  • code index signal k is incremented (step 1040) and step 1010 is reentered.
  • signal K(r)* is stored and scale factor signal γ(r)* is generated in step 1045.
  • the index signal K(r)* and scale factor signal γ(r)* for the codes processed in the error and scale factor signal processor are stored in common data store 450.
  • Step 1050 is then entered and the STEER control signal is sent to control processor 435 to signal the completion of the transform code selection in the error and scale factor signal processor (step 640 in FIG. 6).
  • the control processor is then operative to enable the minimum error and multiplex processor 455 as per step 645.
  • processor 455 is operative according to the flow chart of FIG. 11 to select the best matching stochastic code in store 440. Its index K* is selected from the best arbitrary code indices K*(1) through K*(R) produced by processors 420-1 to 420-R and corresponds to the stochastic code that results in the minimum error signal.
  • processor 455 is enabled when a signal is received from control processor 435 indicating that processors 420-1 through 420-R have sent STEER signals.
  • Signals r, K*, P*, and Q* are each set to an initial value of one, and signals P(r)*, Q(r)*, K(r)* and γ(r)* are read from common data memory 450 (step 1110). If the current signals P(r)* and Q(r)* result in a better matching stochastic code signal as determined in step 1115, these values are stored as K*, P*, Q*, and γ* for the current frame (step 1120) and decision step 1125 is entered. Until the Rth set of signals K(R)*, P(R)*, Q(R)* are processed, step 1110 is reentered via incrementing step 1130 so that all possible candidates for the best stochastic code are considered. After the Rth set of signals is processed, signal K*, the selected index of the current frame, and signal γ*, the corresponding scale factor signal, are stored in common data memory 450.
  • the predictive parameter signals for the current frame and signals K* and γ* are then read from memory 450 (step 1140), and the signals are converted into a frame transmission code set as is well known in the art (step 1145).
  • the current frame end transmission signal FET is then generated and sent to control processor 435 to signal the beginning of the succeeding frame processing (step 650 in FIG. 6).
  • the coded speech signal of the time frame comprises a set of LPC coefficients a(k), a set of pitch predictive coefficients β(1), β(2), β(3), and m, and the stochastic code index and scale factor signals K* and γ*.
  • a predictive decoder circuit is operative to pass the excitation signal of each speech time frame through one or more filters that are representative of a model of the human vocal apparatus.
  • the excitation signal is an arbitrary code stored therein which is indexed as described with respect to the speech encoder of the circuits of FIGS. 2 and 3 or FIG. 4.
  • the stochastic codes may be a set of 1024 codes each comprising a set of 40 random numbers obtained from a string of 1063 random numbers g(1), g(2), . . . , g(1063) stored in a register.
  • the 40 element stochastic codes are arranged in overlapping fashion as illustrated in Table 1.
  • each code is a sequence of 40 random numbers that are overlapped so that each successive code begins at the second number position of the preceding code.
  • 39 positions of successive codes are overlapped without affecting their random character to minimize storage requirements.
  • the degree of overlap may be varied without affecting the operation of the circuit.
  • the overall average of the string signals g(1) through g(1063) must be relatively small.
  • the arbitrary codes need not be random numbers and the codes need not be arranged in overlapped fashion. Thus, arbitrary sequences of +1, -1 that define a set of unique codes may be used.
  • LPC coefficient signals a(k), pitch predictive coefficient signals β(1), β(2), β(3), and m, and the stochastic code index and scale factor signals K* and γ* are separated in demultiplexer 1201.
  • the pitch predictive parameter signals β(k) and m are applied to pitch predictive filter 1220, and the LPC coefficient signals are supplied to LPC predictive filter 1225.
  • Filters 1220 and 1225 operate as is well known in the art and as described in the aforementioned U.S. Pat. No. 4,133,976 to modify the excitation signal from scaler 1215 in accordance with vocal apparatus features.
  • Index signal K* is applied to selector 1205 which addresses stochastic string register 1210.
  • the stochastic code best representative of the speech time frame excitation is applied to scaler 1215.
  • the stochastic codes correspond to time frame speech patterns without regard to the intensity of the actual speech.
  • the scaler modifies the stochastic code in accordance with the intensity of the excitation of the speech frame. The formation of the excitation signal in this manner minimizes the excitation bit rate required for transmission, and the overlapped code storage operates to reduce the circuit requirements of the decoder and permits a wide selection of encryption techniques.
  • the stochastic code excitation signal from scaler 1215 is modified in predictive filters 1220 and 1225, and the resulting digitally coded speech is applied to digital-to-analog converter 1230 wherein successive analog samples are formed. These samples are filtered in low pass filter 1235 to produce a replica of the time frame speech signal s(n) applied to the encoder of the circuit of FIGS. 2 and 3 or FIG. 4.
  • the invention may be utilized in speech synthesis wherein speech patterns are encoded using stochastic coding as shown in the circuits of FIGS. 2 and 3 or FIG. 4.
  • the speech synthesizer comprises the circuit of FIG. 12 in which index signals K* are successively applied from well known data processing apparatus together with predictive parameter signals in accordance with the speech pattern to be produced.
  • the overlapping code arrangement minimizes the storage requirements so a wide variety of speech sounds may be produced and the stochastic codes are accessed with index signals in a highly efficient manner.
  • storage of speech messages according to the invention for later reproduction only requires the storage of the prediction parameters and the excitation index signals of the successive frames so that speech compression is enhanced without reducing the intelligibility of the reproduced message.
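The overlapped codebook arrangement described above (1024 codes of 40 samples drawn from a single 1063-number string, cf. Table 1) can be sketched as follows. This is an illustrative sketch only: the Gaussian source, seed, and all names are assumptions, not the patented circuit of FIG. 12.

```python
import random

# Sketch of the overlapped stochastic codebook: 1024 codes of 40 random
# numbers each, drawn from one string g(1)...g(1063) so that successive
# codes overlap in 39 positions.
CODE_LENGTH = 40           # samples per 5 msec excitation frame at 8 kHz
NUM_CODES = 1024           # stochastic codebook size

rng = random.Random(0)
# 1024 starting positions + 39 samples of overlap = 1063 stored numbers.
g = [rng.gauss(0.0, 1.0) for _ in range(NUM_CODES + CODE_LENGTH - 1)]

def code(k):
    """Return stochastic code k (0-based): code k+1 starts one position
    later in the string, so adjacent codes share 39 samples."""
    return g[k:k + CODE_LENGTH]

def excitation(k_star, gamma_star):
    """Decoder-side use: index K* selects a code (cf. selector 1205) and
    scale factor gamma* sets its intensity (cf. scaler 1215)."""
    return [gamma_star * v for v in code(k_star)]
```

Storing the 1063-number string instead of 1024 × 40 = 40,960 separate values is the storage saving the overlapped arrangement provides.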

Abstract

An arrangement for processing a speech message which uses arbitrary value codes to form time frame excitation signals. The arbitrary value codes, e.g., random numbers, are stored together with signals indexing the codes, and transform domain signals corresponding to the arbitrary codes are generated. The speech message is partitioned into time frame interval speech patterns and a first signal representative of the transform domain speech pattern of each successive time frame interval is formed responsive to the partitioned speech message. A plurality of second signals representative of time frame interval patterns corresponding to the transform code signals are generated responsive to said set of transform signals. One of the arbitrary code signals is selected jointly responsive to the first and second signals of each successive time interval to represent the time frame speech signal excitation, and the index signal corresponding to said selected arbitrary code signal is output. A replica of the speech message is formed from the arbitrary codes by concatenating a sequence of said arbitrary codes identified by the output index signals.

Description

BACKGROUND OF THE INVENTION
The Government has rights in this invention pursuant to Contract No. MDA904-84-C-6010 awarded by Maryland Procurement Office.
Our invention relates to speech processing and more particularly to digital speech coding arrangements.
Digital speech communication systems including voice storage and voice response facilities utilize signal compression to reduce the bit rate needed for storage and/or transmission. As is well known in the art, a speech pattern contains redundancies that are not essential to its apparent quality. Removal of redundant components of the speech pattern significantly lowers the number of digital codes required to construct a replica of the speech. The subjective quality of the speech replica, however, is dependent on the compression and coding techniques.
One well known digital speech coding system such as disclosed in U.S. Pat. No. 3,624,302 issued Nov. 30, 1971 includes linear prediction analysis of an input speech signal. The speech signal is partitioned into successive intervals of 5 to 20 milliseconds duration and a set of parameters representative of the interval speech is generated. The parameter set includes linear prediction coefficient signals representative of the spectral envelope of the speech in the interval, and pitch and voicing signals corresponding to the speech excitation. These parameter signals may be encoded at a much lower bit rate than the speech signal waveform itself. A replica of the input speech signal is formed from the parameter signal codes by synthesis. The synthesizer arrangement generally comprises a model of the vocal tract in which the excitation pulses of each successive interval are modified by the interval spectral envelope representative prediction coefficients in an all pole predictive filter.
The foregoing pitch excited linear predictive coding is very efficient and reduces the coded bit rate, e.g., from 64 kb/s to 2.4 kb/s. The produced speech replica, however, exhibits a synthetic quality that makes speech difficult to understand. In general, the low speech quality results from the lack of correspondence between the speech pattern and the linear prediction model used. Errors in the pitch code or errors in determining whether a speech interval is voiced or unvoiced cause the speech replica to sound disturbed or unnatural. Similar problems are also evident in formant coding of speech. Alternative coding arrangements in which the speech excitation is obtained from the residual after prediction, e.g., APC, provide a marked improvement because the excitation is not dependent upon an inexact model. The excitation bit rate of these systems, however, is at least an order of magnitude higher than the linear predictive model. Attempts to lower the excitation bit rate in the residual type systems have generally resulted in a substantial loss in quality.
The article "Stochastic Coding of Speech Signals at Very Low Bit Rates" by Bishnu S. Atal and Manfred Schroeder appearing in the Proceedings of the International Conference on Communications-ICC'84, May 1984, pp. 1610-1613, discloses a stochastic model for generating speech excitation signals in which a speech waveform is represented as a zero mean Gaussian stochastic process with slowly-varying power spectrum. The optimum Gaussian innovation sequence is obtained by comparing a speech waveform segment, typically 5 ms in duration, to synthetic speech waveforms derived from a plurality of random Gaussian innovation sequences. The innovation sequence that minimizes a perceptual error criterion is selected to represent the segment speech waveform. While the stochastic model described in this article results in low bit rate coding of the speech waveform excitation signal, a large number of innovation sequences are needed to provide an adequate selection. The signal processing required to select the best innovation sequence involves exhaustive search procedures to encode the innovation signals, but such search arrangements for code bit rates corresponding to 4.8 kbit/sec code generation are very time consuming even when processed on large, high speed scientific computers. It is an object of the invention to provide improved speech coding and synthesis of high quality at lower bit rates utilizing arbitrary codes.
SUMMARY OF THE INVENTION
The foregoing object is realized by replacing the exhaustive search of innovation sequence stochastic or other arbitrary codes of a speech analyzer with an arrangement that converts the stochastic codes into transform domain code signals and generates a set of transform domain patterns from the transform codes for each time frame interval. The transform domain code patterns are compared to the transform of the time interval speech pattern obtained from the input speech to select the best matching stochastic code, and an index signal corresponding to the best matching stochastic code is output to represent the time frame interval speech. Transform domain processing reduces the complexity and the time required for code selection.
The index signal is applied to a decoder in which it is used to select a stochastic code stored therein. In a predictive speech synthesizer, the stochastic codes may represent the time frame speech pattern excitation signal whereby the code bit rate is reduced to that required for the index signals and the prediction parameters of the time frame. The stochastic codes may be predetermined overlapping segments of a string of stochastic numbers to reduce storage requirements.
The invention is directed to an arrangement for processing a speech message in which a set of arbitrary value code signals such as random numbers together with index signals identifying the arbitrary value code signals and signals representative of transforms of the arbitrary valued codes are formed. The speech message is partitioned into time frame interval speech patterns and a first signal representative of the speech pattern of each successive time frame interval is formed responsive to the partitioned speech. A plurality of second signals representative of time frame interval patterns formed from the transform domain code signals are generated. One of said arbitrary code signals is selected for each time frame interval jointly responsive to the first signal and the second signals of the time frame interval and the index signal corresponding to said selected transform signal is output.
According to one aspect of the invention, forming of the first signal includes generating a third signal that is a transform domain signal corresponding to the current time frame interval speech pattern and the generation of each second signal includes producing a fourth signal that is a transform domain signal corresponding to a time frame interval pattern responsive to said transform domain code signals. Arbitrary code selection comprises generating a signal representative of the similarities between said third and fourth signals and determining the index signal corresponding to the fourth signal having the maximum similarities signal.
According to another aspect of the invention, the transform domain code signals are frequency domain transform codes derived from the arbitrary codes.
According to yet another aspect of the invention, the transform domain code signals are Fourier transforms of the arbitrary codes.
According to yet another aspect of the invention, a speech message is formed from the arbitrary codes by receiving a sequence of said outputted index signals, each identifying a predetermined arbitrary code. Each index signal corresponds to a time frame interval speech pattern. The arbitrary codes are concatenated responsive to the sequence of said received index signals and the speech message is formed responsive to the concatenated codes.
According to yet another aspect of the invention, a speech message is formed using a string of arbitrary value coded signals having predetermined segments thereof identified by index signals. A sequence of signals identifying predetermined segments of said string are received. Each of said signals of the sequence corresponds to speech patterns of successive time frame intervals. The predetermined segments of said arbitrary valued code string are selected responsive to the sequence of received identifying signals and the selected arbitrary codes are concatenated to generate a replica of the speech message.
According to yet another aspect of the invention, the arbitrary value signal sequences of the string are overlapping sequences.
BRIEF DESCRIPTION OF THE DRAWING
FIG. 1 depicts a speech encoder utilizing a prior art stochastic coding arrangement;
FIGS. 2 and 3 depict a general block diagram of a digital speech encoder using arbitrary codes and transform domain processing that is illustrative of the invention;
FIG. 4 depicts a detailed block diagram of a digital speech encoding signal processing arrangement that performs the functions of the circuit shown in FIGS. 2 and 3;
FIG. 5 shows a block diagram of an error and scale factor generating circuit useful in the arrangement of FIG. 3;
FIGS. 6-11 show flow chart diagrams that illustrate the operation of the circuit of FIG. 4; and
FIG. 12 shows a block diagram of a speech decoder circuit illustrative of the invention in which a string of random number codes form an overlapping sequence of stochastic codes.
GENERAL DESCRIPTION
FIG. 1 shows a prior art digital speech coder arranged to use stochastic codes for excitation signals. Referring to FIG. 1, a speech pattern applied to microphone 101 is converted therein to a speech signal which is band pass filtered and sampled in filter and sampler 105 as is well known in the art. The resulting samples are converted into digital codes by analog-to-digital converter 110 to produce digitally coded speech signal s(n). Signal s(n) is processed in LPC and pitch predictive analyzer 115. The processing includes dividing the coded samples into successive speech frame intervals and producing a set of parameter signals corresponding to the signal s(n) in each successive frame. Parameter signals a(1), a(2), . . . , a(p) represent the short delay correlation or spectral related features of the interval speech pattern, and parameter signals β(1), β(2), β(3), and m represent long delay correlation or pitch related features of the speech pattern. In this type of coder, the speech signal is partitioned in frames or blocks, e.g., 5 msec or 40 samples in duration. For such blocks, stochastic code store 120 may contain 1024 random white Gaussian codeword sequences, each sequence comprising a series of 40 random numbers. Each codeword is scaled in scaler 125, prior to filtering, by a factor γ that is constant for the 5 msec block. The speech adaptation is done in recursive filters 135 and 145.
Filter 135 uses a predictor with large memory (2 to 15 msec) to introduce voice periodicity and filter 145 uses a predictor with short memory (less than 2 msec) to introduce the spectral envelope in the synthetic speech signal. Such filters are described in the article "Predictive coding of speech at low bit rates" by B. S. Atal appearing in the IEEE Transactions on Communications, Vol. COM-30, pp. 600-614, April 1982. The error representing the difference between the original speech signal s(n) applied to differencer 150 and the synthetic speech signal ŝ(n) applied from filter 145 is further processed by linear filter 155 to attenuate those frequency components where the error is perceptually less important and amplify those frequency components where the error is perceptually more important. The stochastic code sequence from store 120 which produces the minimum mean-squared subjective error signal E(k) and the corresponding optimum scale factor γ are selected by peak picker 170 only after processing of all 1024 code word sequences in store 120.
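The two cascaded recursive predictors can be sketched as follows. This is an illustrative sketch under assumed toy coefficients, not the filter realizations of FIG. 1: filter 135 corresponds to the one-tap long-delay (pitch) loop and filter 145 to the all-pole short-delay loop.

```python
import numpy as np

# Sketch of the two recursive synthesis filters of FIG. 1: a long-delay
# predictor adds voice periodicity, a short-delay predictor adds the
# spectral envelope.  Coefficient values are assumptions.
def long_delay_filter(x, beta, m):
    """y(n) = x(n) + beta * y(n - m): one-tap pitch predictor loop."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n] + (beta * y[n - m] if n >= m else 0.0)
    return y

def short_delay_filter(x, a):
    """y(n) = x(n) + sum_k a(k) * y(n - k): all-pole LPC synthesis."""
    p, y = len(a), np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n] + sum(a[k] * y[n - 1 - k] for k in range(min(p, n)))
    return y

excitation = np.zeros(40)
excitation[0] = 1.0   # a single scaled codeword sample as a toy input
synthetic = short_delay_filter(long_delay_filter(excitation, 0.5, 20), [0.9])
```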
For purposes of analyzing the codeword processing of the circuit of FIG. 1, synthesis filters 135 and 145 and perceptual weighting filter 155 can be combined into one linear filter. The impulse response of this equivalent filter may be represented by the sequence f(n). Only a part of the equivalent filter output is determined by its input in the current 5 msec frame since, as is well known in the art, a portion of the filter output corresponds to signals carried over from preceding frames. The filter memory from the previous frames plays no role in the search for the optimum innovation sequence in the present frame. The contributions of the previous memory to the filter output in the present frame can thus be subtracted from the speech signal in determining the optimum code word from stochastic code store 120. The residual after subtracting the contributions of the filter memory carried over from the previous frames may be represented by the signal x(n). The filter output contributed by the kth codeword from store 120 in the present frame is

x^(k)(n)=γ(k)Σ_{i=1}^{N} f(n-i)c^(k)(i),                   (1)

where c^(k)(i) is the ith sample of the kth codeword. One can rewrite equation 1 in matrix notation as
x^(k)=γ(k)Fc^(k),                                          (2)
where F is an N×N matrix with the term in the nth row and the ith column given by f(n-i). The total squared error E(k), representing the difference between x(n) and x^(k)(n), is given by
E(k)=||x-γ(k)Fc^(k)||^2,                                   (3)
where the vector x represents the signal x(n) in vector notation, and || ||^2 indicates the sum of the squares of the vector components. The optimum scale factor γ(k) that minimizes the error E(k) can easily be determined by setting ∂E(k)/∂γ(k)=0 and this leads to

γ(k)=x^t Fc^(k)/||Fc^(k)||^2,                              (4)

so that

E(k)=||x||^2 -(x^t Fc^(k))^2 /||Fc^(k)||^2.                (5)

The optimum codeword is obtained by finding the minimum of E(k) or the maximum of the second term on the right side in equation 5.
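The exhaustive time-domain search of equations 3 through 5 can be sketched as follows. Sizes, signals, and names are toy assumptions for illustration, not the 1024-codeword system described above.

```python
import numpy as np

# Sketch of the exhaustive time-domain codeword search of FIG. 1.
rng = np.random.default_rng(0)
N, K = 40, 16                        # frame length, toy codebook size

f = rng.standard_normal(N)           # equivalent-filter impulse response f(n)
F = np.array([[f[n - i] if n >= i else 0.0 for i in range(N)]
              for n in range(N)])    # F[n, i] = f(n - i), lower triangular
codes = rng.standard_normal((K, N))  # codewords c^(k)
x = rng.standard_normal(N)           # target after filter-memory removal

best_k, best_err, best_gamma = -1, np.inf, 0.0
for k in range(K):
    y = F @ codes[k]                          # F c^(k)
    gamma = (x @ y) / (y @ y)                 # eq. (4): optimal scale factor
    err = x @ x - (x @ y) ** 2 / (y @ y)      # eq. (5): E(k) at optimal gamma
    if err < best_err:
        best_k, best_err, best_gamma = k, err, gamma
# Minimizing E(k) equals maximizing (x' F c)^2 / ||F c||^2.
```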
While the signal processing described with respect to FIG. 1 is relatively straightforward, the generation of the 1024 error signals E(k) of equation 5 is a time consuming operation that cannot be accomplished in real time even on currently known high speed, large scale computers. The complexity of the search processing in FIG. 1 is due to the presence of the convolution operation represented by the matrix F in the error. The complexity is substantially reduced if the matrix F is replaced by a diagonal matrix. This is accomplished by representing the matrix F in the orthogonal form using singular-value decomposition as described in "Introduction to Matrix Computations" by G. W. Stewart, Academic Press, pp. 317-320, 1973. Assume that
F=UDV.sup.t,                                               (6)
where U and V are orthogonal matrices, D is a diagonal matrix with positive elements and V^t indicates the transpose of V. Because of the orthogonality of U, equation 3 can be written as
E(k)=||U^t (x-γ(k)Fc^(k))||^2.                             (7)
If we now replace F by its orthogonal form as expressed in equation 6, we obtain
E(k)=||U^t x-γ(k)DV^t c^(k)||^2.                           (8)
On substituting
z=U^t x
and
b^(k)=V^t c^(k),                                           (9)
in equation 8, we obtain

E(k)=||z-γ(k)Db^(k)||^2.                                   (10)

As before, the optimum γ(k) that minimizes E(k) can be determined by setting ∂E(k)/∂γ(k)=0 and equation 10 simplifies to

E(k)=||z||^2 -(z^t Db^(k))^2 /||Db^(k)||^2.                (11)

The error signal expressed in equation 11 can be processed much faster than the expression in equation 5. If Fc^(k) is processed in a recursive filter of order p (typically 20), processing according to equation 11 can substantially reduce the processing time requirements for stochastic coding.
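The equivalence of the diagonalized error of equation 11 to the direct error of equation 5 can be checked numerically. This sketch uses toy sizes and illustrative names, with the singular-value decomposition of equations 6 and 9.

```python
import numpy as np

# Check that the diagonalized error (eq. 11) equals the direct error (eq. 5).
rng = np.random.default_rng(1)
N, K = 8, 4                          # toy sizes for illustration
F = rng.standard_normal((N, N))
codes = rng.standard_normal((K, N))
x = rng.standard_normal(N)

U, d, Vt = np.linalg.svd(F)          # eq. (6): F = U D V^t, D diagonal
z = U.T @ x                          # eq. (9): z = U^t x

for k in range(K):
    b = Vt @ codes[k]                # eq. (9): b^(k) = V^t c^(k)
    w = d * b                        # D b^(k): diagonal matrix as a product
    err_diag = z @ z - (z @ w) ** 2 / (w @ w)        # eq. (11)
    y = F @ codes[k]
    err_direct = x @ x - (x @ y) ** 2 / (y @ y)      # eq. (5)
    assert np.isclose(err_diag, err_direct)
```

Since D is diagonal, each codeword costs an N-term weighted inner product instead of a full matrix-vector convolution.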
Alternatively, the reduced processing time may also be obtained by extending the operations of equation 5 from the time domain to a transform domain such as the frequency domain. If the combined impulse response of the synthesis filter with the long-delay prediction excluded and the perceptual weighting filter is represented by the sequence h(n), the filter output contributed by the kth codeword in the present frame can be expressed as a convolution between its input γ(k)c^(k)(n) and the impulse response h(n). The filter output is given by
x^(k)(n)=γ(k)h(n)*c^(k)(n)                                 (12)
The filter output can be expressed in the frequency domain as
X^(k)(i)=γ(k)H(i)C^(k)(i),                                 (13)
where X^(k)(i), H(i) and C^(k)(i) are discrete Fourier transforms (DFTs) of x^(k)(n), h(n) and c^(k)(n), respectively. In practice, the duration of the filter output can be considered to be limited to a 10 msec time interval and zero outside. Thus a DFT with 80 points is sufficiently accurate for expressing equation 13. The total squared error E(k) is expressed in frequency-domain notation as

E(k)=Σ_i |X(i)-γ(k)H(i)C^(k)(i)|^2,                        (14)

where X(i) is the DFT of x(n). If we express now
H(i)=d(i)e^(jφ_i),                                         (15)
and
ξ(i)=X(i)e^(-jφ_i),                                        (16)
equation 14 is then transformed to

E(k)=Σ_i |ξ(i)-γ(k)d(i)C^(k)(i)|^2.                        (17)

Again, the scale factor γ(k) can be eliminated from equation 17 and the total error can be expressed as

E(k)=Σ_i |ξ(i)|^2 -[Re Σ_i ξ(i)*d(i)C^(k)(i)]^2 /Σ_i d(i)^2 |C^(k)(i)|^2,  (18)

where ξ(i)* is the complex conjugate of ξ(i). The frequency-domain search has the advantage that the singular-value decomposition of the matrix F is replaced by discrete fast Fourier transforms whereby the overall processing complexity is significantly reduced. In the transform domain, using either the singular value decomposition or the discrete Fourier transform processing, further savings in the computational load can be achieved by restricting the search to a subset of frequencies (or eigenvectors) corresponding to large values of d(i) (or b(i)). According to the invention, the processing is substantially reduced whereby real time operation with microprocessor integrated circuits is realizable. This is accomplished by replacing the time domain processing involved in the generation of the error between the synthetic speech signal formed responsive to the innovation code and the input speech signal of FIG. 1 with transform domain processing as described hereinbefore.
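The frequency-domain search of equations 13 through 18 can be sketched as follows. Sizes and names are toy assumptions: the 40-sample frame and impulse response are zero-padded to an 80-point DFT as suggested above, and the codeword minimizing E(k) of equation 18 is selected.

```python
import numpy as np

# Sketch of the frequency-domain codeword search (eqs. 13-18).
rng = np.random.default_rng(2)
N, M, K = 40, 80, 8                  # frame length, DFT size, toy codebook

h = rng.standard_normal(N)           # combined impulse response h(n)
x = rng.standard_normal(N)           # target signal x(n)
codes = rng.standard_normal((K, N))  # codewords c^(k)(n)

H = np.fft.fft(h, M)                 # H(i), zero-padded to 80 points
X = np.fft.fft(x, M)                 # X(i)
d = np.abs(H)                        # eq. (15): H(i) = d(i) e^(j phi_i)
phi = np.angle(H)
xi = X * np.exp(-1j * phi)           # eq. (16): xi(i) = X(i) e^(-j phi_i)

best_k, best_err = -1, np.inf
for k in range(K):
    C = np.fft.fft(codes[k], M)                  # C^(k)(i), eq. (13)
    num = np.real(np.sum(np.conj(xi) * d * C))   # cross term of eq. (18)
    den = np.sum(d ** 2 * np.abs(C) ** 2)
    err = np.sum(np.abs(xi) ** 2) - num ** 2 / den   # eq. (18)
    if err < best_err:
        best_k, best_err = k, err
```

By Parseval's relation this frequency-domain error is M times the time-domain error of equation 5 over the padded frame, so the same codeword is selected.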
DETAILED DESCRIPTION
A transform domain digital speech encoder using arbitrary codes for excitation signals illustrative of the invention is shown in FIGS. 2 and 3. The arbitrary codes may take the form of random number sequences or may, for example, be varied sequences of +1 and -1 in any order. Any arrangement of varied sequences may be used with the broad restriction that the overall average of the sequences is small. Referring to FIG. 2, a speech pattern such as a spoken message received by microphone transducer 201 is bandlimited and converted into a sequence of pulse samples in filter and sampler circuit 203 and supplied to linear prediction coefficient (LPC) analyzer 209 via analog-to-digital converter 205. The filtering may be arranged to remove frequency components of the speech signal above 4.0 kHz, and the sampling may be at an 8.0 kHz rate as is well known in the art. Each sample from circuit 203 is transformed into an amplitude representative digital code in the analog-to-digital converter. The sequence of digitally coded speech samples is supplied to LPC analyzer 209 which is operative, as is well known in the art, to partition the speech signals into 5 to 20 ms time frame intervals and to generate a set of linear prediction coefficient signals a(k), k=1, 2, . . . , p representative of the predicted short time spectrum of the speech samples of each frame. The analyzer also forms a set of perceptually weighted linear predictive coefficient signals
b(k)=α^k a(k), k=1, 2, . . . , p,                          (19)

where p is the number of the prediction coefficients and α (0&lt;α&lt;1) is a perceptual weighting factor.
The speech samples from A/D converter 205 are delayed in delay 207 to allow time for the formation of speech parameter signals a(k) and the delayed samples are supplied to the input of prediction residual generator 211. The prediction residual generator, as is well known in the art, is responsive to the delayed speech samples s(n) and the prediction parameters a(k) to form a signal ∂(n) corresponding to the differences between speech samples and their predicted values. The formation of the predictive parameters and the prediction residual signal for each frame in predictive analyzer 209 may be performed according to the arrangement disclosed in U.S. Pat. No. 3,740,476 issued to B. S. Atal, June 19, 1973, and assigned to the same assignee, or in other arrangements well known in the art.
Prediction residual signal generator 211 is operative to subtract the predictable portion of the frame signal from the sample signals s(n) to form signal ∂(n) in accordance with
∂(n)=s(n)-Σ(k=1 to p) a(k)s(n-k), n=1, 2, . . . , N,       (20)
where p, the number of the predictive coefficients, may be 12, N, the number of samples in a speech frame, may be 40, and a(k) are the predictive coefficients of the frame. Predictive residual signal ∂(n) corresponds to the speech signal of the frame with the short term redundancies removed. Longer term redundancy of the order of several speech frames in the predictive residual signal remains, and predictive parameters β(1), β(2), β(3) and m corresponding to such longer term redundancy are generated in predictive pitch analyzer 220 such that m is an integer that maximizes ##EQU9## as described in U.S. Pat. No. 4,133,976 issued to B. S. Atal et al on Jan. 9, 1979. As is well known, digital speech encoders may be formed by encoding the predictive parameters of each successive frame, and the frame predictive residual, for transmission to decoder apparatus or for storage for later retrieval. While the bit rate for encoding the predictive parameters is relatively low, the non-redundant nature of the residual requires a very high bit rate. According to the invention, an optimum arbitrary code ##EQU10## is selected to represent the frame excitation, and a signal K* that indexes the selected arbitrary excitation code is transmitted. In this way, the speech code bit rate is minimized without adversely affecting intelligibility. The arbitrary code is selected in the transform domain to reduce the selection processing so that it can be performed in real time with microprocessor components.
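The short delay residual computation of generator 211 may be sketched in Python as follows. This fragment is illustrative only and is not part of the patent; it assumes zero sample history before the first sample of the frame, whereas an actual coder would carry samples across frame boundaries.

```python
def prediction_residual(s, a):
    """Subtract the short-delay predictable portion from the frame samples
    s(n), leaving the prediction residual (equation 20 form)."""
    p = len(a)
    residual = []
    for n in range(len(s)):
        # predicted value: sum over a(k) * s(n - k), k = 1 .. p
        predicted = sum(a[k] * s[n - 1 - k] for k in range(p) if n - 1 - k >= 0)
        residual.append(s[n] - predicted)
    return residual
```

For example, with the single coefficient a(1)=0.5, the residual of the samples [1, 2, 3] is [1, 1.5, 2].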
Selection of the arbitrary code for excitation includes combining the predictive residual with the perceptually weighted linear predictive parameters of the frame to generate a signal y(n). Speech pattern signal y(n) corresponding to the perceptually weighted speech signal contains a component ŷ(n) due to the preceding frames. This preceding frame component ŷ(n) is removed prior to the selection processing so that the stored arbitrary codes are in effect compared to only the current frame excitation. Signal y(n) is formed in predictive filter 217 responsive to the perceptually weighted predictive parameter and the predictive residual signals of the frame as per the relation ##EQU11## and is stored in y(n) store 227.
The preceding frame speech contribution signal ŷ(n) is generated in preceding frame contribution signal generator 222 from the perceptually weighted predictive parameter signal b(k) of the current frame, the pitch predictive parameters β(1), β(2), β(3) and m obtained from store 230, and the selected excitation code of the preceding frame, in accordance with
d(n)=β(1)d(n-m-1)+β(2)d(n-m)+β(3)d(n-m+1)   (24a)
and ##EQU12## where d(l), l≦0 and ŷ(l), l≦0 represent the past frame components. Generator 222 may comprise well known processor arrangements adapted to form the signals of equations 24. The past frame speech contribution signal ŷ(n) of store 240 is subtracted from the perceptually weighted signal of store 227 in subtractor circuit 247 to form the current frame speech pattern signal with past frame components removed.
x(n)=y(n)-ŷ(n),
n=1, 2, . . . , N                                          (25)
The difference signal x(n) from subtractor 247 is then transformed into a frequency domain signal set by discrete Fourier transform (DFT) generator 250 in accordance with
X(i)=Σ(n=1 to N) x(n)e^(-j2πin/Nf), i=1, 2, . . . , Nf,    (26)
where Nf is the number of DFT points, e.g., 80. The DFT transformation generator may operate as described in U.S. Pat. No. 3,588,460 issued to Richard A. Smith, June 28, 1971, and assigned to the same assignee, or may comprise any of the well known discrete Fourier transform circuits.
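The operation of DFT generator 250 may be sketched as follows. This Python fragment, which is illustrative only and not part of the patent, computes a direct (non-fast) transform with the N-sample difference signal zero-padded to Nf points; a practical implementation would use an FFT.

```python
import cmath

def dft(x, nf):
    """Nf-point DFT of the difference signal x(n); x is zero-padded
    from its N samples (e.g., 40) to nf points (e.g., 80)."""
    xp = list(x) + [0.0] * (nf - len(x))
    return [sum(xp[n] * cmath.exp(-2j * cmath.pi * i * n / nf)
                for n in range(nf))
            for i in range(nf)]
```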
In order to select one of a plurality of arbitrary excitation codes for the current speech frame, it is necessary to take into account the effects of a perceptually weighted LPC filter on the excitation codes. This is done by forming a signal in accordance with ##EQU14## that represents the impulse response of the filter and converting the impulse response to a frequency domain signal by a discrete Fourier transformation as per ##EQU15## The perceptually weighted impulse response signal h(n) is formed in impulse response generator 225, and the transformation into the frequency domain signal H(i) is performed in DFT generator 245.
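The impulse response h(n) of impulse response generator 225 may be computed by exciting the filter recursion with a unit impulse. The following Python fragment is a sketch, not the patent's implementation, and assumes the all-pole perceptually weighted filter form 1/(1-Σb(k)z^-k):

```python
def weighted_impulse_response(b, length):
    """Impulse response of the all-pole filter 1/(1 - sum_k b(k) z^-k),
    the role performed by impulse response generator 225."""
    h = [0.0] * length
    for n in range(length):
        acc = 1.0 if n == 0 else 0.0  # unit impulse input
        acc += sum(b[k] * h[n - 1 - k] for k in range(len(b)) if n - 1 - k >= 0)
        h[n] = acc
    return h
```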
The frequency domain impulse response signal H(i) and the frequency domain perceptually weighted speech signal with preceding frame contributions removed X(i) are applied to transform parameter signal converter 301 in FIG. 3 wherein the signals d(i) and ξ(i) are formed according to
d(i)=|H(i)| ##EQU16## The arbitrary codes, to which the current speech frame excitation signals represented by d(i) and ξ(i) are compared, are stored in stochastic code store 330. Each code comprises a sequence of N, e.g., 40, digital coded signals c.sup.(k) (1), c.sup.(k) (2), . . . , c.sup.(k) (40). These signals may be a set of arbitrarily selected numbers within the broad restriction that the grand average is relatively small, or may be randomly selected digitally coded signals but may also be in the form of other codes well known in the art consistent with this restriction. The set of signals ##EQU17## may comprise individual codes that are overlapped to minimize storage requirements without affecting the encoding arrangements of FIGS. 2 and 3. Transform domain code store 305 contains the Fourier transformed frequency domain versions of the codes in store 330 obtained by the relation ##EQU18## While the transform code signals are stored, it is to be understood that other arrangements well known in the art which generate the transform signals from stored arbitrary codes may be used. Since the frequency domain codes have real and imaginary component signals, there are twice as many elements in the frequency domain code C.sup.(k) (i) as there are in the corresponding time domain code ##EQU19##
Each code output C.sup.(k) (i) of transform domain code store 305 is applied to one of the K error and scale factor generators 315-1 through 315-K wherein the transformed arbitrary code is compared to the time frame speech signal represented by signals d(i) and ξ(i) for the time frame obtained from parameter signal converter 301. FIG. 5 shows a block diagram arrangement that may be used to produce the error and scale factor signals for error and scale factor generator 315-K. Referring to FIG. 5, arbitrary code sequence C.sup.(k) (1), C.sup.(k) (2), . . . , C.sup.(k) (i), . . . , C.sup.(k) (N) is applied to speech pattern cross correlator 501 and speech pattern energy coefficient generator 505 which serves as a normalizer. Signal d(i) from transform parameter signal converter 301 is supplied to cross correlator 501 and normalizer 505, while ξ(i) from converter 301 is supplied to cross correlator 501. Cross correlator 501 is operative to generate the signal ##EQU20## which represents the correlation of the speech frame signal with past frame components removed, ξ(i), and the frame speech signal derived from the transformed arbitrary code, d(i)C.sup.(k) (i), while squarer circuit 510 produces the signal ##EQU21## The error using code sequence ##EQU22## is formed in divider circuit 515 responsive to the outputs of squarer circuit 510 and normalizer 505 over the current speech time frame according to ##EQU23## and the scale factor is produced in divider 520 responsive to the outputs of cross correlator 501 and normalizer 505 as per ##EQU24## The cross correlator, normalizer and divider circuits of FIG. 5 may comprise well known logic circuit components or may be combined into a digital signal processor as described hereinafter. The arbitrary code that best matches the characteristics of the current frame speech pattern is selected in code selector 320 of FIG. 
3, and the index of the selected code K* as well as the scale factor for the code γ(K*) are supplied to multiplexer 325. The multiplexer is adapted to combine the excitation code signals K* and γ(K*) with the current speech time frame LPC parameter signals a(k) and pitch parameter signals β(1), β(2), β(3) and m into a form suitable for transmission or storage. Index signal K* is also applied to selector 326 so that the time domain code for the index is selected from store 330. The selected time domain code ##EQU25## is fed to preceding frame contribution generator 222 in FIG. 2 where it is used in the formation of the signal ŷ(n) for the next speech time frame processing. ##EQU26##
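The roles of cross correlator 501, normalizer 505, squarer 510 and dividers 515 and 520 may be sketched in Python as follows. Because equations 30 through 33 are not reproduced in this text, the expressions below are assumptions consistent with the circuit functions described above, not the patent's exact formulas:

```python
def error_and_scale(xi, d, c):
    """Cross-correlation P, energy coefficient Q, matching error E and
    scale factor gamma for one transform domain code c = C(k)(i)."""
    p = sum(x * dd * cc for x, dd, cc in zip(xi, d, c))  # correlator 501
    q = sum((dd * cc) ** 2 for dd, cc in zip(d, c))      # normalizer 505
    error = -p * p / q  # minimizing E is equivalent to maximizing P**2 / Q
    gamma = p / q       # optimum gain for this code
    return error, gamma
```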
FIG. 4 depicts a speech encoding arrangement according to the invention wherein the operations described with respect to FIGS. 2 and 3 are performed in a series of digital signal processors 405, 410, 415, and 420-1 through 420-R under control of control processor 435. Processor 405 is adapted to perform the predictive coefficient signal processing associated with LPC analyzer 209, LPC and weighted LPC signal stores 213 and 215, prediction residual signal generator 211, and pitch predictive analyzer 220 of FIG. 2. Predictive residual signal processor 410 performs the functions described with respect to predictive filter 217, preceding frame contribution signal generator 222, subtractor 247 and impulse response generator 225. Transform signal processor 415 carries out the operations of DFT generators 245 and 250 of FIG. 2 and transform parameter signal converter 301 of FIG. 3. Processors 420-1 through 420-R produce the error and scale factor signals as would be obtained from error and scale factor generators 315-1 through 315-K of FIG. 3.
Each of the digital signal processors may be the WE® DSP32 Digital Signal Processor described in the article "A 32 Bit VLSI Digital Signal Processor", by P. Hays et al, appearing in the IEEE Journal of Solid State Circuits, Vol. SC20, No. 5, pp. 998, October 1985, and the control processor may be the Motorola type 68000 microprocessor and associated circuits described in the publication "MC68000 16 Bit Microprocessor User's Manual", Second Edition, Motorola Inc., 1980. Each of the digital signal processors has associated therewith a memory for storing data for its operation, e.g., data memory 408 connected to prediction coefficient signal processor 405. Common data memory 450 stores signals from one digital signal processor that are needed for the operation of another signal processor. Common program store 430 has therein a sequence of permanently stored instruction signals used by control processor 435 and the digital signal processors to time and carry out the encoding functions of FIG. 4. Stochastic code store 440 is a read only memory that includes random codes Ck (n) as described with respect to FIG. 3 and transform code signal store 445 is another read only memory that holds the Fourier transformed frequency domain code signals corresponding to the codes in store 440.
The encoder of FIG. 4 may form a part of a communication system in which speech applied to microphone 401 is encoded to a low bit rate digital signal, e.g., 4.8 kb/s, and transmitted via a communication link to a receiver adapted to decode the arbitrary code indices and frame parameter signals. Alternatively, the output of the encoder of FIG. 4 may be stored for later decoding in a store and forward system or stored in read only memory for use in speech synthesizers of the type that will be described. As shown in the flow chart of FIG. 6, control processor 435 is conditioned by a manual signal ST from a switch or other device (not shown) to enable the operation of the encoder. All of the operations of the digital signal processors of FIG. 4 to generate the predictive parameter signals and the excitation code signals K* and γ* for a time frame interval occur within the time frame interval. When the on switch has been set (step 601), signal ST is produced to enable predictive coefficients processor 405 and the instructions in common program store 430 are accessed to control the operation of processor 405. Speech applied to microphone 401 is filtered and sampled in filter and sampler 403 and converted to a sequence of digital signals in A/D converter 404. Processor 405 receives the digitally coded sample signals from converter 404, partitions the samples into time frame segments as they are received and stores the successive frame samples in data memory 408 as indicated in step 705 of FIG. 7. Short delay coefficient signals a(k) and perceptually weighted short delay signals b(k) are produced in accordance with aforementioned U.S. Pat. No. 4,133,976 and equation 19 for the current time frame as per step 710. The current frame predictive residual signal ∂(n) is generated in accordance with equation 20 from the current frame speech samples s(n) and the LPC coefficient signals a(k) in step 715.
When the operations of step 715 are completed, an end of short delay analysis signal STELPC is sent to control processor 435 (step 720). The STELPC signal is used to start the operations of processor 410 as per step 615 of FIG. 6. Long delay coefficient signals β(1), β(2), β(3) and m are then formed according to equations 21 and 22 as per step 725, and an end of predictive coefficient analysis signal STEPCA is generated (step 730). Processor 405 may be adapted to form the predictive coefficient signals as described in the aforementioned U.S. Pat. No. 4,133,976. The signals a(k), b(k), β(1), β(2), β(3), and m of the current speech frame are transferred to common data memory 450 for use in residual signal processing.
When the current frame LPC coefficient signals have been generated in processor 405, control processor 435 is responsive to the STELPC signal to activate prediction residual signal processor 410 by means of step 801 in FIG. 8. The operations of processor 410 are done under control of common program store 430 as illustrated in the flow chart of FIG. 8. Referring to FIG. 8, the formation and storage of the current frame perceptually weighted signal y(n) is accomplished in step 805 according to equation 23. Long delay predictor contribution signals d(n) are generated as per equation 24a in step 810. Short delay predictor contribution signal ŷ(n) is produced in step 815 as per equation 24b. The current frame speech pattern signal with preceding frame components removed, x(n), is produced by subtracting signal ŷ(n) from signal y(n) in step 820, and impulse response signal h(n) is formed from the LPC coefficient signals a(k) as described in aforementioned U.S. Pat. No. 4,133,976 (step 825). Signals x(n) and h(n) are transferred to and stored in common data memory 450 for use in transform signal processor 415.
Upon completion of the generation of signals x(n) and h(n) for the current time frame, control processor 435 receives signal STEPSP from processor 410. When both signals STEPSP and STEPCA are received by control processor 435 (step 621 of FIG. 6), the operation of transform signal processor 415 is started by transmitting the STEPSP signal to processor 415 as per step 625 in FIG. 6. Processor 415 is operative to generate the frequency domain speech frame representative signals X(i) and H(i) by performing a discrete Fourier transform operation on signals x(n) and h(n). Referring to FIG. 9, upon detecting signal STEPSP (step 901), the x(n) and h(n) signals are read from common data memory 450 (step 905). Signals X(i) are generated from the x(n) signals (step 910) and signals H(i) are generated from the h(n) signals (step 915) by Fourier transform operations well known in the art. The DFT may be implemented in accordance with the principles described in aforementioned U.S. Pat. No. 3,588,460. The conversion of signals X(i) and H(i) into the speech frame representative signals d(i) and ξ(i) implemented in processor 415 is done in step 920 as per equation 29, and signals d(i) and ξ(i) are stored in common data memory 450. At the end of the current frame transform prediction processing, signal STETPS is sent to control processor 435 (step 925). Responsive to signal STETPS in step 630, the control processor enables the error and scale factor signal processors 420-1 through 420-R (step 635).
Once the transform domain time frame speech representative signals for the current frame have been formed in processor 415 and stored in common data memory 450, the search operations for the stochastic code ##EQU27## that best matches the current frame speech pattern are performed in error and scale factor signal processors 420-1 through 420-R. Each processor generates error and scale factor signals corresponding to one or more (e.g., 100) transform domain codes in store 445. The error and scale factor signal formation is illustrated in the flow chart of FIG. 10. In FIG. 10, the presence of control signal STETPS (step 1001) permits the initial setting of parameters k identifying the stochastic code being processed, K(r)* identifying the selected stochastic code for the current frame, P(r)* identifying the cross correlation coefficient signal of the selected code for the current frame, and Q(r)* identifying the energy coefficient signal of the selected code for the current frame.
The currently considered transform domain arbitrary code C.sup.(k) (i) is read from transform code signal store 445 (step 1005) and the current frame transform domain speech pattern signal obtained from the transform domain arbitrary code C.sup.(k) (i) is formed (step 1015) from the d(i) and C.sup.(k) (i) signals. The signal d(i)C.sup.(k) (i) represents the speech pattern of the frame produced by the arbitrary code ##EQU28## In effect, code signal C.sup.(k) (i) corresponds to the frame excitation and signal d(i) corresponds to the predictive filter representative of the human vocal apparatus. Signal ξ(i) stored in common data store 450 is representative of the current frame speech pattern obtained from microphone 401.
The two transform domain speech pattern representative signals, d(i)C.sup.(k) (i) and ξ(i), are cross correlated to form signal P(k) in step 1020 and an energy coefficient signal Q(k) is formed in step 1022 for normalization purposes. The current deviation of the stochastic code frame speech pattern from the actual speech pattern of the frame is evaluated in step 1025. If the error between the code pattern and the actual pattern is less than the best obtained for preceding codes in the evaluation, index signal K(r)*, cross correlation signal P(r)* and energy coefficient signal Q(r)* are set to k, P(k), and Q(k) in step 1030; otherwise, signals K(r)*, P(r)*, and Q(r)* remain unaltered and step 1035 is entered directly from step 1025. Step 1035 determines whether all codes have been evaluated. Until k>Kmax in step 1035, code index signal k is incremented (step 1040) and step 1010 is reentered. When k>Kmax, signal K(r)* is stored and scale factor signal γ(r)* is generated in step 1045. The index signal K(r)* and scale factor signal γ(r)* for the codes processed in the error and scale factor signal processor are stored in common data store 450. Step 1050 is then entered and the STEER control signal is sent to control processor 435 to signal the completion of the transform code selection in the error and scale factor signal processor (step 640 in FIG. 6). The control processor is then operative to enable the minimum error and multiplex processor 455 as per step 645.
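The FIG. 10 search over the transform domain codes may be sketched as follows. This Python loop is illustrative only (the function and variable names are not the patent's); it keeps the code whose squared cross-correlation P² relative to its energy coefficient Q is largest, which minimizes the matching error:

```python
def search_codebook(xi, d, codes):
    """Return the 1-based index K* of the best matching code and its
    scale factor, comparing P**2/Q across all codes without division."""
    best_k, best_p, best_q = None, 0.0, 1.0
    for k, c in enumerate(codes, start=1):
        p = sum(x * dd * cc for x, dd, cc in zip(xi, d, c))  # correlation P(k)
        q = sum((dd * cc) ** 2 for dd, cc in zip(d, c))      # energy Q(k)
        if p * p * best_q > best_p * best_p * q:  # P(k)^2/Q(k) > P*/Q* test
            best_k, best_p, best_q = k, p, q
    return best_k, best_p / best_q  # K* and scale factor gamma*
```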
The signals P(r)*, Q(r)*, and K(r)* resulting from the evaluation in processors 420-1 through 420-R are stored in common data memory 450 and are sent to minimum error and multiplex processor 455. Processor 455 is operative according to the flow chart of FIG. 11 to select the best matching stochastic code in store 440 having index K*. This index is selected from the best arbitrary codes indexed by signals K(1)* through K(R)* from processors 420-1 through 420-R, and corresponds to the stochastic code that results in the minimum error signal. As per step 1101 of FIG. 11, processor 455 is enabled when a signal is received from control processor 435 indicating that processors 420-1 through 420-R have sent STEER signals. Signals r, K*, P*, and Q* are each set to an initial value of one, and signals P(r)*, Q(r)*, K(r)* and γ(r)* are read from common data memory 450 (step 1110). If the current signals P(r)* and Q(r)* result in a better matching stochastic code signal as determined in step 1115, these values are stored as K*, P*, Q*, and γ* for the current frame (step 1120) and decision step 1125 is entered. Until the Rth set of signals K(R)*, P(R)*, Q(R)* is processed, step 1110 is reentered via incrementing step 1130 so that all possible candidates for the best stochastic code are considered. After the Rth set of signals is processed, signal K*, the selected index of the current frame, and signal γ*, the corresponding scale factor signal, are stored in common data memory 450.
At this point, all signals to form the current time frame speech code are available in common data memory 450. The contribution of the current frame excitation code ##EQU29## must be generated for use in signal processor 410 in the succeeding time frame interval to remove the preceding frame component of the current time frame for forming signal x(n) as aforementioned. This is done in step 1135 where signals d(n) and ŷ(n) are updated.
The predictive parameter signals for the current frame and signals K* and γ* are then read from memory 450 (step 1140), and the signals are converted into a frame transmission code set as is well known in the art (step 1145). The current frame end transmission signal FET is then generated and sent to control processor 435 to signal the beginning of the succeeding frame processing (step 650 in FIG. 6).
When used in a communication system, the coded speech signal of the time frame comprises a set of LPC coefficients a(k), a set of pitch predictive coefficients β(1), β(2), β(3), and m, and the stochastic code index and scale factor signals K* and γ*. As is well known in the art, a predictive decoder circuit is operative to pass the excitation signal of each speech time frame through one or more filters that are representative of a model of the human vocal apparatus. In accordance with an aspect of the invention, the excitation signal is an arbitrary code stored therein which is indexed as described with respect to the speech encoder of the circuits of FIGS. 2 and 3 or FIG. 4. The stochastic codes may be a set of 1024 codes each comprising a set of 40 random numbers obtained from a single string of 1063 random numbers g(1), g(2), . . . , g(1063) stored in a register. The 40 element stochastic codes are arranged in overlapping fashion as illustrated in Table 1.
              TABLE I                                                     
______________________________________                                    
Stochastic Code                                                           
Index K      Stochastic Code                                              
______________________________________                                    
1            g(1), g(2), . . . , g(40)                                    
2            g(2), g(3), . . . , g(41)                                    
3            g(3), g(4), . . . , g(42)                                    
4            g(4), g(5), . . . , g(43)                                    
. . .        . . .
1024         g(1024), g(1025), . . . , g(1063)                            
______________________________________                                    
Referring to Table 1, each code is a sequence of 40 random numbers that are overlapped so that each successive code begins at the second number position of the preceding code. The first entry in Table 1 has the index k=1 and comprises the first 40 random numbers of the single string, g(1), g(2), . . . , g(40). The second code, with index k=2, corresponds to the set of random numbers g(2), g(3), . . . , g(41). Thus, 39 positions of successive codes are overlapped without affecting their random character, to minimize storage requirements. The degree of overlap may be varied without affecting the operation of the circuit. The overall average of the string signals g(1) through g(1063) must be relatively small. The arbitrary codes need not be random numbers and the codes need not be arranged in overlapped fashion. Thus, arbitrary sequences of +1, -1 that define a set of unique codes may be used.
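The overlapped storage of Table 1 may be sketched as follows; this Python fragment is illustrative only (not from the patent) and windows a single string of random numbers into overlapped codes:

```python
def overlapped_codebook(g, code_length=40):
    """Code k (0-based here) is the window of g starting at position k;
    successive codes overlap in all but one position, as in Table 1."""
    return [g[k:k + code_length] for k in range(len(g) - code_length + 1)]
```

A string of 1063 numbers thus yields 1063-40+1 = 1024 overlapped codes while storing only 1063 values rather than 1024 x 40 = 40960.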
In the decoder or synthesizer circuit of FIG. 12, LPC coefficient signals a(k), pitch predictive coefficient signals β(1), β(2), β(3), and m, and the stochastic code index and scale factor signals K* and γ* are separated in demultiplexer 1201. The pitch predictive parameter signals β(k) and m are applied to pitch predictive filter 1220, and the LPC coefficient signals are supplied to LPC predictive filter 1225. Filters 1220 and 1225 operate as is well known in the art and as described in the aforementioned U.S. Pat. No. 4,133,976 to modify the excitation signal from scaler 1215 in accordance with vocal apparatus features. Index signal K* is applied to selector 1205 which addresses stochastic string register 1210. Responsive to index signal K*, the stochastic code best representative of the speech time frame excitation is applied to scaler 1215. The stochastic codes correspond to time frame speech patterns without regard to the intensity of the actual speech. The scaler modifies the stochastic code in accordance with the intensity of the excitation of the speech frame. The formation of the excitation signal in this manner minimizes the excitation bit rate required for transmission, and the overlapped code storage operates to reduce the circuit requirements of the decoder and permits a wide selection of encryption techniques. After the stochastic code excitation signal from scaler 1215 is modified in predictive filters 1220 and 1225, the resulting digital coded speech is applied to digital-to-analog converter 1230 wherein successive analog samples are formed. These samples are filtered in low pass filter 1235 to produce a replica of the time frame speech signal s(n) applied to the encoder of the circuit of FIGS. 2 and 3 or of FIG. 4.
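The decoder data flow of FIG. 12 (scaler 1215, pitch predictive filter 1220, LPC predictive filter 1225) may be sketched in Python as follows. This fragment is an illustration under assumed filter forms (a three-tap pitch predictor of the equation 24a form and an all-pole LPC synthesis filter); the history-list names are the editor's, not the patent's:

```python
def synthesize_frame(code, gamma, betas, m, a, d_hist, s_hist):
    """Scale the selected stochastic code, add the long-delay (pitch)
    prediction, then run the short-delay LPC synthesis filter.
    d_hist and s_hist carry excitation and speech samples across frames."""
    out = []
    for n in range(len(code)):
        d = gamma * code[n]                  # scaler 1215
        for j, beta in enumerate(betas):     # pitch taps at delays m+1, m, m-1
            idx = len(d_hist) - (m + 1 - j)
            if 0 <= idx < len(d_hist):
                d += beta * d_hist[idx]
        d_hist.append(d)
        s = d + sum(a[k] * s_hist[-(k + 1)]  # LPC synthesis filter 1225
                    for k in range(len(a)) if k + 1 <= len(s_hist))
        s_hist.append(s)
        out.append(s)
    return out
```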
The invention may be utilized in speech synthesis wherein speech patterns are encoded using stochastic coding as shown in the circuits of FIGS. 2 and 3 or FIG. 4. The speech synthesizer comprises the circuit of FIG. 12 in which index signals K* are successively applied from well known data processing apparatus together with predictive parameter signals in accordance with the speech pattern to be produced. The overlapping code arrangement minimizes the storage requirements so a wide variety of speech sounds may be produced and the stochastic codes are accessed with index signals in a highly efficient manner. Similarly, storage of speech messages according to the invention for later reproduction only requires the storage of the prediction parameters and the excitation index signals of the successive frames so that speech compression is enhanced without reducing the intelligibility of the reproduced message.
While the invention has been described with respect to particular embodiments thereof, it is to be understood that various changes and modifications may be made by those skilled in the art without departing from the spirit or scope of the invention.

Claims (15)

What is claimed is:
1. Apparatus for encoding speech comprising
means (330) for storing a set of signals each representative of a random code and a set of index signals each identifying one of the random codes;
means (203 through 247 except 225 and 245) for partitioning the speech into successive time frame interval portions and for forming a time-domain signal representative of the portion of speech in each successive time frame interval;
means (225, 245, 250) for generating at least one transform domain signal from each such time-domain signal;
means (305) responsive to each random code signal for generating a transform domain code signal corresponding thereto, via the same type of transformation as in the aforesaid means for generating a transform domain signal;
means (315 and 320, or 501 through 520 and 320) for cross-correlating transform domain signals for each time frame interval with each of said transform domain code signals to select one of the transform domain code signals as yielding minimum error or maximum similarity as a representative of the speech portion in the time-frame interval; and
means (325) for outputting the index signal corresponding to the random code signal corresponding to the selected transform domain code signal.
2. Apparatus for encoding speech of the type claimed in claim 1 in which the means for forming a time domain signal comprises means for forming said signal as representative of the predictive parameters of the portion of speech in each successive time frame interval;
the means for generating at least one transform domain signal comprises means for generating a transform domain signal representative of the predictive parameters from said time domain signal representative of the predictive parameters; and
the means for generating at least one transform domain signal further comprises means (225, 245) for generating a transform domain signal representative of predictive characteristics for said portion of speech;
the means for cross-correlating includes means responsive to the predictive characteristics representative signal for forming a signal (γ) representative of the relative scaling of the transform domain code signal with respect to a transform domain signal representative of the predictive parameters for each time frame interval; and
the outputting means comprises means for outputting the relative scaling signal and the signal representative of the predictive parameters.
3. Apparatus for encoding speech of the type claimed in claim 2, in which
the means for forming a time domain signal as representative of the portion of speech in each successive time frame interval comprises
means (209, 213, 215) for generating a set of signals representative of the predictive parameters of the speech in each successive time frame interval;
means (207, 211) for forming a signal representative of the predictive residual for the speech in each successive time frame interval; and
means (217, 227, 222, 235, 240, 247) responsive to the predictive residual generating means and to the predictive parameter signal generating means for removing the contribution attributable to speech from the previous time frame.
4. Apparatus for encoding speech of the type claimed in claim 3, in which the means for partitioning and forming a time domain signal, further includes
means (220, 230), responsive to the predictive residual generating means, for producing pitch predictive parameters including contributions of previous frames; and
the combining means of the outputting means is responsive to said means for producing pitch predictive parameters.
5. Apparatus for encoding speech of the type claimed in either of claims 2 or 3 in which the cross-correlating means comprises
means (501) for cross-correlating all three of said predictive-parameter-representative transform domain signal, said transform domain signal representative of the relative scaling for the portion of speech, and said transform domain code signal;
means (505, 510, 515, 520) responsive to the output of the means for cross-correlating specifically and to one or more of the three signals for producing the relative scaling signal (γ) and for producing a cross-correlation error signal (E.sub.(k)).
6. Apparatus for encoding speech comprising
means (330) for storing a set of signals each representative of a random code and a set of index signals each identifying one of the random codes;
means (203 through 247 except 225 and 245) for partitioning the speech into successive time frame interval portions and for forming a time-domain signal representative of the portion of speech in each successive time frame interval;
means (225, 245, 250) for generating at least one transform domain signal from each such time-domain signal;
means (305) responsive to each random code signal for generating a transform domain code signal corresponding thereto, via the same type of transformation as in the aforesaid means for generating a transform domain signal;
means (315 and 320 or 501 through 520 and 320) for responding in a comparative fashion to transform domain signals for each time frame interval and, for each such signal, to each of said transform domain code signals to select one of the transform domain code signals as yielding minimum error or maximum similarity as a representative of the speech portion in the time frame interval; and
means (325) for outputting the index signal corresponding to the random code signal corresponding to the selected transform domain code signal.
7. A method for encoding speech comprising the steps of
storing a set of signals each representative of a random code and a set of index signals each identifying one of the random codes;
partitioning the speech into successive time frame interval portions;
forming a time-domain signal representative of the portion of speech in each successive time frame interval;
generating at least one transform domain signal from each such time-domain signal;
generating a transform domain code signal responsive to each random code signal, via the same type of transformation as in the aforesaid steps of generating a transform domain signal;
cross-correlating transform domain signals for each time frame interval with each of said transform domain code signals to select one of the transform domain code signals as yielding minimum error or maximum similarity as a representative of the speech portion in the time-frame interval; and
outputting the index signal corresponding to the random code signal corresponding to the selected transform domain code signal.
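The selection procedure of claims 6 and 7 — transform each stored random code, compare it against the transform of the frame's time-domain signal, and transmit only the index of the best match — can be sketched as follows. This is a minimal illustration, not the patented implementation: the frame length, codebook size, and the use of the DFT as the transform are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

FRAME = 40   # samples per frame (illustrative, not fixed by the patent)
K = 16       # number of stored random codes (illustrative)

codebook = rng.standard_normal((K, FRAME))    # stored random codes
code_spectra = np.fft.fft(codebook, axis=1)   # transform domain code signals

def best_index(target):
    """Select the code whose transform domain signal best matches the target.

    `target` stands in for the time-domain signal formed for one frame.
    Selection is by minimum squared error after optimal scaling, which is
    the same criterion as maximum normalized cross-correlation.
    """
    X = np.fft.fft(target)
    errors = []
    for C in code_spectra:
        num = np.vdot(C, X).real          # cross-correlation term
        den = np.vdot(C, C).real          # code energy
        gamma = num / den                 # optimal relative scaling
        errors.append(np.vdot(X, X).real - gamma * num)
    return int(np.argmin(errors))         # only this index is output

# A frame that is nearly a scaled copy of code 5 selects index 5.
frame = 0.7 * codebook[5] + 0.01 * rng.standard_normal(FRAME)
idx = best_index(frame)
```

By Parseval's theorem the inner products in the transform domain differ from their time-domain counterparts only by a constant factor, so the selected index is the same either way; the transform-domain formulation is what the claims recite.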
8. A method for encoding speech of the type claimed in claim 7 in which the step of forming a time domain signal comprises the step of forming said signal as representative of the predictive parameters of the portion of speech in each successive time frame interval;
the step of generating at least one transform domain signal comprises generating a transform domain signal representative of the predictive parameter from said time domain signal representative of the predictive parameters; and
the step of generating at least one transform domain signal further comprises the step of generating a transform domain signal representative of predictive characteristics for said portion of speech;
the step of cross-correlating includes the step of forming a signal (γ) representative of the relative scaling of the transform domain code signal with respect to a transform domain signal representative of the predictive parameters for each time frame interval in response to the signal representative of the energy predictive characteristics; and
the outputting step comprises outputting the relative scaling signal and the signal representative of the predictive parameters.
9. A method for encoding speech of the type claimed in claim 8, in which
the step of forming a time domain signal as representative of the pattern of the portion of speech in each successive time frame interval comprises
generating a set of signals representative of the predictive parameters of the speech in each successive time frame interval;
forming a signal representative of the predictive residual for the speech in each successive time frame interval; and
removing the contribution attributable to speech from the previous time frame in response to the predictive residual generating means and to the predictive parameter signal generating means.
10. A method for encoding speech of the type claimed in claim 9, in which the partitioning step and the step of forming a time domain signal includes
producing pitch predictive parameters including contributions of previous frames in response to the predictive residual representative signal; and
the combining step also combines said pitch predictive parameters.
11. A method for encoding speech of the type claimed in either of claims 8 or 9 in which the cross-correlating step comprises
specifically cross-correlating all three of said predictive-parameter-representative transform domain signal, said transform domain signal representative of the relative scaling for the portion of speech, and said transform domain code signal;
applying the output of the specifically cross-correlating step and one or more of the three signals to produce the relative scaling signal (γ) and a cross-correlation error signal (E.sub.(k)).
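Claims 5 and 11 jointly produce the relative scaling signal γ and the cross-correlation error signal E.sub.(k) from the cross-correlation outputs. The closed forms behind this step follow from minimizing the scaled-match error over γ; the sketch below shows them, with variable names and the example vector chosen for illustration rather than taken from the patent.

```python
import numpy as np

def scaling_and_error(x, c):
    """Optimal scaling gamma and resulting error E for one candidate code.

    Minimizing ||x - gamma*c||^2 over gamma gives:
        gamma = <x, c> / <c, c>
        E     = <x, x> - <x, c>^2 / <c, c>
    which correspond to the relative scaling signal (gamma) and the
    cross-correlation error signal (E_k) of the claims.
    """
    xc = float(np.dot(x, c))   # cross-correlation of target and code
    cc = float(np.dot(c, c))   # code energy
    gamma = xc / cc
    error = float(np.dot(x, x)) - xc * xc / cc
    return gamma, error

# When the target is an exact scaled copy of the code, the recovered
# scaling equals that factor and the error is (numerically) zero.
c = np.array([1.0, -2.0, 0.5, 3.0])
gamma, error = scaling_and_error(2.5 * c, c)
```

Because E depends on the candidate code only through the normalized cross-correlation term, choosing the code with minimum E is equivalent to choosing the one with maximum similarity, as the selection clauses of claims 6, 7, and 12 state.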
12. A method for encoding speech comprising
storing a set of signals each representative of a random code and a set of index signals each identifying one of the random codes;
partitioning the speech into successive time frame interval portions;
forming a time-domain signal representative of the portion of speech in each successive time frame interval;
generating at least one transform domain signal from each such time-domain signal;
generating a transform domain code signal responsive to each random code signal via the same type of transformation as in the aforesaid step of generating a transform domain signal;
responding in a comparative fashion to transform domain signals for each time frame interval and, for each such signal, to each of said transform domain code signals to select one of the transform domain code signals as yielding minimum error or maximum similarity as a representative of the speech portion in the time frame interval; and
outputting the index signal corresponding to the random code signal corresponding to the selected transform domain code signal.
13. Apparatus for producing a speech message comprising
means for receiving a sequence of speech message signals for the successive time intervals of the speech message, each time interval speech message signal including a set of transform-domain-coded signals representative of the time interval portion of the speech message, at least a portion of which are index signals corresponding to a known set of random codes;
means for storing said known set of random codes in one-for-one association with the corresponding index signals;
means for generating said random codes for each of the set of index signals,
and means for controlling speech wave generation for said time interval in response to said generated random codes.
14. Apparatus of the type claimed in claim 13
in which the storing means comprises means for storing the random codes sequentially so that a first portion of each succeeding one is derived from the latter portion of the preceding one.
15. A method for producing a speech message comprising
receiving a sequence of speech message signals for the successive time intervals of the speech message, each time interval speech message signal including a set of transform-domain-coded signals representative of the time interval portion of the speech message, at least a portion of which are index signals corresponding to a known set of random codes;
storing said known set of random codes in one-for-one association with the corresponding index signals;
generating said codes sequentially for each of the set of index signals;
and controlling speech wave generation for said time interval in response to said sequentially generated random codes.
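The sequential storage of claim 14 — where the first portion of each succeeding random code is derived from the latter portion of the preceding one — amounts to drawing overlapping windows from a single long random sequence, which shrinks codebook storage and lets correlations be updated incrementally. A minimal sketch, with all sizes (code length, shift, codebook size) chosen purely for illustration:

```python
import numpy as np

# Illustrative sizes: 40-sample codes drawn from one long random sequence,
# each successive code shifted by 2 samples into the previous one.
CODE_LEN, SHIFT, NUM_CODES = 40, 2, 256

rng = np.random.default_rng(1)
# One master sequence holds the whole codebook.
master = rng.standard_normal(SHIFT * (NUM_CODES - 1) + CODE_LEN)

def code(k):
    """Return random code k as a window into the stored master sequence."""
    start = k * SHIFT
    return master[start:start + CODE_LEN]

# The first portion of each succeeding code equals the latter portion of
# the preceding one, so storage is far below NUM_CODES * CODE_LEN samples.
overlap_ok = np.array_equal(code(1)[:CODE_LEN - SHIFT], code(0)[SHIFT:])
```

Here 256 forty-sample codes occupy 550 samples instead of 10,240, and the decoder of claims 13 and 15 can regenerate any code from its index alone.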
US06/810,920 1985-12-26 1985-12-26 Digital speech processor using arbitrary excitation coding Ceased US4827517A (en)

Priority Applications (9)

Application Number Priority Date Filing Date Title
US06/810,920 US4827517A (en) 1985-12-26 1985-12-26 Digital speech processor using arbitrary excitation coding
DE8686111494T DE3685324D1 (en) 1985-12-26 1986-08-19 DIGITAL VOICE PROCESSOR USING Arbitrary Excitation Coding.
EP86111494A EP0232456B1 (en) 1985-12-26 1986-08-19 Digital speech processor using arbitrary excitation coding
KR1019860007063A KR950013372B1 (en) 1985-12-26 1986-08-26 Voice coding device and its method
JP61198297A JP2954588B2 (en) 1985-12-26 1986-08-26 Audio encoding device, decoding device, and encoding / decoding system
CA000517118A CA1318976C (en) 1985-12-26 1986-08-28 Digital speech processor using arbitrary excitation coding
US07/694,583 USRE34247E (en) 1985-12-26 1991-05-02 Digital speech processor using arbitrary excitation coding
KR1019950025265A KR950013373B1 (en) 1985-12-26 1995-08-14 Speech message suppling device and speech message reviving method
KR1019950025266A KR950013374B1 (en) 1985-12-26 1995-08-15 Input speech processing device and its method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US06/810,920 US4827517A (en) 1985-12-26 1985-12-26 Digital speech processor using arbitrary excitation coding

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US07/694,583 Reissue USRE34247E (en) 1985-12-26 1991-05-02 Digital speech processor using arbitrary excitation coding

Publications (1)

Publication Number Publication Date
US4827517A true US4827517A (en) 1989-05-02

Family

ID=25205042

Family Applications (1)

Application Number Title Priority Date Filing Date
US06/810,920 Ceased US4827517A (en) 1985-12-26 1985-12-26 Digital speech processor using arbitrary excitation coding

Country Status (6)

Country Link
US (1) US4827517A (en)
EP (1) EP0232456B1 (en)
JP (1) JP2954588B2 (en)
KR (1) KR950013372B1 (en)
CA (1) CA1318976C (en)
DE (1) DE3685324D1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2584236B2 (en) * 1987-07-30 1997-02-26 三洋電機株式会社 Rule speech synthesizer
NL8902347A (en) * 1989-09-20 1991-04-16 Nederland Ptt METHOD FOR CODING AN ANALOGUE SIGNAL WITHIN A CURRENT TIME INTERVAL, CONVERTING ANALOGUE SIGNAL IN CONTROL CODES USABLE FOR COMPOSING AN ANALOGUE SIGNAL SYNTHESIGNAL.
IT1249940B (en) * 1991-06-28 1995-03-30 Sip IMPROVEMENTS TO VOICE CODERS BASED ON SYNTHESIS ANALYSIS TECHNIQUES.
US5490234A (en) * 1993-01-21 1996-02-06 Apple Computer, Inc. Waveform blending technique for text-to-speech system
JPH10124092A (en) 1996-10-23 1998-05-15 Sony Corp Method and device for encoding speech and method and device for encoding audible signal

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3588460A (en) * 1968-07-01 1971-06-28 Bell Telephone Labor Inc Fast fourier transform processor
US3624302A (en) * 1969-10-29 1971-11-30 Bell Telephone Labor Inc Speech analysis and synthesis by the use of the linear prediction of a speech wave
US3740476A (en) * 1971-07-09 1973-06-19 Bell Telephone Labor Inc Speech signal pitch detector using prediction error data
US3982070A (en) * 1974-06-05 1976-09-21 Bell Telephone Laboratories, Incorporated Phase vocoder speech synthesis system
US4022974A (en) * 1976-06-03 1977-05-10 Bell Telephone Laboratories, Incorporated Adaptive linear prediction speech synthesizer
US4092493A (en) * 1976-11-30 1978-05-30 Bell Telephone Laboratories, Incorporated Speech recognition system
US4133976A (en) * 1978-04-07 1979-01-09 Bell Telephone Laboratories, Incorporated Predictive speech signal coding with reduced noise effects
US4184049A (en) * 1978-08-25 1980-01-15 Bell Telephone Laboratories, Incorporated Transform speech signal coding with pitch controlled adaptive quantizing
US4354057A (en) * 1980-04-08 1982-10-12 Bell Telephone Laboratories, Incorporated Predictive signal coding with partitioned quantization
US4472832A (en) * 1981-12-01 1984-09-18 At&T Bell Laboratories Digital speech coder
US4701954A (en) * 1984-03-16 1987-10-20 American Telephone And Telegraph Company, At&T Bell Laboratories Multipulse LPC speech processing arrangement

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5816297A (en) * 1981-07-22 1983-01-29 ソニー株式会社 Voice synthesizing system

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
IEEE Journal of Solid-State Circuits, vol. SC-20, No. 5, Oct. 1985, "A 32-bit VLSI Digital Signal Processor", W. P. Hays et al, pp. 998-1004.
IEEE Transactions on Communications, vol. COM-30, No. 4, Apr. 1982, "Predictive Coding of Speech at Low Bit Rates", by B. S. Atal, pp. 600-614.
Introduction to Matrix Computations, Academic Press, 1973, G. W. Stewart, pp. 317-320.
MC68000 16 Bit Microprocessor User's Manual, Second Edition, Motorola Inc., 1980.
Proceedings of the International Conference on Communications-ICC'84, May 1984, "Stochastic Coding of Speech Signals at Very Low Bit Rates", by B. S. Atal and M. R. Schroeder, pp. 1610-1613.

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5119423A (en) * 1989-03-24 1992-06-02 Mitsubishi Denki Kabushiki Kaisha Signal processor for analyzing distortion of speech signals
US5091944A (en) * 1989-04-21 1992-02-25 Mitsubishi Denki Kabushiki Kaisha Apparatus for linear predictive coding and decoding of speech using residual wave form time-access compression
US5151968A (en) * 1989-08-04 1992-09-29 Fujitsu Limited Vector quantization encoder and vector quantization decoder
US5719992A (en) * 1989-09-01 1998-02-17 Lucent Technologies Inc. Constrained-stochastic-excitation coding
US5481642A (en) * 1989-09-01 1996-01-02 At&T Corp. Constrained-stochastic-excitation coding
US5235669A (en) * 1990-06-29 1993-08-10 At&T Laboratories Low-delay code-excited linear-predictive coding of wideband speech at 32 kbits/sec
US5138661A (en) * 1990-11-13 1992-08-11 General Electric Company Linear predictive codeword excited speech synthesizer
US5414796A (en) * 1991-06-11 1995-05-09 Qualcomm Incorporated Variable rate vocoder
US5657420A (en) * 1991-06-11 1997-08-12 Qualcomm Incorporated Variable rate vocoder
US5189701A (en) * 1991-10-25 1993-02-23 Micom Communications Corp. Voice coder/decoder and methods of coding/decoding
US5911128A (en) * 1994-08-05 1999-06-08 Dejaco; Andrew P. Method and apparatus for performing speech frame encoding mode selection in a variable rate encoding system
US6484138B2 (en) 1994-08-05 2002-11-19 Qualcomm, Incorporated Method and apparatus for performing speech frame encoding mode selection in a variable rate encoding system
US5742734A (en) * 1994-08-10 1998-04-21 Qualcomm Incorporated Encoding rate selection in a variable rate vocoder
US5715372A (en) * 1995-01-10 1998-02-03 Lucent Technologies Inc. Method and apparatus for characterizing an input signal
US5751901A (en) * 1996-07-31 1998-05-12 Qualcomm Incorporated Method for searching an excitation codebook in a code excited linear prediction (CELP) coder
USRE43099E1 (en) 1996-12-19 2012-01-10 Alcatel Lucent Speech coder methods and systems
US5839098A (en) * 1996-12-19 1998-11-17 Lucent Technologies Inc. Speech coder methods and systems
US6714540B1 (en) * 1998-02-25 2004-03-30 Matsushita Electric Industrial Co., Ltd. Data communication method, communication frame generating method, and medium on which program for carrying out the methods are recorded
US6691084B2 (en) 1998-12-21 2004-02-10 Qualcomm Incorporated Multiple mode variable rate speech coding
US7496505B2 (en) 1998-12-21 2009-02-24 Qualcomm Incorporated Variable rate speech coding
US20090083040A1 (en) * 2004-11-04 2009-03-26 Koninklijke Philips Electronics, N.V. Encoding and decoding a set of signals
US20110082699A1 (en) * 2004-11-04 2011-04-07 Koninklijke Philips Electronics N.V. Signal coding and decoding
US20110082700A1 (en) * 2004-11-04 2011-04-07 Koninklijke Philips Electronics N.V. Signal coding and decoding
US8010373B2 (en) 2004-11-04 2011-08-30 Koninklijke Philips Electronics N.V. Signal coding and decoding
US7835918B2 (en) * 2004-11-04 2010-11-16 Koninklijke Philips Electronics N.V. Encoding and decoding a set of signals
US8170871B2 (en) 2004-11-04 2012-05-01 Koninklijke Philips Electronics N.V. Signal coding and decoding
US20140257821A1 (en) * 2013-03-07 2014-09-11 Analog Devices Technology System and method for processor wake-up based on sensor data
US9349386B2 (en) * 2013-03-07 2016-05-24 Analog Device Global System and method for processor wake-up based on sensor data

Also Published As

Publication number Publication date
JP2954588B2 (en) 1999-09-27
KR950013372B1 (en) 1995-11-02
KR870006508A (en) 1987-07-11
EP0232456A1 (en) 1987-08-19
EP0232456B1 (en) 1992-05-13
CA1318976C (en) 1993-06-08
DE3685324D1 (en) 1992-06-17
JPS62159199A (en) 1987-07-15

Similar Documents

Publication Publication Date Title
US4827517A (en) Digital speech processor using arbitrary excitation coding
US4472832A (en) Digital speech coder
US4701954A (en) Multipulse LPC speech processing arrangement
US5265190A (en) CELP vocoder with efficient adaptive codebook search
US4817157A (en) Digital speech coder having improved vector excitation source
US4220819A (en) Residual excited predictive speech coding system
US4896361A (en) Digital speech coder having improved vector excitation source
US5265167A (en) Speech coding and decoding apparatus
US5187745A (en) Efficient codebook search for CELP vocoders
US4899385A (en) Code excited linear predictive vocoder
US5457783A (en) Adaptive speech coder having code excited linear prediction
Trancoso et al. Efficient procedures for finding the optimum innovation in stochastic coders
KR0143076B1 (en) Coding method and apparatus
US5327519A (en) Pulse pattern excited linear prediction voice coder
US5012517A (en) Adaptive transform coder having long term predictor
US6006174A (en) Multiple impulse excitation speech encoder and decoder
US4975958A (en) Coded speech communication system having code books for synthesizing small-amplitude components
USRE32580E (en) Digital speech coder
US5179594A (en) Efficient calculation of autocorrelation coefficients for CELP vocoder adaptive codebook
US5719992A (en) Constrained-stochastic-excitation coding
US5173941A (en) Reduced codebook search arrangement for CELP vocoders
US4764963A (en) Speech pattern compression arrangement utilizing speech event identification
USRE34247E (en) Digital speech processor using arbitrary excitation coding
US5235670A (en) Multiple impulse excitation speech encoder and decoder
US5105464A (en) Means for improving the speech quality in multi-pulse excited linear predictive coding

Legal Events

Date Code Title Description
AS Assignment

Owner name: BELL TELEPHONE LABORATORIES, INCORPORATED, 600 MOU

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST.;ASSIGNORS:ATAL, BISHNU S.;TRANCOSO, ISABEL M. M.;REEL/FRAME:004559/0716;SIGNING DATES FROM 19860509 TO 19860521

Owner name: BELL TELEPHONE LABORATORIES, INCORPORATED,NEW JERS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ATAL, BISHNU S.;TRANCOSO, ISABEL M. M.;SIGNING DATES FROM 19860509 TO 19860521;REEL/FRAME:004559/0716

STCF Information on status: patent grant

Free format text: PATENTED CASE

RF Reissue application filed

Effective date: 19910502

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY