WO1995000949A1 - Speech recognition method using a two-pass search - Google Patents
Speech recognition method using a two-pass search
- Publication number
- WO1995000949A1 WO1995000949A1 PCT/CA1994/000284 CA9400284W WO9500949A1 WO 1995000949 A1 WO1995000949 A1 WO 1995000949A1 CA 9400284 W CA9400284 W CA 9400284W WO 9500949 A1 WO9500949 A1 WO 9500949A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- allophone
- candidates
- parameter vectors
- models
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/14—Speech classification or search using statistical models, e.g. Hidden Markov Models [HMMs]
- G10L15/142—Hidden Markov Models [HMMs]
Definitions
- a trellis associated with each branch in the vocabulary network.
- the trellis having as its axes frame number as the abscissa and model state as the ordinate.
- the trellis has as many states associated with it as the number of states in the corresponding allophone model.
- a ten-state allophone model will have ten states associated with every branch in the vocabulary network with that label.
- the total number of operations per frame for each trellis is proportional to the total number of transitions in the corresponding model.
- the total number of operations involved in the Viterbi method is about 50 (30 sums for estimating 30 transitions plus 20 maximums to determine a best transition at each state).
- the well known Viterbi method can be used for finding the most likely path through the vocabulary network for a given utterance.
- the method is computationally complex because it evaluates every transition in every branch of the entire vocabulary network, and the hardware cost is therefore prohibitive; computational complexity translates directly into cost per channel of speech recognition.
- the Viterbi method provides only a single choice, and providing alternatives increases the computation and memory requirements even further. A single choice also eliminates the option of applying post-processing refinements to enhance recognition accuracy.
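To make the per-frame cost concrete, the following is a minimal log-domain sketch of one frame of the Viterbi update for a single allophone branch. It is offered only as an illustration; the array and function names are not taken from the patent, and the sketch assumes the usual left-to-right hidden Markov model scored with log probabilities. Repeating this update for every branch of the entire vocabulary network at every frame is what makes the full search so expensive.

```python
import numpy as np

def viterbi_frame_update(scores, log_trans, log_obs_t):
    """One frame of the Viterbi update for a single allophone branch.

    scores    : best log likelihood of reaching each state at the previous frame
    log_trans : dict mapping (from_state, to_state) -> log transition probability
    log_obs_t : log observation probability of each state for the current frame

    The work is one addition and one comparison per transition plus one addition
    per state, i.e. on the order of 50 operations per frame for a ten-state model
    with about 30 transitions, as noted above.
    """
    n_states = len(scores)
    best_in = np.full(n_states, -np.inf)
    for (i, j), log_p in log_trans.items():
        cand = scores[i] + log_p          # one sum per transition
        if cand > best_in[j]:             # keep the best incoming transition per state
            best_in[j] = cand
    return best_in + log_obs_t            # add each state's log observation probability
```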
- The scheme proposed by L.R. Bahl et al. in "Constructing candidate word lists using acoustically similar word groups", IEEE Trans. on Signal Processing, vol. 40, no. 11, pp. 2814-2816, also attempts to reduce this computational load.
- This scheme uses a three-state model, rather than a more complex topology, for initial scoring with the Viterbi method, then uses the complex topology for rescoring.
- This proposal may actually increase the computational load. For example, if the retrained three-state models have as many mixtures as the complex topologies, then equal numbers of log observation probabilities must be computed twice: once for the three-state models and once for the complex topologies. Total memory requirements for storing the two sets of models also increase. The time taken to find the most likely path, and thereby match a vocabulary word to the unknown utterance, becomes the recognition delay of the speech recognition system.
- An object of the present invention is to provide an improved speech recognition method.
- a speech recognition method comprising the steps of: providing a first set of allophone models for use with acoustic parameter vectors of a first type; providing a second set of allophone models for use with acoustic parameter vectors of a second type; providing a network representing a recognition vocabulary, wherein each branch of the network is one of the allophone models and each complete path through the network is a sequence of models representing a word in the recognition vocabulary; analyzing an unknown utterance to generate a frame sequence of acoustic parameter vectors for each of the first and second types of acoustic parameter vectors; providing a reduced trellis for determining a path through the network having a highest likelihood; computing model distances for each frame of acoustic parameter vectors of the first type for all allophone models of the first set; finding a maximum model distance for each model of the first set; updating the reduced trellis for every frame assuming each allophone model is one-
- a speech recognition method comprising the steps of: providing a first set of allophone models for use with Cepstrum parameter vectors; providing a second set of allophone models for use with LSP parameter vectors; providing a network representing a recognition vocabulary, wherein each branch of the network is one of the allophone models and each complete path through the network is a sequence of models representing a word in the recognition vocabulary; providing a reduced trellis for determining a path through the network having a highest likelihood; analyzing an unknown utterance to generate a frame sequence of both Cepstrum and LSP parameter vectors; computing of Cepstrum model distances for each frame for all Cepstrum allophone models; finding a maximum model distance for each model; updating the reduced trellis for every frame assuming a one-state model with a minimum duration of two frames and a transition probability equal to its maximum model distance; sorting end values of each vocabulary network path for the reduced trellis; choosing top n values to provide
- In the present invention a two-pass search is used.
- the first pass uses a reduced one-state model whose transition probabilities are assigned the maximum value computed for the observation probability of the corresponding allophone model.
- This reduced model has its minimum duration constrained to a few frames. Conveniently, either two or three frame minimum durations may be used.
- An advantage of the present invention is simplifying the complexity of the recognition method enough to allow the use of cost effective processing hardware while maintaining recognition accuracy.
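As an illustration of the reduced model, the sketch below shows how the per-frame transition score of the one-state model could be obtained from the observation scores of the corresponding full allophone model; the names are illustrative only and assume the model distances are available as per-frame log observation probabilities.

```python
import numpy as np

def reduced_transition_scores(log_obs):
    """Per-frame transition score of the reduced one-state model.

    log_obs : array of shape (n_states, n_frames) holding the log observation
              probabilities already computed for the full allophone model.

    The reduced model needs no extra model-distance computation; its transition
    score for a frame is simply the maximum over the full model's states, and
    the full distances are kept for the second-pass rescoring.
    """
    return log_obs.max(axis=0)    # one maximum per frame for this model
```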
- Figs. 1a and 1b illustrate portions of the vocabulary network in accordance with an embodiment of the present invention;
- Fig. 2 illustrates a four-state hidden Markov model (HMM) representing an allophone in accordance with an embodiment of the present invention
- Fig. 3 illustrates, in a chart, a method of speech recognition in accordance with an embodiment of the present invention
- Fig. 4 graphically illustrates a reduced trellis referred to in Fig. 3;
- Fig. 5 graphically illustrates allophone segmentation from Cepstrum parameters and frames used for LSP model distance computation referred to in Fig. 3;
- Fig. 6 illustrates, in a block diagram, a typical speech recognizer for using the method of speech recognition in accordance with an embodiment of the present invention.
- In Figs. 1a and 1b there are illustrated portions of the vocabulary network in accordance with an embodiment of the present invention.
- In Fig. 1a, each path 10, 12, and 14 begins at an entry node 16.
- the path 10 includes a branch 18 representing the allophone r from the node 16 to a node 20, a branch 22 representing the allophone a from the node 20 to a node 24, a branch 26 representing the allophone b from the node 24 to a node 28, a branch 30 representing the allophone I from the node 28 to a node 32, and a branch 34 representing the allophone d from the node 32 to an exit node 36.
- the path 12 includes a branch 38, a node 40, a branch 42, a node 44, a branch 46, a node 48, a branch 50 and an exit node 52 and the path 14 includes a branch 54, a node 56, a branch 58, a node 60, a branch 62, a node 64, a branch 66, and an exit node 68.
- the vocabulary network is generally a tree structure as shown in Fig. 1a but may have paths that recombine as illustrated by Fig. 1b representing two allophone transcriptions of the word 'record'.
- the transcriptions for record are represented by: an entry node 68, a branch 70, a node 72, a branch 74, a node 76, a branch 78, a node 80, a branch 82, a node 84, a branch 86, a node 88, a branch 90 and an exit node 92; and a branch 93, a node 94, a branch 96, a node 98, a branch 100, a node 102, a branch 104, then the node 88, the branch 90, and the exit node 92.
- Each branch of the vocabulary network is represented by a hidden Markov model.
- the four-state HMM includes first, second, third, and fourth states 110, 112, 114, and 116, respectively. Transitions from states can in most instances be of three types: self, next-state, and skip-next-state.
- the self is a transition 118
- the next-state is a transition 120
- the skip-next-state is a transition 122.
- the second state 112 has a self transition 124, a next-state transition 126, and a skip-next-state transition 128.
- the third state 114 has no skip-next-state transitions.
- the third state 114 has a self transition 130 and a next-state transition 132.
- the fourth state 116 being an exit state has only an inter-model transition 136.
- the first state 110 being an entry state also has an inter-model transition 138.
- the inter-model transitions 136 and 138 allow concatenation of models into a chain representing vocabulary words.
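The transition structure of the four-state model of Fig. 2 can be written out as a small table. The sketch below, with illustrative names, simply records which states each state may move to within the model; the inter-model transitions 136 and 138 are noted in comments.

```python
# States of the four-state HMM of Fig. 2 (reference numerals in parentheses).
STATES = ["first (110)", "second (112)", "third (114)", "fourth (116)"]

# Within-model transitions: self, next-state, and skip-next-state where allowed.
# The entry state also has an inter-model transition 138 into the model, and the
# exit state has only the inter-model transition 136 out of the model.
TRANSITIONS = {
    "first (110)":  ["first (110)", "second (112)", "third (114)"],   # 118, 120, 122
    "second (112)": ["second (112)", "third (114)", "fourth (116)"],  # 124, 126, 128
    "third (114)":  ["third (114)", "fourth (116)"],                  # 130, 132 (no skip)
    "fourth (116)": [],                                               # exit state
}
```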
- In Fig. 3 there is illustrated a chart of a speech recognition method in accordance with an embodiment of the present invention.
- the chart provides steps and actions that occur in two time frames: the first is in real time with respect to the incoming speech and is labeled A) Frame Synchronous; the second is processing time following speech reception and is labeled as B) Recognition Delay.
- Part A) includes seven steps.
- Step 1) is identifying, with an endpointer, the beginning of words or phrases to start a frame synchronous search method by initializing a reduced trellis.
- Step 2) is the computing of Cepstrum model distances for each frame for all allophone models.
- Step 3) is finding a maximum model distance for each model (e.g., 130 models means 130 maxima are found).
- Step 4) is updating the reduced trellis for every frame assuming a one-state model with a minimum duration of two frames. The transition probability for this model is the same as the maximum model distance computed in Step 3.
- Step 5) is identifying, with the endpointer, the end of speech to stop the updating of the reduced trellis.
- Step 6) is sorting end values of each vocabulary network path for the reduced trellis.
- Step 7) is choosing top n values to provide n candidates for recognition, for example, a typical value for n is 30. This completes the frame synchronous part of the speech recognition method in accordance with an embodiment of the present invention.
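The frame-synchronous pass, Steps 2) through 7), can be summarized by the following simplified sketch. It scores each candidate word independently instead of sharing the tree-structured vocabulary network, works in the log domain, and uses illustrative names; it is a sketch under those assumptions, not the patent's implementation.

```python
import numpy as np

def first_pass(vocabulary, max_dist, n_frames, n=30):
    """Frame-synchronous first pass over the recognition vocabulary.

    vocabulary : dict mapping a word to its allophone transcription,
                 e.g. {'for': ['f', 'o', 'r']}
    max_dist   : dict mapping an allophone label to its per-frame maximum model
                 distances (log domain), as produced in Steps 2) and 3)
    Returns the n words with the highest end values of the reduced trellis.
    """
    end_values = {}
    for word, allophones in vocabulary.items():
        n_points = 2 * len(allophones)            # two trellis points per allophone
        trellis = np.full(n_points, -np.inf)
        trellis[0] = max_dist[allophones[0]][0]   # the path starts in the first allophone
        for t in range(1, n_frames):              # Step 4): update the trellis every frame
            new = np.full(n_points, -np.inf)
            for k in range(n_points):
                allophone = allophones[k // 2]
                stay = trellis[k]
                enter = trellis[k - 1] if k > 0 else -np.inf
                # a path must visit both points of an allophone, which enforces the
                # two-frame minimum duration of the reduced one-state model
                new[k] = max(stay, enter) + max_dist[allophone][t]
            trellis = new
        end_values[word] = trellis[-1]            # end value of this vocabulary path
    ranked = sorted(end_values, key=end_values.get, reverse=True)   # Step 6)
    return ranked[:n]                             # Step 7): the top n candidates
```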
- Part B) includes seven steps (Steps 8 - 14) and may include one or more additional steps (as represented by Step 15) to enhance recognition accuracy.
- Step 8) is rescoring the top n candidates using the Viterbi method with the model distances computed in Step 2). Having reduced the number of recognition candidates from every vocabulary word down to n candidates in the frame-synchronous part, the computationally complex Viterbi method can be used efficiently to rescore each of those n candidates with the complete set of model distances computed in Step 2).
- Step 9) is sorting candidates by score in descending order.
- Step 10) is choosing the top m candidates for further rescoring using alternate parameters, for example, LSP parameters. A typical value of m is 3.
- Step 11) is finding allophone segmentation using Cepstrum parameters. These segment boundaries are used to limit the frames over which model distances are computed in Step 12). Because computing model distances imposes a heavy computational burden, using alternative parameters would introduce an unacceptable additional delay without constraining the computation to the frames identified in Step 11) and the candidates identified in Step 10).
- Step 12) is computing LSP model distances for the m candidates. For example, in Fig. 5, the top brackets show segmentation produced using Cepstrum, while the bottom brackets show the frames used for computing LSP model distances.
- Step 13) is rescoring the m candidates using the Viterbi method with the LSP model distances computed in Step 12.
- Step 14) is comparing the top m candidates' scores for Cepstrum and LSP parameters.
- Step 15) represents additional optional post-processing to enhance accuracy of the selection.
- the optional post processing may include using allophone duration constraints to enhance the recognition accuracy.
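The recognition-delay pass, Steps 8) through 14), can be sketched at a high level as follows. The scoring callables and all names are illustrative; in particular, the rule used here to combine the Cepstrum and LSP scores (a simple sum) is only an assumption, since the description above states only that the two sets of scores are compared.

```python
def second_pass(top_n, cepstrum_viterbi_score, lsp_viterbi_score, m=3):
    """Recognition-delay rescoring of the first-pass candidates.

    top_n                  : the n candidate words from the frame-synchronous pass
    cepstrum_viterbi_score : callable giving a word's Viterbi score from the
                             Cepstrum model distances stored during Step 2)
    lsp_viterbi_score      : callable giving a word's Viterbi score from LSP model
                             distances computed only over the frames allowed by the
                             Cepstrum segmentation (Steps 11) and 12))
    """
    # Steps 8) and 9): rescore every candidate and sort by score, descending
    rescored = sorted(top_n, key=cepstrum_viterbi_score, reverse=True)
    # Step 10): only the top m candidates go through the costly LSP rescoring
    top_m = rescored[:m]
    # Steps 13) and 14): compare Cepstrum and LSP scores; summing them here is
    # only one possible comparison rule, chosen for illustration
    return max(top_m, key=lambda w: cepstrum_viterbi_score(w) + lsp_viterbi_score(w))
```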
- the embodiment described uses a one-state model of two frames minimum duration for the frame synchronous search.
- In Table A there is presented data on the inclusion rate for the correct choice for minimum durations of two and three frames for a 4321-word test set.
- Table B provides recognition accuracy on the 4321 word test set after rescoring the top n candidates using the Viterbi method.
- the inclusion rate for the correct choice is higher for a minimum duration of three frames than it is for two frames.
- the two recognizers give virtually the same recognition accuracy.
- Because the two-frame recognizer requires less computing, it is preferred. If a rescoring method can be found that performs better than the Viterbi method by taking advantage of the three-frame duration's higher inclusion rate, the greater computing burden of the three-frame duration will be worthwhile.
- a one-state model having a duration of two frames is used.
- an allophone transcription of the word 'for' is plotted vertically, with each allophone being allotted two points on the axis.
- the transition probability used for each allophone model is the maximum found during the actual model distance calculations.
- the one-state models for the reduced trellis do not require additional computing of model distances, only determining the maximum of those distances calculated for each model. However, these model distances are stored for use in the second pass.
- Initial conditions of the trellis are set, then for each frame, the trellis is updated by applying the maximum transition probability to each transition in every branch in the vocabulary network.
- Initial conditions are set by assigning probabilities of '1' to the initial state 150 of the silence model and the initial state 154 of the model (f), and by assigning probabilities of '0' everywhere else on the trellis vertical axis, 156 - 168.
- the step of updating the trellis consists of multiplying the initial probabilities by the maximum transitional probabilities for each transition.
- for one of the multiplications, the maximum transitional probability of the corresponding model multiplies the initial value of '1' for transitions 170 and 172.
- the corresponding maximum transitional probability multiplies the initial value of '0' for transition 174.
- the initial probabilities of 1 for initial states 150 and 154 indicate the possibility that the word 'for' may have an initial silence or breath.
- the transition 204 from the state 164 to the state 202 indicates that the final silence or breath is also optional.
- the state 202 retains the value representing the best likelihood for the current frame.
- transitional probabilities have been described as ranging from '0' to '1' and new values in trellis updating as being derived by multiplication of the current value by the next transitional probability.
- typically transitional probabilities are represented by logarithms so that multiplications of probabilities can be carried out on computing platforms by computationally simpler additions.
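A minimal illustration of this log-domain convention: probabilities in the interval (0, 1] map to non-positive logarithms, a probability of '0' is replaced by negative infinity so that an impossible transition can never win a maximization, and multiplication of probabilities becomes addition of their logarithms.

```python
import math

def to_log(p):
    """Map a probability in [0, 1] to the log domain; 0 becomes -inf so an
    impossible transition can never win a maximization."""
    return math.log(p) if p > 0.0 else float('-inf')

# Multiplying probabilities corresponds to adding their logarithms,
# which is computationally cheaper on most platforms.
p1, p2 = 0.5, 0.25
assert math.isclose(to_log(p1) + to_log(p2), to_log(p1 * p2))
```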
- Computation of model distances is a complex task and, hence, imposes a large burden on computing resources. In order to compute LSP model distances during the recognition delay portion of the speech recognition method without an unacceptable increase in that delay, the number of frames for which the computations are made is constrained.
- The steps of finding allophone segmentation using Cepstrum parameters and computing LSP model distances are described with reference to Fig. 5.
- the allophone transcription for the vocabulary word 'for' is graphically illustrated in Fig. 5.
- the horizontal axis represents frames of speech.
- the Cepstrum parameter allophone segments are indicated by bars 210, 212, 214, and 216, denoting the segmentation of allophones f, o, and r as indicated by brackets 218, 220, and 222, respectively. This corresponds to Step 11) of Fig. 3.
- In this example, the frames for which allophone model distances are to be computed are constrained to within 18 frames (230 ms) of the segment boundaries determined using the Cepstrum parameters.
- the LSP model distances for allophones f, o, and r are computed over the frames indicated by brackets 224, 226, and 228, respectively.
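A sketch of how the frame range for one allophone's LSP distance computation might be derived from its Cepstrum segment boundaries is given below. The 18-frame margin follows the example above; the exact way the margin is applied on either side of a segment is an assumption made for illustration, as are the names.

```python
def lsp_frames(seg_start, seg_end, n_frames, margin=18):
    """Frames over which LSP model distances are computed for one allophone.

    seg_start, seg_end : the allophone's segment boundaries (frame indices)
                         found with the Cepstrum parameters in Step 11)
    margin             : 18 frames, i.e. about 230 ms at 12.75 ms per frame

    Constraining the computation to this window keeps the extra recognition
    delay introduced by the LSP rescoring acceptable.
    """
    return max(0, seg_start - margin), min(n_frames, seg_end + margin)
```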
- In Fig. 6 there is illustrated, in a block diagram, a typical speech recognizer configured to use the speech recognition method of the present invention.
- the speech recognizer includes an input for speech 290, estimators for Cepstrum and LSP parameters 292, and 294, respectively, having parameter outputs 296 and 298, respectively, to an input data buffer 302.
- the input data buffer is connected to a data bus 304.
- Also connected to the data bus are processing elements 306, a recognition data tables store 308, an intermediate result store 310, and a recognition result output block 312 having an output 314.
- the speech applied to the input 290 is analyzed in Cepstrum analyzer 292 and LSP analyzer 294 to produce Cepstrum and LSP parameter vector output via 296 and 298, respectively, to the input data buffer 302 every 12.75 msec.
- The processing elements 306 compute model distances for each frame of speech data for all Cepstrum allophone models stored in the recognition data tables store 308. The computed model distances are stored in the intermediate result store 310 for later use in the Viterbi rescoring of the top n choices.
- the trellis is established in the intermediate results store 310 and is updated for each frame.
- the stored Cepstrum model distances are used by the Viterbi method to rescore the top n choices with the ordered list stored in the intermediate result store 310.
- the top n choices are again reordered using Viterbi scores.
- the top m choices are then rescored using LSP parameters from the input data buffer 302.
- LSP model distances are computed by the processing elements 306 for the LSP allophone models found in the top m choices, using the models stored in the recognition data tables store 308. For each allophone model, only the frames provided by the Cepstrum segmentation are used.
- the computed model distances are stored in the intermediate result store 310 and used in the Viterbi rescoring of the top m choices. Comparison of the Cepstrum and LSP top m choices then takes place to provide a recognition result, which is stored in the recognition results output 312 and passed to an application via the output 314. As described hereinabove, further post-processing may be done to enhance recognition accuracy.
- a hardware implementation of the speech recognizer of Fig. 6 uses six TMS 320C31 microprocessors by Texas Instruments for processing elements 306, and a total memory of about 16 Mbytes which is used to provide the input data buffer 302, the recognition data table store 308 and intermediate result store 310.
Abstract
Description
Claims
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP94916113A EP0705473B1 (en) | 1993-06-24 | 1994-05-18 | Speech recognition method using a two-pass search |
DE69420842T DE69420842T2 (en) | 1993-06-24 | 1994-05-18 | VOICE RECOGNITION USING A TWO-WAY SEARCH METHOD |
CA002163017A CA2163017C (en) | 1993-06-24 | 1994-05-18 | Speech recognition method using a two-pass search |
JP7502266A JP3049259B2 (en) | 1993-06-24 | 1994-05-18 | Voice recognition method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US08/080,543 US5515475A (en) | 1993-06-24 | 1993-06-24 | Speech recognition method using a two-pass search |
US08/080,543 | 1993-06-24 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1995000949A1 true WO1995000949A1 (en) | 1995-01-05 |
Family
ID=22158066
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CA1994/000284 WO1995000949A1 (en) | 1993-06-24 | 1994-05-18 | Speech recognition method using a two-pass search |
Country Status (6)
Country | Link |
---|---|
US (1) | US5515475A (en) |
EP (1) | EP0705473B1 (en) |
JP (1) | JP3049259B2 (en) |
CA (1) | CA2163017C (en) |
DE (1) | DE69420842T2 (en) |
WO (1) | WO1995000949A1 (en) |
Families Citing this family (207)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3453456B2 (en) * | 1995-06-19 | 2003-10-06 | キヤノン株式会社 | State sharing model design method and apparatus, and speech recognition method and apparatus using the state sharing model |
US5706397A (en) * | 1995-10-05 | 1998-01-06 | Apple Computer, Inc. | Speech recognition system with multi-level pruning for acoustic matching |
US5987414A (en) * | 1996-10-31 | 1999-11-16 | Nortel Networks Corporation | Method and apparatus for selecting a vocabulary sub-set from a speech recognition dictionary for use in real time automated directory assistance |
US5839107A (en) * | 1996-11-29 | 1998-11-17 | Northern Telecom Limited | Method and apparatus for automatically generating a speech recognition vocabulary from a white pages listing |
US5987408A (en) * | 1996-12-16 | 1999-11-16 | Nortel Networks Corporation | Automated directory assistance system utilizing a heuristics model for predicting the most likely requested number |
US6122613A (en) * | 1997-01-30 | 2000-09-19 | Dragon Systems, Inc. | Speech recognition using multiple recognizers (selectively) applied to the same input sample |
US5884259A (en) * | 1997-02-12 | 1999-03-16 | International Business Machines Corporation | Method and apparatus for a time-synchronous tree-based search strategy |
JP3962445B2 (en) * | 1997-03-13 | 2007-08-22 | キヤノン株式会社 | Audio processing method and apparatus |
US6236715B1 (en) | 1997-04-15 | 2001-05-22 | Nortel Networks Corporation | Method and apparatus for using the control channel in telecommunications systems for voice dialing |
US5956675A (en) * | 1997-07-31 | 1999-09-21 | Lucent Technologies Inc. | Method and apparatus for word counting in continuous speech recognition useful for reliable barge-in and early end of speech detection |
US6018708A (en) * | 1997-08-26 | 2000-01-25 | Nortel Networks Corporation | Method and apparatus for performing speech recognition utilizing a supplementary lexicon of frequently used orthographies |
US6122361A (en) * | 1997-09-12 | 2000-09-19 | Nortel Networks Corporation | Automated directory assistance system utilizing priori advisor for predicting the most likely requested locality |
US5995929A (en) * | 1997-09-12 | 1999-11-30 | Nortel Networks Corporation | Method and apparatus for generating an a priori advisor for a speech recognition dictionary |
CA2216224A1 (en) * | 1997-09-19 | 1999-03-19 | Peter R. Stubley | Block algorithm for pattern recognition |
US6253178B1 (en) | 1997-09-22 | 2001-06-26 | Nortel Networks Limited | Search and rescoring method for a speech recognition system |
FR2769118B1 (en) * | 1997-09-29 | 1999-12-03 | Matra Communication | SPEECH RECOGNITION PROCESS |
US6253173B1 (en) | 1997-10-20 | 2001-06-26 | Nortel Networks Corporation | Split-vector quantization for speech signal involving out-of-sequence regrouping of sub-vectors |
US6098040A (en) * | 1997-11-07 | 2000-08-01 | Nortel Networks Corporation | Method and apparatus for providing an improved feature set in speech recognition by performing noise cancellation and background masking |
JP3914709B2 (en) * | 1997-11-27 | 2007-05-16 | 株式会社ルネサステクノロジ | Speech recognition method and system |
US6182038B1 (en) * | 1997-12-01 | 2001-01-30 | Motorola, Inc. | Context dependent phoneme networks for encoding speech information |
US6963871B1 (en) * | 1998-03-25 | 2005-11-08 | Language Analysis Systems, Inc. | System and method for adaptive multi-cultural searching and matching of personal names |
US8855998B2 (en) | 1998-03-25 | 2014-10-07 | International Business Machines Corporation | Parsing culturally diverse names |
US8812300B2 (en) | 1998-03-25 | 2014-08-19 | International Business Machines Corporation | Identifying related names |
US6052443A (en) * | 1998-05-14 | 2000-04-18 | Motorola | Alphanumeric message composing method using telephone keypad |
US6137867A (en) * | 1998-05-14 | 2000-10-24 | Motorola, Inc. | Alphanumeric message composing method using telephone keypad |
US5974121A (en) * | 1998-05-14 | 1999-10-26 | Motorola, Inc. | Alphanumeric message composing method using telephone keypad |
US6208964B1 (en) | 1998-08-31 | 2001-03-27 | Nortel Networks Limited | Method and apparatus for providing unsupervised adaptation of transcriptions |
SE9802990L (en) * | 1998-09-04 | 2000-03-05 | Ericsson Telefon Ab L M | Speech recognition method and systems |
US6493705B1 (en) * | 1998-09-30 | 2002-12-10 | Canon Kabushiki Kaisha | Information search apparatus and method, and computer readable memory |
WO2000022607A1 (en) * | 1998-10-09 | 2000-04-20 | Sony Corporation | Learning device and method, recognizing device and method, and recording medium |
US6148285A (en) * | 1998-10-30 | 2000-11-14 | Nortel Networks Corporation | Allophonic text-to-speech generator |
JP3420965B2 (en) * | 1999-02-25 | 2003-06-30 | 日本電信電話株式会社 | Interactive database search method and apparatus, and recording medium recording interactive database search program |
US7058573B1 (en) * | 1999-04-20 | 2006-06-06 | Nuance Communications Inc. | Speech recognition system to selectively utilize different speech recognition techniques over multiple speech recognition passes |
US6542866B1 (en) | 1999-09-22 | 2003-04-01 | Microsoft Corporation | Speech recognition method and apparatus utilizing multiple feature streams |
US6480827B1 (en) * | 2000-03-07 | 2002-11-12 | Motorola, Inc. | Method and apparatus for voice communication |
US8645137B2 (en) | 2000-03-16 | 2014-02-04 | Apple Inc. | Fast, language-independent method for user authentication by voice |
KR100446289B1 (en) * | 2000-10-13 | 2004-09-01 | 삼성전자주식회사 | Information search method and apparatus using Inverse Hidden Markov Model |
ITFI20010199A1 (en) | 2001-10-22 | 2003-04-22 | Riccardo Vieri | SYSTEM AND METHOD TO TRANSFORM TEXTUAL COMMUNICATIONS INTO VOICE AND SEND THEM WITH AN INTERNET CONNECTION TO ANY TELEPHONE SYSTEM |
EP1488410B1 (en) * | 2002-03-27 | 2010-06-02 | Nokia Corporation | Distortion measure determination in speech recognition |
EP1575031A3 (en) * | 2002-05-15 | 2010-08-11 | Pioneer Corporation | Voice recognition apparatus |
US7191130B1 (en) * | 2002-09-27 | 2007-03-13 | Nuance Communications | Method and system for automatically optimizing recognition configuration parameters for speech recognition systems |
US7117153B2 (en) * | 2003-02-13 | 2006-10-03 | Microsoft Corporation | Method and apparatus for predicting word error rates from text |
US20040186714A1 (en) * | 2003-03-18 | 2004-09-23 | Aurilab, Llc | Speech recognition improvement through post-processsing |
US20040254790A1 (en) * | 2003-06-13 | 2004-12-16 | International Business Machines Corporation | Method, system and recording medium for automatic speech recognition using a confidence measure driven scalable two-pass recognition strategy for large list grammars |
DE102004001212A1 (en) * | 2004-01-06 | 2005-07-28 | Deutsche Thomson-Brandt Gmbh | Process and facility employs two search steps in order to shorten the search time when searching a database |
KR100612839B1 (en) * | 2004-02-18 | 2006-08-18 | 삼성전자주식회사 | Method and apparatus for domain-based dialog speech recognition |
US20070005586A1 (en) * | 2004-03-30 | 2007-01-04 | Shaefer Leonard A Jr | Parsing culturally diverse names |
US8924212B1 (en) * | 2005-08-26 | 2014-12-30 | At&T Intellectual Property Ii, L.P. | System and method for robust access and entry to large structured data using voice form-filling |
US8677377B2 (en) | 2005-09-08 | 2014-03-18 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US7633076B2 (en) | 2005-09-30 | 2009-12-15 | Apple Inc. | Automated response to and sensing of user activity in portable devices |
US20070132834A1 (en) * | 2005-12-08 | 2007-06-14 | International Business Machines Corporation | Speech disambiguation in a composite services enablement environment |
US7877256B2 (en) * | 2006-02-17 | 2011-01-25 | Microsoft Corporation | Time synchronous decoding for long-span hidden trajectory model |
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
KR101415534B1 (en) * | 2007-02-23 | 2014-07-07 | 삼성전자주식회사 | Multi-stage speech recognition apparatus and method |
JP5229216B2 (en) * | 2007-02-28 | 2013-07-03 | 日本電気株式会社 | Speech recognition apparatus, speech recognition method, and speech recognition program |
JP4322934B2 (en) * | 2007-03-28 | 2009-09-02 | 株式会社東芝 | Speech recognition apparatus, method and program |
US8977255B2 (en) | 2007-04-03 | 2015-03-10 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US9053089B2 (en) | 2007-10-02 | 2015-06-09 | Apple Inc. | Part-of-speech tagging using latent analogy |
US8620662B2 (en) | 2007-11-20 | 2013-12-31 | Apple Inc. | Context-aware unit selection |
US10002189B2 (en) | 2007-12-20 | 2018-06-19 | Apple Inc. | Method and apparatus for searching using an active ontology |
US9330720B2 (en) | 2008-01-03 | 2016-05-03 | Apple Inc. | Methods and apparatus for altering audio output signals |
US8065143B2 (en) | 2008-02-22 | 2011-11-22 | Apple Inc. | Providing text input using speech data and non-speech data |
US8996376B2 (en) | 2008-04-05 | 2015-03-31 | Apple Inc. | Intelligent text-to-speech conversion |
US10496753B2 (en) | 2010-01-18 | 2019-12-03 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US8464150B2 (en) | 2008-06-07 | 2013-06-11 | Apple Inc. | Automatic language identification for dynamic text processing |
US20100030549A1 (en) | 2008-07-31 | 2010-02-04 | Lee Michael M | Mobile device having human language translation capability with positional feedback |
US8768702B2 (en) | 2008-09-05 | 2014-07-01 | Apple Inc. | Multi-tiered voice feedback in an electronic device |
US8898568B2 (en) | 2008-09-09 | 2014-11-25 | Apple Inc. | Audio user interface |
US8583418B2 (en) | 2008-09-29 | 2013-11-12 | Apple Inc. | Systems and methods of detecting language and natural language strings for text to speech synthesis |
US8712776B2 (en) | 2008-09-29 | 2014-04-29 | Apple Inc. | Systems and methods for selective text to speech synthesis |
US8676904B2 (en) | 2008-10-02 | 2014-03-18 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US9959870B2 (en) | 2008-12-11 | 2018-05-01 | Apple Inc. | Speech recognition involving a mobile device |
US8862252B2 (en) | 2009-01-30 | 2014-10-14 | Apple Inc. | Audio user interface for displayless electronic device |
US8380507B2 (en) | 2009-03-09 | 2013-02-19 | Apple Inc. | Systems and methods for determining the language to use for speech generated by a text to speech engine |
US9858925B2 (en) | 2009-06-05 | 2018-01-02 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US10540976B2 (en) | 2009-06-05 | 2020-01-21 | Apple Inc. | Contextual voice commands |
US10255566B2 (en) * | 2011-06-03 | 2019-04-09 | Apple Inc. | Generating and processing task items that represent tasks to perform |
US10241752B2 (en) | 2011-09-30 | 2019-03-26 | Apple Inc. | Interface for a virtual digital assistant |
US10241644B2 (en) | 2011-06-03 | 2019-03-26 | Apple Inc. | Actionable reminder entries |
US9431006B2 (en) | 2009-07-02 | 2016-08-30 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US8682649B2 (en) | 2009-11-12 | 2014-03-25 | Apple Inc. | Sentiment prediction from textual data |
US8600743B2 (en) | 2010-01-06 | 2013-12-03 | Apple Inc. | Noise profile determination for voice-related feature |
US8311838B2 (en) | 2010-01-13 | 2012-11-13 | Apple Inc. | Devices and methods for identifying a prompt corresponding to a voice input in a sequence of prompts |
US8381107B2 (en) | 2010-01-13 | 2013-02-19 | Apple Inc. | Adaptive audio feedback system and method |
US10276170B2 (en) | 2010-01-18 | 2019-04-30 | Apple Inc. | Intelligent automated assistant |
US10553209B2 (en) | 2010-01-18 | 2020-02-04 | Apple Inc. | Systems and methods for hands-free notification summaries |
US10679605B2 (en) | 2010-01-18 | 2020-06-09 | Apple Inc. | Hands-free list-reading by intelligent automated assistant |
US10705794B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
DE202011111062U1 (en) | 2010-01-25 | 2019-02-19 | Newvaluexchange Ltd. | Device and system for a digital conversation management platform |
US8682667B2 (en) | 2010-02-25 | 2014-03-25 | Apple Inc. | User profiling for selecting user specific voice input processing information |
US8713021B2 (en) | 2010-07-07 | 2014-04-29 | Apple Inc. | Unsupervised document clustering using latent semantic density analysis |
US8719006B2 (en) | 2010-08-27 | 2014-05-06 | Apple Inc. | Combined statistical and rule-based part-of-speech tagging for text-to-speech synthesis |
US8719014B2 (en) | 2010-09-27 | 2014-05-06 | Apple Inc. | Electronic device with text error correction based on voice recognition data |
US10762293B2 (en) | 2010-12-22 | 2020-09-01 | Apple Inc. | Using parts-of-speech tagging and named entity recognition for spelling correction |
US10515147B2 (en) | 2010-12-22 | 2019-12-24 | Apple Inc. | Using statistical language models for contextual lookup |
US8781836B2 (en) | 2011-02-22 | 2014-07-15 | Apple Inc. | Hearing assistance system for providing consistent human speech |
US9262612B2 (en) | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US10672399B2 (en) | 2011-06-03 | 2020-06-02 | Apple Inc. | Switching between text data and audio data based on a mapping |
US10057736B2 (en) | 2011-06-03 | 2018-08-21 | Apple Inc. | Active transport based notifications |
US8812294B2 (en) | 2011-06-21 | 2014-08-19 | Apple Inc. | Translating phrases from one language into another using an order-based set of declarative rules |
US8706472B2 (en) | 2011-08-11 | 2014-04-22 | Apple Inc. | Method for disambiguating multiple readings in language conversion |
US8994660B2 (en) | 2011-08-29 | 2015-03-31 | Apple Inc. | Text correction processing |
US8762156B2 (en) | 2011-09-28 | 2014-06-24 | Apple Inc. | Speech recognition repair using contextual information |
US10134385B2 (en) | 2012-03-02 | 2018-11-20 | Apple Inc. | Systems and methods for name pronunciation |
US9483461B2 (en) | 2012-03-06 | 2016-11-01 | Apple Inc. | Handling speech synthesis of content for multiple languages |
US9280610B2 (en) | 2012-05-14 | 2016-03-08 | Apple Inc. | Crowd sourcing information to fulfill user requests |
US10417037B2 (en) | 2012-05-15 | 2019-09-17 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
US8775442B2 (en) | 2012-05-15 | 2014-07-08 | Apple Inc. | Semantic search using a single-source semantic model |
WO2013185109A2 (en) | 2012-06-08 | 2013-12-12 | Apple Inc. | Systems and methods for recognizing textual identifiers within a plurality of words |
US9721563B2 (en) | 2012-06-08 | 2017-08-01 | Apple Inc. | Name recognition system |
US9495129B2 (en) | 2012-06-29 | 2016-11-15 | Apple Inc. | Device, method, and user interface for voice-activated navigation and browsing of a document |
US9576574B2 (en) | 2012-09-10 | 2017-02-21 | Apple Inc. | Context-sensitive handling of interruptions by intelligent digital assistant |
US9547647B2 (en) | 2012-09-19 | 2017-01-17 | Apple Inc. | Voice-based media searching |
US8935167B2 (en) | 2012-09-25 | 2015-01-13 | Apple Inc. | Exemplar-based latent perceptual modeling for automatic speech recognition |
KR20230137475A (en) | 2013-02-07 | 2023-10-04 | 애플 인크. | Voice trigger for a digital assistant |
US9977779B2 (en) | 2013-03-14 | 2018-05-22 | Apple Inc. | Automatic supplementation of word correction dictionaries |
US10652394B2 (en) | 2013-03-14 | 2020-05-12 | Apple Inc. | System and method for processing voicemail |
US9368114B2 (en) | 2013-03-14 | 2016-06-14 | Apple Inc. | Context-sensitive handling of interruptions |
US9733821B2 (en) | 2013-03-14 | 2017-08-15 | Apple Inc. | Voice control to diagnose inadvertent activation of accessibility features |
US10642574B2 (en) | 2013-03-14 | 2020-05-05 | Apple Inc. | Device, method, and graphical user interface for outputting captions |
US10572476B2 (en) | 2013-03-14 | 2020-02-25 | Apple Inc. | Refining a search based on schedule items |
US10748529B1 (en) | 2013-03-15 | 2020-08-18 | Apple Inc. | Voice activated device for use with a voice-based digital assistant |
AU2014251347B2 (en) | 2013-03-15 | 2017-05-18 | Apple Inc. | Context-sensitive handling of interruptions |
AU2014233517B2 (en) | 2013-03-15 | 2017-05-25 | Apple Inc. | Training an at least partial voice command system |
KR101857648B1 (en) | 2013-03-15 | 2018-05-15 | 애플 인크. | User training by intelligent digital assistant |
WO2014144579A1 (en) | 2013-03-15 | 2014-09-18 | Apple Inc. | System and method for updating an adaptive speech recognition model |
WO2014197336A1 (en) | 2013-06-07 | 2014-12-11 | Apple Inc. | System and method for detecting errors in interactions with a voice-based digital assistant |
WO2014197334A2 (en) | 2013-06-07 | 2014-12-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9582608B2 (en) | 2013-06-07 | 2017-02-28 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
WO2014197335A1 (en) | 2013-06-08 | 2014-12-11 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
EP3937002A1 (en) | 2013-06-09 | 2022-01-12 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
AU2014278595B2 (en) | 2013-06-13 | 2017-04-06 | Apple Inc. | System and method for emergency calls initiated by voice command |
DE112014003653B4 (en) | 2013-08-06 | 2024-04-18 | Apple Inc. | Automatically activate intelligent responses based on activities from remote devices |
US20160265332A1 (en) | 2013-09-13 | 2016-09-15 | Production Plus Energy Services Inc. | Systems and apparatuses for separating wellbore fluids and solids during production |
US10296160B2 (en) | 2013-12-06 | 2019-05-21 | Apple Inc. | Method for extracting salient dialog usage from live data |
US9135911B2 (en) * | 2014-02-07 | 2015-09-15 | NexGen Flight LLC | Automated generation of phonemic lexicon for voice activated cockpit management systems |
US9620105B2 (en) | 2014-05-15 | 2017-04-11 | Apple Inc. | Analyzing audio input for efficient speech and music recognition |
US9484022B2 (en) * | 2014-05-23 | 2016-11-01 | Google Inc. | Training multiple neural networks with different accuracy |
US10592095B2 (en) | 2014-05-23 | 2020-03-17 | Apple Inc. | Instantaneous speaking of content on touch devices |
US9502031B2 (en) | 2014-05-27 | 2016-11-22 | Apple Inc. | Method for supporting dynamic grammars in WFST-based ASR |
AU2015266863B2 (en) | 2014-05-30 | 2018-03-15 | Apple Inc. | Multi-command single utterance input method |
US10170123B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Intelligent assistant for home automation |
US9633004B2 (en) | 2014-05-30 | 2017-04-25 | Apple Inc. | Better resolution when referencing to concepts |
US9785630B2 (en) | 2014-05-30 | 2017-10-10 | Apple Inc. | Text prediction using combined word N-gram and unigram language models |
US10078631B2 (en) | 2014-05-30 | 2018-09-18 | Apple Inc. | Entropy-guided text prediction using combined word and character n-gram language models |
US10289433B2 (en) | 2014-05-30 | 2019-05-14 | Apple Inc. | Domain specific language for encoding assistant dialog |
US9734193B2 (en) | 2014-05-30 | 2017-08-15 | Apple Inc. | Determining domain salience ranking from ambiguous words in natural speech |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US9430463B2 (en) | 2014-05-30 | 2016-08-30 | Apple Inc. | Exemplar-based natural language processing |
US9760559B2 (en) | 2014-05-30 | 2017-09-12 | Apple Inc. | Predictive text input |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10659851B2 (en) | 2014-06-30 | 2020-05-19 | Apple Inc. | Real-time digital assistant knowledge updates |
US10446141B2 (en) | 2014-08-28 | 2019-10-15 | Apple Inc. | Automatic speech recognition based on user feedback |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US9886432B2 (en) | 2014-09-30 | 2018-02-06 | Apple Inc. | Parsimonious handling of word inflection via categorical stem + suffix N-gram language models |
US9646609B2 (en) | 2014-09-30 | 2017-05-09 | Apple Inc. | Caching apparatus for serving phonetic pronunciations |
US10552013B2 (en) | 2014-12-02 | 2020-02-04 | Apple Inc. | Data detection |
US9711141B2 (en) | 2014-12-09 | 2017-07-18 | Apple Inc. | Disambiguating heteronyms in speech synthesis |
US9865280B2 (en) | 2015-03-06 | 2018-01-09 | Apple Inc. | Structured dictation using intelligent automated assistants |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US9899019B2 (en) | 2015-03-18 | 2018-02-20 | Apple Inc. | Systems and methods for structured stem and suffix language models |
US9842105B2 (en) | 2015-04-16 | 2017-12-12 | Apple Inc. | Parsimonious continuous-space phrase representations for natural language processing |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US9697820B2 (en) | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
DK179588B1 (en) | 2016-06-09 | 2019-02-22 | Apple Inc. | Intelligent automated assistant in a home environment |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US10586535B2 (en) | 2016-06-10 | 2020-03-10 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
DK201670540A1 (en) | 2016-06-11 | 2018-01-08 | Apple Inc | Application integration with a digital assistant |
DK179049B1 (en) | 2016-06-11 | 2017-09-18 | Apple Inc | Data driven natural language event detection and classification |
DK179343B1 (en) | 2016-06-11 | 2018-05-14 | Apple Inc | Intelligent task discovery |
DK179415B1 (en) | 2016-06-11 | 2018-06-14 | Apple Inc | Intelligent device arbitration and control |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
DK179745B1 (en) | 2017-05-12 | 2019-05-01 | Apple Inc. | SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT |
DK201770431A1 (en) | 2017-05-15 | 2018-12-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
WO2019203794A1 (en) | 2018-04-16 | 2019-10-24 | Google Llc | Automatically determining language for speech recognition of spoken utterance received via an automated assistant interface |
US10896672B2 (en) * | 2018-04-16 | 2021-01-19 | Google Llc | Automatically determining language for speech recognition of spoken utterance received via an automated assistant interface |
CN112786035A (en) * | 2019-11-08 | 2021-05-11 | 珠海市一微半导体有限公司 | Voice recognition method, system and chip of cleaning robot |
CN111754987A (en) * | 2020-06-23 | 2020-10-09 | 国投(宁夏)大数据产业发展有限公司 | Big data analysis voice recognition method |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4587670A (en) * | 1982-10-15 | 1986-05-06 | At&T Bell Laboratories | Hidden Markov model speech recognition arrangement |
EP0438662A2 (en) * | 1990-01-23 | 1991-07-31 | International Business Machines Corporation | Apparatus and method of grouping utterances of a phoneme into context-de-pendent categories based on sound-similarity for automatic speech recognition |
US5241619A (en) * | 1991-06-25 | 1993-08-31 | Bolt Beranek And Newman Inc. | Word dependent N-best search method |
US5390278A (en) * | 1991-10-08 | 1995-02-14 | Bell Canada | Phoneme based speech recognition |
US5349645A (en) * | 1991-12-31 | 1994-09-20 | Matsushita Electric Industrial Co., Ltd. | Word hypothesizer for continuous speech decoding using stressed-vowel centered bidirectional tree searches |
US5386492A (en) * | 1992-06-29 | 1995-01-31 | Kurzweil Applied Intelligence, Inc. | Speech recognition system utilizing vocabulary model preselection |
-
1993
- 1993-06-24 US US08/080,543 patent/US5515475A/en not_active Expired - Lifetime
-
1994
- 1994-05-18 CA CA002163017A patent/CA2163017C/en not_active Expired - Fee Related
- 1994-05-18 WO PCT/CA1994/000284 patent/WO1995000949A1/en active IP Right Grant
- 1994-05-18 EP EP94916113A patent/EP0705473B1/en not_active Expired - Lifetime
- 1994-05-18 JP JP7502266A patent/JP3049259B2/en not_active Expired - Lifetime
- 1994-05-18 DE DE69420842T patent/DE69420842T2/en not_active Expired - Fee Related
Non-Patent Citations (3)
Title |
---|
L.R. BAHL ET AL.: "A fast approximate match for large vocabulary speech recognition", EUROSPEECH 89, vol. 1, 26 September 1989 (1989-09-26), PARIS, pages 156 - 158 * |
L.R. BAHL ET AL.: "A fast match for continuous speech recognition using allophonic models", ICASSP-92, vol. 1, 23 March 1992 (1992-03-23), SAN FRANSISCO, pages 17 - 20 * |
L.R. BAHL ET AL.: "Constructing candidate word lists using acoustically similar word groups", IEEE TRANS. ON SIGN. PROC., vol. 40, no. 11, November 1992 (1992-11-01), pages 2814 - 2816 * |
Also Published As
Publication number | Publication date |
---|---|
CA2163017A1 (en) | 1995-01-05 |
JP3049259B2 (en) | 2000-06-05 |
DE69420842D1 (en) | 1999-10-28 |
EP0705473B1 (en) | 1999-09-22 |
US5515475A (en) | 1996-05-07 |
JPH08506430A (en) | 1996-07-09 |
EP0705473A1 (en) | 1996-04-10 |
DE69420842T2 (en) | 2000-02-24 |
CA2163017C (en) | 2000-01-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US5515475A (en) | Speech recognition method using a two-pass search | |
US5502791A (en) | Speech recognition by concatenating fenonic allophone hidden Markov models in parallel among subwords | |
US4817156A (en) | Rapidly training a speech recognizer to a subsequent speaker given training data of a reference speaker | |
US5072452A (en) | Automatic determination of labels and Markov word models in a speech recognition system | |
EP0314908B1 (en) | Automatic determination of labels and markov word models in a speech recognition system | |
US4819271A (en) | Constructing Markov model word baseforms from multiple utterances by concatenating model sequences for word segments | |
US5729656A (en) | Reduction of search space in speech recognition using phone boundaries and phone ranking | |
US4827521A (en) | Training of markov models used in a speech recognition system | |
US6178401B1 (en) | Method for reducing search complexity in a speech recognition system | |
EP0241768B1 (en) | Synthesizing word baseforms used in speech recognition | |
US5680509A (en) | Method and apparatus for estimating phone class probabilities a-posteriori using a decision tree | |
KR19990014292A (en) | Word Counting Methods and Procedures in Continuous Speech Recognition Useful for Early Termination of Reliable Pants- Causal Speech Detection | |
US6253178B1 (en) | Search and rescoring method for a speech recognition system | |
Schwartz et al. | Efficient, high-performance algorithms for n-best search | |
US5956676A (en) | Pattern adapting apparatus using minimum description length criterion in pattern recognition processing and speech recognition system | |
US5293451A (en) | Method and apparatus for generating models of spoken words based on a small number of utterances | |
Roucos et al. | A stochastic segment model for phoneme-based continuous speech recognition | |
JP2982689B2 (en) | Standard pattern creation method using information criterion | |
JPH07104780A (en) | Continuous voice recognizing method for unspecified number of people | |
US7818172B2 (en) | Voice recognition method and system based on the contexual modeling of voice units | |
JP3873418B2 (en) | Voice spotting device | |
EP0540328A2 (en) | Voice recognition | |
JPH1078793A (en) | Voice recognition device | |
Holter et al. | Combined Optimisation of Baseforms and Subword Models for an Hmm Based Speech Recogniser. | |
JPH07104783A (en) | Voice recognition device and method for generating standard pattern of recognition basic unit |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): CA JP |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2163017 Country of ref document: CA |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1994916113 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 1994916113 Country of ref document: EP |
|
WWG | Wipo information: grant in national office |
Ref document number: 1994916113 Country of ref document: EP |