Publication number: US 20040176946 A1
Publication type: Application
Application number: US 10/685,566
Publication date: 9 Sep 2004
Filing date: 16 Oct 2003
Priority date: 17 Oct 2002
Also published as: US7292977, US7389229, US7424427, US20040083090, US20040083104, US20040138894, US20040163034, US20040172250, US20040204939, US20040230432, US20050038649
Inventors: Jayadev Billa, Francis Kubala
Original Assignee: Jayadev Billa, Francis G. Kubala
Pronunciation symbols based on the orthographic lexicon of a language
US 20040176946 A1
Abstract
A dictionary creation component [316] converts the normal orthographic written representation of a word into a sequence of symbols that relate to the pronunciation of the word. The symbols may be used to train conventional models for speech recognition. The symbols are not phonemes and do not need to be defined by a speech expert. The symbols are created automatically by the dictionary creation component based on the written representation of the word.
Images (8)
Claims (30)
What is claimed:
1. A method for specifying a pronunciation of a word comprising:
receiving a written version of the word defined by a series of characters;
separating the written version of the word into the series of characters; and
generating symbols that define a pronunciation of the word based solely on the series of characters.
2. The method of claim 1, wherein receiving a written version of the word includes:
receiving the written version of the word from a user.
3. The method of claim 1, wherein receiving a written version of the word includes:
receiving the written version of the word from a program that automatically scans a network.
4. The method of claim 1, wherein the generated symbols have a one-to-one correspondence with the series of characters.
5. The method of claim 1, wherein the generated symbols correspond to predetermined character groupings from the series of characters.
6. The method of claim 5, wherein the predetermined character groupings are determined based on a statistical analysis of a language.
7. The method of claim 6, wherein the statistical analysis is based on frequency of occurrence of the words in the language.
8. The method of claim 1, further comprising:
classifying the word into one of a predetermined plurality of classifications; and
generating the symbols based on the classification of the word.
9. The method of claim 8, wherein the classifications are based on word affixes.
10. A speech recognition system comprising:
speech recognition models configured to convert audio containing speech into a transcription of the speech;
a system dictionary used to train the speech recognition models by providing symbols that define pronunciations of words to the speech recognition models; and
a dictionary creation component configured to generate the symbols for the system dictionary, the symbols being based on written characters of the words.
11. The system of claim 10, wherein the dictionary creation component receives the words from a program that automatically scans a network for the words.
12. The system of claim 10, wherein the generated symbols have a one-to-one correspondence with a sequence of the written characters of the words.
13. The system of claim 10, wherein the generated symbols correspond to predetermined character groupings in a sequence of the written characters of the words.
14. The system of claim 13, wherein the predetermined character groupings are determined based on a statistical analysis of a language.
15. The system of claim 14, wherein the statistical analysis is based on frequency of occurrence of the words in the language.
16. The system of claim 10, wherein the dictionary creation component classifies each of the words into one of a predetermined plurality of classifications and generates the symbols based on the classifications.
17. A method comprising:
configuring a dictionary creation component to generate symbols that represent pronunciations of words in a target language, the symbols being generated based solely on written representations of the words and the configuring being performed based on the target language;
providing the dictionary creation component with written words; and
receiving the symbols that represent pronunciations of the written words from the dictionary creation component.
18. The method of claim 17, wherein the generated symbols have a one-to-one correspondence with a series of characters that define the written representations of the words.
19. The method of claim 17, wherein the generated symbols correspond to predetermined character groupings from a series of characters that define the written representations of the words.
20. The method of claim 19, wherein the predetermined character groupings are determined based on a statistical analysis of the target language.
21. The method of claim 20, wherein the statistical analysis is based on frequency of occurrence of the words in the target language.
22. The method of claim 17, further comprising:
classifying the words into one of a predetermined plurality of classifications; and
generating the symbols based on the classifications of the words.
23. The method of claim 22, wherein the classifications are based on word affixes.
24. A device comprising:
means for receiving a written version of a word defined by a series of characters;
means for separating the written version of the word into the series of characters; and
means for generating symbols that define a pronunciation of the word based on the series of characters.
25. The device of claim 24, wherein the generated symbols have a one-to-one correspondence with the series of characters.
26. The device of claim 24, wherein the generated symbols correspond to predetermined character groupings from the series of characters.
27. The device of claim 26, wherein the predetermined character groupings are determined based on a statistical analysis of a language.
28. The device of claim 27, wherein the statistical analysis is based on frequency of occurrence of the words in the language.
29. The device of claim 24, further comprising:
means for classifying the word into one of a predetermined plurality of classifications; and
means for generating the symbols based on the classification of the word.
30. A computer-readable medium containing programming instructions for execution by a processor, the computer-readable medium comprising:
instructions for receiving a written version of a word defined by a series of characters;
instructions for separating the written version of the word into the series of characters; and
instructions for generating symbols that define a pronunciation of the word based solely on the series of characters.
Description
    RELATED APPLICATIONS
  • [0001]
    This application claims priority under 35 U.S.C. 119 based on U.S. Provisional Application No. 60/419,214, filed Oct. 17, 2002, the disclosure of which is incorporated herein by reference.
  • GOVERNMENT CONTRACT
  • [0002] The U.S. Government has a paid-up license in this invention and the right in limited circumstances to require the patent owner to license others on reasonable terms as provided for by the terms of Contract No. N66001-00-C-8008 awarded by DARPA.
  • BACKGROUND OF THE INVENTION
  • [0003]
    A. Field of the Invention
  • [0004]
    The present invention relates generally to speech recognition and, more particularly, to the creation of system dictionaries for speech recognition systems.
  • [0005]
    B. Description of Related Art
  • [0006]
    Speech has not traditionally been valued as an archival information source. As effective as the spoken word is for communicating, archiving spoken segments in a useful and easily retrievable manner has long been a difficult proposition. Although the act of recording audio is not difficult, automatically transcribing and indexing speech in an intelligent and useful manner can be difficult.
  • [0007]
    Automatic transcription systems are generally based on language and acoustic models. The models are trained on a speech signal and on a corresponding transcription of the speech. The models “learn” how the speech signal corresponds to the transcription. Conventional models are frequently implemented based on Hidden Markov Models (HMMs).
  • [0008]
    FIG. 1 is a diagram illustrating a conventional speech recognition system. A content transcription component 102 receives an input audio stream. The content transcription component 102 converts speech in the input audio stream into text based on language and acoustic model(s) 101. The model(s) 101 are pre-trained based on a training audio stream that is expected to be similar to the run-time version of the input audio stream.
  • [0009]
    FIG. 2 is a diagram illustrating training of models 101 in additional detail. When training, models 101 receive the input audio stream 210 and a corresponding transcription 211 of the input audio stream. The transcription may have been meticulously generated by a human based on the input audio stream 210. Transcription 211 may be converted into a stream of phonemes 213 by system dictionary 212. System dictionary 212 includes information regarding the relationship between the written orthographic representation of a word and the phonemes that correspond to the word. A phoneme is generally defined as the smallest acoustic event that distinguishes one word from another.
  • [0010]
    During training, models 101 learn associations between the audio stream 210 and the phoneme stream 213. During run-time operation, models 101 may then generate phonemes for run-time audio stream 110, including boundary indications between phonemes that correspond to different words. Content transcription component 102 may use a phoneme dictionary to convert the generated phonemes into a conventional written transcription. In this manner, the run-time transcription is generated.
  • [0011]
    One disadvantage of the speech recognition system described above is that the system requires a phoneme-based system dictionary 212 to train models 101. When a user of the system wishes to add new words to the system, the user must update system dictionary 212 to include the new words and the phonemes corresponding to the new words. Generating the correct phonemes for any given word, however, is not a trivial task. In fact, this job is generally performed by a person with specialized training in this area (i.e., a speech expert). This can be a significant problem for speech recognition systems that are deployed in the field. If the user of the system is not a speech expert, adding words to the system can be a difficult proposition.
  • [0012]
    Requiring a phoneme dictionary can also make it difficult to extend the speech recognition system to additional languages. In particular, for each new language, speech expert(s) must undertake the work-intensive task of generating a new phoneme dictionary for the language.
  • [0013]
    Accordingly, it would be desirable to simplify the operation of speech recognition systems such that the systems are not dependent on manually created phoneme dictionaries.
  • SUMMARY OF THE INVENTION
  • [0014]
    Systems and methods consistent with the present invention include speech recognition systems that use a system dictionary that discards the linguistic origin of phonemes and instead uses a pronunciation model based on the normal orthographic written form of the word. Entries in the system dictionary, consistent with the present invention, may be created in an automated manner.
  • [0015]
    One aspect consistent with the invention is directed to a method for specifying a pronunciation of a word. The method includes receiving a written version of the word defined by a series of characters and separating the written version of the word into the series of characters. The method further includes generating symbols that define a pronunciation of the word based solely on the series of characters.
  • [0016]
    A second aspect consistent with the invention is directed to a speech recognition system. The system includes speech recognition models that convert audio containing speech into a transcription of the speech. A system dictionary trains the speech recognition models by providing symbols that define pronunciations of words to the speech recognition models. A dictionary creation component generates the symbols for the system dictionary, where the symbols are based on written characters of the words.
  • [0017]
    A third aspect consistent with the invention is directed to a method. The method includes configuring a dictionary creation component to generate symbols that represent pronunciations of words in a target language. The symbols are generated based solely on written representations of the words and configuration is performed based on the target language. The method also includes providing the dictionary creation component with written words and receiving the symbols from the dictionary creation component.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • [0018]
    The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate the invention and, together with the description, explain the invention. In the drawings,
  • [0019]
    FIG. 1 is a diagram illustrating a conventional speech recognition system;
  • [0020]
    FIG. 2 is a diagram illustrating, in additional detail, training of the models in the speech recognition system of FIG. 1;
  • [0021]
    FIG. 3 is a diagram illustrating an exemplary system in which concepts consistent with the invention may be implemented;
  • [0022]
    FIG. 4 is a diagram illustrating training of speech recognition models consistent with the present invention;
  • [0023]
    FIG. 5 is a flow chart illustrating operation of a dictionary creation component consistent with an aspect of the invention;
  • [0024]
    FIG. 6 is a flow chart illustrating operation of a dictionary creation component consistent with another aspect of the invention; and
  • [0025]
    FIG. 7 is a flow chart illustrating operation of a dictionary creation component consistent with yet another aspect of the invention.
  • DETAILED DESCRIPTION
  • [0026]
    The following detailed description of the invention refers to the accompanying drawings. The same reference numbers may be used in different drawings to identify the same or similar elements. Also, the following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims and equivalents of the claim limitations.
  • [0027]
    Systems and methods consistent with the present invention create system dictionary entries that automatically define word pronunciations. The specification of the pronunciation of a word is based on the normal orthographic written version of the word. Thus, based on only the written version of a word, systems and methods consistent with the present invention specify a pronunciation for the word. These pronunciations can be effectively used by speech recognition systems.
  • SYSTEM OVERVIEW
  • [0028]
    Speech recognition, as described herein, may be performed by one or more processing devices or networks of processing devices. FIG. 3 is a diagram illustrating an exemplary system 300 in which concepts consistent with the invention may be implemented. System 300 includes a computing device 301 that has a computer-readable medium 309, such as random access memory, coupled to a processor 308. Computing device 301 may also include a number of additional external or internal devices. An external input device 320 and an external output device 321 are shown in FIG. 3. Input device 320 may include, without limitation, a mouse, a CD-ROM drive, or a keyboard. Output device 321 may include, without limitation, a display or an audio output device, such as a speaker.
  • [0029]
    In general, computing device 301 may be any type of computing platform, and may be connected to a network 302. Computing device 301 is exemplary only. Concepts consistent with the present invention can be implemented on any computing device, whether or not connected to a network.
  • [0030]
    Processor 308 executes program instructions stored in memory 309. Processor 308 can be any of a number of well-known computer processors, such as processors from Intel Corporation, of Santa Clara, Calif.
  • [0031]
    Memory 309 may contain application programs and data. In particular, memory 309 may include a system dictionary 315 and a dictionary creation component 316. System dictionary 315 may be used in training models for speech recognition in a manner similar to system dictionary 212 (FIG. 2). Entries in system dictionary 315 may be generated automatically by dictionary creation component 316 based on the written version of the word. This is in contrast to a conventional system dictionary, such as system dictionary 212, in which each entry is defined as a series of phonemes derived from a human expert.
  • System Operation
  • [0032]
    FIG. 4 is a diagram illustrating training of speech recognition models 401 consistent with the present invention. Models 401 may be implemented in a manner similar to models 101. Models 401 may be trained based on an input audio stream 410 and an input symbol stream 413. Symbol stream 413 may include a phoneme-like representation of the words in audio stream 410 from system dictionary 315. System dictionary 315 defines the written version of words as a sequence of symbols that relate to the pronunciation of the words. The symbols may be created automatically by dictionary creation component 316. From the viewpoint of models 401, the symbols in system dictionary 315 are treated as if they were phonemes. In actuality, however, the symbols are not phonemes and do not need to be defined by a speech expert.
  • [0033]
    Models 401 may be based on HMMs. Models 401 may include acoustic models and language models. The acoustic models may describe the time-varying evolution of feature vectors for each symbol in symbol stream 413. The acoustic models may employ continuous HMMs to model each of the symbols in various phonetic contexts.
  • [0034]
    The language models may include n-gram language models, where the probability of each word is a function of the previous word (for a bi-gram language model) or the previous two words (for a tri-gram language model). Typically, the higher the order of the language model, the higher the recognition accuracy, at the cost of slower recognition speed.
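The bi-gram probabilities described above can be sketched with maximum-likelihood estimates. This is an illustrative toy, not the patent's implementation; the corpus, the `<s>` sentence-start token, and the function names are assumptions for the example.

```python
from collections import Counter

def train_bigram_lm(sentences):
    """Maximum-likelihood bigram probabilities P(word | previous word)."""
    context_counts, bigram_counts = Counter(), Counter()
    for words in sentences:
        padded = ["<s>"] + words  # sentence-start token as the first context
        for prev, curr in zip(padded, words):
            context_counts[prev] += 1
            bigram_counts[(prev, curr)] += 1
    # Divide each bigram count by the count of its context word
    return {pair: n / context_counts[pair[0]] for pair, n in bigram_counts.items()}

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
lm = train_bigram_lm(corpus)
# In this toy corpus, "the" always follows a sentence start,
# and "cat" follows "the" in half of its occurrences.
```

A tri-gram model would condition on the previous two words instead, which is why higher-order models need more training data and run more slowly.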
  • Dictionary Creation Component
  • [0035]
    FIG. 5 is a flow chart illustrating operation of dictionary creation component 316 consistent with an aspect of the invention. The acts shown in FIG. 5 are particularly appropriate for languages in which pronunciations are “regular” in the sense that each written character tends to correspond to a sound. Arabic, Italian, and Spanish are examples of regular languages.
  • [0036]
    To begin, dictionary creation component 316 receives the written version of the words that are to be entered into system dictionary 315 (Act 501). The written version of the words may, for example, be manually entered by a user, or the words may be obtained through an automated process, such as a web-crawling program that scans documents on a network such as the Internet.
  • [0037]
    Symbols for system dictionary 315 are based directly on the written characters. Thus, in this implementation, dictionary creation component 316 separates the received word into its constituent characters and writes a corresponding entry to system dictionary 315 (Acts 502 and 503). For example, the Spanish word “ducha” (shower) would be processed by dictionary creation component 316 as five sequential symbols, such as the symbols D-U-C-H-A. Similarly, the Spanish word “esponja” (sponge) would correspond to seven sequential symbols, such as the symbols E-S-P-O-N-J-A.
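For a regular language, the per-character scheme of Acts 502 and 503 reduces to a one-line conversion. The sketch below is illustrative; the function name and upper-case symbol convention are assumptions, not part of the patent.

```python
def word_to_symbols(word):
    """Per-character symbol generation (the FIG. 5 scheme for 'regular'
    languages): each written character becomes one pronunciation symbol."""
    return [ch.upper() for ch in word]

print(word_to_symbols("ducha"))    # ['D', 'U', 'C', 'H', 'A']
print(word_to_symbols("esponja"))  # ['E', 'S', 'P', 'O', 'N', 'J', 'A']
```

No speech expert is needed: the symbol inventory is simply the alphabet of the language.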
  • [0038]
    FIG. 6 is a flow chart illustrating operation of dictionary creation component 316 consistent with another aspect of the invention. Some languages, such as English, are not regular in the sense that a written character, depending on the surrounding characters, may correspond to more than one sound.
  • [0039]
    Dictionary creation component 316 begins by receiving the written version of the words that are to be entered into system dictionary 315 (Act 601). This act is identical to Act 501 of FIG. 5.
  • [0040]
    Symbols for system dictionary 315 are based on the written characters or on groupings of the written characters. Dictionary creation component 316 segments the input word into symbols that may represent a single written character or a grouping of characters (Act 602). These symbols are then entered into system dictionary 315 (Act 603).
  • [0041]
    As an example of a character grouping, consider the English word “wrought.” This word may be processed by dictionary creation component 316 as five sequential symbols, such as the symbols W-R-O-U-GHT. The characters “W”, “R”, “O”, and “U” all correspond to individual symbols, while the three characters “GHT” together correspond to a single symbol. As another example, consider the English word “tying.” This word may be segmented into the three symbols T-Y-ING, where the three characters “ING” are considered to be a single symbol.
  • [0042]
    The determination of which character groupings are considered to be a single symbol may be determined through a statistical analysis of the written words of the language. In one implementation, the statistical analysis includes looking at character groupings of two characters (pairs) and three characters within a standard dictionary. The most frequently occurring two and three character groupings within the dictionary are determined to correspond to single symbols. The frequency threshold for when a grouping is considered to be a “most frequently occurring” grouping may be manually determined by a speech expert based on the observed effectiveness of models 401 when trained using various thresholds.
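The two steps above — finding frequent two- and three-character groupings, then segmenting words against them — can be sketched as follows. This is a minimal illustration under stated assumptions: the greedy longest-match segmentation strategy, the function names, and the manually supplied threshold are choices made for the example, not details from the patent.

```python
from collections import Counter

def frequent_groupings(lexicon, threshold):
    """Count 2- and 3-character groupings across a word list; groupings at
    or above the (expert-tuned) frequency threshold become single symbols."""
    counts = Counter()
    for word in lexicon:
        for n in (2, 3):
            for i in range(len(word) - n + 1):
                counts[word[i:i + n]] += 1
    return {g for g, c in counts.items() if c >= threshold}

def segment(word, groupings):
    """Greedy longest-match segmentation of a word into symbols."""
    symbols, i = [], 0
    while i < len(word):
        for n in (3, 2, 1):  # prefer the longest grouping at each position
            chunk = word[i:i + n]
            if n == 1 or chunk in groupings:
                symbols.append(chunk.upper())
                i += n
                break
    return symbols

print(segment("wrought", {"ght"}))  # ['W', 'R', 'O', 'U', 'GHT']
print(segment("tying", {"ing"}))    # ['T', 'Y', 'ING']
```

In practice the threshold would be tuned by observing recognition accuracy of models 401 trained with different symbol inventories, as the paragraph above notes.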
  • [0043]
    FIG. 7 is a flow chart illustrating operation of dictionary creation component 316 consistent with yet another aspect of the invention. As with the operation of dictionary creation component 316 pursuant to FIGS. 5 and 6, the method of FIG. 7 begins when the written version of a word is input to dictionary creation component 316 (Act 701). Dictionary creation component 316 then determines to which of a number of predetermined word classes the word belongs (Act 702). The word classes may be predefined by a speech expert or may be predefined based on a statistical analysis of the lexicon. For example, words whose origins derive from Old English may be classified in an “Old English” classification. As another example, words with a certain suffix or prefix may be placed into another classification.
  • [0044]
    Dictionary creation component 316 converts each word into a series of pronunciation symbols based on the word classification. Each classification may be assigned to one of a number of conversion methods. For example, as shown in FIG. 7, depending on the classification, the word may be converted into symbols in which each symbol directly corresponds to a character of the word (Acts 703 and 704, identical to Acts 502 and 503). Alternatively, depending on the classification, dictionary creation component 316 may segment the input word into symbols that may represent a single written character or a grouping of characters (Acts 705 and 706, identical to Acts 602 and 603).
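The classification-then-dispatch flow of FIG. 7 can be sketched as below. The suffix rule, class names, and three-character tail are hypothetical choices made so the example is self-contained; a real deployment would use classes defined by a speech expert or by lexicon statistics.

```python
def classify(word):
    """Hypothetical two-way classifier: words ending in a known suffix go
    to a 'suffix' class; everything else is treated as 'regular'."""
    return "suffix" if word.endswith(("ing", "ght")) else "regular"

def to_symbols(word):
    """Dispatch on the word class, as in FIG. 7: per-character conversion
    for 'regular' words, grouping-based conversion for the 'suffix' class."""
    if classify(word) == "regular":
        return [ch.upper() for ch in word]     # FIG. 5 path (Acts 703-704)
    stem, tail = word[:-3], word[-3:]          # keep the tail as one symbol
    return [ch.upper() for ch in stem] + [tail.upper()]  # FIG. 6 path

print(to_symbols("ducha"))  # ['D', 'U', 'C', 'H', 'A']
print(to_symbols("tying"))  # ['T', 'Y', 'ING']
```

Each classification simply selects one of the earlier conversion methods, so new classes and methods can be added without retraining the classifier's downstream consumers.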
  • Conclusion
  • [0045]
    As described above, dictionary creation component 316 converts the normal orthographic written representation of a word into a sequence of symbols that relate to the pronunciation of the word. The symbols may be used to train conventional models for speech recognition. Depending on the language, dictionary creation component 316 may operate according to a number of conversion techniques, such as those shown in FIGS. 5-7. A speech expert may initially configure dictionary creation component 316 for each particular language. Once configured, dictionary creation component 316 may automatically generate the symbols of a word for system dictionary 315 based on only the normal written representation of the word. Accordingly, users that are not trained speech experts can easily update system dictionary 315.
  • [0046]
    The foregoing description of preferred embodiments of the invention provides illustration and description, but is not intended to be exhaustive or to limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention. For example, while series of acts have been presented with respect to FIGS. 5-7, the order of the acts may be different in other implementations consistent with the present invention. Additionally, non-dependent acts may be implemented in parallel.
  • [0047]
    Certain portions of the invention have been described as software that performs one or more functions. The software may more generally be implemented as any type of logic. This logic may include hardware, such as an application-specific integrated circuit or a field-programmable gate array, software, or a combination of hardware and software.
  • [0048]
    No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article “a” is intended to include one or more items. Where only one item is intended, the term “one” or similar language is used.
  • [0049]
    The scope of the invention is defined by the claims and their equivalents.
Patent Citations
Cited Patent | Filing date | Publication date | Applicant | Title
US6024571 * | 24 Apr 1997 | 15 Feb 2000 | Renegar, Janet Elaine | Foreign language communication system/device and learning aid
US6714911 * | 15 Nov 2001 | 30 Mar 2004 | Harcourt Assessment, Inc. | Speech transcription and analysis system and method
US6999918 * | 20 Sep 2002 | 14 Feb 2006 | Motorola, Inc. | Method and apparatus to facilitate correlating symbols to sounds
Referenced by
Citing Patent | Filing date | Publication date | Applicant | Title
US7272558 | 23 Mar 2007 | 18 Sep 2007 | Coveo Solutions Inc. | Speech recognition training method for audio and video file indexing on a search engine
US8301446 * | 30 Mar 2009 | 30 Oct 2012 | Adacel Systems, Inc. | System and method for training an acoustic model with reduced feature space variation
US20100250240 * | 30 Mar 2009 | 30 Sep 2010 | Adacel Systems, Inc. | System and method for training an acoustic model with reduced feature space variation
Classifications
U.S. Classification: 704/10
International Classification: G06F17/00, G06F17/20, G10L15/00, G10L11/00, G10L13/08, G10L15/14, G06F17/21
Cooperative Classification: G10L15/28, G10L15/32
Legal Events
Date | Code | Event Description
10 Oct 2003 | AS | Assignment
Owner name: BBNT SOLUTIONS LLC, TEXAS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BILLA, JAYADEV;KUBALA, FRANCIS;REEL/FRAME:014612/0832;SIGNING DATES FROM 20031003 TO 20031007
12 May 2004 | AS | Assignment
Owner name: FLEET NATIONAL BANK, AS AGENT, MASSACHUSETTS
Free format text: PATENT & TRADEMARK SECURITY AGREEMENT;ASSIGNOR:BBNT SOLUTIONS LLC;REEL/FRAME:014624/0196
Effective date: 20040326
4 Oct 2004 | AS | Assignment
Owner name: BBNT SOLUTIONS LLC, MASSACHUSETTS
Free format text: CORRECTIVE DOCUMENT PREVIOUSLY RECORDED AT REEL 014612 FRAME 0832. (ASSIGNMENT OF ASSIGNOR S INTEREST);ASSIGNORS:BILLA, JAYADEV;KUBALA, FRANCIS G.;REEL/FRAME:015847/0061;SIGNING DATES FROM 20031003 TO 20031007
2 Mar 2006 | AS | Assignment
Owner name: BBN TECHNOLOGIES CORP., MASSACHUSETTS
Free format text: MERGER;ASSIGNOR:BBNT SOLUTIONS LLC;REEL/FRAME:017274/0318
Effective date: 20060103
27 Oct 2009 | AS | Assignment
Owner name: BBN TECHNOLOGIES CORP. (AS SUCCESSOR BY MERGER TO
Free format text: RELEASE OF SECURITY INTEREST;ASSIGNOR:BANK OF AMERICA, N.A. (SUCCESSOR BY MERGER TO FLEET NATIONAL BANK);REEL/FRAME:023427/0436
Effective date: 20091026