US6401065B1 - Intelligent keyboard interface with use of human language processing - Google Patents

Intelligent keyboard interface with use of human language processing

Info

Publication number
US6401065B1
US6401065B1
Authority
US
United States
Prior art keywords
keyboard
input
voice
arcs
states
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US09/335,345
Inventor
Dimitri Kanevsky
Stephane Maes
Clifford A. Pickover
Alexander Zlatsin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nuance Communications Inc
Original Assignee
International Business Machines Corp
Application filed by International Business Machines Corp
Priority to US09/335,345
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION. Assignors: PICKOVER, CLIFFORD A.; KANEVSKY, DIMITRI; MAES, STEPHANE; ZLATSIN, ALEXANDER
Application granted
Publication of US6401065B1
Assigned to NUANCE COMMUNICATIONS, INC. Assignors: INTERNATIONAL BUSINESS MACHINES CORPORATION

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02 - Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023 - Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0238 - Programmable keyboards
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/26 - Speech to text systems


Abstract

An intelligent, user-friendly keyboard interface that is easily adaptable to a wide variety of functions and features, and also to reduced-size portable computers. Speech recognition and semantic processing for controlling and interpreting multiple symbols are used in conjunction with programmable switches with embedded LCD displays. Hidden Markov models are employed to interpret a combination of voice and keyboard input.

Description

TECHNICAL FIELD OF THE INVENTION
The present invention relates to computer keyboard technology and more particularly to an intelligent keyboard interface having human language processing capabilities.
BACKGROUND OF THE INVENTION
A typical computer keyboard includes alphanumeric and function code key pads which are labeled to indicate the functionality of each key. Typically, an application such as a word processor, spreadsheet program, or web browser maps a function provided by that application to a key such that when a user presses the key, that function is initiated by the application. However, because most applications in use at present provide numerous functionalities which exceed the number of keys available on the existing keyboard, those applications typically designate different functionalities by combining key sequences. The user must then memorize all the key sequences or refer back to the manual.
The most common solution employed so far is the use of templates which are placed on top of the keyboard and on which descriptions of the functions are printed. However, this solution, although inexpensive, is limited in its usefulness, since such templates obviously cannot represent all the necessary multiple-key functions for the many commonly used applications described herein. Another existing solution using electronic templates is described in U.S. Pat. No. 5,181,029, issued on Jan. 19, 1993 to Jason S. Kim and entitled “Electronic Keyboard Template”. However, the electronic templates not only require an increase in the size of the keyboard, but are also restricted to representing the meaning of the keys located in the upper part of the keyboard. For example, the electronic templates cannot explain all the keys of the keyboard, for instance, when editing in the emacs document editor.
Yet another solution has been to use pop-up menus and/or icons in place of multiple key functions. This solution, however, is extremely cumbersome and inconvenient because a user must frequently switch between the keyboard and a mouse or a digitizer in order to operate the pop-up menus. Moreover, pop-up menus and icons occupy a large portion of the space on a screen or display terminal, severely restricting the space available to display more important information such as the text being edited and images, and further require the user to constantly switch between various shells and icons.
Accordingly, the presently available devices do not provide feasible solutions to the ever-shrinking size of computers and the current trend towards portable equipment such as laptops and notebooks, which do not leave sufficient space for both a screen and a keyboard. Therefore, it is highly desirable to provide a more flexible mechanism for enabling different designations or assignments of functions on the keyboard at different times, according to the functionality being employed, without the problems associated with the presently available techniques described hereinabove.
SUMMARY OF THE INVENTION
The present invention is directed to a novel intelligent keyboard which employs automatic speech recognition (ASR) and semantic language processor (SLP) techniques to provide a user-friendly interface for various keyboard uses. Equipped with an ASR module, the intelligent keyboard of the present invention enables the individual keypads on the keyboard to display their labels in different international languages. For example, the keyboard will include a capability to recognize speech such as “get Russian alphabet” and respond by displaying Russian characters on the key pads. An instruction from a user such as “go back to English” will drive the keyboard to display the English characters again.
Furthermore, the displays need not be limited to a different set of languages being displayed at different times. A different set of fonts and/or styles in the same language may be displayed according to a user's instructions. For instance, a user may instruct the intelligent keyboard of the present invention to display cursive style. The cursive style will then appear as labels on each keypad.
Further yet, with the intelligent keyboard of the present invention, a user may request different applications by voice. For example, the user may vocalize “activate Internet access”, and in response, the intelligent keyboard of the present invention may display symbols on the keys which are used for accessing various Internet functions, including e-mail lists, calls to specific customers, and the like.
Additionally, a user may activate a different application such as a word processor or a spreadsheet program and direct the keyboard to display function key labels corresponding to the application by enunciating “activate WordPerfect.” The intelligent keyboard of the present invention then responds by displaying the function key labels used in the WordPerfect word processor application. ASR may also be employed to display different icon images used with applications, such as database applications, directly on the key pads.
In yet another embodiment of the present invention, the keyboard keypad set may be transformed into a telephone dial pad when the computer and telephone are integrated. For example, a user may specify “telephone” to display a telephone dialer on the keypad. With such capabilities, a user may directly dial a phone number or access any phone number previously stored. The display feature may also include conference call function key labels.
When a user needs to correct errors from the speech recognizer, the user will have the set of alternative options displayed directly on the keys of the keyboard. The user can easily select the correct option/word either by pressing the corresponding key or by pronouncing the number of the line where the correct word is displayed. This solution is especially advantageous with reduced-size laptops, and particularly with Personal Digital Assistants, where the small screens cannot simultaneously represent a speech recognition output and a window with alternative words.
Yet further, the present invention combines the use of SLP with an intelligent keyboard to enable interpretation and processing of multiple key function actions. For example, in applications where one or more keys display different icons, the meaning of a particular icon may depend on the other icons that are activated via their respective key buttons at the same time. If one icon represents the picture of a telephone, and other icons represent list, home, e-mail, or personal computer pictures, then pressing only a single key, for example the icon representing the telephone, may activate a telephone dialer and related functionality. Simultaneously pressing two keys with two icons, e.g., telephone and list, causes the display of the complete list of telephone numbers. Pressing the keys displaying the icons telephone and e-mail, e.g., causes the display of the list of all e-mail addresses. Pressing the keys displaying the icons telephone and home activates a call to the user's home, for example, via an ordinary telephone. Pressing the keys with labels such as telephone, home, and personal computer (PC) in sequence connects the user's subnotebook remotely to his home PC via a telephone line. In sum, pressing the icon representing the telephone is interpreted by the SLP of the present invention as a specific adjective or verb depending on the other keys which are pressed simultaneously.
The present invention also enables the combined use of ASR and SLP, which allows the user optionally to speak the commands, to enter the commands by pressing multiple keys, or to use the two options simultaneously. An example where the two options may be used simultaneously is a situation in which a user enunciates the word “save” while pressing a key that represents a file that should be saved. Such use makes saving files with long names especially convenient. Another application for the combined use of ASR and SLP with an intelligent keyboard according to the present invention may be video games, where complex user interaction is required quickly. In these cases the SLP solves the complex task of recognizing the meaning of the user's combined voice-keyboard actions, i.e., whether the voice input should be displayed on a screen, e.g., typed text, or activated on the keyboard, e.g., as an icon, or interpreted as an action, e.g., “call home”.
The intelligent keyboard interface of the present invention may be used with various devices including workstations, desktops, laptops, notebooks, palmtops, webphones, digital cameras, TVs and recorder controllers.
Further features and advantages of the present invention as well as the structure and operation of various embodiments of the present invention are described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers indicate identical or functionally similar elements.
BRIEF DESCRIPTION OF THE DRAWINGS
Preferred embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings in which:
FIG. 1 illustrates the components of the present invention which may be integrated with a computer system;
FIG. 2 is a flow diagram illustrating an input processing algorithm of the present invention;
FIG. 3 is an example of an intelligent keyboard of the present invention which switches its LCD displays based on the user voice and key input;
FIG. 4 illustrates an example of voice splitter functionality of the present invention;
FIG. 5 is an example of a Hidden Markov model (HMM) employed in the present invention; and
FIG. 6 is another example of a hybrid HMM where the voice and key inputs are processed as two types of HMM.
DESCRIPTION OF THE PREFERRED EMBODIMENT OF THE INVENTION
The components of the device of the present invention generally reside within a computer system having a central processing unit or an equivalent thereto, including a workstation, desktop, laptop, notebook, or palmtop. FIG. 1 illustrates the components of the intelligent program module 100 of the present invention, which may be integrated with such computer systems. The system of the present invention can receive voice inputs and keypad inputs, either alone or in combination, as shown by arrows 134 and 132 respectively. The keyset 104 may be a mounted programmable keyset having embedded LCD displays 128, 130. Voice input may include enunciated speech such as “I want to call” as shown at 102. When the inputs are received by the intelligent program module 100 of the present invention, a synchronize module 106 synchronizes the voice and keypad input to determine whether the combination of the two inputs is meant to be interpreted together. The inputs are synchronized and determined to be related by measuring the time interval between the two inputs 132, 134. For example, if the time interval between the enunciation of the words “I want to call” 102 and the activation of the key stroke “home” 128 is within a short predetermined time period, the inputs are determined to be related and therefore are interpreted together as “I want to call home”.
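The synchronization test described above can be illustrated with a short sketch in Python (the threshold value and all names below are illustrative assumptions for the example, not taken from the patent):

    SYNC_WINDOW_SECONDS = 1.5  # illustrative; the patent only requires a "short predetermined time period"

    def are_related(voice_time: float, key_time: float) -> bool:
        # Inputs are interpreted together when their time interval is short.
        return abs(voice_time - key_time) <= SYNC_WINDOW_SECONDS

    def synchronize(voice_phrase: str, voice_time: float,
                    key_label: str, key_time: float) -> str:
        # Combine related voice and key inputs into one phrase.
        if are_related(voice_time, key_time):
            return f"{voice_phrase} {key_label}"   # e.g. "I want to call home"
        return voice_phrase                        # unrelated inputs are handled separately

    # "I want to call" spoken at t=10.0 s; "home" key pressed at t=10.8 s
    print(synchronize("I want to call", 10.0, "home", 10.8))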
The voice input 102 then passes to a voice interpreter 108, which activates an automatic speech recognition (ASR) module 116. Generally, the automatic speech recognition (ASR) module receives a voice input from the user and processes it to ascertain the meaning of the words. The ASR module may use any available ASR technique generally known in the field of automatic speech recognition processing. The interpreted voice input is then further processed by the voice splitter 110 if needed. The functions of the voice splitter 110 module in FIG. 1 will now be described with reference to FIG. 4. The voice splitter 110 module generally groups the interpreted voice input into separable actions. For example, if a voice input “phone PC” was received, the voice splitter 110 (FIG. 1) determines that the two words designate a single action 404, e.g., displaying a phone number pad on the PC so that the PC may be used as a telephone. In another example, if a voice input “phone home” was received, the voice splitter 110 (FIG. 1) would split the input into two actions: action 1, phone 402, and action 3, home 406.
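The grouping behavior of the voice splitter can be sketched as a lookup against a table of phrases known to form a single action; the table contents below are illustrative assumptions drawn from the two examples above:

    # Word pairs the splitter treats as ONE action rather than two.
    SINGLE_ACTION_PHRASES = {
        ("phone", "pc"): "display a phone number pad on the PC",
    }

    def split_actions(words: list[str]) -> list[str]:
        # Group the interpreted voice input into separable actions.
        key = tuple(w.lower() for w in words)
        if key in SINGLE_ACTION_PHRASES:
            return [SINGLE_ACTION_PHRASES[key]]  # "phone PC" -> one action
        return list(words)                       # "phone home" -> two actions

    print(split_actions(["phone", "PC"]))    # ['display a phone number pad on the PC']
    print(split_actions(["phone", "home"]))  # ['phone', 'home']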
Referring back to FIG. 1, the key input inserter 112 combines the voice and keypad input into one phrase, which is then interpreted by an interpreter 114 activating a semantic language processing (SLP) module 118. The SLP module 118 of the present invention interprets a phrase formed from the voice input 102 (via the ASR module 116) and the keyboard input 104. According to the SLP 118 interpretation, an action 120 is activated. For example, if a voice input “list” 102 was received with keypad “e-mail” 130, the action to be activated is determined to be listing e-mail addresses. At 124, table mapping, e.g., by table lookup, is performed to determine which e-mail addresses should be listed for this particular user. Then at 126, the contents of the LCD keyboard displays are changed to reflect the e-mail addresses, which are then shown on the LCD keyboard 104. Thus, the contents of the keyboard LCD displays change to new symbols and configurations depending on the ASR module 116 and SLP module 118 interpretation of the inputs 102, 104.
As a result of voice 134 and/or keypad input 132, an action may either produce a display on a display monitor or on a display device such as a liquid crystal display (LCD) 128, 130 embedded in the keyset. The keyset includes the ability to display varying symbols including pictures, letters, words, and/or icons on its keys. The value of the symbols on each key at each time depends on the user's actions expressed by voice and/or finger manipulations.
FIG. 3 illustrates the intelligent keyboard of the present invention, which changes its displays based on user input. When a phone list keypad 302 is pressed, the keyboard contents 304 change to display words relating to the keypad pressed, as shown at 306. For example, if the keypad which includes an LCD display of the phone and list icon 302 is pressed, portions of the keyboard LCD displays would list “w”, “o”, “r”, “k” 308, forming the word “work”, and “h”, “o”, “m”, “e” 310, forming the word “home”, etc. When any of the keys “w”, “o”, “r”, “k” are pressed, the keyboard LCD displays 306 would change to the actual phone number associated with the user's work place. The changing of LCD displays on the keyset may be achieved by transmitting signals to the LCD display control device to change its display content, a technique well known to persons skilled in the art.
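The display switching of FIG. 3 amounts to a table-driven relabeling of the keys; the following sketch shows the idea (the layout table and phone number are invented for the example):

    # Each recognized press selects a new set of key labels.
    KEY_LAYOUTS = {
        "phone+list": list("work") + list("home"),  # keys spell "work" and "home"
        "work":       list("5551234"),              # illustrative work phone number
    }

    def press(key: str, current_labels: list[str]) -> list[str]:
        # Return the new LCD labels after a keypad press.
        return KEY_LAYOUTS.get(key, current_labels)  # unchanged if no mapping exists

    labels = press("phone+list", [])
    print(labels)                 # ['w', 'o', 'r', 'k', 'h', 'o', 'm', 'e']
    print(press("work", labels))  # digits of the work phone number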
FIG. 2 is a flow diagram illustrating an input processing algorithm of the present invention. In the preferred embodiment of the present invention, keyboard input without any additional speech input is interpreted in inter-related steps which include segmentation 202, matching 204, and extension 206. These steps 202, 204, 206 may be performed, for example, in the synchronize module 106 shown in FIG. 1. In the segmentation step 202, the sequence of key actions that represents one “continuous segment” is detected by timing the delay between subsequent key presses. This time delay is usually very small and may be preset. The longest sequence of keys that satisfies the time delay requirements between key strokes, referred to as a “continuous segment”, is considered a possible command phrase.
In a matching step 204, the sequence of entered symbols is matched against a predefined set of stored pairs which relate a key sequence with a command. The longest “continuous segment” that matches some command from this set of pairs will be interpreted as the command that corresponds to the entered sequence of symbols.
In an extension step 206, if the given sequence of keys that represents the continuous segment does not match any command, then the next continuous segment is added to the current one and both these subsequent segments are matched to a command phrase. That is, segmented phrases are concatenated or appended in an attempt to form a matching command. This process continues until the command phrase is found or the limit of length for sequences of keys is reached.
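The three inter-related steps may be summarized in the sketch below; the delay threshold, the key-sequence length limit, and the command table are illustrative assumptions:

    INTRA_SEGMENT_DELAY = 0.3  # seconds; "usually very small and may be preset"
    MAX_KEYS = 8               # illustrative limit of length for key sequences
    COMMANDS = {("phone", "list"): "show telephone list"}  # stored (key sequence, command) pairs

    def segment(presses: list[tuple[str, float]]) -> list[list[str]]:
        # Segmentation 202: split timed key presses into continuous segments.
        segments: list[list[str]] = []
        current: list[str] = []
        last_t = None
        for symbol, t in presses:
            if last_t is not None and t - last_t > INTRA_SEGMENT_DELAY:
                segments.append(current)
                current = []
            current.append(symbol)
            last_t = t
        if current:
            segments.append(current)
        return segments

    def match_with_extension(segments: list[list[str]]) -> str | None:
        # Matching 204 and extension 206: append segments until a command
        # matches or the length limit is reached.
        candidate: list[str] = []
        for seg in segments:
            candidate += seg
            if len(candidate) > MAX_KEYS:
                return None
            if tuple(candidate) in COMMANDS:
                return COMMANDS[tuple(candidate)]
        return None

    print(match_with_extension(segment([("phone", 0.0), ("list", 0.1)])))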
In the preferred embodiment of the present invention, voice input without keyboard input is interpreted using conventional automatic speech recognition techniques known to persons skilled in the art. The keyboard then displays a set of commands that the user entered via voice. This allows for an increase in the number of commands that the user can use without memorizing specific key code sequences. Since automatic speech recognition may also be used for purposes other than changing a keyboard display, the present invention includes a method for determining whether a voice input should be interpreted as a command. For example, a voice input may be interpreted as a command if the words in the voice input are semantically relevant to some command phrases, which are specified by symbols that are currently displayed on the keyboard. For instance, if keys on the keyboard display the symbols telephone, list and e-mail, and the user enunciates the phrase, “Show me e-mail notebook”, the SLP of the present invention interprets the phrase as a command and not as input that should be typed into a file.
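One plausible realization of this command-versus-dictation decision is to test the decoded words against the symbols currently shown on the keys; the sketch below makes that simplifying assumption (a full SLP would match semantic classes rather than literal words):

    DISPLAYED_SYMBOLS = {"telephone", "list", "e-mail"}  # what the keys currently show

    def is_command(decoded_words: list[str]) -> bool:
        # Interpret the voice input as a command if it is semantically
        # relevant to the currently displayed symbols.
        return any(w.lower().strip(".,") in DISPLAYED_SYMBOLS for w in decoded_words)

    print(is_command("Show me e-mail notebook".split()))   # True  -> command
    print(is_command("Dear John thanks for the".split()))  # False -> dictation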
The method and system of the present invention interpret combined voice and keyboard input in a series of steps including segmentation 202, matching 204, and extension 206 as described hereinabove. A voice input which is decoded according to a known ASR technique and a keyboard-activated sequence of symbols are considered in the present invention as related if the time interval between the enunciation of the words and the activation of the key strokes is within a short predetermined time period. Then, if this sequence matches a predefined pair, e.g., “decoded class phrase” + key symbol sequence → command, the relevant command is executed. Here the “decoded class phrase” denotes the semantic class of the decoded phrase; e.g., “show a picture of a dog”, “display a picture of a dog”, and “show a dog” are in the same semantic class.
In an extension step, the sequence of key-entered symbols is extended as described hereinabove. If a combined decoded phrase-keyboard input cannot be extended to match a command, then the decoded input is directed to a monitor or an output file where the user's dictation is stored and only the keyboard's sequence of symbols is processed as a command. Other variations of segmentation, matching and extension are available.
After a voice-keyboard input is sent to the SLP, the configuration of symbols on the keyboard is changed and the user can enter new voice-keyboard input in accordance with the new configuration of symbols. For example, the user can say: “Show me . . . ” and then press the key that displays the symbol “Customers in NY”. The keyboard would display the list of customers in NY and a keyset for new possible actions, e.g., “orders”, “profile”, “calls”, “cancel” and “addresses”. The user can then speak the words, “What is order of”, and press the key that corresponds to a particular customer.
The present invention provides a predefined set of pairs for the key sequences and commands, and also for the voice input phrases + key sequences and commands. The user-friendliness of an interface largely depends on how multiple key actions are matched with commands. In the present invention, a set of pairs has the following grammatical structure: A = {(sequence of symbols, command)}. The set A is indexed by sequences of classes, which may include parts of speech. Each symbol from A can belong to several different classes; for example, the symbol TELEPHONE can correspond to any of three parts of speech: noun (e.g., “hang TELEPHONE”), adjective (e.g., “TELEPHONE list”), or verb (e.g., “CALL”). The sequence of symbols defines a sequence of classes (e.g., “SAVE TELEPHONE LIST” defines the sequence “VERB ADJECTIVE NOUN”).
Let C denote a set of allowed sequences of classes. For example, “NOUN NOUN” is not allowed, but “ADJECTIVE NOUN” or “VERB NOUN” is allowed. A sequence of symbols is called admissible if it defines a unique sequence of classes from C. If the sequence of symbols is admissible, it can be mapped to a formal expression in a formal language and further actions can be processed. For example, let the symbol LIST correspond only to the class “NOUN”. Then the sequence “TELEPHONE LIST” can correspond to only two class sequences, i.e., “ADJECTIVE NOUN” (telephone list) or “VERB NOUN” (call list). In the present invention, by employing the above described syntax, if there is no third key symbol specified, the computer would then display by default a telephone list of numbers. Otherwise, if two more symbols are entered, e.g., “PC” and “HUMAN LANGUAGE TECHNOLOGIES”, one can get the unique allowed sequence “VERB ADJECTIVE NOUN ADJECTIVE NOUN” that corresponds to the action: “send a note via e-mail to all members of the Human Language Technologies department” (‘send’ ‘electronic’ ‘note’ ‘HLT’ ‘department’).
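The admissibility test over C can be sketched as follows, using the TELEPHONE/LIST example above (the class tables are illustrative assumptions):

    from itertools import product

    # Each symbol may belong to several classes (parts of speech).
    SYMBOL_CLASSES = {
        "TELEPHONE": {"NOUN", "ADJECTIVE", "VERB"},
        "LIST": {"NOUN"},
        "SAVE": {"VERB"},
    }
    # C: the allowed sequences of classes.
    ALLOWED = {("ADJECTIVE", "NOUN"), ("VERB", "NOUN"),
               ("VERB", "ADJECTIVE", "NOUN")}

    def admissible(symbols: list[str]) -> tuple[str, ...] | None:
        # A symbol sequence is admissible if it defines a UNIQUE allowed
        # sequence of classes from C.
        readings = [seq for seq in product(*(SYMBOL_CLASSES[s] for s in symbols))
                    if seq in ALLOWED]
        return readings[0] if len(readings) == 1 else None

    print(admissible(["SAVE", "TELEPHONE", "LIST"]))  # ('VERB', 'ADJECTIVE', 'NOUN')
    print(admissible(["TELEPHONE", "LIST"]))          # None: two readings, so the
                                                      # system falls back to a default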
The construction of the admissible set C is provided with the help of semantic and grammar tools. Statistical analysis and classification of typical user actions are also possible.
In situations where a whole word is a mixture of two modality inputs, for example, a voice input and a key input, the present invention employs Hidden Markov models (HMM) to convert the input into an appropriate string of commands which signals an action. A label of a document or an item may include both phonemes and numbers, and some examples of the combined input were described hereinabove. A telephone number is one such example. Another example may be a document name such as “doc01.” In these instances it is expected that a user may enunciate the phonemes and type the rest of the numbers on the keyboard to convey a document name for further processing. In the preferred embodiment, the present invention interprets these combined inputs by employing several different Hidden Markov model (HMM) structures.
In the first approach, the Hidden Markov models can include as output both types of labels, i.e., voice and key inputs. In this approach, overlapped voice- and key-labeled segments are separated and the likelihood of each alternative is computed.
In the second approach, the Hidden Markov model (HMM) is a hybrid of two types of HMM: one to process voice labels and another to process key labels. These two types of HMM are connected with arcs that allow a path to switch between the two types of HMMs. In this approach, overlapped voice- and key-labeled segments have two time scales. The first time scale is used to process voice-labeled segments and the second time scale is used to process key-labeled segments. The two time scales are switched back and forth at the same time as the two types of HMM are switched. In both the first and second approaches, probabilities for key labels are estimated using the criteria that keys may be mistyped or dropped when words are typed, especially when key input is combined with voice input. Moreover, users often mistype words by typing a nearby key instead, e.g., “IBN” instead of “IBM”, or omit letters in a word, e.g., “CAL” instead of “Call”. In other instances, users may type abbreviated versions of words, e.g., “u” instead of “you”. Furthermore, instead of typing, a user may speak as described above.
FIGS. 5 and 6 are state diagrams illustrating the multimodal approaches described hereinabove for recognizing input that includes voice and/or key input. In the present invention, Hidden Markov model (HMM) output labels may include both voice and keys for the multimodal approach. Every HMM represents a word in a vocabulary and is used to compute the likelihood of any given string of labels. This string of labels represents a user's voice and key input. Some of the voice and key inputs may overlap in time, i.e., the user may speak and type simultaneously within a given time interval. The probability of an occurrence of a string having voice and key labels may be computed for each HMM in a vocabulary using a known standard method. For a given string of voice/key labels and each HMM corresponding to a word, a likelihood score for a particular string outcome may be computed by multiplying the probabilities along the paths of the HMM corresponding to that word. The decoded word is the word that corresponds to the HMM with the highest likelihood score.
In FIG. 5, a time line 500 is shown in which voice and key are input and some of the voice and key inputs overlap in time. In such a situation, where voice and key labels are overlapped, several possible strings are considered in which the key and voice labels are separated. For example, consider an input as shown in FIG. 5 which includes the voice string v1, v2, . . . , v10, v11, v12 502 and the key string k1, k2, k3, k4 504, where v11, v12 506 and k1, k2 508 overlap in time. Consider also in this example that the voice input includes three segments: v1, v2, . . . , v12; v1, v2, . . . , v10; and v11, v12. Further consider that the sequence of keys k1, k2, k3, k4 corresponds to a word in a vocabulary and that no sub-segment in this string corresponds to any word. Then the strings for which HMM scores are computed are: v1, v2, . . . , v10, v11, v12, k1, k2, k3, k4; and v1, v2, . . . , v10, k1, k2, k3, k4, v11, v12. The string that has the highest likelihood score for some sequence of words in the vocabulary is put into a decoding stack. After several decoding paths are put into the stack, the best decoding path is chosen using standard methods.
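Enumerating the candidate separations of overlapped labels can be sketched directly from this example (the segment boundaries are assumed known from the time line, and the scoring function is a placeholder for the HMM likelihood):

    # Voice labels v1..v12 and key labels k1..k4; v11, v12 overlap k1, k2 in time.
    V = [f"v{i}" for i in range(1, 13)]
    K = [f"k{i}" for i in range(1, 5)]

    # Candidate strings in which the overlapped labels are separated:
    candidates = [
        V + K,                # v1..v12, k1..k4
        V[:10] + K + V[10:],  # v1..v10, k1..k4, v11, v12
    ]

    def score(labels: list[str]) -> float:
        # Placeholder: in the real system this is the HMM likelihood of the
        # label string (see the forward-algorithm sketch below).
        return 0.0

    best = max(candidates, key=score)  # the winner is put into the decoding stack
    print(best)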
FIG. 5 also illustrates an example of a Hidden Markov model 510 with states 512, 514, 516 and output arcs 518, 520, 522, 524, 526, where the arcs 518, 522, and 526 are loops. Each arc has output labels: key data {k1, k2, . . . } and voice data {v1, v2, . . . } 530. The likelihood for one path is computed as the product of the probabilities of the outcomes (each a voice or key label from one of the above strings) produced by each arc j along the path.
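A likelihood of this kind can be computed with the standard forward recursion over a small left-to-right HMM with loop arcs; the transition and emission tables below are invented for illustration:

    # Three states (cf. 512, 514, 516); loops (cf. 518, 522, 526) model repeats.
    TRANS = {0: {0: 0.4, 1: 0.6}, 1: {1: 0.4, 2: 0.6}, 2: {2: 1.0}}
    # Arcs may emit voice labels or key labels.
    EMIT = {0: {"v1": 0.5, "v2": 0.5}, 1: {"k1": 0.5, "k2": 0.5}, 2: {"v3": 1.0}}

    def likelihood(labels: list[str]) -> float:
        # Forward algorithm: sum over all paths producing the label string.
        alpha = {0: EMIT[0].get(labels[0], 0.0)}  # start in state 0
        for lab in labels[1:]:
            alpha = {s2: sum(p * TRANS[s1].get(s2, 0.0) for s1, p in alpha.items())
                         * EMIT[s2].get(lab, 0.0)
                     for s2 in TRANS}
        return sum(alpha.values())

    print(likelihood(["v1", "v2", "k1", "v3"]))  # score for one candidate string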
FIG. 6 is another example of a hybrid HMM where the voice and key inputs are processed by two types of HMM. In FIG. 6, two HMMs 602, 604 are connected via dotted arrows 606, 608, 610. The first HMM 602 models strings that include voice labels. For example, the output labels in the first HMM 602 are voice labels v1, v2, . . . , vn 612. The second HMM 604 models strings that include key input labels k1, k2, . . . , kn 614. When a path travels through the solid arrows in 602 or 604, labels are produced. When a path travels through the dotted lines, i.e., when the path is switched between the first and second HMMs, no labels are produced. If there are overlapped strings of voice and key labels, then the overlapped voice and key parts of the strings are processed only while the path is in the HMM that corresponds to the respective label type, i.e., voice or key.
For example, consider the following voice string v1, v2, . . . , v10, v11, v12 and key string k1, k2, k3, k4, where v11, v12 and k1, k2 overlap in time. When the voice string path is traversed, only arcs in the first HMM 602 are processed. When the path encounters and processes time frames that have the overlapped parts v11, v12 and k1, k2, the path may either stay in the first HMM 602 or jump to the second HMM 604 for the key input parts. If the path stays in the first HMM 602, the voice label output continues. If the path jumps to the second HMM 604, key label output is produced. The path can at any time return to the original HMM, i.e., in this example the first HMM 602, and continue to output voice labels from where it stopped before switching to the second HMM 604.
There may be different paths corresponding to different jumps between the first HMM 602 and the second HMM 604. Each path has a different likelihood score. The path that produces the highest likelihood score is considered the most probable path, and its likelihood score is compared with the most probable likelihood scores of paths through hybrid HMMs for different words or classes of words. This hybrid HMM can be described as a typical HMM with arcs that have zero or empty output labels when traversing between the HMMs. Thus, the general theory of computation of likelihoods and paths is applicable, and there can be different ways to produce topologies of how states are connected, as described in the general theory.
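Treating the jump arcs as epsilon (empty-output) transitions, the score of one hybrid path can be sketched as follows; all probabilities here are invented for illustration:

    # Two sub-HMMs: "voice" emits v-labels, "key" emits k-labels. The dotted
    # jump arcs (cf. 606, 608, 610) emit no label (epsilon output).
    JUMP_PROB, STAY_PROB = 0.1, 0.9
    EMIT = {"voice": {"v11": 0.5, "v12": 0.5},
            "key":   {"k1": 0.5, "k2": 0.5}}

    def path_likelihood(path: list[tuple[str, str | None]]) -> float:
        # path: (sub-HMM, emitted label) pairs; None marks an epsilon jump arc.
        p = 1.0
        current = path[0][0]
        for hmm, label in path:
            p *= JUMP_PROB if hmm != current else STAY_PROB
            current = hmm
            if label is not None:  # epsilon arcs produce no label
                p *= EMIT[hmm].get(label, 0.0)
        return p

    # One path for overlapped v11, v12 / k1, k2: emit v11, jump to the key
    # sub-HMM for k1, k2, then jump back and finish with v12.
    path = [("voice", "v11"), ("key", None), ("key", "k1"), ("key", "k2"),
            ("voice", None), ("voice", "v12")]
    print(path_likelihood(path))  # competing paths would be compared by score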
While the invention has been particularly shown and described with respect to a preferred embodiment thereof, it will be understood by those skilled in the art that the foregoing and other changes in form and details may be made therein without departing from the spirit and scope of the invention.

Claims (23)

Having thus described our invention, what we claim as new, and desire to secure by Letters Patent is:
1. An intelligent keyboard interface having human language processing capability which is responsive to commands designated by voice input and keyboard input to change keypad displays on a keyboard, the intelligent keyboard interface comprising:
an intelligent module automatically responsive to a combination of voice input and keystroke input to recognize and interpret the combination into command signals via a Hidden Markov model (HMM), the HMM comprising:
a first Hidden Markov model including one or more first states having one or more first arcs for connecting the one or more first states, the one or more first arcs representing output string of voice;
a second Hidden Markov model including one or more second states having one or more second arcs for connecting the one or more second states, the one or more second arcs representing output string of keys;
one or more third arcs connecting the one or more first states with the one or more second states, the one or more third arcs producing an empty output string, the one or more third arcs traversed when an interpretation state between the output string of the voice and the output string of keys is switched; and
one or more display screens embedded in one or more keypads on a keyboard capable of displaying contents responsive to the command signals transmitted from the intelligent module, wherein the displayed contents represent user selectable functionality for performing functions associated with the combination of the voice input and keyboard input.
2. The intelligent keyboard interface as claimed in claim 1, wherein the contents include symbolic icons.
3. The intelligent keyboard interface as claimed in claim 1, wherein the contents include alphanumeric text.
4. The intelligent keyboard interface as claimed in claim 1, wherein the intelligent module includes an automatic speech recognition module for interpreting voice input.
5. The intelligent keyboard interface as claimed in claim 4, wherein the intelligent module further includes a semantic language processing module for interpreting combination of voice input and keyboard input.
6. The intelligent keyboard interface as claimed in claim 5, wherein the intelligent module further includes a synchronization module for synchronizing inputs forming a command.
7. The intelligent keyboard interface as claimed in claim 6, wherein the synchronization module is enabled to synchronize temporally overlapping inputs forming a command in accordance with time intervals in which the inputs overlap.
8. The intelligent keyboard interface as claimed in claim 1, wherein the voice input is a phoneme.
9. A method for automatically changing contents of display device integrated onto a keyboard including a plurality of keypads, the method comprising:
grouping one or more inputs into a first continuous segment representing a command phrase, said one or more inputs including a combination of voice input and keyboard input that are recognized and interpreted into the command phrase via a Hidden Markov model (HMM), the HMM comprising:
a first Hidden Markov model including one or more first states having one or more first arcs for connecting the one or more first states, the one or more first arcs representing output string of voice;
a second Hidden Markov model including one or more second states having one or more second arcs for connecting the one or more second states, the one or more second arcs representing output string of keys;
one or more third arcs connecting the one or more first states with the one or more second states, the one or more third arcs producing an empty output string, the one or more third arcs traversed when an interpretation state between the output string of the voice and the output string of keys are switched; and;
comparing the first continuous segment with prestored key sequence-command pairs to determine an action which matches the command phrase designating contents to display, wherein the displayed contents represent user selectable functionality for performing functions associated with the command phrase.
10. The method for automatically changing contents of a display device integrated onto a keyboard including a plurality of keypads as claimed in claim 9, the method further comprising:
appending a second continuous segment accepted as input to the first continuous segment, if no match is found in the step of comparing.
11. The method for automatically changing contents of a display device integrated onto a keyboard including a plurality of keypads as claimed in claim 10, the method further comprising:
repeating the steps of appending and comparing until a match is found.
12. The method for automatically changing contents of a display device integrated onto a keyboard including a plurality of keypads as claimed in claim 10, the method further comprising:
repeating the steps of appending and comparing until a match is found or a number of appended segments exceeds a predetermined number.
13. The method for automatically changing contents of keyboard display as claimed in claim 9, wherein the step of grouping further includes determining which one or more inputs are associated with each other based on a predetermined time interval occurring between each input.
14. The method for automatically changing contents of keyboard display as claimed in claim 9, wherein said one or more inputs include one or more keystroke inputs.
15. The method for automatically changing contents of keyboard display as claimed in claim 9, wherein said one or more inputs include a combination of voice and one or more keystroke inputs.
16. The method for automatically changing contents of keyboard display as claimed in claim 15, wherein if the voice and keystroke inputs temporally overlap, the step of grouping further includes synchronizing the temporally overlapping inputs for representing the command phrase in accordance with time intervals in which the inputs overlap.
17. The method for automatically changing contents of keyboard display as claimed in claim 9, wherein the method further includes:
changing display contents of keyboard display based on the action matching the command phrase.
18. The method for automatically changing contents of keyboard display as claimed in claim 9, wherein the method further includes:
interpreting the first continuous segment employing a semantic language processor before the step of comparing.
19. The method for automatically changing contents of keyboard display as claimed in claim 9, wherein the method further includes:
interpreting the first continuous segment employing an automatic speech recognition processor before the step of comparing.
20. The method for automatically changing contents of a display device integrated onto a keyboard including a plurality of keypads as claimed in claim 9, wherein the voice input is a phoneme.
21. A program storage device readable by machine, tangibly embodying a program of instructions executable by the machine to perform method steps for automatically changing contents of keyboard display, the method steps comprising:
grouping one or more inputs into a first continuous segment representing a command phrase, said one or more inputs including a combination of voice input and keyboard input that are recognized and interpreted into the command phrase via a Hidden Markov model (HMM), the HMM comprising:
a first Hidden Markov model including one or more first states having one or more first arcs for connecting the one or more first states, the one or more first arcs representing an output string of voice;
a second Hidden Markov model including one or more second states having one or more second arcs for connecting the one or more second states, the one or more second arcs representing an output string of keys;
one or more third arcs connecting the one or more first states with the one or more second states, the one or more third arcs producing an empty output string, the one or more third arcs traversed when an interpretation state between the output string of voice and the output string of keys is switched; and
comparing the first continuous segment with prestored key sequence-command pairs to determine an action which matches the command phrase designating contents to display, wherein the displayed contents represent user selectable functionality for performing functions associated with the command phrase.
22. The program storage device as claimed in claim 21, the method steps further including:
concatenating a second continuous segment accepted as input to the first continuous segment, if no match is found in the step of comparing.
23. The program storage device readable by machine as claimed in claim 21, wherein the voice input is a phoneme.
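Although the specification contains no source code, the coupled Hidden Markov model recited in claims 1, 9, and 21 can be illustrated with a short sketch: a voice sub-HMM and a key sub-HMM whose arcs emit voice or key tokens, joined by empty-output "third arcs" traversed when interpretation switches between modalities. The state names, probabilities, toy token stream, and the decode routine below are illustrative assumptions, not material from the patent.

```python
# Minimal sketch of the coupled HMM of claims 1, 9, and 21: two sub-HMMs
# (voice states, key states) whose arcs emit voice words or keystrokes,
# joined by empty-output "third arcs" traversed on a modality switch.
# All probabilities, state names, and tokens below are assumptions.
import math

# Arcs: (source, target, token, log_prob). token=None marks an
# empty-output "third arc" between the voice and key sub-HMMs.
ARCS = [
    ("v0", "v0", "open", math.log(0.5)),   # voice sub-HMM emits spoken words
    ("v0", "v0", "file", math.log(0.5)),
    ("k0", "k0", "CTRL", math.log(0.5)),   # key sub-HMM emits keystrokes
    ("k0", "k0", "O",    math.log(0.5)),
    ("v0", "k0", None,   math.log(0.1)),   # third arcs: empty output,
    ("k0", "v0", None,   math.log(0.1)),   # traversed on modality switch
]

def decode(tokens, start="v0"):
    """Viterbi over the coupled HMM; the best state path reveals where
    interpretation switched between the voice and key sub-HMMs."""
    beams = {start: (0.0, [start])}          # state -> (log_prob, path)
    for tok in tokens:
        # allow at most one empty-output switch before consuming tok
        expanded = dict(beams)
        for s, t, emit, lp in ARCS:
            if emit is None and s in beams:
                cand = (beams[s][0] + lp, beams[s][1] + [t])
                if t not in expanded or cand[0] > expanded[t][0]:
                    expanded[t] = cand
        nxt = {}
        for s, t, emit, lp in ARCS:
            if emit == tok and s in expanded:
                cand = (expanded[s][0] + lp, expanded[s][1] + [t])
                if t not in nxt or cand[0] > nxt[t][0]:
                    nxt[t] = cand
        beams = nxt
    return max(beams.values())               # (log_prob, best state path)

# Mixed command: spoken "open file" followed by the keystrokes CTRL, O.
print(decode(["open", "file", "CTRL", "O"]))
```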
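The matching method of claims 9 through 12 can likewise be sketched: time-stamped inputs are grouped into continuous segments, the first segment is compared against prestored key sequence-command pairs, and further segments are appended until a match is found or a predetermined number of appended segments is exceeded. The gap threshold, command table, and event format below are assumptions made for illustration only.

```python
# Illustrative sketch of the matching loop in claims 9-12. The grouping
# interval, command table, and (timestamp, token) event format are assumed.
from typing import List, Optional, Tuple

GAP_SECONDS = 1.0                       # assumed grouping interval (claim 13)
COMMANDS = {                            # prestored pairs: segment -> action
    ("open", "CTRL", "O"): "show_file_keys",
    ("print",): "show_print_keys",
}

def group_segments(events: List[Tuple[float, str]]) -> List[List[str]]:
    """Group time-stamped inputs (voice words or keystrokes) into
    continuous segments wherever the inter-input gap stays small."""
    segments, last_t = [], None
    for t, token in sorted(events):
        if last_t is None or t - last_t > GAP_SECONDS:
            segments.append([])
        segments[-1].append(token)
        last_t = t
    return segments

def match_command(segments: List[List[str]], limit: int = 3) -> Optional[str]:
    """Compare the first segment with the prestored pairs; on failure,
    append the next segment and retry, up to `limit` appended segments."""
    phrase: List[str] = []
    for i, seg in enumerate(segments):
        if i > limit:                   # claim 12: give up past the limit
            return None
        phrase += seg                   # claims 10-11: append and retry
        action = COMMANDS.get(tuple(phrase))
        if action is not None:
            return action               # action drives the keypad displays
    return None

events = [(0.0, "open"), (0.3, "CTRL"), (0.5, "O")]   # voice + keystrokes
print(match_command(group_segments(events)))          # -> show_file_keys
```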
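Finally, the synchronization of temporally overlapping voice and keystroke inputs described in claims 6, 7, and 16 amounts to ordering events by the time intervals in which they overlap; a minimal sketch follows, assuming a simple (start, end, token) interval representation and a start-then-end tie-breaking rule that are not specified by the patent.

```python
# Hedged sketch of the synchronization step of claims 6-7 and 16: a spoken
# word and a keystroke that overlap in time are merged into one ordered
# token stream. The interval representation and ordering rule are assumed.
from typing import List, Tuple

Event = Tuple[float, float, str]        # (start, end, token)

def synchronize(voice: List[Event], keys: List[Event]) -> List[str]:
    """Merge temporally overlapping voice and key events into a single
    ordered token stream, sorting by interval start, then end."""
    merged = sorted(voice + keys, key=lambda e: (e[0], e[1]))
    return [token for _, _, token in merged]

# "open" is still being spoken while CTRL and O are pressed.
voice = [(0.0, 0.8, "open")]
keys = [(0.5, 0.6, "CTRL"), (0.6, 0.7, "O")]
print(synchronize(voice, keys))         # -> ['open', 'CTRL', 'O']
```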
US09/335,345 1999-06-17 1999-06-17 Intelligent keyboard interface with use of human language processing Expired - Lifetime US6401065B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/335,345 US6401065B1 (en) 1999-06-17 1999-06-17 Intelligent keyboard interface with use of human language processing


Publications (1)

Publication Number Publication Date
US6401065B1 (en) 2002-06-04

Family

ID=23311389

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/335,345 Expired - Lifetime US6401065B1 (en) 1999-06-17 1999-06-17 Intelligent keyboard interface with use of human language processing

Country Status (1)

Country Link
US (1) US6401065B1 (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5180029A (en) 1989-05-08 1993-01-19 Johan Rosenlund Anti-theft device
US5608624A (en) * 1992-05-27 1997-03-04 Apple Computer Inc. Method and apparatus for processing natural language
US5621809A (en) * 1992-06-09 1997-04-15 International Business Machines Corporation Computer program product for automatic recognition of a consistent message using multiple complimentary sources of information
US5588105A (en) * 1992-11-16 1996-12-24 Apple Computer, Inc. Status bar for application windows
US5761329A (en) * 1995-12-15 1998-06-02 Chen; Tsuhan Method and apparatus employing audio and video data from an individual for authentication purposes
US5839104A (en) * 1996-02-20 1998-11-17 Ncr Corporation Point-of-sale system having speech entry and item recognition support system
US6054990A (en) * 1996-07-05 2000-04-25 Tran; Bao Q. Computer system with handwriting annotation
US5937380A (en) * 1997-06-27 1999-08-10 M.H. Segan Limited Partnership Keypad-assisted speech recognition for text or command input to concurrently-running computer application
US6219639B1 (en) * 1998-04-28 2001-04-17 International Business Machines Corporation Method and apparatus for recognizing identity of individuals employing synchronized biometrics
US6246985B1 (en) * 1998-08-20 2001-06-12 International Business Machines Corporation Method and apparatus for automatic segregation and routing of signals of different origins by using prototypes

Cited By (222)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6694295B2 (en) * 1998-05-25 2004-02-17 Nokia Mobile Phones Ltd. Method and a device for recognizing speech
US20050060138A1 (en) * 1999-11-05 2005-03-17 Microsoft Corporation Language conversion and display
US7424675B2 (en) 1999-11-05 2008-09-09 Microsoft Corporation Language input architecture for converting one text form to another text form with tolerance to spelling typographical and conversion errors
US7403888B1 (en) * 1999-11-05 2008-07-22 Microsoft Corporation Language input user interface
US9646614B2 (en) 2000-03-16 2017-05-09 Apple Inc. Fast, language-independent method for user authentication by voice
US7143043B1 (en) * 2000-04-26 2006-11-28 Openwave Systems Inc. Constrained keyboard disambiguation using voice recognition
US6604073B2 (en) * 2000-09-12 2003-08-05 Pioneer Corporation Voice recognition apparatus
US20030210280A1 (en) * 2001-03-02 2003-11-13 Baker Bruce R. Device and method for previewing themes and categories of sequenced symbols
US8893044B2 (en) 2001-03-02 2014-11-18 Semantic Compaction Systems, Inc. Device and method for previewing themes and categories of sequenced symbols
US8234589B2 (en) 2001-03-02 2012-07-31 Semantic Compaction Systems, Inc. Device and method for previewing themes and categories of sequenced symbols
US20090150828A1 (en) * 2001-03-02 2009-06-11 Baker Bruce R Device and method for previewing themes and categories of sequenced symbols
US7506256B2 (en) * 2001-03-02 2009-03-17 Semantic Compaction Systems Device and method for previewing themes and categories of sequenced symbols
US20030055644A1 (en) * 2001-08-17 2003-03-20 At&T Corp. Systems and methods for aggregating related inputs using finite-state devices and extracting meaning from multimodal inputs using aggregation
US20030065505A1 (en) * 2001-08-17 2003-04-03 At&T Corp. Systems and methods for abstracting portions of information that is represented with finite-state devices
US7783492B2 (en) 2001-08-17 2010-08-24 At&T Intellectual Property Ii, L.P. Systems and methods for classifying and representing gestural inputs
US7505908B2 (en) 2001-08-17 2009-03-17 At&T Intellectual Property Ii, L.P. Systems and methods for classifying and representing gestural inputs
US20080306737A1 (en) * 2001-08-17 2008-12-11 At&T Corp. Systems and methods for classifying and representing gestural inputs
US7196691B1 (en) * 2001-11-14 2007-03-27 Bruce Martin Zweig Multi-key macros to speed data input
US7260529B1 (en) * 2002-06-25 2007-08-21 Lengen Nicholas D Command insertion system and method for voice recognition applications
WO2004019196A3 (en) * 2002-08-21 2004-09-16 Intel Corp Universal display keyboard
US20040036632A1 (en) * 2002-08-21 2004-02-26 Intel Corporation Universal display keyboard, system, and methods
WO2004019196A2 (en) * 2002-08-21 2004-03-04 Intel Corporation Universal display keyboard
US8898202B2 (en) 2002-10-24 2014-11-25 At&T Intellectual Property Ii, L.P. Systems and methods for generating markup-language based expressions from multi-modal and unimodal inputs
US9563395B2 (en) 2002-10-24 2017-02-07 At&T Intellectual Property Ii, L.P. Systems and methods for generating markup-language based expressions from multi-modal and unimodal inputs
US8433731B2 (en) 2002-10-24 2013-04-30 At&T Intellectual Property Ii, L.P. Systems and methods for generating markup-language based expressions from multi-modal and unimodal inputs
US20100100509A1 (en) * 2002-10-24 2010-04-22 At&T Corp. Systems and Methods for Generating Markup-Language Based Expressions from Multi-Modal and Unimodal Inputs
US7257575B1 (en) 2002-10-24 2007-08-14 At&T Corp. Systems and methods for generating markup-language based expressions from multi-modal and unimodal inputs
US20080046418A1 (en) * 2002-10-24 2008-02-21 At&T Corp. Systems and methods for generating markup-language based expressions from multi-modal and unimodal inputs
US7660828B2 (en) 2002-10-24 2010-02-09 At&T Intellectual Property Ii, Lp. Systems and methods for generating markup-language based expressions from multi-modal and unimodal inputs
WO2005020211A1 (en) * 2003-08-18 2005-03-03 Siemens Aktiengesellschaft Voice-assisted text input for pre-installed applications in mobile devices
US10318871B2 (en) 2005-09-08 2019-06-11 Apple Inc. Method and apparatus for building an intelligent automated assistant
US9501741B2 (en) 2005-09-08 2016-11-22 Apple Inc. Method and apparatus for building an intelligent automated assistant
US8719034B2 (en) * 2005-09-13 2014-05-06 Nuance Communications, Inc. Displaying speech command input state information in a multimodal browser
US8965772B2 (en) 2005-09-13 2015-02-24 Nuance Communications, Inc. Displaying speech command input state information in a multimodal browser
US20070061148A1 (en) * 2005-09-13 2007-03-15 Cross Charles W Jr Displaying speech command input state information in a multimodal browser
US20080088590A1 (en) * 2006-04-18 2008-04-17 Ronald Brown Display Input Equipped Data Entry Device and Methods
US7702501B2 (en) 2006-05-10 2010-04-20 Cisco Technology, Inc. Techniques for passing data across the human-machine interface
US20070265829A1 (en) * 2006-05-10 2007-11-15 Cisco Technology, Inc. Techniques for passing data across the human-machine interface
US9117447B2 (en) 2006-09-08 2015-08-25 Apple Inc. Using event alert text as input to an automated assistant
US8942986B2 (en) 2006-09-08 2015-01-27 Apple Inc. Determining user intent based on ontologies of domains
US8930191B2 (en) 2006-09-08 2015-01-06 Apple Inc. Paraphrasing of user requests and results by automated digital assistant
WO2008036338A1 (en) * 2006-09-18 2008-03-27 United Keys, Inc. Method and display data entry unit
US9830912B2 (en) 2006-11-30 2017-11-28 Ashwin P Rao Speak and touch auto correction interface
US8355915B2 (en) * 2006-11-30 2013-01-15 Rao Ashwin P Multimodal speech recognition system
US20080133228A1 (en) * 2006-11-30 2008-06-05 Rao Ashwin P Multimodal speech recognition system
WO2008115237A1 (en) 2007-03-21 2008-09-25 Tegic Communications, Inc. Interchangeable input modules associated with varying languages
EP2132727A1 (en) * 2007-03-21 2009-12-16 Tegic Communications, Inc. Interchangeable input modules associated with varying languages
EP2132727A4 (en) * 2007-03-21 2012-12-12 Tegic Communications Inc Interchangeable input modules associated with varying languages
US9176596B2 (en) 2007-03-21 2015-11-03 Nuance Communications, Inc. Interchangeable input modules associated with varying languages
US10568032B2 (en) 2007-04-03 2020-02-18 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US9330720B2 (en) 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US10381016B2 (en) 2008-01-03 2019-08-13 Apple Inc. Methods and apparatus for altering audio output signals
US8065143B2 (en) * 2008-02-22 2011-11-22 Apple Inc. Providing text input using speech data and non-speech data
US9361886B2 (en) 2008-02-22 2016-06-07 Apple Inc. Providing text input using speech data and non-speech data
US8688446B2 (en) 2008-02-22 2014-04-01 Apple Inc. Providing text input using speech data and non-speech data
US20090216531A1 (en) * 2008-02-22 2009-08-27 Apple Inc. Providing text input using speech data and non-speech data
US9626955B2 (en) 2008-04-05 2017-04-18 Apple Inc. Intelligent text-to-speech conversion
US9865248B2 (en) 2008-04-05 2018-01-09 Apple Inc. Intelligent text-to-speech conversion
US10108612B2 (en) 2008-07-31 2018-10-23 Apple Inc. Mobile device having human language translation capability with positional feedback
US9535906B2 (en) 2008-07-31 2017-01-03 Apple Inc. Mobile device having human language translation capability with positional feedback
US9691383B2 (en) 2008-09-05 2017-06-27 Apple Inc. Multi-tiered voice feedback in an electronic device
US9922640B2 (en) 2008-10-17 2018-03-20 Ashwin P Rao System and method for multimodal utterance detection
US9959870B2 (en) 2008-12-11 2018-05-01 Apple Inc. Speech recognition involving a mobile device
US9858925B2 (en) 2009-06-05 2018-01-02 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
US10475446B2 (en) 2009-06-05 2019-11-12 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
US11080012B2 (en) 2009-06-05 2021-08-03 Apple Inc. Interface for a virtual digital assistant
US10795541B2 (en) 2009-06-05 2020-10-06 Apple Inc. Intelligent organization of tasks items
US10283110B2 (en) 2009-07-02 2019-05-07 Apple Inc. Methods and apparatuses for automatic speech recognition
US10679605B2 (en) 2010-01-18 2020-06-09 Apple Inc. Hands-free list-reading by intelligent automated assistant
US10553209B2 (en) 2010-01-18 2020-02-04 Apple Inc. Systems and methods for hands-free notification summaries
US8892446B2 (en) 2010-01-18 2014-11-18 Apple Inc. Service orchestration for intelligent automated assistant
US9548050B2 (en) 2010-01-18 2017-01-17 Apple Inc. Intelligent automated assistant
US8903716B2 (en) 2010-01-18 2014-12-02 Apple Inc. Personalized vocabulary for digital assistant
US10276170B2 (en) 2010-01-18 2019-04-30 Apple Inc. Intelligent automated assistant
US11423886B2 (en) 2010-01-18 2022-08-23 Apple Inc. Task flow identification based on user intent
US10705794B2 (en) 2010-01-18 2020-07-07 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US10706841B2 (en) 2010-01-18 2020-07-07 Apple Inc. Task flow identification based on user intent
US10496753B2 (en) 2010-01-18 2019-12-03 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
US11410053B2 (en) 2010-01-25 2022-08-09 Newvaluexchange Ltd. Apparatuses, methods and systems for a digital conversation management platform
US10984326B2 (en) 2010-01-25 2021-04-20 Newvaluexchange Ltd. Apparatuses, methods and systems for a digital conversation management platform
US10984327B2 (en) 2010-01-25 2021-04-20 New Valuexchange Ltd. Apparatuses, methods and systems for a digital conversation management platform
US10607140B2 (en) 2010-01-25 2020-03-31 Newvaluexchange Ltd. Apparatuses, methods and systems for a digital conversation management platform
US10607141B2 (en) 2010-01-25 2020-03-31 Newvaluexchange Ltd. Apparatuses, methods and systems for a digital conversation management platform
US9633660B2 (en) 2010-02-25 2017-04-25 Apple Inc. User profiling for voice input processing
US9190062B2 (en) 2010-02-25 2015-11-17 Apple Inc. User profiling for voice input processing
US10049675B2 (en) 2010-02-25 2018-08-14 Apple Inc. User profiling for voice input processing
US8448081B2 (en) * 2010-03-09 2013-05-21 Kabushiki Kaisha Toshiba Information processing apparatus
US20110225535A1 (en) * 2010-03-09 2011-09-15 Kabushiki Kaisha Toshiba Information processing apparatus
US8719014B2 (en) * 2010-09-27 2014-05-06 Apple Inc. Electronic device with text error correction based on voice recognition data
US20120078627A1 (en) * 2010-09-27 2012-03-29 Wagner Oliver P Electronic device with text error correction based on voice recognition data
US9075783B2 (en) * 2010-09-27 2015-07-07 Apple Inc. Electronic device with text error correction based on voice recognition data
US10762293B2 (en) 2010-12-22 2020-09-01 Apple Inc. Using parts-of-speech tagging and named entity recognition for spelling correction
US10102359B2 (en) 2011-03-21 2018-10-16 Apple Inc. Device access using voice authentication
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
US10706373B2 (en) 2011-06-03 2020-07-07 Apple Inc. Performing actions associated with task items that represent tasks to perform
US10241644B2 (en) 2011-06-03 2019-03-26 Apple Inc. Actionable reminder entries
US11120372B2 (en) 2011-06-03 2021-09-14 Apple Inc. Performing actions associated with task items that represent tasks to perform
US10057736B2 (en) 2011-06-03 2018-08-21 Apple Inc. Active transport based notifications
US9798393B2 (en) 2011-08-29 2017-10-24 Apple Inc. Text correction processing
US10241752B2 (en) 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
US10134385B2 (en) 2012-03-02 2018-11-20 Apple Inc. Systems and methods for name pronunciation
US9483461B2 (en) 2012-03-06 2016-11-01 Apple Inc. Handling speech synthesis of content for multiple languages
US9953088B2 (en) 2012-05-14 2018-04-24 Apple Inc. Crowd sourcing information to fulfill user requests
US10079014B2 (en) 2012-06-08 2018-09-18 Apple Inc. Name recognition system
US9495129B2 (en) 2012-06-29 2016-11-15 Apple Inc. Device, method, and user interface for voice-activated navigation and browsing of a document
US9576574B2 (en) 2012-09-10 2017-02-21 Apple Inc. Context-sensitive handling of interruptions by intelligent digital assistant
US9971774B2 (en) 2012-09-19 2018-05-15 Apple Inc. Voice-based media searching
US10199051B2 (en) 2013-02-07 2019-02-05 Apple Inc. Voice trigger for a digital assistant
US10978090B2 (en) 2013-02-07 2021-04-13 Apple Inc. Voice trigger for a digital assistant
US10652394B2 (en) 2013-03-14 2020-05-12 Apple Inc. System and method for processing voicemail
US9368114B2 (en) 2013-03-14 2016-06-14 Apple Inc. Context-sensitive handling of interruptions
US11388291B2 (en) 2013-03-14 2022-07-12 Apple Inc. System and method for processing voicemail
US9922642B2 (en) 2013-03-15 2018-03-20 Apple Inc. Training an at least partial voice command system
US9697822B1 (en) 2013-03-15 2017-07-04 Apple Inc. System and method for updating an adaptive speech recognition model
US9966060B2 (en) 2013-06-07 2018-05-08 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9620104B2 (en) 2013-06-07 2017-04-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9633674B2 (en) 2013-06-07 2017-04-25 Apple Inc. System and method for detecting errors in interactions with a voice-based digital assistant
US9582608B2 (en) 2013-06-07 2017-02-28 Apple Inc. Unified ranking with entropy-weighted information for phrase-based semantic auto-completion
US9966068B2 (en) 2013-06-08 2018-05-08 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US10657961B2 (en) 2013-06-08 2020-05-19 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US10185542B2 (en) 2013-06-09 2019-01-22 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
US9300784B2 (en) 2013-06-13 2016-03-29 Apple Inc. System and method for emergency calls initiated by voice command
US10791216B2 (en) 2013-08-06 2020-09-29 Apple Inc. Auto-activating smart responses based on activities from remote devices
CN104698999A (en) * 2013-12-05 2015-06-10 上海能感物联网有限公司 Controller device for robot under foreign language natural language text field control
US9620105B2 (en) 2014-05-15 2017-04-11 Apple Inc. Analyzing audio input for efficient speech and music recognition
US10592095B2 (en) 2014-05-23 2020-03-17 Apple Inc. Instantaneous speaking of content on touch devices
US9502031B2 (en) 2014-05-27 2016-11-22 Apple Inc. Method for supporting dynamic grammars in WFST-based ASR
US11257504B2 (en) 2014-05-30 2022-02-22 Apple Inc. Intelligent assistant for home automation
US10169329B2 (en) 2014-05-30 2019-01-01 Apple Inc. Exemplar-based natural language processing
US11133008B2 (en) 2014-05-30 2021-09-28 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US9966065B2 (en) 2014-05-30 2018-05-08 Apple Inc. Multi-command single utterance input method
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US10083690B2 (en) 2014-05-30 2018-09-25 Apple Inc. Better resolution when referencing to concepts
US9734193B2 (en) 2014-05-30 2017-08-15 Apple Inc. Determining domain salience ranking from ambiguous words in natural speech
US10497365B2 (en) 2014-05-30 2019-12-03 Apple Inc. Multi-command single utterance input method
US10078631B2 (en) 2014-05-30 2018-09-18 Apple Inc. Entropy-guided text prediction using combined word and character n-gram language models
US10289433B2 (en) 2014-05-30 2019-05-14 Apple Inc. Domain specific language for encoding assistant dialog
US9785630B2 (en) 2014-05-30 2017-10-10 Apple Inc. Text prediction using combined word N-gram and unigram language models
US9842101B2 (en) 2014-05-30 2017-12-12 Apple Inc. Predictive conversion of language input
US9760559B2 (en) 2014-05-30 2017-09-12 Apple Inc. Predictive text input
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
US10904611B2 (en) 2014-06-30 2021-01-26 Apple Inc. Intelligent automated assistant for TV user interactions
US9668024B2 (en) 2014-06-30 2017-05-30 Apple Inc. Intelligent automated assistant for TV user interactions
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US10659851B2 (en) 2014-06-30 2020-05-19 Apple Inc. Real-time digital assistant knowledge updates
US10446141B2 (en) 2014-08-28 2019-10-15 Apple Inc. Automatic speech recognition based on user feedback
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US10431204B2 (en) 2014-09-11 2019-10-01 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US10789041B2 (en) 2014-09-12 2020-09-29 Apple Inc. Dynamic thresholds for always listening speech trigger
WO2016049983A1 (en) * 2014-09-29 2016-04-07 同济大学 User keyboard key-pressing behavior mode modeling and analysis system, and identity recognition method thereof
US9886432B2 (en) 2014-09-30 2018-02-06 Apple Inc. Parsimonious handling of word inflection via categorical stem + suffix N-gram language models
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US9986419B2 (en) 2014-09-30 2018-05-29 Apple Inc. Social reminders
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US9646609B2 (en) 2014-09-30 2017-05-09 Apple Inc. Caching apparatus for serving phonetic pronunciations
US10552013B2 (en) 2014-12-02 2020-02-04 Apple Inc. Data detection
US11556230B2 (en) 2014-12-02 2023-01-17 Apple Inc. Data detection
US9711141B2 (en) 2014-12-09 2017-07-18 Apple Inc. Disambiguating heteronyms in speech synthesis
US9865280B2 (en) 2015-03-06 2018-01-09 Apple Inc. Structured dictation using intelligent automated assistants
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
US11087759B2 (en) 2015-03-08 2021-08-10 Apple Inc. Virtual assistant activation
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US10311871B2 (en) 2015-03-08 2019-06-04 Apple Inc. Competing devices responding to voice triggers
US9899019B2 (en) 2015-03-18 2018-02-20 Apple Inc. Systems and methods for structured stem and suffix language models
US10726197B2 (en) * 2015-03-26 2020-07-28 Lenovo (Singapore) Pte. Ltd. Text correction using a second input
US20160283453A1 (en) * 2015-03-26 2016-09-29 Lenovo (Singapore) Pte. Ltd. Text correction using a second input
US9842105B2 (en) 2015-04-16 2017-12-12 Apple Inc. Parsimonious continuous-space phrase representations for natural language processing
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US10127220B2 (en) 2015-06-04 2018-11-13 Apple Inc. Language identification from short strings
US10356243B2 (en) 2015-06-05 2019-07-16 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US10101822B2 (en) 2015-06-05 2018-10-16 Apple Inc. Language input correction
US10186254B2 (en) 2015-06-07 2019-01-22 Apple Inc. Context-based endpoint detection
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US10255907B2 (en) 2015-06-07 2019-04-09 Apple Inc. Automatic accent detection using acoustic models
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US10255426B2 (en) * 2015-09-15 2019-04-09 Electronics And Telecommunications Research Institute Keyboard device and data communication method using the same
US9697820B2 (en) 2015-09-24 2017-07-04 Apple Inc. Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks
US11010550B2 (en) 2015-09-29 2021-05-18 Apple Inc. Unified language modeling framework for word prediction, auto-completion and auto-correction
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
US11587559B2 (en) 2015-09-30 2023-02-21 Apple Inc. Intelligent device identification
US11526368B2 (en) 2015-11-06 2022-12-13 Apple Inc. Intelligent automated assistant in a messaging environment
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US11069347B2 (en) 2016-06-08 2021-07-20 Apple Inc. Intelligent automated assistant for media exploration
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
US10354011B2 (en) 2016-06-09 2019-07-16 Apple Inc. Intelligent automated assistant in a home environment
US11037565B2 (en) 2016-06-10 2021-06-15 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
US10733993B2 (en) 2016-06-10 2020-08-04 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10521466B2 (en) 2016-06-11 2019-12-31 Apple Inc. Data driven natural language event detection and classification
US10089072B2 (en) 2016-06-11 2018-10-02 Apple Inc. Intelligent device arbitration and control
US11152002B2 (en) 2016-06-11 2021-10-19 Apple Inc. Application integration with a digital assistant
US10269345B2 (en) 2016-06-11 2019-04-23 Apple Inc. Intelligent task discovery
US10297253B2 (en) 2016-06-11 2019-05-21 Apple Inc. Application integration with a digital assistant
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US10553215B2 (en) 2016-09-23 2020-02-04 Apple Inc. Intelligent automated assistant
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
US10755703B2 (en) 2017-05-11 2020-08-25 Apple Inc. Offline personal assistant
US10791176B2 (en) 2017-05-12 2020-09-29 Apple Inc. Synchronization and task delegation of a digital assistant
US11405466B2 (en) 2017-05-12 2022-08-02 Apple Inc. Synchronization and task delegation of a digital assistant
US10410637B2 (en) 2017-05-12 2019-09-10 Apple Inc. User-specific acoustic models
US10482874B2 (en) 2017-05-15 2019-11-19 Apple Inc. Hierarchical belief states for digital assistants
US10810274B2 (en) 2017-05-15 2020-10-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
US11217255B2 (en) 2017-05-16 2022-01-04 Apple Inc. Far-field extension for digital assistant services
US11194547B2 (en) 2018-06-22 2021-12-07 Samsung Electronics Co., Ltd. Text input device and method therefor
US11762628B2 (en) 2018-06-22 2023-09-19 Samsung Electronics Co., Ltd. Text input device and method therefor
US11216245B2 (en) 2019-03-25 2022-01-04 Samsung Electronics Co., Ltd. Electronic device and multitasking supporting method thereof

Similar Documents

Publication Publication Date Title
US6401065B1 (en) Intelligent keyboard interface with use of human language processing
JP4463795B2 (en) Reduced keyboard disambiguation system
JP4829901B2 (en) Method and apparatus for confirming manually entered indeterminate text input using speech input
US7719521B2 (en) Navigational interface providing auxiliary character support for mobile and wearable computers
JP4920154B2 (en) Language input user interface
US8583440B2 (en) Apparatus and method for providing visual indication of character ambiguity during text entry
US7395203B2 (en) System and method for disambiguating phonetic input
US8311829B2 (en) Multimodal disambiguation of speech recognition
KR101109265B1 (en) Method for entering text
US20070100619A1 (en) Key usage and text marking in the context of a combined predictive text and speech recognition system
JP4012143B2 (en) Information processing apparatus and data input method
US20090326938A1 (en) Multiword text correction
JP2007133884A5 (en)
WO1998033111A9 (en) Reduced keyboard disambiguating system
KR20120006503A (en) Improved text input
KR20000035960A (en) Speed typing apparatus and method
US7912697B2 (en) Character inputting method and character inputting apparatus
JP7109498B2 (en) voice input device
JP2004053871A (en) Speech recognition system
JP2007018468A (en) System for converting english to katakana english
KR20030008905A (en) Method of character Input In Screen Pop-up Keyboard Of Handheld Device
KR20040016715A (en) Chinese phonetic transcription input system and method with comparison function for imperfect and fuzzy phonetic transcriptions

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KANEVSKY, DIMITRI;MAES, STEPHANE;PICKOVER, CLIFFORD A.;AND OTHERS;REEL/FRAME:010047/0994;SIGNING DATES FROM 19990610 TO 19990611

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

CC Certificate of correction
FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: NUANCE COMMUNICATIONS, INC., MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:022354/0566

Effective date: 20081231

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12