CA2311439C - Conversational data mining - Google Patents
- Publication number: CA2311439C
- Authority: CA (Canada)
- Prior art keywords: user, emotional state, data, attribute, voice
- Legal status: Expired - Lifetime (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G10L17/26: Recognition of special voice characteristics, e.g. for use in lie detectors; recognition of animal voices (under G10L17/00, Speaker identification or verification; G10L, Speech analysis or synthesis, speech recognition, speech or voice processing, speech or audio coding or decoding)
- H04M2201/40: Electronic components, circuits, software, systems or apparatus used in telephone systems using speech recognition (under H04M, Telephonic communication)
- H04M3/51: Centralised call answering arrangements requiring operator intervention, e.g. call or contact centers for telemarketing (under H04M3/50, Centralised arrangements for answering calls; H04M3/42, Systems providing special services or facilities to subscribers)
Abstract
A method for collecting data associated with the voice of a voice system user includes conducting a conversation with the user, capturing and digitizing a speech waveform of the user, extracting at least one acoustic feature from the digitized speech waveform, and storing attribute data corresponding to the acoustic feature, together with an identifying indicia, in a data warehouse in a form that facilitates subsequent data mining. User attributes can include gender, age, accent, native language, dialect, socioeconomic classification, educational level and emotional state. Data gathering can be repeated for a large number of users until sufficient data is present. The attribute data to be stored can include raw acoustic features, or processed features, such as the user's emotional state, age, gender, socioeconomic group, and the like. In an alternative form of the method, the user attribute can be used to modify the behavior of the voice system in real time, with or without storage of data for subsequent data mining. An apparatus for collecting data associated with a voice of a user includes a dialog management unit, an audio capture module, an acoustic front end, a processing module and a data warehouse. The acoustic front end receives and digitizes a speech waveform from the user and extracts at least one acoustic feature from the digitized speech waveform. The feature is correlated with at least one user attribute. The processing module analyzes the acoustic feature to determine the user attribute, which can then be stored in the data warehouse. The dialog management unit can include, for example, a telephone interactive voice response (IVR) system. The processor can be an application-specific circuit, a separate general-purpose computer with appropriate software, or a processor portion of the IVR. The processing module can include an emotional state classifier, a speaker clusterer and classifier, a speech recognizer, and/or an accent identifier. Alternatively, the apparatus can be configured as a real-time-modifiable voice system for interaction with a user, which can be used to practice the method for tailoring a voice system response.
Description
CONVERSATIONAL DATA MINING
BACKGROUND OF THE INVENTION
Field of the Invention
The present invention relates to voice-oriented systems, and more particularly relates to an acoustically oriented method and apparatus to facilitate data mining and an acoustically oriented method and apparatus to tailor response of a voice system to an acoustically determined state of a voice system user.
Brief Description of the Prior Art
Data mining is an interdisciplinary field which has recently increased in popularity. It refers to the use of methods which extract information from data in an unsupervised manner, or with very little supervision. "Unsupervised" refers to techniques wherein there is no advance labeling; classes are allowed to develop on their own. Sounds are clustered and one sees which classes develop. Data mining is used in market, risk and fraud management.
In the data mining field, it is generally agreed that more data is better.
Accordingly, companies engaged in data mining frequently compile or acquire customer data bases.
These data bases may be based on mail-order history, past customer history, credit history and the like. It is anticipated that the customer's electronic business and internet behavior will soon also provide a basis for customer data bases. The nature of the stored information may result from the manual or automatic encoding of either a transaction or an event. An example of a transaction might be that a given person bought a given product at a given price under certain conditions, or that a given person responded to a certain mailing. An example of an event could include a person having a car accident on a certain date, or a given family moving in the last month.
The data on which data mining is performed is traditionally stored in a data warehouse. Once business objectives have been determined, the data warehouse is examined to select relevant features, evaluate the quality of the data, and transform it into analytical models suited for the intended analysis. Techniques such as predictive modeling, data base segmentation, link analysis and deviation detection can then be applied so as to output targets, forecasts or detections.
Following validation, the resulting models can be deployed.
Today, it is common for a variety of transactions to be performed over the telephone via a human operator or an interactive voice response (IVR) system. It is known that voice, which is the mode of communication in such transactions, carries information about a variety of user attributes, such as gender, age, native language, accent, dialect, socioeconomic condition, level of education and emotional state. One or more of these parameters may be valuable to individuals engaged in data mining. At present, the treasure trove of data contained in these transactions is either completely lost to data miners, or else would have to be manually indexed in order to be effectively employed.
There is, therefore, a need in the prior art for a method for collecting, in a data warehouse, data associated with the voice of a voice system user which can efficiently and automatically make use of the data available in transactions using voice systems, such as telephones, kiosks, and the like.
It would be desirable for the method to also be implemented in real-time, with or without data warehouse storage, to permit "on the fly" modification of voice systems, such as interactive voice response systems, and the like.
SUMMARY OF THE INVENTION
The present invention, which addresses the needs identified in the prior art, provides a method for collecting, in a data warehouse, data associated with the voice of a voice system user. The method comprises the steps of conducting a conversation with the voice system user, capturing a speech waveform, digitizing the speech waveform, extracting at least one acoustic feature from the digitized speech waveform, and then storing attribute data corresponding to the acoustic feature in the data warehouse. The conversation can be conducted with the voice system user via at least one of a human operator and a voice-enabled machine system. The speech waveform to be captured is that associated with utterances spoken by the voice system user during the conversation. The digitizing of the speech waveform provides a digitized speech waveform. The at least one acoustic feature is extracted from the digitized waveform and correlates with at least one user attribute, such as gender, age, accent, native language, dialect, socioeconomic classification, educational level and emotional state of the user. The attribute data which is stored in the data warehouse corresponds to the acoustic feature which correlates with the at least one user attribute, and is stored together with at least one identifying indicia. The data is stored in the data warehouse in a form to facilitate subsequent data mining thereon.
The present invention also provides a method of tailoring a voice system response to an acoustically-determined state of a voice system user. The method includes the step of conducting a conversation with the voice system user via the voice system. The method further includes the steps of capturing a speech waveform and digitizing the speech waveform, as discussed previously.
Yet further, the method includes the step of extracting an acoustic feature from the digitized speech waveform, also as set forth above. Finally, the method includes the step of modifying behavior of the voice system based on the at least one user attribute with which the at least one acoustic feature is correlated.
The present invention further includes a program storage device readable by machine, tangibly embodying a program of instructions executable by the machine to perform either of the methods just discussed.
The present invention further provides an apparatus for collecting data associated with the voice of a user. The apparatus comprises a dialog management unit, an audio capture module, an acoustic front end, a processing module, and a data warehouse. The dialog management unit conducts a conversation with the user. The audio capture module is coupled to the dialog management unit and captures a speech waveform associated with utterances spoken by the user during the conversation.
The acoustic front end is coupled to the audio capture module and is configured to receive and digitize the speech waveform so as to provide a digitized speech waveform, and to extract, from the digitized speech waveform, at least one acoustic feature which is correlated with at least one user attribute. The at least one user attribute can include at least one of the user attributes discussed above with respect to the methods.
The processing module is coupled to the acoustic front end and analyzes the at least one acoustic feature to determine the at least one user attribute. The data warehouse is coupled to the processing module and stores the at least one user attribute in a form for subsequent data mining thereon.
The present invention still further provides a real-time-modifiable voice system for interaction with a user. The system includes a dialog management unit of the type discussed above, an audio capture module of the type discussed above, and an acoustic front end of the type discussed above. Further, the voice system includes a processing module of the type discussed above.
The processing module is configured so as to modify behavior of the voice system based on the at least one user attribute.
For a better understanding of the present invention, together with other and further advantages thereof, reference is made to the following description, taken in conjunction with the accompanying drawings, and the scope of the invention will be pointed out in the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a diagram of an apparatus for collecting data associated with a voice of a user, in accordance with the present invention;
FIG. 2 is a diagram of a real-time-modifiable voice system for interaction with a user, in accordance with the present invention;
FIG. 3 is a flowchart of a method for collecting, in a data warehouse, data associated with a voice of a voice system user, in accordance with the present invention;
FIG. 4 depicts certain details of the method shown in FIG. 3, which are also applicable to FIG. 5;
FIG. 5 is a flowchart of a method, in accordance with the present invention, for tailoring a voice system response to an acoustically-determined state of a voice system user; and
FIG. 6 depicts certain details of the method of FIG. 5.
DETAILED DESCRIPTION OF THE INVENTION
Reference should now be had to FIG. 1, which depicts an apparatus for collecting data associated with a voice of a user, in accordance with the present invention. The apparatus is designated generally as 100. The apparatus includes a dialog management unit 102 which conducts a conversation with the user 104. Apparatus 100 further includes an audio capture module 106 which is coupled to the dialog management unit 102 and which captures a speech waveform associated with utterances spoken by the user 104 during the conversation. As used herein, a conversation should be broadly understood to include any interaction, between a first human and either a second human, a machine, or a combination thereof, which includes at least some speech.
Apparatus 100 further includes an acoustic front end 108 which is coupled to the audio capture module 106 and which is configured to receive and digitize the speech waveform so as to provide a digitized speech waveform. Further, acoustic front end 108 is also configured to extract, from the digitized speech waveform, at least one acoustic feature which is correlated with at least one user attribute, i.e., of the user 104. The at least one user attribute can include at least one of the following: gender of the user, age of the user, accent of the user, native language of the user, dialect of the user, socioeconomic classification of the user, educational level of the user, and emotional state of the user. The dialog management unit 102 may employ acoustic features, such as MEL
cepstra, obtained from acoustic front end 108 and may therefore, if desired, have a direct coupling thereto.
Apparatus 100 further includes a processing module 110 which is coupled to the acoustic front end 108 and which analyzes the at least one acoustic feature to determine the at least one user attribute.
Yet further, apparatus 100 includes a data warehouse 112 which is coupled to the processing module 110 and which stores the at least one user attribute, together with at least one identifying indicia, in a form for subsequent data mining thereon. Identifying indicia will be discussed elsewhere herein.
The gender of the user can be determined by classifying the pitch of the user's voice, or by simply clustering the features. In the latter method, voice prints associated with a large set of speakers of a given gender are built and a speaker classification is then performed with the two sets of models.
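By way of illustration, the following minimal Python sketch shows the pitch-based approach; the autocorrelation pitch estimator and the 165 Hz decision threshold are illustrative assumptions, not values taken from the disclosure:

```python
import numpy as np

def estimate_pitch(frame, sr, fmin=50.0, fmax=400.0):
    """Crude autocorrelation pitch estimate (Hz) for one voiced frame."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)  # plausible lag range
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sr / lag

def classify_gender(frames, sr, threshold_hz=165.0):
    """Label a speaker by median pitch; the threshold is illustrative."""
    pitches = [estimate_pitch(f, sr) for f in frames]
    return "female" if np.median(pitches) > threshold_hz else "male"
```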
Age of the user can also be determined via classification of age groups, in a manner similar to gender. Although having limited reliability, broad classes of ages, such as children, teenagers, adults and senior citizens can be separated in this fashion.
Determination of accent from acoustic features is known in the art. For example, the paper "A
Comparison of Two Unsupervised Approaches to Accent Identification" by Lincoln et al., presented at the 1998 International Conference on Spoken Language Processing, Sydney, Australia [hereinafter ICSLP'98], sets forth useful techniques. Native language of the user can be determined in a manner essentially equivalent to accent classification. Meta information about the native language of the speaker can be added to define each accent/native language model.
That is, at the creation of the models for each native language, one employs a speaker or speakers who are tagged with that language as their native language. The paper "Language Identification Incorporating Lexical Information" by Matrouf et al., also presented at ICSLP'98, discusses various techniques for language identification.
The user's dialect can be determined from the accent and the usage of keywords or idioms which are specific to a given dialect. For example, in the French language, the choice of "nonante" for the numeral 90 instead of "quatre-vingt-dix" would identify the speaker as being of Belgian or Swiss extraction, and not French or Canadian. Further, the consequent choice of "quatre-vingt" instead of "octante" or "huitante" for the numeral 80 would identify the individual as Belgian and not Swiss.
In American English, the choice of "grocery sack" rather than "grocery bag"
might identify a person as being of Midwestern origin rather than Midatlantic origin. Another example of Midwestern versus Midatlantic American English would be the choice of "pop" for a soft drink in the Midwest and the choice of "soda" for the corresponding soft drink in the middle Atlantic region. In an international context, the use of "holiday" rather than "vacation" might identify someone as being of British rather than United States origin. The operations described in this paragraph can be carried out using a speech recognizer 126 which will be discussed below.
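As a concrete illustration of such keyword spotting on recognizer output, a toy Python lookup built only from the examples just given might look as follows (the marker table and function name are hypothetical):

```python
# Hypothetical table mapping dialect-marker words to candidate dialects,
# compiled from expert linguistic knowledge as described in the text.
DIALECT_MARKERS = {
    "nonante":  {"Belgian French", "Swiss French"},
    "huitante": {"Swiss French"},
    "octante":  {"Swiss French"},
    "pop":      {"Midwestern American English"},
    "soda":     {"Mid-Atlantic American English"},
    "holiday":  {"British English"},
}

def spot_dialect(transcript: str) -> set:
    """Collect dialect hypotheses from keyword evidence in a transcript."""
    hits = set()
    for word in transcript.lower().split():
        hits |= DIALECT_MARKERS.get(word.strip(".,!?"), set())
    return hits

# Example: spot_dialect("il en reste nonante") -> Belgian or Swiss French
```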
The socioeconomic classification of the user can include such factors as the racial background of the user, ethnic background of the user, and economic class of the user, for example, blue collar, white collar-middle class, or wealthy. Such determinations can be made via annotated accents and dialects at the moment of training, as well as by examining the choice of words of the user. While only moderately reliable, it is believed that these techniques will give sufficient insight into the background of the user so as to be useful for data mining.
The educational level of the user can be determined by the word choice and accent, in a manner similar to the socioeconomic classification; again, only partial reliability is expected, but sufficient for data mining purposes.
Determination of the emotional state of the user from acoustic features is well known in the art.
Emotional categories which can be recognized include hot anger, cold anger, panic, fear, anxiety, sadness, elation, despair, happiness, interest, boredom, shame, contempt, confusion, disgust and pride. Exemplary methods of determining emotional state from relevant acoustic features are set forth in the following papers: "Some Acoustic Characteristics of Emotion" by Pereira and Watson, "Towards an Automatic Classification of Emotions in Speech" by Amir and Ron, and "Simulated Emotions: An Acoustic Study of Voice and Perturbation Measures" by Whiteside, all of which were presented at ICSLP'98.
The audio capture module 106 can include, for example, at least one of an analog-to-digital converter board, an interactive voice response system, and a microphone. The dialog management unit 102 can include a telephone interactive voice response system, for example, the same one used to implement the audio capturing. Alternatively, the dialog management unit may simply be an acoustic interface to a human operator. Dialog management unit 102 can include natural language understanding (NLU), natural language generation (NLG), finite state grammar (FSG), and/or text-to-speech synthesis (TTS) for machine-prompting the user in lieu of, or in addition to, the human operator. The processing module 110 can be implemented in the processor portion of the IVR, or can be implemented in a separate general purpose computer with appropriate software. Still further, the processing module can be implemented using an application specific circuit such as an application specific integrated circuit (ASIC), or can be implemented in an application specific circuit employing discrete components, or a combination of discrete and integrated components.
Processing module 110 can include an emotional state classifier 114.
Classifier 114 can in turn include an emotional state classification module 116 and an emotional state prototype database 118.
Processing module 110 can further include a speaker clusterer and classifier 120. Element 120 can further include a speaker clustering and classification module 122 and a speaker class data base 124.
Processing module 110 can further include a speech recognizer 126 which can, in turn, itself include a speech recognition module 128 and a speech prototype, language model and grammar database 130. Speech recognizer 126 can be part of the dialog management unit 102 or, for example, a separate element within the implementation of processing module 110. Yet further, processing module 110 can include an accent identifier 132, which in turn includes an accent identification module 134 and an accent data base 136.
Processing module 110 can include any one of elements 114, 120, 126 and 132; all of those elements together; or any combination thereof.
Apparatus 100 can further include a post processor 138 which is coupled to the data warehouse 112 and which is configured to transcribe user utterances and to perform keyword spotting thereon.
Although shown as a separate item in FIG. 1, the post processor can be a part of the processing module 110 or of any of the sub-components thereof. For example, it can be implemented as part of the speech recognizer 126. Post processor 138 can be implemented as part of the processor of an IVR, as an application specific circuit, or on a general purpose computer with suitable software modules. Post processor 138 can employ speech recognizer 126. Post processor 138 can also
include a semantic module (not shown) to interpret meaning of phrases. The semantic module could be used by speech recognizer 126 to indicate that some decoding candidates in a list are meaningless and should be discarded/replaced with meaningful candidates.
The acoustic front end 108 can typically be an eight-dimension-plus-energy front end as known in the art. However, it should be understood that 13, 24, or any other number of dimensions could be used. MEL cepstra can be computed, for example, over 25 ms frames with a 10 ms overlap, along with the delta and delta delta parameters, that is, the first and second finite derivatives. Such acoustic features can be supplied to the speaker clusterer and classifier 120, speech recognizer 126 and accent identifier 132, as shown in FIG. 1.
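A minimal sketch of such a front end, assuming the open-source librosa library, a 16 kHz sampling rate and 13 cepstral dimensions (none of which are mandated by the text), could be:

```python
import numpy as np
import librosa

def front_end(path, n_mfcc=13):
    """MEL-cepstral front end: 25 ms frames, 10 ms shift, plus deltas."""
    y, sr = librosa.load(path, sr=16000)           # digitized waveform
    mfcc = librosa.feature.mfcc(
        y=y, sr=sr, n_mfcc=n_mfcc,
        n_fft=int(0.025 * sr),                     # 25 ms analysis window
        hop_length=int(0.010 * sr),                # 10 ms frame shift
    )
    delta = librosa.feature.delta(mfcc)            # first finite derivative
    delta2 = librosa.feature.delta(mfcc, order=2)  # second finite derivative
    return np.vstack([mfcc, delta, delta2])        # (3 * n_mfcc, n_frames)
```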
Other types of acoustic features can be extracted by the acoustic front end 108. These can be designated as emotional state features, such as running average pitch, running pitch variance, pitch jitter, running energy variance, speech rate, shimmer, fundamental frequency, and variation in fundamental frequency. Pitch jitter refers to the number of sign changes of the first derivative of pitch. Shimmer is energy jitter. These features can be supplied from the acoustic front end 108 to the emotional state classifier 114. The aforementioned acoustic features, including the MEL cepstra and the emotional state features, can be thought of as the raw, that is, unprocessed features.
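The emotional state features lend themselves to a similarly compact sketch. The code below uses plain rather than running statistics for brevity, and implements pitch jitter and shimmer as just defined, i.e., sign changes of the first finite derivative of the pitch and energy tracks, respectively:

```python
import numpy as np

def _sign_changes(x):
    """Number of sign changes of the first finite derivative of x."""
    return int(np.count_nonzero(np.diff(np.sign(np.diff(x)))))

def emotional_state_features(pitch, energy):
    """Prosodic features from per-frame pitch (Hz) and energy tracks."""
    return {
        "average_pitch":   float(np.mean(pitch)),
        "pitch_variance":  float(np.var(pitch)),
        "pitch_jitter":    _sign_changes(pitch),   # per the definition above
        "energy_variance": float(np.var(energy)),
        "shimmer":         _sign_changes(energy),  # energy jitter
    }
```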
User queries can be transcribed by an IVR or otherwise. Speech features can first be processed by a text-independent speaker classification system, for example, in speaker clusterer and classifier 120.
This permits classification of the speakers based on acoustic similarities of their voices.
Implementation and use of such a system is disclosed in U.S. Patent application S.N. 60/011,058, filed February 2, 1996; U.S. Patent application S.N. 08/787,031, filed January 28, 1997 (now U.S.
Patent 5,895,447 issued April 20, 1999); U.S. Patent application S.N.
08/788,471, filed January 28, 1997; and U.S. Patent application S.N. 08/787,029, filed January 28, 1997, all of which are co-assigned to International Business Machines Corporation. The classification of the speakers can be supervised or unsupervised. In the supervised case, the classes have been decided beforehand based on external information. Typically, such classification can separate between male and female, adult versus child, native speakers versus different classes of non-native speakers, and the like. The indices of this classification process constitute processed features. The results of this process can be supplied to the emotional state classifier 114 and can be used to normalize the emotional state features with respect to the average (mean) observed for a given class, during training, for a neutral emotional state. The normalized emotional state features are used by the emotional state classifier 114 which then outputs an estimate of the emotional state. This output is also considered to be part of the processed features. To summarize, the emotional state features can be normalized by the emotional state classifier 114 with respect to each class produced by the speaker clusterer and classifier 120. A feature can be normalized as follows. Let X0 be the normal frequency. Let Xi be the measured frequency. Then, the normalized feature will be given by Xi minus X0. This quantity can be positive or negative, and is not, in general, dimensionless.
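In code, this normalization is a simple per-feature offset against the neutral-state class mean learned in training; a minimal sketch (names assumed):

```python
def normalize_features(measured: dict, neutral_means: dict) -> dict:
    """Subtract the neutral-state class mean X0 from each measured Xi."""
    return {name: x_i - neutral_means[name] for name, x_i in measured.items()}

# Example: normalize_features({"average_pitch": 210.0},
#                             {"average_pitch": 190.0})
# -> {"average_pitch": 20.0}   # positive offset relative to neutral state
```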
The speech recognizer 126 can transcribe the queries from the user. It can be a speaker-independent or class-dependent large-vocabulary continuous speech recognition system, or it could be something as simple as a keyword spotter to detect insults (for example) and the like.
Such systems are well known in the art. The output can be full sentences, but finer granularity can also be attained; for example, time alignment of the recognized words. The time stamped transcriptions can also be considered as part of the processed features, and will be discussed further below with respect to methods in accordance with the present invention. Thus, conversation from every stage of a transaction can be transcribed and stored. As shown in FIG. 1, appropriate data is transferred from the speaker clusterer and classifier 120 to the emotional state classifier 114 and the speech recognizer 126. As noted, it is possible to perform accent, dialect and language recognition with the input speech from user 104. A continuous speech recognizer can be trained on speech with several speakers having the different accents which are to be recognized. Each of the training speakers is also associated with an accent vector, with each dimension representing the most likely mixture component associated with each state of each lefeme. The speakers can be clustered based on the distance between these accent vectors, and the clusters can be identified by, for example, the accent of the member speakers. The accent identification can be performed by extracting an accent vector from the user's speech and classifying it. As noted, dialect, socioeconomic classification, and the like can be estimated based on vocabulary and word series used by the user 104. Appropriate key words, sentences, or grammatical mistakes to detect can be compiled via expert linguistic knowledge. The accent, socioeconomic background, gender, age and the like are part of the processed features. As shown in FIG. 1, any of the processed features, indicated by the solid arrows, can be stored in the data warehouse 112. Further, raw features, indicated by the dotted lines can also be stored in the data warehouse 112.
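The accent-identification step described here reduces to classifying a new accent vector against clusters of labeled training speakers. A minimal sketch using scikit-learn follows; the data, labels and vector dimensionality are placeholders, not values from the disclosure:

```python
import numpy as np
from sklearn.neighbors import NearestCentroid

# Stand-in accent vectors: one per training speaker, each dimension
# encoding the most likely mixture component per lefeme state.
rng = np.random.default_rng(0)
train_vectors = rng.random((40, 64))
train_accents = ["accent_A"] * 20 + ["accent_B"] * 20  # annotated labels

clf = NearestCentroid().fit(train_vectors, train_accents)

def identify_accent(user_vector: np.ndarray) -> str:
    """Classify a user's accent vector against the labeled speaker clusters."""
    return str(clf.predict(user_vector.reshape(1, -1))[0])
```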
Any of the processed or raw features can be stored in the data warehouse 112 and then associated with the other data which has been collected, upon completion of the transaction. Classical data mining techniques can then be applied. Such techniques are known, for example, as set forth in the book Data Warehousing, Data Mining and OLAP, by Alex Berson and Stephen J.
Smith, published by McGraw Hill in 1997, and in Discovering Data Mining, by Cabena et al., published by Prentice Hall in 1998. For a given business objective, for example, target marketing, predictive models or classifiers are automatically obtained by applying appropriate mining recipes.
All data stored in the data warehouse 112 can be stored in a format to facilitate subsequent data mining thereon. Those of skill in the art are aware of appropriate formats for data which is to be mined, as set forth in the two cited reference books. Business objectives can include, for example, detection of users who are vulnerable to a proposal to buy a given product or service, detection of users who have problems with the automated system and should be transferred to an operator and detection of users who are angry at the service and should be transferred to a supervisory person. The user 104 can be a customer of a business which employs the apparatus 100, or can be a client of some other type of institution, such as a nonprofit institution, a government agency or the like.
Features can be extracted and decisions dynamically returned by the models.
This will be discussed further below.
Reference should now be had to FIG. 2 which depicts a real-time-modifiable voice system for interaction with a user, in accordance with the present invention, which is designated generally as 200. Elements in FIG. 2 which are similar to those in FIG. 1 have received the same reference numerals incremented by 100. System 200 can include a dialog management unit 202 similar to that discussed above. In particular, as suggested in FIG. 2, unit 202 can be a human operator or supervisor, an IVR, or a Voice User Interface (VUI). System 200 can also include an audio capture module 206 similar to that described above, and an acoustic front end 208, also similar to that described above. Just as with apparatus 100, unit 202 can be directly coupled to acoustic front end 208, if desired, to permit use of MEL cepstra or other acoustic features determined by front end 208.
Further, system 200 includes a processing module 210 similar to that described above, but having certain additional features which will now be discussed. Processing module 210 can include a dynamic classification module 240 which performs dynamic classification of the user 204.
Accordingly, processing module 210 is configured to modify behavior of the voice system 200 based on at least one user attribute which has been determined based on at least one acoustic feature extracted from the user's speech. System 200 can further include a business logic unit 242 which is coupled to the dialog management unit 202, the dynamic classification module 240, and optionally to the acoustic front end 208. The business logic unit can be implemented as a processing portion of the IVR or VUI, can be part of an appropriately programmed general purpose computer, or can be an application specific circuit. At present, it is believed preferable that the processing module 110, 210 (including module 240) be implemented as a general purpose computer and that the business logic 242 be implemented in a processor portion of an interactive voice response system.
Dynamic classification module 240 can be configured to provide feedback, which can be real-time feedback, to the business logic unit 242 and the dialog management unit 202, as suggested by the heavy line 244.
A data warehouse 212 and post processor 238 can be optionally provided as shown and can operate as discussed above with respect to the data collecting apparatus 100. It should be emphasized, however, that in the real-time-modifiable voice system 200 of the present invention, data warehousing is optional and if desired, the system can be limited to the real time feedback discussed with respect to elements 240, 242 and 202, and suggested by line 244.
Processing module 210 can modify behavior of the system 200, at least in part, by prompting a human operator thereof, as suggested by feedback line 244 connected with dialog management unit 202. For example, a human operator could be alerted when an angry emotional state of the user 204 is detected and could be prompted to utter soothing words to the user 204, or transfer the user to a higher level human supervisor. Further, the processing module 210 could modify business logic 242 of the system 200. This could be done, for example, when both the processing module 210 and business logic unit 242 were part of an IVR system. Examples of modification of business logic will be discussed further below, but could include tailoring a marketing offer to the user 204 based on attributes of the user detected by the system 200.
As noted, processing module 210, and the sub-elements thereof, perform in essentially the same fashion as processing module 110 in FIG. 1. Note, however, the option for feedback of the output of speech recognition module 228, to business logic 242, as suggested by the dotted lines and arrows in FIG. 2.
It should be noted that throughout this application, including the specification and drawings thereof, the term "mood" is considered to be an equivalent of the term "emotional state."
Attention should now be given to FIG. 3 which depicts a flowchart, 300, of a method for collecting, in a data warehouse, data associated with the voice of a voice system user.
After starting, at block 302, the method includes the steps of conducting a conversation with a user of the voice system, per block 304, via at least one of a human operator and a voice-enabled machine system. The method further includes capturing a speech waveform, per block 306, which is associated with utterances spoken by the voice system user during the conversation. Yet further, the method includes the step of digitizing the speech waveform, per block 308, so as to provide a digitized speech waveform.
Still further, per block 310, the method includes the step of extracting, from the digitized speech waveform, at least one acoustic feature which is correlated with at least one user attribute. The at least one acoustic feature can be any of the features discussed above, for example, MEL cepstra or any one of the emotional state features, for example. The user attributes can include any of the user attributes discussed above, that is, gender, age, accent and the remainder of the aforementioned attributes. Finally, the method can include the step, per block 316, of storing attribute data corresponding to the acoustic feature which is correlated with the at least one user attribute, together with at least one identifying indicia, in the data warehouse in a form to facilitate subsequent data mining thereon. Any type of identifying indicia which is desired can be used;
this term is to be understood broadly. For example, the identifying indicia can be a time stamp which correlates the various features to a conversation conducted at a given time, thereby identifying the given transaction; can be an identification number or name, or the like, which identifies the user; or can be any other item of information associated with the attribute data which is useful in the data mining process.
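As an illustration of warehouse-friendly storage, the following sketch writes one row per attribute with two identifying indicia, a transaction identifier and a time stamp; the schema and names are hypothetical:

```python
import sqlite3
import time

conn = sqlite3.connect("warehouse.db")
conn.execute("""CREATE TABLE IF NOT EXISTS user_attributes (
    call_id   TEXT,  -- identifying indicia: transaction identifier
    ts        REAL,  -- identifying indicia: time stamp
    attribute TEXT,  -- e.g. 'emotional_state', 'gender', 'accent'
    value     TEXT   -- processed value, or a pointer to raw features
)""")

def store_attribute(call_id: str, attribute: str, value: str) -> None:
    """Store one attribute, with indicia, in a flat mining-friendly table."""
    conn.execute("INSERT INTO user_attributes VALUES (?, ?, ?, ?)",
                 (call_id, time.time(), attribute, value))
    conn.commit()
```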
As indicated at the decision block 320, the aforementioned steps in blocks 304, 306, 308, 310, and 316 can be repeated for a plurality of additional conversations to provide a collection of stored data including the attribute data and identifying indicia. This can be repeated until there is sufficient data for data mining. Then, as indicated at block 322, the collection of stored data can be mined to provide information which may be desired, for example, information to be used in modifying the underlying business logic of the voice system.
As noted, the storing step, per block 316, can comprise storing wherein the at least one identifying indicia is a time stamp. The more data that is collected, the better the models that can be built.
Data collection can be annotated, possibly by using an existing set of classifiers already trained to identify each item, or purely via annotations from transcribers who estimate the desired items. A
combination of these two techniques can also be employed. It is preferred that the plurality of additional conversations discussed above be conducted with a plurality of different users, such that there will be data from a large set of speakers.
The extracting step, per block 310, can include extracting at least one of fundamental frequency, variation in fundamental frequency, running average pitch, running pitch variance, pitch jitter, running energy variance, speech rate and shimmer as at least one emotional state feature which is correlated with the emotional state of the user.
Per block 312, the extracted features can be normalized; this is believed to be particularly valuable when the features are those indicative of emotional state. This has been discussed previously with respect to the apparatus of the present invention.
The method 300 can further include the additional step, per block 314, of processing the at least one acoustic feature to determine the at least one user attribute. In this case, processed features are obtained, and the attribute data can be a value of the attribute itself, for example, a value of the emotional state. This can be distinguished from the method when only raw data is stored, in which case the attribute data can simply be the raw features, i.e., MEL cepstra or emotional state features discussed above. Thus, to summarize, either raw acoustic features (e.g., waveform, MEL cepstra, emotional state features), processed acoustic features (e.g., value of emotional state (happy, sad, confused), transcription of conversation) or both raw and processed acoustic features may be stored in block 316.
Referring to block 318, the processing module, used in performing the processing step per block 314, can be automatically refined each time an additional attribute is stored in the data warehouse. That is, the clustering, classification, and recognition functions discussed above with respect to the apparatus can be improved with each new piece of data.
Reference should now be had to FIG. 4 which depicts certain optional sub-steps which it is highly preferable to perform in connection with the method illustrated in FIG. 3. In particular, block 310 of FIG. 3 can, if desired, include extracting at least MEL cepstra, as shown in block 310' in FIG. 4.
In this case, the method can further comprise the additional steps of recognizing speech of the user based on the MEL cepstra, per block 314A, transcribing the speech, per block 314B, and examining the speech per block 314C. The speech can be examined for at least one of word choice and vocabulary to determine at least one of educational level of the user, socioeconomic classification of the user, and dialect of the user. Other user attributes related to word choice and vocabulary can also be determined as desired. The steps 314A, 314B, and 314C can, in another sense, be thought of as sub-steps of the processing block 314 in FIG. 3.
Referring back to FIG. 3, the end of the process can be represented per block 324.
Reference should now be had to FIG. 5, which depicts a flowchart 400 representative of a method, in accordance with the present invention, of tailoring a voice system response to an acoustically determined state of a voice system user. After starting at block 402, the method includes the step of conducting a conversation with the voice system user, via the voice system, per block 404. The method further includes the step of capturing a speech waveform associated with utterances spoken by the voice system user during the conversation, per block 406. Still further, the method includes the step of digitizing the speech waveform, per block 408, to provide a digitized speech waveform.
Yet further, per block 410, the method includes the step of extracting, from the digitized speech waveform, at least one acoustic feature which is correlated with at least one user attribute. The at least one user attribute can include any of the user attributes discussed above. It will be appreciated that blocks 402-410 are similar to blocks 302-310 in FIG. 3.
Finally, the method can include, per block 415, modifying behavior of the voice system based on the at least one user attribute. The modification of the behavior of the voice system can include at least one of real-time changing of the business logic of the voice system, and real-time modifying of the voice system response, as compared to an expected response of the voice system without the modification. Reference should be had to the discussion of the apparatus above. For example, a real-time modification of the voice system response could be transferring a perturbed user to a human operator.
The extracting step per block 410 can include extracting of any of the aforementioned emotional state features, or of any of the other features previously discussed. Per block 412, the method can optionally include the additional step of normalizing the acoustic feature, particularly in the case when the acoustic feature is an emotional state feature. The method can further optionally include the additional step of storing attribute data corresponding to the acoustic feature which is correlated with the at least one user attribute, together with at least one identifying indicia, in a data warehouse, in accordance with block 416. The storage can be in a form to facilitate subsequent data mining thereon, and can include one of a raw and a processed condition. This step can be essentially similar to those discussed above in the method represented by flowchart 300. It will be appreciated that, per block 414, the feature could be processed with a processing module to determine the desired attribute. In this case, the attribute data could be the attribute itself;
when no processing takes place, the attribute data could be the raw acoustic feature. Although the method depicted in FIG. 5 can be confined to modification of behavior of the voice system, the refining step per block 418, repetition controlled by decision block 420, and data mining step 422 can all be carried out if desired (e.g., just as for the method depicted in FIG. 3). Block 424 signifies the end of the method steps.
Just as in the method represented by flowchart 300, the method represented by flowchart 400 can determine certain user attributes based on transcription of the user's speech.
Accordingly, in the extracting step, per block 410, the extraction can include at least MEL cepstra.
With reference now again to FIG. 4, this is accomplished in block 410'. Further steps can include recognizing speech of the user based on the MEL cepstra, per block 414A; transcribing the speech, per block 414B; and examining the speech, per block 414C, for at least one of word choice and vocabulary so as to determine at least one of educational level of the user, socioeconomic classification of the user, and dialect of the user. As before, other user attributes related to word choice and vocabulary can be determined.
Reference should now be had to FIG. 6 which depicts certain details associated with certain aspects of the method of flowchart 400. In particular, in some embodiments of the method according to flowchart 400, the processing step 414 can include examining an emotional state feature to determine an emotional state of the user, per block 414D in FIG. 6. Further, the modification of behavior block 415 can include taking action in response to the emotional state previously determined, per block 415A in FIG. 6. Thus, the emotional state feature can be examined to determine whether the user is in a jovial (i.e., happy) emotional state or whether he or she is in, for example, at least one of a disgusted, contemptuous, fearful and angry emotional state. When the user is found to be in a jovial emotional state, he or she can be offered at least one of a product and a service, as the action taken in block 415A. Alternatively, when the user is found to be in a jovial emotional state, a marketing study can be performed on the user as the action taken in block 415A.
Still with reference to FIG. 6, in cases where the emotional state feature is used to determine emotional state, a feature other than an emotional state feature can be examined to determine an attribute other than emotional state, per block 426, and then the action taken in block 415A can be tailored in response to the attribute other than emotional state, per block 428. For example, when the jovial user is offered one of a product and a service, the product or service which is offered can be tailored based on the at least one user attribute other than emotional state. Alternatively, when the jovial user is made the subject of a marketing study, the marketing study can be tailored in response to the at least one user attribute other than emotional state. For example, suppose a jovial user is to be offered one of a product and a service. Their language pattern could be examined to determine that they were from a rural area in the southern United States where bass fishing was popular and, if desired, pitch could additionally be examined to determine that they were of the male gender. Products such as bass fishing equipment and videos could then be offered to the subject.
Or, suppose, that the jovial subject on which a marketing study is to be done is determined to be a middle aged woman from a wealthy urban area who is highly educated. The marketing study could be tailored to quiz her about her buying habits for expensive cosmetics, stylish clothing, or trendy vacation resorts.
As noted, the emotional state feature could be examined to determine if the user is in one of a disgusted, contemptuous, fearful and angry emotional state. If the method were being conducted using an IVR system, and such an emotional state were detected, then block 415A could constitute switching the user from the IVR to a human operator in response to the user's detected emotional state. Alternatively, if a similar emotional state were detected, in a case where a hybrid interactive voice response system were employed, the action taken in block 415A could be switching the user from a low-level human operator to a higher-level human supervisor in response to the user's emotional state.
Yet further, the emotional state feature could be examined to determine whether the user was in a confused emotional state. This can be done using techniques known in the art, as set forth, for example, in the ICSLP'98 papers discussed above. Confusion may be evidenced, e.g., by delays in answering a question, stuttering, repetitions, false starts and the like.
Thus, speech recognition and transcription are valuable. When a confused emotional state is detected, the action taken in block 415A could then be the switching of the user from a substantially automatic IVR system to a human operator in response to the confused emotional state.
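The emotional-state-driven actions described in the last several paragraphs amount to a dispatch from detected state to voice-system behavior; a minimal sketch (the state names and action strings are illustrative, not part of the disclosure):

```python
def route_call(emotional_state: str, default_action: str = "continue_ivr") -> str:
    """Map a detected emotional state to a real-time voice-system action."""
    actions = {
        "jovial":    "offer_product_or_service",   # or run a marketing study
        "confused":  "switch_to_human_operator",
        "fearful":   "intercept_by_human_operator",
        "angry":     "escalate_to_supervisor",
        "disgusted": "escalate_to_supervisor",
    }
    return actions.get(emotional_state, default_action)
```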
The present invention can also include a program storage device readable by machine, tangibly embodying a program of instructions executable by the machine to perform the method steps of any of the methods disclosed herein, or any subset of the steps of those methods.
For example, where certain subsets of the method steps were conveniently performed by a general purpose computer, or a processor portion of an IVR system, suitable program instructions could be written on a diskette, CD-ROM or the like. In the method shown in flowchart 300, such method steps could include reading digital data corresponding to a speech waveform associated with utterances spoken by the voice system user during a conversation between the voice system user and at least one of a human operator and a voice-enabled machine system. Program instructions for additional steps could include instructions to accomplish the tasks depicted in blocks 310 and 316, or any of the other blocks, as desired.
Similarly, with reference to the method depicted in flowchart 400, a first step to be performed via program instructions could include reading digital data corresponding to a speech waveform associated with utterances spoken by the voice system user during a conversation between the voice system user and at least one of a human operator and a voice-enabled machine system. Additional method steps to be incorporated in the program instructions could be, for example, those in block 410 and block 415, as discussed above, or indeed, any of the other method steps discussed herein.
It should be understood that features can be extracted and decisions dynamically returned by the models in the present invention. In addition to those examples already set forth, when a user, such as a customer, sounds fearful, a human operator can intercept the call for a variety of reasons, for example, to make sure that the transaction is not coerced. Furthermore, anger can be detected in a user (or, for that matter, an operator) and, in addition to modifying responses of an automatic or hybrid IVR system, could be used for quality control, e.g., as a means to evaluate and train customer service agents.
The present invention can be extended to other than acoustic information. For example, video information can be included, whether alone or accompanying audio data.
Accordingly, method steps calling for conducting a conversation could instead involve conducting a visual transaction. Video information can help to identify or classify user attributes. Such data can be collected naturally through video-telephones, cameras at kiosks, cameras on computers, and the like. Such attributes and emotional states as smiling, laughing, crying and the like can be identified. Further, voice segments corresponding to certain user attributes or emotional states, which could be visually determined, can be labeled. This would permit creation of a training data base which would be useful for creating automatic techniques for identification of user attributes via acoustic data only.
Accordingly, data mining could be performed on visually-determined user attributes only, on acoustically determined user attributes only, or on both.
Determination of user attributes from appearance can be done based on common human experience, i.e., red face means angry or embarrassed, smile means happiness or jovial mood, tears mean sadness. Furthermore, any appropriate biometric data can be taken in conjunction with the video and acoustic data. Yet further, data can be taken on more than one individual at one time. For example, parents and children could be simultaneously monitored or a married couple searching for a house or car could also be simultaneously monitored. One might detect children who were happy with a junk food menu item, while their parents were simultaneously unhappy with that choice. A husband might be angry, and his wife happy, at her choice of an expensive jewelry purchase. Alternatively, a husband might be happy and his wife unhappy at his choice of purchasing an expensive set of golf clubs.
As noted, time stamping can be employed as an indicia to be stored together with user attribute data.
This can permit studies of how people respond at different times during the day, or can watch them evolve at different times during their life, for example, as children grow into teenagers and then adults, or as the tastes of adults change as they grow older. Similarities in relatives can also be tracked and plotted. Yet further, one of the user attributes which can be tracked is fatigue. Such a system could be installed, for example, in an automobile, train, aircraft, or long distance truck to monitor operator fatigue and to prompt the operator to pull over and rest, or, for example, to play loud music to keep the operator awake. Reference is made to co-assigned U.S. Patent Application 09/078,807 of Zadrozny and Kanevsky, entitled "Sleep Prevention Dialog Based Car System," filed May 14, 1998.
It should be noted that the voice systems discussed herein can include telephone systems, kiosks, speaking to a computer and the like. The term "acoustic feature" is to be broadly understood and, as discussed, can include either raw or processed features, or both. For example, when the acoustic feature is MEL cepstra certain processed features could include key words, sentence parts, or the like. Some key words could be, for example, unacceptable profane words, which could be eliminated, result in summoning a manager, or result in disciplinary action against an employee. It should also be emphasized that in the apparatus and method for performing real time modification of a voice system, storage of an attribute, with an indicia, in the warehouse is optional and need not be performed.
When training the models, human operators can annotate data when making educated guesses about various user attributes. Alternatively, annotation can be done automatically using an existing set of classifiers which are already trained. A combination of the two techniques can also be employed.
The indicia which are stored can include, in addition to a time stamp and the other items discussed herein, a transaction event or results, or any other useful information. The method depicted in flowchart 400 could also be used in a live conversation with a human operator with manual prompts to change the business logic used by the operator, or to summon a supervisor automatically when anger or other undesirable occurrences are noted.
While there have been described what are presently believed to be the preferred embodiments of the invention, those skilled in the art will realize that various changes and modifications may be made to the invention without departing from the spirit of the invention, and it is intended to claim all such changes and modifications as fall within the scope of the invention.
BACKGROUND OF THE INVENTION
Field of the Invention
The present invention relates to voice-oriented systems, and more particularly relates to an acoustically oriented method and apparatus to facilitate data mining and an acoustically oriented method and apparatus to tailor response of a voice system to an acoustically determined state of a voice system user.
Brief Description of the Prior Art
Data mining is an interdisciplinary field which has recently increased in popularity. It refers to the use of methods which extract information from data in an unsupervised manner, or with very little supervision. "Unsupervised" refers to techniques wherein there is no advance labeling; classes are allowed to develop on their own. Sounds are clustered and one sees which classes develop. Data mining is used in market, risk and fraud management.
In the data mining field, it is generally agreed that more data is better.
Accordingly, companies engaged in data mining frequently compile or acquire customer data bases.
These data bases may be based on mail-order history, past customer history, credit history and the like. It is anticipated that the customer's electronic business and internet behavior will soon also provide a basis for customer data bases. The nature of the stored information may result from the manual or automatic encoding of either a transaction or an event. An example of a transaction might be that a given person bought a given product at a given price under certain conditions, or that a given person responded to a certain mailing. An example of an event could include a person having a car accident on a certain date, or a given family moving in the last month.
The data on which data mining is performed is traditionally stored in a data warehouse. Once business objectives have been determined, the data warehouse is examined to select relevant features, evaluate the quality of the data, and transform it into analytical models suited for the intended analysis. Techniques such as predictive modeling, data base segmentation, link analysis and deviation detection can then be applied so as to output targets, forecasts or detections.
Following validation, the resulting models can be deployed.
Today, it is common for a variety of transactions to be performed over the telephone via a human operator or an interactive voice response (IVR) system. It is known that voice, which is the mode of communication in such transactions, carries information about a variety of user attributes, such as gender, age, native language, accent, dialect, socioeconomic condition, level of education and emotional state. One or more of these parameters may be valuable to individuals engaged in data mining. At present, the treasure trove of data contained in these transactions is either completely lost to data miners, or else would have to be manually indexed in order to be effectively employed.
There is, therefore, a need in the prior art for a method for collecting, in a data warehouse, data associated with the voice of a voice system user which can efficiently and automatically make use of the data available in transactions using voice systems, such as telephones, kiosks, and the like.
It would be desirable for the method to also be implemented in real-time, with or without data warehouse storage, to permit "on the fly" modification of voice systems, such as interactive voice response systems, and the like.
SUMMARY OF THE INVENTION
The present invention, which addresses the needs identified in the prior art, provides a method for collecting, in a data warehouse, data associated with the voice of a voice system user. The method comprises the steps of conducting a conversation with the voice system user, capturing a speech waveform, digitizing the speech waveform, extracting at least one acoustic feature from the digitized speech waveform, and then storing attribute data corresponding to the acoustic feature in the data warehouse. The conversation can be conducted with the voice system user via at least one of a human operator and a voice-enabled machine system. The speech waveform to be captured is that associated with utterances spoken by the voice system user during the conversation. The digitizing of the speech waveform provides a digitized speech waveform. The at least one acoustic feature is extracted from the digitized waveform and correlates with at least one user attribute, such as gender, age, accent, native language, dialect, socioeconomic classification, educational level and emotional state of the user. The attribute data which is stored in the data warehouse corresponds to the acoustic feature which correlates with the at least one user attribute, and is stored together with at least one identifying indicia. The data is stored in the data warehouse in a form to facilitate subsequent data mining thereon.
The present invention also provides a method of tailoring a voice system response to an acoustically-determined state of a voice system user. The method includes the step of conducting a conversation with the voice system user via the voice system. The method further includes the steps of capturing a speech waveform and digitizing the speech waveform, as discussed previously.
Yet further, the method includes the step of extracting an acoustic feature from the digitized speech waveform, also as set forth above. Finally, the method includes the step of modifying behavior of the voice system based on the at least one user attribute with which the at least one acoustic feature is correlated.
The present invention further includes a program storage device readable by machine, tangibly embodying a program of instructions executable by the machine to perform either of the methods just discussed.
The present invention further provides an apparatus for collecting data associated with the voice of a user. The apparatus comprises a dialog management unit, an audio capture module, an acoustic front end, a processing module, and a data warehouse. The dialog management unit conducts a conversation with the user. The audio capture module is coupled to the dialog management unit and captures a speech waveform associated with utterances spoken by the user during the conversation.
The acoustic front end is coupled to the audio capture module and is configured to receive and digitize the speech waveform so as to provide a digitized speech waveform, and to extract, from the digitized speech waveform, at least one acoustic feature which is correlated with at least one user attribute. The at least one user attribute can include at least one of the user attributes discussed above with respect to the methods.
The processing module is coupled to the acoustic front end and analyzes the at least one acoustic feature to determine the at least one user attribute. The data warehouse is coupled to the processing module and stores the at least one user attribute in a form for subsequent data mining thereon.
The present invention still further provides a real-time-modifiable voice system for interaction with a user. The system includes a dialog management unit of the type discussed above, an audio capture module of the type discussed above, and an acoustic front end of the type discussed above. Further, the voice system includes a processing module of the type discussed above.
The processing module is configured so as to modify behavior of the voice system based on the at least one user attribute.
For a better understanding of the present invention, together with other and further advantages thereof, reference is made to the following description, taken in conjunction with the accompanying drawings, and the scope of the invention will be pointed out in the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a diagram of an apparatus for collecting data associated with a voice of a user, in accordance with the present invention;
FIG. 2 is a diagram of a real-time-modifiable voice system for interaction with a user, in accordance with the present invention;
FIG. 3 is a flowchart of a method for collecting, in a data warehouse, data associated with a voice of a voice system user, in accordance with the present invention;
FIG. 4 depicts certain details of the method shown in FIG. 3, which are also applicable to FIG. 5;
FIG. 5 is a flowchart of a method, in accordance with the present invention, for tailoring a voice system response to an acoustically-determined state of a voice system user; and
FIG. 6 depicts certain details of the method of FIG. 5.
DETAILED DESCRIPTION OF THE INVENTION
Reference should now be had to FIG. 1 which depicts an apparatus for collecting data associated with a voice of a user, in accordance with the present invention. The apparatus is designated generally as 100. The apparatus includes a dialog management unit 102 which conducts a conversation with the user 104. Apparatus 100 further includes an audio capture module 106 which is coupled to the dialog management unit 102 and which captures a speech waveform associated with utterances spoken by the user 104 during the conversation. As used herein, a conversation should be broadly understood to include any interaction, between a first human and either a second human, a machine, or a combination thereof, which includes at least some speech.
Apparatus 100 further includes an acoustic front end 108 which is coupled to the audio capture module 106 and which is configured to receive and digitize the speech waveform so as to provide a digitized speech waveform. Further, acoustic front end 108 is also configured to extract, from the digitized speech waveform, at least one acoustic feature which is correlated with at least one user attribute, i.e., of the user 104. The at least one user attribute can include at least one of the following: gender of the user, age of the user, accent of the user, native language of the user, dialect of the user, socioeconomic classification of the user, educational level of the user, and emotional state of the user. The dialog management unit 102 may employ acoustic features, such as MEL
cepstra, obtained from acoustic front end 108 and may therefore, if desired, have a direct coupling thereto.
Apparatus 100 further includes a processing module 110 which is coupled to the acoustic front end 108 and which analyzes the at least one acoustic feature to determine the at least one user attribute.
Yet further, apparatus 100 includes a data warehouse 112 which is coupled to the processing module 110 and which stores the at least one user attribute, together with at least one identifying indicia, in a form for subsequent data mining thereon. Identifying indicia will be discussed elsewhere herein.
The gender of the user can be determined by classifying the pitch of the user's voice, or by simply clustering the features. In the latter method, voice prints associated with a large set of speakers of a given gender are built and a speaker classification is then performed with the two sets of models.
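By way of illustration only, the following Python sketch approximates the first of these approaches; the autocorrelation pitch estimator, the 165 Hz decision threshold, and the synthetic test frame are assumptions of the example, not values taken from the invention.

```python
import numpy as np

def estimate_pitch(frame, sample_rate, fmin=50.0, fmax=400.0):
    """Estimate fundamental frequency of one frame via autocorrelation."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(sample_rate / fmax)                 # smallest lag considered
    hi = min(int(sample_rate / fmin), len(corr) - 1)
    lag = lo + np.argmax(corr[lo:hi])            # lag of strongest periodicity
    return sample_rate / lag

def classify_gender(pitch_hz, threshold_hz=165.0):
    """Crude split: adult male voices cluster roughly at 85-180 Hz,
    adult female voices roughly at 165-255 Hz."""
    return "female" if pitch_hz >= threshold_hz else "male"

# Synthetic demo: a 120 Hz tone stands in for one frame of a male speaker.
sr = 8000
t = np.arange(0, 0.025, 1.0 / sr)
frame = np.sin(2 * np.pi * 120 * t)
print(classify_gender(estimate_pitch(frame, sr)))   # -> "male"
```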
Age of the user can also be determined via classification of age groups, in a manner similar to gender. Although having limited reliability, broad classes of ages, such as children, teenagers, adults and senior citizens can be separated in this fashion.
Determination of accent from acoustic features is known in the art. For example, the paper "A
Comparison of Two Unsupervised Approaches to Accent Identification" by Lincoln et al., presented at the 1998 International Conference on Spoken Language Processing, Sydney, Australia [hereinafter ICSLP'98], sets forth useful techniques. Native language of the user can be determined in a manner essentially equivalent to accent classification. Meta information about the native language of the speaker can be added to define each accent/native language model.
That is, at the creation of the models for each native language, one employs a speaker or speakers who are tagged with that language as their native language. The paper "Language Identification Incorporating Lexical Information" by Matrouf et al., also presented at ICSLP'98, discusses various techniques for language identification.
The user's dialect can be determined from the accent and the usage of keywords or idioms which are specific to a given dialect. For example, in the French language, the choice of "nonante" for the numeral 90 instead of "quatre-vingt-dix" would identify the speaker as being of Belgian or Swiss extraction, and not French or Canadian. Further, the consequent choice of "quatre-vingt" instead of "octante" or "huitante" for the numeral 80 would identify the individual as Belgian and not Swiss.
In American English, the choice of "grocery sack" rather than "grocery bag"
might identify a person as being of Midwestern origin rather than Midatlantic origin. Another example of Midwestern versus Midatlantic American English would be the choice of "pop" for a soft drink in the Midwest and the choice of "soda" for the corresponding soft drink in the middle Atlantic region. In an international context, the use of "holiday" rather than "vacation" might identify someone as being of British rather than United States origin. The operations described in this paragraph can be carried out using a speech recognizer 126 which will be discussed below.
The socioeconomic classification of the user can include such factors as the racial background of the user, ethnic background of the user, and economic class of the user, for example, blue collar, white collar-middle class, or wealthy. Such determinations can be made via annotated accents and dialects at the moment of training, as well as by examining the choice of words of the user. While only moderately reliable, it is believed that these techniques will give sufficient insight into the background of the user so as to be useful for data mining.
The educational level of the user can be determined by the word choice and accent, in a manner similar to the socioeconomic classification; again, only partial reliability is expected, but sufficient for data mining purposes.
Determination of the emotional state of the user from acoustic features is well known in the art.
Emotional categories which can be recognized include hot anger, cold anger, panic, fear, anxiety, sadness, elation, despair, happiness, interest, boredom, shame, contempt, confusion, disgust and pride. Exemplary methods of determining emotional state from relevant acoustic features are set forth in the following papers: "Some Acoustic Characteristics of Emotion" by Pereira and Watson, "Towards an Automatic Classification of Emotions in Speech" by Amir and Ron, and "Simulated Emotions: An Acoustic Study of Voice and Perturbation Measures" by Whiteside, all of which were presented at ICSLP'98.
The audio capture module 106 can include, for example, at least one of an analog-to-digital converter board, an interactive voice response system, and a microphone. The dialog management unit 102 can include a telephone interactive voice response system, for example, the same one used to implement the audio capturing. Alternatively, the dialog management unit may simply be an acoustic interface to a human operator. Dialog management unit 102 can include natural language understanding (NLU), natural language generation (NLG), finite state grammar (FSG), and/or text-to-speech synthesis (TTS) for machine-prompting the user in lieu of, or in addition to, the human operator. The processing module 110 can be implemented in the processor portion of the IVR, or can be implemented in a separate general purpose computer with appropriate software. Still further, the processing module can be implemented using an application specific circuit such as an application specific integrated circuit (ASIC) or can be implemented in an application specific circuit employing discrete components, or a combination of discrete and integrated components.
Processing module 110 can include an emotional state classifier 114.
Classifier 114 can in turn include an emotional state classification module 116 and an emotional state prototype database 118.
Processing module 110 can further include a speaker clusterer and classifier 120. Element 120 can further include a speaker clustering and classification module 122 and a speaker class data base 124.
Processing module 110 can further include a speech recognizer 126 which can, in turn, itself include a speech recognition module 128 and a speech prototype, language model and grammar database 130. Speech recognizer 126 can be part of the dialog management unit 102 or, for example, a separate element within the implementation of processing module 110. Yet further, processing module 110 can include an accent identifier 132, which in turn includes an accent identification module 134 and an accent data base 136.
Processing module 110 can include any one of elements 114, 120, 126 and 132; all of those elements together; or any combination thereof.
Apparatus 100 can further include a post processor 138 which is coupled to the data warehouse 112 and which is configured to transcribe user utterances and to perform keyword spotting thereon.
Although shown as a separate item in FIG. 1, the post processor can be a part of the processing module 110 or of any of the sub-components thereof. For example, it can be implemented as part of the speech recognizer 126. Post processor 138 can be implemented as part of the processor of an IVR, as an application specific circuit, or on a general purpose computer with suitable software modules. Post processor 138 can employ speech recognizer 126. Post processor 138 can also
include a semantic module (not shown) to interpret meaning of phrases. The semantic module could be used by speech recognizer 126 to indicate that some decoding candidates in a list are meaningless and should be discarded/replaced with meaningful candidates.
The acoustic front end 108 can typically be an eight dimensions plus energy front end as known in the art. However, it should be understood that 13, 24, or any other number of dimensions could be used. MEL cepstra can be computed, for example, over 25 ms frames with a 10 ms overlap, along with the delta and delta delta parameters, that is, the first and second finite derivatives. Such acoustic features can be supplied to the speaker clusterer and classifier 120, speech recognizer 126 and accent identifier 132, as shown in FIG. 1.
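A minimal sketch of such a front-end computation follows, assuming the widely used librosa library and reading the "10 ms overlap" as a 10 ms frame shift; the 13-coefficient setting, 16 kHz sample rate, and synthetic input are illustrative choices, not requirements of the invention.

```python
import numpy as np
import librosa  # assumed available; any MFCC implementation would do

def mel_cepstra_features(y: np.ndarray, sr: int = 16000, n_mfcc: int = 13):
    """MEL cepstra over ~25 ms frames with a 10 ms hop, stacked with the
    delta and delta-delta (first and second finite-derivative) parameters."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=int(0.025 * sr),
                                hop_length=int(0.010 * sr))
    delta = librosa.feature.delta(mfcc)
    delta2 = librosa.feature.delta(mfcc, order=2)
    return np.vstack([mfcc, delta, delta2])     # shape (3 * n_mfcc, n_frames)

y = np.random.randn(16000).astype(np.float32)   # 1 s of noise as a stand-in
print(mel_cepstra_features(y).shape)            # -> (39, 101)
```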
Other types of acoustic features can be extracted by the acoustic front end 108. These can be designated as emotional state features, such as running average pitch, running pitch variance, pitch jitter, running energy variance, speech rate, shimmer, fundamental frequency, and variation in fundamental frequency. Pitch jitter refers to the number of sign changes of the first derivative of pitch. Shimmer is energy jitter. These features can be supplied from the acoustic front end 108 to the emotional state classifier 114. The aforementioned acoustic features, including the MEL cepstra and the emotional state features, can be thought of as the raw, that is, unprocessed features.
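Under the definitions just given (pitch jitter as sign changes of the first derivative of pitch, shimmer as energy jitter), a sketch of these emotional state features might look as follows; the frame-level pitch and energy tracks are assumed to come from a front end such as the one above, and the demo values are fabricated.

```python
import numpy as np

def emotional_state_features(pitch_track, energy_track):
    """Compute several of the per-utterance emotional-state features
    named in the text from frame-level pitch and energy tracks."""
    d_pitch = np.diff(pitch_track)
    d_energy = np.diff(energy_track)
    return {
        "running_avg_pitch": float(np.mean(pitch_track)),
        "running_pitch_var": float(np.var(pitch_track)),
        # Pitch jitter: number of sign changes of the pitch derivative.
        "pitch_jitter": int(np.sum(np.diff(np.sign(d_pitch)) != 0)),
        "running_energy_var": float(np.var(energy_track)),
        # Shimmer as energy jitter: sign changes of the energy derivative.
        "shimmer": int(np.sum(np.diff(np.sign(d_energy)) != 0)),
    }

pitch = np.array([118., 121., 119., 124., 122., 127., 125.])
energy = np.array([0.8, 1.1, 0.9, 1.3, 1.0, 1.4, 1.2])
print(emotional_state_features(pitch, energy))
```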
User queries can be transcribed by an IVR or otherwise. Speech features can first be processed by a text-independent speaker classification system, for example, in speaker clusterer and classifier 120.
This permits classification of the speakers based on acoustic similarities of their voices.
Implementation and use of such a system is disclosed in U.S. Patent application S.N. 60/011,058, filed February 2, 1996; U.S. Patent application S.N. 08/787,031, filed January 28, 1997 (now U.S.
Patent 5,895,447 issued April 20, 1999); U.S. Patent application S.N.
08/788,471, filed January 28, 1997; and U.S. Patent application S.N. 08/787,029, filed January 28, 1997, all of which are co-assigned to International Business Machines Corporation. The classification of the speakers can be supervised or unsupervised. In the supervised case, the classes have been decided beforehand based on external information. Typically, such classification can separate between male and female, adult versus child, native speakers versus different classes of non-native speakers, and the like. The indices of this classification process constitute processed features. The results of this process can be supplied to the emotional state classifier 114 and can be used to normalize the emotional state features with respect to the average (mean) observed for a given class, during training, for a neutral emotional state. The normalized emotional state features are used by the emotional state classifier 114 which then outputs an estimate of the emotional state. This output is also considered to be part of the processed features. To summarize, the emotional state features can be normalized by the emotional state classifier 114 with respect to each class produced by the speaker clusterer and classifier 120. A feature can be normalized as follows. Let X0 be the normal frequency for a given class. Let Xi be the measured frequency. Then the normalized feature will be given by Xi minus X0. This quantity can be positive or negative, and is not, in general, dimensionless.
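A toy illustration of this per-class normalization follows; the class labels and neutral means are hypothetical values standing in for statistics learned at training time.

```python
# Hypothetical per-class neutral means learned during training,
# e.g. average pitch (Hz) for each speaker class in a neutral state.
NEUTRAL_MEANS = {"adult_male": 120.0, "adult_female": 210.0, "child": 300.0}

def normalize_feature(measured: float, speaker_class: str) -> float:
    """Xi minus X0: the measured value minus the class's neutral mean."""
    return measured - NEUTRAL_MEANS[speaker_class]

print(normalize_feature(161.0, "adult_male"))    # -> 41.0  (raised pitch)
print(normalize_feature(161.0, "adult_female"))  # -> -49.0 (lowered pitch)
```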
The speech recognizer 126 can transcribe the queries from the user. It can be a speaker-independent or class-dependent large vocabulary continuous speech recognition system, or it could be something as simple as a keyword spotter to detect insults (for example) and the like.
Such systems are well known in the art. The output can be full sentences, but finer granularity can also be attained; for example, time alignment of the recognized words. The time stamped transcriptions can also be considered as part of the processed features, and will be discussed further below with respect to methods in accordance with the present invention. Thus, conversation from every stage of a transaction can be transcribed and stored. As shown in FIG. 1, appropriate data is transferred from the speaker clusterer and classifier 120 to the emotional state classifier 114 and the speech recognizer 126. As noted, it is possible to perform accent, dialect and language recognition with the input speech from user 104. A continuous speech recognizer can be trained on speech with several speakers having the different accents which are to be recognized. Each of the training speakers is also associated with an accent vector, with each dimension representing the most likely mixture component associated with each state of each lefeme. The speakers can be clustered based on the distance between these accent vectors, and the clusters can be identified by, for example, the accent of the member speakers. The accent identification can be performed by extracting an accent vector from the user's speech and classifying it. As noted, dialect, socioeconomic classification, and the like can be estimated based on vocabulary and word series used by the user 104. Appropriate key words, sentences, or grammatical mistakes to detect can be compiled via expert linguistic knowledge. The accent, socioeconomic background, gender, age and the like are part of the processed features. As shown in FIG. 1, any of the processed features, indicated by the solid arrows, can be stored in the data warehouse 112. Further, raw features, indicated by the dotted lines can also be stored in the data warehouse 112.
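The accent-vector clustering described above might be sketched as follows, assuming scikit-learn is available; the random 16-dimensional vectors merely stand in for per-speaker accent vectors whose dimensions would, per the text, encode most-likely mixture components per lefeme state.

```python
import numpy as np
from sklearn.cluster import KMeans  # assumed available

# Toy accent vectors: one row per training speaker.
rng = np.random.default_rng(0)
speakers_accent_a = rng.normal(0.0, 0.3, size=(20, 16))
speakers_accent_b = rng.normal(2.0, 0.3, size=(20, 16))
X = np.vstack([speakers_accent_a, speakers_accent_b])

# Cluster training speakers; each cluster is then labeled by the
# known accent of its member speakers.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# A new user's accent vector is classified by nearest cluster centroid.
new_user = rng.normal(2.0, 0.3, size=(1, 16))
print(clusters.predict(new_user))   # -> the cluster of accent-B speakers
```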
Any of the processed or raw features can be stored in the data warehouse 112 and then associated with the other data which has been collected, upon completion of the transaction. Classical data mining techniques can then be applied. Such techniques are known, for example, as set forth in the book Data Warehousing, Data Mining and OLAP, by Alex Berson and Stephen J.
Smith, published by McGraw Hill in 1997, and in Discovering Data Mining, by Cabena et al., published by Prentice Hall in 1998. For a given business objective, for example, target marketing, predictive models or classifiers are automatically obtained by applying appropriate mining recipes.
All data stored in the data warehouse 112 can be stored in a format to facilitate subsequent data mining thereon. Those of skill in the art are aware of appropriate formats for data which is to be mined, as set forth in the two cited reference books. Business objectives can include, for example, detection of users who are vulnerable to a proposal to buy a given product or service, detection of users who have problems with the automated system and should be transferred to an operator and detection of users who are angry at the service and should be transferred to a supervisory person. The user 104 can be a customer of a business which employs the apparatus 100, or can be a client of some other type of institution, such as a nonprofit institution, a government agency or the like.
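As one hedged illustration of applying a mining recipe to warehouse rows for such a business objective, a small decision-tree classifier could be trained on coded attribute records; the data, the attribute encoding, and the library choice are all assumptions of this sketch, not the patent's method.

```python
from sklearn.tree import DecisionTreeClassifier  # assumed available

# Toy warehouse rows: [age_class, emotional_state, accent_class] codes,
# labeled 1 if the user accepted a past offer -- purely illustrative data.
X = [[0, 1, 2], [1, 0, 2], [2, 1, 0], [1, 1, 1], [0, 0, 0], [2, 0, 1]]
y = [1, 0, 1, 1, 0, 0]

model = DecisionTreeClassifier(max_depth=3).fit(X, y)

# A live caller's attributes are scored against the mined model.
print(model.predict([[0, 1, 1]]))   # -> likely receptive (1) or not (0)
```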
Features can be extracted and decisions dynamically returned by the models.
This will be discussed further below.
Reference should now be had to FIG. 2 which depicts a real-time-modifiable voice system for interaction with a user, in accordance with the present invention, which is designated generally as 200. Elements in FIG. 2 which are similar to those in FIG. 1 have received the same reference numerals incremented by 100. System 200 can include a dialog management unit 202 similar to that discussed above. In particular, as suggested in FIG. 2, unit 202 can be a human operator or supervisor, an IVR, or a Voice User Interface (VUI). System 200 can also include an audio capture module 206 similar to that described above, and an acoustic front end 208, also similar to that described above. Just as with apparatus 100, unit 202 can be directly coupled to acoustic front end 208, if desired, to permit use of MEL cepstra or other acoustic features determined by front end 208.
Further, system 200 includes a processing module 210 similar to that described above, but having certain additional features which will now be discussed. Processing module 210 can include a dynamic classification module 240 which performs dynamic classification of the user 204.
Accordingly, processing module 210 is configured to modify behavior of the voice system 200 based on at least one user attribute which has been determined based on at least one acoustic feature extracted from the user's speech. System 200 can further include a business logic unit 242 which is coupled to the dialog management unit 202, the dynamic classification module 240, and optionally to the acoustic front end 208. The business logic unit can be implemented as a processing portion of the IVR or VUI, can be part of an appropriately programmed general purpose computer, or can be an application specific circuit. At present, it is believed preferable that the processing module 110, 210 (including module 240) be implemented as a general purpose computer and that the business logic 242 be implemented in a processor portion of an interactive voice response system.
Dynamic classification module 240 can be configured to provide feedback, which can be real-time feedback, to the business logic unit 242 and the dialog management unit 202, as suggested by the heavy line 244.
A data warehouse 212 and post processor 238 can be optionally provided as shown and can operate as discussed above with respect to the data collecting apparatus 100. It should be emphasized, however, that in the real-time-modifiable voice system 200 of the present invention, data warehousing is optional and if desired, the system can be limited to the real time feedback discussed with respect to elements 240, 242 and 202, and suggested by line 244.
Processing module 210 can modify behavior of the system 200, at least in part, by prompting a human operator thereof, as suggested by feedback line 244 connected with dialog management unit 202. For example, a human operator could be alerted when an angry emotional state of the user 204 is detected and could be prompted to utter soothing words to the user 204, or transfer the user to a higher level human supervisor. Further, the processing module 210 could modify business logic 242 of the system 200. This could be done, for example, when both the processing module 210 and business logic unit 242 were part of an IVR system. Examples of modification of business logic will be discussed further below, but could include tailoring a marketing offer to the user 204 based on attributes of the user detected by the system 200.
As noted, processing module 210, and the sub-elements thereof, perform in essentially the same fashion as processing module 110 in FIG. 1. Note, however, the option for feedback of the output of speech recognition module 228, to business logic 242, as suggested by the dotted lines and arrows in FIG. 2.
It should be noted that throughout this application, including the specification and drawings thereof, the term "mood" is considered to be an equivalent of the term "emotional state."
Attention should now be given to FIG. 3 which depicts a flowchart, 300, of a method for collecting, in a data warehouse, data associated with the voice of a voice system user.
After starting, at block 302, the method includes the steps of conducting a conversation with a user of the voice system, per block 304, via at least one of a human operator and a voice-enabled machine system. The method further includes capturing a speech waveform, per block 306, which is associated with utterances spoken by the voice system user during the conversation. Yet further, the method includes the step of digitizing the speech waveform, per block 308, so as to provide a digitized speech waveform.
Still further, per block 310, the method includes the step of extracting, from the digitized speech waveform, at least one acoustic feature which is correlated with at least one user attribute. The at least one acoustic feature can be any of the features discussed above, for example, MEL cepstra or any one of the emotional state features, for example. The user attributes can include any of the user attributes discussed above, that is, gender, age, accent and the remainder of the aforementioned attributes. Finally, the method can include the step, per block 316, of storing attribute data corresponding to the acoustic feature which is correlated with the at least one user attribute, together with at least one identifying indicia, in the data warehouse in a form to facilitate subsequent data mining thereon. Any type of identifying indicia which is desired can be used;
this term is to be understood broadly. For example, the identifying indicia can be a time stamp which correlates the various features to a conversation conducted at a given time, thereby identifying the given transaction; can be an identification number or name, or the like, which identifies the user; or can be any other item of information associated with the attribute data which is useful in the data mining process.
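A minimal sketch of this storage step, using an in-memory SQLite table with a time stamp and a caller identifier as the identifying indicia, is shown below; the schema and field names are illustrative assumptions, not the patent's warehouse format.

```python
import sqlite3
import time

# Minimal warehouse table: one row per (conversation, attribute) pair,
# keyed by a time stamp and caller id serving as identifying indicia.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE attributes
              (ts REAL, caller_id TEXT, attribute TEXT, value TEXT)""")

def store_attribute(caller_id, attribute, value):
    db.execute("INSERT INTO attributes VALUES (?, ?, ?, ?)",
               (time.time(), caller_id, attribute, value))

store_attribute("caller-042", "emotional_state", "jovial")
store_attribute("caller-042", "gender", "male")
print(db.execute("SELECT * FROM attributes").fetchall())
```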
As indicated at the decision block 320, the aforementioned steps in blocks 304, 306, 308, 310, and 316 can be repeated for a plurality of additional conversations to provide a collection of stored data including the attribute data and identifying indicia. This can be repeated until there is sufficient data for data mining. Then, as indicated at block 322, the collection of stored data can be mined to provide information which may be desired, for example, information to be used in modifying the underlying business logic of the voice system.
As noted, the storing step, per block 316, can comprise storing wherein the at least one identifying indicia is a time stamp. The more data that is collected, the better the models that can be built.
Data collection can be annotated, possibly by using an existing set of classifiers already trained to identify each item, or purely via annotations from transcribers who estimate the desired items. A
combination of these two techniques can also be employed. It is preferred that the plurality of additional conversations discussed above be conducted with a plurality of different users, such that there will be data from a large set of speakers.
The extracting step, per block 310, can include extracting at least one of fundamental frequency, variation in fundamental frequency, running average pitch, running pitch variance, pitch jitter, running energy variance, speech rate and shimmer as at least one emotional state feature which is correlated with the emotional state of the user.
Per block 312, the extracted features can be normalized; this is believed to be particularly valuable when the features are those indicative of emotional state. This has been discussed previously with respect to the apparatus of the present invention.
The method 300 can further include the additional step, per block 314, of processing the at least one acoustic feature to determine the at least one user attribute. In this case, processed features are obtained, and the attribute data can be a value of the attribute itself, for example, a value of the emotional state. This can be distinguished from the method when only raw data is stored, in which case the attribute data can simply be the raw features, i.e., MEL cepstra or emotional state features discussed above. Thus, to summarize, either raw acoustic features (e.g., waveform, MEL cepstra, emotional state features), processed acoustic features (e.g., value of emotional state (happy, sad, confused), transcription of conversation) or both raw and processed acoustic features may be stored in block 316.
Referring to block 318, the processing module, used in performing the processing step per block 314, can be automatically refined each time an additional attribute is stored in the data warehouse. That is, the clustering, classification, and recognition functions discussed above with respect to the apparatus can be improved with each new piece of data.
Reference should now be had to FIG. 4 which depicts certain optional sub-steps which it is highly preferable to perform in connection with the method illustrated in FIG. 3. In particular, block 310 of FIG. 3 can, if desired, include extracting at least MEL cepstra, as shown in block 310' in FIG. 4.
In this case, the method can further comprise the additional steps of recognizing speech of the user based on the MEL cepstra, per block 314A, transcribing the speech, per block 314B, and examining the speech per block 314C. The speech can be examined for at least one of word choice and vocabulary to determine at least one of educational level of the user, socioeconomic classification of the user, and dialect of the user. Other user attributes related to word choice and vocabulary can also be determined as desired. The steps 314A, 314B, and 314C can, in another sense, be thought of as sub-steps of the processing block 314 in FIG. 3.
Referring back to FIG. 3, the end of the process can be represented per block 324.
Reference should now be had to FIG. 5, which depicts a flowchart 400 representative of a method, in accordance with the present invention, of tailoring a voice system response to an acoustically determined state of a voice system user. After starting at block 402, the method includes the step of conducting a conversation with the voice system user, via the voice system, per block 404. The method further includes the step of capturing a speech waveform associated with utterances spoken by the voice system user during the conversation, per block 406. Still further, the method includes the step of digitizing the speech waveform, per block 408, to provide a digitized speech waveform.
Yet further, per block 410, the method includes the step of extracting, from the digitized speech waveform, at least one acoustic feature which is correlated with at least one user attribute. The at least one user attribute can include any of the user attributes discussed above. It will be appreciated that blocks 402-410 are similar to blocks 302-310 in FIG. 3.
Finally, the method can include, per block 415, modifying behavior of the voice system based on the at least one user attribute. The modification of the behavior of the voice system can include at least one of real-time changing of the business logic of the voice system, and real-time modifying of the voice system response, as compared to an expected response of the voice system without the modification. Reference should be had to the discussion of the apparatus above. For example, a real-time modification of the voice system response could be transferring a perturbed user to a human operator.
The extracting step per block 410 can include extracting of any of the aforementioned emotional state features, or of any of the other features previously discussed. Per block 412, the method can optionally include the additional step of normalizing the acoustic feature, particularly in the case when the acoustic feature is an emotional state feature. The method can further optionally include the additional step of storing attribute data corresponding to the acoustic feature which is correlated with the at least one user attribute, together with at least one identifying indicia, in a data warehouse, in accordance with block 416. The storage can be in a form to facilitate subsequent data mining thereon, and can include one of a raw and a processed condition. This step can be essentially similar to those discussed above in the method represented by flowchart 300. It will be appreciated that, per block 414, the feature could be processed with a processing module to determine the desired attribute. In this case, the attribute data could be the attribute itself;
when no processing takes place, the attribute data could be the raw acoustic feature. Although the method depicted in FIG. 5 can be confined to modification of behavior of the voice system, the refining step per block 418, repetition controlled by decision block 420, and data mining step 422 can all be carried out if desired (e.g., just as for the method depicted in FIG. 3). Block 424 signifies the end of the method steps.
Just as in the method represented by flowchart 300, the method represented by flowchart 400 can determine certain user attributes based on transcription of the user's speech.
Accordingly, in the extracting step, block 410, the extraction can include at least MEL cepstra.
With reference now again to FIG. 4, this is accomplished in block 410'. Further steps can include recognizing speech of the user based on the MEL cepstra, per block 414A; transcribing the speech, per block 414B; and examining the speech, per block 414C, for at least one of word choice and vocabulary so as to determine at least one of educational level of the user, socioeconomic classification of the user, and dialect of the user. As before, other user attributes related to word choice and vocabulary can be determined.
Reference should now be had to FIG. 6 which depicts certain details associated with certain aspects of the method of flowchart 400. In particular, in some embodiments of the method according to flowchart 400, the processing step 414 can include examining an emotional state feature to determine an emotional state of the user, per block 414D in FIG. 6. Further, the modification of behavior block 415 can include taking action in response to the emotional state previously determined, per block 415A in FIG. 6. Thus, the emotional state feature can be examined to determine whether the user is in a jovial (i.e., happy) emotional state or if he or she is in, for example, at least one of a disgusted, contemptuous, fearful and angry emotional state. When the user is found to be in a jovial emotional state, he or she can be offered at least one of a product and a service, as the action taken in block 415A. Alternatively, when the user is found to be in a jovial emotional state, a marketing study can be performed on the user as the action taken in block 415A.
Still with reference to FIG. 6, in cases where the emotional state feature is used to determine emotional state, a feature other than an emotional state feature can be examined to determine an attribute other than emotional state, per block 426, and then the action taken in block 415A can be tailored in response to the attribute other than emotional state, per block 428. For example, when the jovial user is offered one of a product and a service, the product or service which is offered can be tailored based on the at least one user attribute other than emotional state. Alternatively, when the jovial user is made the subject of a marketing study, the marketing study can be tailored in response to the at least one user attribute other than emotional state. For example, suppose a jovial user is to be offered one of a product and a service. Their language pattern could be examined to determine that they were from a rural area in the southern United States where bass fishing was popular and, if desired, pitch could additionally be examined to determine that they were of the male gender. Products such as bass fishing equipment and videos could then be offered to the subject.
Or, suppose, that the jovial subject on which a marketing study is to be done is determined to be a middle aged woman from a wealthy urban area who is highly educated. The marketing study could be tailored to quiz her about her buying habits for expensive cosmetics, stylish clothing, or trendy vacation resorts.
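The two examples above might be caricatured in code as a lookup keyed on attributes other than emotional state; the offer table, region labels, and attribute names are invented for illustration.

```python
# Illustrative offer table keyed on attributes other than emotional state.
OFFERS = {
    ("male", "rural_south_us"): "bass fishing equipment and videos",
    ("female", "urban_wealthy"): "cosmetics and resort packages",
}

def tailor_offer(emotional_state: str, gender: str, region: str):
    """Make an offer only to jovial users, tailored by other attributes."""
    if emotional_state != "jovial":
        return None
    return OFFERS.get((gender, region), "general catalog")

print(tailor_offer("jovial", "male", "rural_south_us"))
# -> "bass fishing equipment and videos"
```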
As noted, the emotional state feature could be examined to determine if the user is in one of a disgusted, contemptuous, fearful and angry emotional state. If the method were being conducted using an IVR system, and such an emotional state were detected, then block 415A could constitute switching the user from the IVR to a human operator in response to the user's detected emotional state. Alternatively, if a similar emotional state were detected, in a case where a hybrid interactive voice response system were employed, the action taken in block 415A could be switching the user from a low-level human operator to a higher-level human supervisor in response to the user's emotional state.
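A sketch of such escalation logic, covering the IVR-to-operator and operator-to-supervisor switches just described as well as the confused state discussed next, might look as follows; the state names and handler labels are illustrative assumptions.

```python
ESCALATION_STATES = {"disgusted", "contemptuous", "fearful", "angry",
                     "confused"}

def route_call(emotional_state: str, current_handler: str) -> str:
    """Escalate on the states named above; otherwise keep the handler."""
    if emotional_state in ESCALATION_STATES:
        if current_handler == "ivr":
            return "human_operator"
        if current_handler == "human_operator":
            return "human_supervisor"
    return current_handler

print(route_call("angry", "ivr"))              # -> human_operator
print(route_call("angry", "human_operator"))   # -> human_supervisor
print(route_call("jovial", "ivr"))             # -> ivr
```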
Yet further, the emotional state feature could be examined to determine whether the user was in a confused emotional state. This can be done using techniques known in the art, as set forth, for example, in the ICSLP'98 papers discussed above. Confusion may be evidenced, e.g., by delays in answering a question, stuttering, repetitions, false starts and the like.
Thus, speech recognition and transcription are valuable. When a confused emotional state is detected, the action taken in block 415A could then be the switching of the user from a substantially automatic IVR system to a human operator in response to the confused emotional state.
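Purely as an illustrative heuristic, the confusion cues just listed could be counted from a transcript and a response delay; the thresholds, cue set, and scoring are our own assumptions, not values from the invention.

```python
import re

def confusion_signals(transcript: str, response_delay_s: float) -> int:
    """Count crude confusion cues: a long delay, filled pauses,
    immediate word repetitions ("I I want"), and false starts ("wan- want")."""
    words = transcript.lower().split()
    score = 0
    score += response_delay_s > 3.0                                # delay
    score += sum(1 for w in words if w in {"uh", "um", "er"})      # pauses
    score += sum(1 for a, b in zip(words, words[1:]) if a == b)    # repeats
    score += len(re.findall(r"\b\w+-\s", transcript))              # restarts
    return score

print(confusion_signals("uh I I want to wan- want to cancel", 4.2))  # -> 4
```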
The present invention can also include a program storage device readable by machine, tangibly embodying a program of instructions executable by the machine to perform the method steps of any of the methods disclosed herein, or any subset of the steps of those methods.
For example, where certain subsets of the method steps were conveniently performed by a general purpose computer, or a processor portion of an IVR system, suitable program instructions could be written on a diskette, CD-ROM or the like. In the method shown in flowchart 300, such method steps could include reading digital data corresponding to a speech waveform associated with utterances spoken by the voice system user during a conversation between the voice system user and at least one of a human operator and a voice-enabled machine system. Program instructions for additional steps could include instructions to accomplish the tasks depicted in blocks 310 and 316, or any of the other blocks, as desired.
Similarly, with reference to the method depicted in flowchart 400, a first step to be performed via program instructions could include reading digital data corresponding to a speech waveform associated with utterances spoken by the voice system user during a conversation between the voice system user and at least one of a human operator and a voice-enabled machine system. Additional method steps to be incorporated in the program instructions could be, for example, those in block 410 and block 415, as discussed above, or indeed, any of the other method steps discussed herein.
It should be understood that features can be extracted and decisions dynamically returned by the models in the present invention. In addition to those examples already set forth, when a user, such as a customer, sounds fearful, a human operator can intercept the call for a variety of reasons, for example, to make sure that the transaction is not coerced. Furthermore, anger can be detected in a user (or, for that matter, an operator) and, in addition to modifying responses of an automatic or hybrid IVR system, could be used for quality control, e.g., as a means to evaluate and train customer service agents.
The present invention can be extended to other than acoustic information. For example, video information can be included, whether alone or accompanying audio data.
Accordingly, method steps calling for conducting a conversation could instead involve conducting a visual transaction. Video information can help to identify or classify user attributes. Such data can be collected naturally through video-telephones, cameras at kiosks, cameras on computers, and the like. Such attributes and emotional states as smiling, laughing, crying and the like can be identified. Further, voice segments corresponding to certain user attributes or emotional states, which could be visually determined, can be labeled. This would permit creation of a training data base which would be useful for creating automatic techniques for identification of user attributes via acoustic data only.
Accordingly, data mining could be performed on visually-determined user attributes only, on acoustically determined user attributes only, or on both.
Determination of user attributes from appearance can be done based on common human experience: for example, a red face suggests anger or embarrassment, a smile suggests happiness or a jovial mood, and tears suggest sadness. Furthermore, any appropriate biometric data can be taken in conjunction with the video and acoustic data. Yet further, data can be taken on more than one individual at one time. For example, parents and children could be simultaneously monitored, or a married couple searching for a house or car could also be simultaneously monitored. One might detect children who were happy with a junk food menu item, while their parents were simultaneously unhappy with that choice. A husband might be angry, and his wife happy, at her choice of an expensive jewelry purchase. Alternatively, a husband might be happy and his wife unhappy at his choice of purchasing an expensive set of golf clubs.
As noted, time stamping can be employed as an indicia to be stored together with user attribute data.
This can permit studies of how people respond at different times during the day, or of how they evolve at different times during their lives, for example, as children grow into teenagers and then adults, or as the tastes of adults change as they grow older. Similarities in relatives can also be tracked and plotted. Yet further, one of the user attributes which can be tracked is fatigue. Such a system could be installed, for example, in an automobile, train, aircraft, or long distance truck to monitor operator fatigue and to prompt the operator to pull over and rest, or, for example, to play loud music to keep the operator awake. Such a system is described in co-assigned U.S. Patent Application 09/078,807 of Zadrozny and Kanevsky, entitled "Sleep Prevention Dialog Based Car System," filed May 14, 1998.
It should be noted that the voice systems discussed herein can include telephone systems, kiosks, speaking to a computer and the like. The term "acoustic feature" is to be broadly understood and, as discussed, can include either raw or processed features, or both. For example, when the acoustic feature is MEL cepstra, certain processed features could include key words, sentence parts, or the like. Some key words could be, for example, unacceptable profane words, which could be eliminated, result in summoning a manager, or result in disciplinary action against an employee. It should also be emphasized that in the apparatus and method for performing real time modification of a voice system, storage of an attribute, with an indicia, in the warehouse is optional and need not be performed.
When training the models, human operators can annotate data when making educated guesses about various user attributes. Alternatively, annotation can be done automatically using an existing set of classifiers which are already trained. A combination of the two techniques can also be employed.
The indicia which are stored can include, in addition to a time stamp and the other items discussed herein, a transaction event or results, or any other useful information. The method depicted in flowchart 400 could also be used in a live conversation with a human operator with manual prompts to change the business logic used by the operator, or to summon a supervisor automatically when anger or other undesirable occurrences are noted.
While there have been described what are presently believed to be the preferred embodiments of the invention, those skilled in the art will realize that various changes and modifications may be made to the invention without departing from the spirit of the invention, and it is intended to claim all such changes and modifications as fall within the scope of the invention.
Claims (31)
1. A method for collecting, in a data warehouse, data associated with a voice of a voice system user, said method comprising the steps of:
(a) conducting a conversation with the voice system user via at least one of a human operator and a voice-enabled machine system, (b) capturing a speech waveform associated with utterances spoken by the voice system user during said conversation, (c) digitizing said speech waveform to provide a digitized speech waveform, (d) extracting, from said digitized speech waveform, at least one acoustic feature which is correlated with at least one user attribute, said at least one user attribute including at least one of (d-1) gender of the user;
(d-2) age of the user;
(d-3) accent of the user;
(d-4) native language of the user;
(d-5) dialect of the user, (d-6) socioeconomic classification of the user, (d-7) educational level of the user; and (d-8) emotional state of the user, (e) storing attribute data corresponding to said acoustic feature which is correlated with said at least one user attribute, together with at least one identifying indicia, in the data warehouse in a form to facilitate subsequent data mining thereon;
(f) repeating steps (a)-(e) for a plurality of additional conversations, with additional users, to provide a collection of stored data including the attribute data and identifying indicia; and (g) mining the collection of stored data to provide information for modifying underlying business logic of the voice system.
2. The method of claim 1, wherein step (e) comprises storing with at least one identifying indicia which comprises a time stamp.
3. The method of claim 1, wherein step (d) includes extracting at least one of fundamental frequency, variation in fundamental frequency, running average pitch, running pitch variance, pitch jitter, running energy variance, speech rate and shimmer as at least one emotional state feature which is correlated with the emotional state of the user.
4. The method of claim 3, further comprising the additional step of normalizing said at least one emotional state feature.
5. The method of claim 1, further comprising the additional step of processing said at least one acoustic feature to determine said at least one user attribute, wherein said attribute data in step (e) comprises at least a value of said user attribute.
6. The method of claim 5, further comprising the additional step of automatically refining said processing step in response to storage of additional attribute data in the data warehouse.
7. The method of claim 1, wherein step (e) comprises storing said attribute data as at least one substantially raw acoustic feature.
8. The method of claim 1, wherein step (d) includes extracting at least MEL
cepstra, further comprising the additional steps of recognizing speech of the user based on said MEL cepstra, transcribing said speech, and examining said speech for at least one of word choice and vocabulary to determine at least one of educational level of the user, socioeconomic classification of the user, and dialect of the user.
9. The method of claim 1, further comprising the additional step of (h) modifying, in real time, behavior of the voice system based on said at least one user attribute.
10. The method of claim 9, wherein said modifying in step (h) comprises at least one of real-time changing of business logic of the voice system; and real-time modifying of the voice system response, as compared to an expected response of the voice system without said modifying.
11. The method of claim 3, further comprising the additional steps of examining said at least one emotional state feature to determine if the user is in a jovial emotional state, and offering the user at least one of a product and a service in response to said jovial emotional state.
12. The method of claim 11, further comprising the additional steps of determining at least one user attribute other than emotional state, and tailoring said at least one of a product and a service in response to said at least one user attribute other than emotional state.
13. The method of claim 3, further comprising the additional steps of examining said at least one emotional state feature to determine if the user is in a jovial emotional state, and performing a marketing study on the user in response to said jovial emotional state.
14. The method of claim 13, further comprising the additional steps of
determining at least one user attribute other than emotional state, and tailoring said marketing study in response to said at least one user attribute other than emotional state.
15 The method of claim 3, wherein the voice system is a substantially automatic interactive voice response (IVR) system, further comprising the additional steps of examining said at least one emotional state feature to determine if the user is in at least one of a disgusted, contemptuous, fearful and angry emotional state, and switching said user from said IVR to a human operator in response to said at least one of a disgusted, contemptuous, fearful and angry emotional state.
16 The method of claim 3, wherein the voice system is a hybrid interactive voice response (IVR) system, further comprising the additional steps of examining said at least one emotional state feature to determine if the user is in at least one of a disgusted, contemptuous, fearful and angry emotional state; and switching said user from a low-level human operator to a higher-level human supervisor in response to said at least one of a disgusted, contemptuous, fearful and angry emotional state.
17. The method of claim 3, wherein the voice system is a substantially automatic interactive voice response (IVR) system, further comprising the additional steps of examining said at least one emotional state feature to determine if the user is in a confused emotional state; and switching said user from said IVR to a human operator in response to said confused emotional state.
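Claims 15-17 together describe an escalation policy keyed on the detected emotional state. A minimal sketch under that reading; the state strings and handler labels are hypothetical:

```python
# Sketch of the switching logic in claims 15-17 (all names are hypothetical).
NEGATIVE_STATES = {"disgusted", "contemptuous", "fearful", "angry"}

def route(emotional_state, current_handler):
    """Return the handler the caller should be switched to."""
    if current_handler == "ivr" and emotional_state in NEGATIVE_STATES | {"confused"}:
        return "human_operator"            # claims 15 and 17
    if current_handler == "human_operator" and emotional_state in NEGATIVE_STATES:
        return "human_supervisor"          # claim 16 (hybrid IVR)
    return current_handler
```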
18. An apparatus for collecting data associated with a voice of a user, said apparatus comprising: (a) a dialog management unit which conducts a conversation with the user;
(b) an audio capture module which is coupled to said dialog management unit and which captures a speech waveform associated with utterances spoken by the user during the conversation; (c) an acoustic front end which is coupled to said audio capture module and which is configured to receive and digitize the speech waveform to provide a digitized speech waveform, and extract, from the digitized speech waveform, at least one acoustic feature which is correlated with at least one user attribute, said at least one user attribute including at least one of: (c-1) gender of the user;
(c-2) age of the user;
(c-3) accent of the user; (c-4) native language of the user;
(c-5) dialect of the user;
(c-6) socioeconomic classification of the user;
(c-7) educational level of the user; and (c-8) emotional state of the user;
(d) a processing module which is coupled to said acoustic front end and which analyzes said at least one acoustic feature to determine said at least one user attribute; and (e) a data warehouse which is coupled to said processing module and which stores said at least one user attribute, together with at least one identifying indicia, in a form for subsequent data mining thereon, wherein:
said dialog management unit is configured to conduct a plurality of additional conversations with additional users;
said audio capture module is configured to capture a plurality of additional speech waveforms associated with utterances spoken by said additional users during said plurality of additional conversations; said acoustic front end is configured to receive and digitize said plurality of additional speech waveforms to provide a plurality of additional digitized speech waveforms, and is further configured to extract, from said plurality of additional digitized speech waveforms, a plurality of additional acoustic features, each correlated with at least one attribute of one of said additional users;
said processing module is configured to analyze said additional acoustic features to determine a plurality of additional user attributes;
said data warehouse is configured to store said plurality of additional user attributes, each together with at least one additional identifying indicia, in said form for said subsequent data mining; and said processing module and said data warehouse are configured to mine the stored user attributes and identifying indicia to provide information for modifying underlying business logic of the apparatus.
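Read as a data flow, the claim-18 apparatus chains five components: dialog management, audio capture, acoustic front end, processing module, and data warehouse. The sketch below traces that flow; every class and function name is invented for illustration, since the claim specifies structure, not an API.

```python
# Schematic sketch of the claim-18 data flow; all names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class CallRecord:
    call_id: str              # identifying indicia
    attributes: dict          # e.g. {"gender": "...", "emotional_state": "..."}

@dataclass
class DataWarehouse:
    rows: list = field(default_factory=list)

    def store(self, record: CallRecord) -> None:
        # (e) attributes plus indicia, kept in a mineable form
        self.rows.append(record)

def handle_call(call_id, waveform, front_end, processor, warehouse):
    features = front_end(waveform)        # (c) digitize and extract features
    attributes = processor(features)      # (d) map features to user attributes
    warehouse.store(CallRecord(call_id, attributes))
    return attributes
```

Repeating `handle_call` over many conversations populates `warehouse.rows`, the collection the final mining step would then analyze to adjust the underlying business logic.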
19. The apparatus of claim 18, wherein said audio capture module comprises one of an analog to digital converter board, an interactive voice response (IVR) system and a microphone.
20. The apparatus of claim 18, wherein said dialog management unit comprises a telephone interactive voice response (IVR) system.
21. The apparatus of claim 20, wherein said processing module comprises a processor portion of said IVR.
22. The apparatus of claim 18, wherein said processing module comprises a separate general purpose computer with appropriate software.
23. The apparatus of claim 18, wherein said processing module comprises an application specific circuit.
24. The apparatus of claim 18, wherein said processing module comprises at least an emotional state classifier.
25. The apparatus of claim 24, wherein said processing module further comprises at least:
a speaker clusterer and classifier;
a speech recognizer; and an accent identifier.
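The emotional state classifier of claim 24 could be realized with any standard supervised model over acoustic features. A sketch assuming scikit-learn and pre-extracted feature vectors; the patent prescribes no particular classifier family:

```python
# Sketch only: the patent does not specify a classifier.
# Assumes scikit-learn and feature vectors (e.g. pitch/energy statistics).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_emotion_classifier(X: np.ndarray, y: np.ndarray):
    """X: (n_utterances, n_features); y: emotion labels such as 'angry'."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, y)
    return clf

# Usage: states = train_emotion_classifier(X_train, y_train).predict(X_new)
```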
26. The apparatus of claim 25, further comprising a post processor which is coupled to said data warehouse and which is configured to transcribe user utterances and to perform keyword spotting thereon.
27. The apparatus of claim 18, wherein said processing module is configured to modify behavior of the apparatus, in real time, based on said at least one user attribute.
28. The apparatus of claim 27, wherein said processing module modifies behavior of the apparatus, at least in part, by prompting a human operator thereof.
29. The apparatus of claim 27, wherein said processing module comprises a processor portion of an interactive voice response (IVR) system and wherein said processor module modifies behavior of the apparatus, at least in part, by modifying business logic of the IVR.
30. A program storage device readable by machine, tangibly embodying a program of instructions executable by the machine to perform method steps for collecting, in a data warehouse, data associated with a voice of a voice system user, said method steps comprising:
(a) reading digital data corresponding to a speech waveform associated with utterances spoken by the voice system user during a conversation between the voice system user and at least one of a human operator and a voice-enabled machine system;
(b) extracting, from said digital data, at least one acoustic feature which is correlated with at least one user attribute, said at least one user attribute including at least one of:
(b-1) gender of the user;
(b-2) age of the user;
(b-3) accent of the user;
(b-4) native language of the user;
(b-5) dialect of the user;
(b-6) socioeconomic classification of the user;
(b-7) educational level of the user; and (b-8) emotional state of the user; and (c) storing attribute data corresponding to said acoustic feature which is correlated with said at least one user attribute, together with at least one identifying indicia, in the data warehouse in a form to facilitate subsequent data mining thereon;
(d) repeating steps (a)-(c) for a plurality of additional conversations, with additional users, to provide a collection of stored data including suitable attribute data and identifying indicia for each conversation;
and (e) mining the collection of stored data to provide information for modifying underlying business logic of the voice system.
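Step (c) of claim 30 requires only that attribute data be stored alongside identifying indicia in a form that supports later mining. A minimal sketch using Python's built-in sqlite3 module; the table and column names are invented for illustration:

```python
# Illustrative storage schema for step (c); the schema is hypothetical.
import sqlite3

def store_attributes(db_path, call_id, attributes):
    """Persist one conversation's attribute data with identifying indicia."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS user_attributes (
                       call_id TEXT,        -- identifying indicia
                       attribute TEXT,      -- e.g. 'gender', 'emotional_state'
                       value TEXT)""")
    con.executemany(
        "INSERT INTO user_attributes VALUES (?, ?, ?)",
        [(call_id, k, str(v)) for k, v in attributes.items()])
    con.commit()
    con.close()
```

A flat attribute/value layout of this kind keeps each conversation's data joinable by `call_id`, which is what the repeat-and-mine steps (d) and (e) presuppose.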
31. The program storage device of claim 30, wherein said method steps further comprise:
(f) modifying behavior of the voice system, in real time, based on said at least one user attribute.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/371,400 | 1999-08-10 | ||
US09/371,400 US6665644B1 (en) | 1999-08-10 | 1999-08-10 | Conversational data mining |
Publications (2)
Publication Number | Publication Date |
---|---|
CA2311439A1 CA2311439A1 (en) | 2001-02-10 |
CA2311439C true CA2311439C (en) | 2007-05-22 |
Family
ID=23463836
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA002311439A Expired - Lifetime CA2311439C (en) | 1999-08-10 | 2000-06-13 | Conversational data mining |
Country Status (6)
Country | Link |
---|---|
US (1) | US6665644B1 (en) |
EP (1) | EP1076329B1 (en) |
CN (1) | CN1157710C (en) |
AT (1) | ATE341071T1 (en) |
CA (1) | CA2311439C (en) |
DE (1) | DE60030920T2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10043517B2 (en) | 2015-12-09 | 2018-08-07 | International Business Machines Corporation | Audio-based event interaction analytics |
Families Citing this family (260)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6191585B1 (en) * | 1996-05-03 | 2001-02-20 | Digital Control, Inc. | Tracking the positional relationship between a boring tool and one or more buried lines using a composite magnetic signal |
JP3842497B2 (en) * | 1999-10-22 | 2006-11-08 | アルパイン株式会社 | Audio processing device |
CA2389186A1 (en) * | 1999-10-29 | 2001-05-03 | British Telecommunications Public Limited Company | Method and apparatus for processing queries |
GB9926134D0 (en) * | 1999-11-05 | 2000-01-12 | Ibm | Interactive voice response system |
US7392185B2 (en) | 1999-11-12 | 2008-06-24 | Phoenix Solutions, Inc. | Speech based learning/training system using semantic decoding |
US7725307B2 (en) | 1999-11-12 | 2010-05-25 | Phoenix Solutions, Inc. | Query engine for processing voice based queries including semantic decoding |
US9076448B2 (en) | 1999-11-12 | 2015-07-07 | Nuance Communications, Inc. | Distributed real time speech recognition system |
US7050977B1 (en) | 1999-11-12 | 2006-05-23 | Phoenix Solutions, Inc. | Speech-enabled server for internet website and method |
US7587041B2 (en) | 2000-01-13 | 2009-09-08 | Verint Americas Inc. | System and method for analysing communications streams |
GB0000735D0 (en) | 2000-01-13 | 2000-03-08 | Eyretel Ltd | System and method for analysing communication streams |
US6871140B1 (en) * | 2000-02-25 | 2005-03-22 | Costar Group, Inc. | System and method for collection, distribution, and use of information in connection with commercial real estate |
WO2003050799A1 (en) * | 2001-12-12 | 2003-06-19 | International Business Machines Corporation | Method and system for non-intrusive speaker verification using behavior models |
US7917366B1 (en) * | 2000-03-24 | 2011-03-29 | Exaudios Technologies | System and method for determining a personal SHG profile by voice analysis |
US7096185B2 (en) * | 2000-03-31 | 2006-08-22 | United Video Properties, Inc. | User speech interfaces for interactive media guidance applications |
US6424935B1 (en) * | 2000-07-31 | 2002-07-23 | Micron Technology, Inc. | Two-way speech recognition and dialect system |
US7664673B1 (en) * | 2000-09-18 | 2010-02-16 | Aol Llc | Smart transfer |
US7325190B1 (en) | 2000-10-02 | 2008-01-29 | Boehmer Tiffany D | Interface system and method of building rules and constraints for a resource scheduling system |
US20090132316A1 (en) * | 2000-10-23 | 2009-05-21 | Costar Group, Inc. | System and method for associating aerial images, map features, and information |
US6728679B1 (en) * | 2000-10-30 | 2004-04-27 | Koninklijke Philips Electronics N.V. | Self-updating user interface/entertainment device that simulates personal interaction |
US6937986B2 (en) * | 2000-12-28 | 2005-08-30 | Comverse, Inc. | Automatic dynamic speech recognition vocabulary based on external sources of information |
US6964023B2 (en) * | 2001-02-05 | 2005-11-08 | International Business Machines Corporation | System and method for multi-modal focus detection, referential ambiguity resolution and mood classification using multi-modal input |
GB0103381D0 (en) | 2001-02-12 | 2001-03-28 | Eyretel Ltd | Packet data recording method and system |
US8180643B1 (en) * | 2001-02-15 | 2012-05-15 | West Corporation | Script compliance using speech recognition and compilation and transmission of voice and text records to clients |
US7174297B2 (en) * | 2001-03-09 | 2007-02-06 | Bevocal, Inc. | System, method and computer program product for a dynamically configurable voice portal |
EP1246164A1 (en) * | 2001-03-30 | 2002-10-02 | Sony France S.A. | Sound characterisation and/or identification based on prosodic listening |
US8015042B2 (en) | 2001-04-02 | 2011-09-06 | Verint Americas Inc. | Methods for long-range contact center staff planning utilizing discrete event simulation |
US6952732B2 (en) | 2001-04-30 | 2005-10-04 | Blue Pumpkin Software, Inc. | Method and apparatus for multi-contact scheduling |
US6959405B2 (en) | 2001-04-18 | 2005-10-25 | Blue Pumpkin Software, Inc. | Method and system for concurrent error identification in resource scheduling |
JP2002366166A (en) | 2001-06-11 | 2002-12-20 | Pioneer Electronic Corp | System and method for providing contents and computer program for the same |
DE60108104T2 (en) * | 2001-07-24 | 2005-12-15 | Sony International (Europe) Gmbh | Method for speaker identification |
DE60108373T2 (en) | 2001-08-02 | 2005-12-22 | Sony International (Europe) Gmbh | Method for detecting emotions in speech signals using speaker identification |
GB2381638B (en) * | 2001-11-03 | 2004-02-04 | Dremedia Ltd | Identifying audio characteristics |
GB2381688B (en) * | 2001-11-03 | 2004-09-22 | Dremedia Ltd | Time ordered indexing of audio-visual data |
DE10154423A1 (en) * | 2001-11-06 | 2003-05-15 | Deutsche Telekom Ag | Speech controlled interface for accessing an information or computer system in which a digital assistant analyses user input and its own output so that it can be personalized to match user requirements |
US7054817B2 (en) * | 2002-01-25 | 2006-05-30 | Canon Europa N.V. | User interface for speech model generation and testing |
US7219138B2 (en) | 2002-01-31 | 2007-05-15 | Witness Systems, Inc. | Method, apparatus, and system for capturing data exchanged between a server and a user |
US7424715B1 (en) | 2002-01-28 | 2008-09-09 | Verint Americas Inc. | Method and system for presenting events associated with recorded data exchanged between a server and a user |
US9008300B2 (en) | 2002-01-28 | 2015-04-14 | Verint Americas Inc | Complex recording trigger |
US7149788B1 (en) | 2002-01-28 | 2006-12-12 | Witness Systems, Inc. | Method and system for providing access to captured multimedia data from a multimedia player |
US7882212B1 (en) * | 2002-01-28 | 2011-02-01 | Verint Systems Inc. | Methods and devices for archiving recorded interactions and retrieving stored recorded interactions |
DE10220521B4 (en) * | 2002-05-08 | 2005-11-24 | Sap Ag | Method and system for processing voice data and classifying calls |
US7092972B2 (en) * | 2002-05-09 | 2006-08-15 | Sun Microsystems, Inc. | Delta transfers in distributed file systems |
US7277913B2 (en) * | 2002-05-09 | 2007-10-02 | Sun Microsystems, Inc. | Persistent queuing for distributed file systems |
US20030212763A1 (en) * | 2002-05-09 | 2003-11-13 | Ravi Kashyap | Distributed configuration-managed file synchronization systems |
US20070061413A1 (en) * | 2005-09-15 | 2007-03-15 | Larsen Eric J | System and method for obtaining user information from voices |
US20070261077A1 (en) * | 2006-05-08 | 2007-11-08 | Gary Zalewski | Using audio/visual environment to select ads on game platform |
US20070260517A1 (en) * | 2006-05-08 | 2007-11-08 | Gary Zalewski | Profile detection |
GB0219493D0 (en) | 2002-08-21 | 2002-10-02 | Eyretel Plc | Method and system for communications monitoring |
US20040073425A1 (en) * | 2002-10-11 | 2004-04-15 | Das Sharmistha Sarkar | Arrangement for real-time automatic recognition of accented speech |
US8959019B2 (en) | 2002-10-31 | 2015-02-17 | Promptu Systems Corporation | Efficient empirical determination, computation, and use of acoustic confusability measures |
US20040107097A1 (en) * | 2002-12-02 | 2004-06-03 | General Motors Corporation | Method and system for voice recognition through dialect identification |
US7389228B2 (en) | 2002-12-16 | 2008-06-17 | International Business Machines Corporation | Speaker adaptation of vocabulary for speech recognition |
US7546226B1 (en) | 2003-03-12 | 2009-06-09 | Microsoft Corporation | Architecture for automating analytical view of business applications |
US7313561B2 (en) | 2003-03-12 | 2007-12-25 | Microsoft Corporation | Model definition schema |
US7275024B2 (en) * | 2003-03-12 | 2007-09-25 | Microsoft Corporation | Automatic generation of a dimensional model for business analytics from an object model for online transaction processing |
US7634478B2 (en) * | 2003-12-02 | 2009-12-15 | Microsoft Corporation | Metadata driven intelligent data navigation |
WO2004114207A2 (en) * | 2003-05-24 | 2004-12-29 | Gatelinx Corporation | Artificial intelligence dialogue processor |
US7340398B2 (en) * | 2003-08-21 | 2008-03-04 | Hewlett-Packard Development Company, L.P. | Selective sampling for sound signal classification |
US7349527B2 (en) | 2004-01-30 | 2008-03-25 | Hewlett-Packard Development Company, L.P. | System and method for extracting demographic information |
US8447027B2 (en) | 2004-01-30 | 2013-05-21 | Hewlett-Packard Development Company, L.P. | System and method for language variation guided operator selection |
US7899698B2 (en) * | 2004-03-19 | 2011-03-01 | Accenture Global Services Limited | Real-time sales support and learning tool |
US7022907B2 (en) * | 2004-03-25 | 2006-04-04 | Microsoft Corporation | Automatic music mood detection |
US8086462B1 (en) * | 2004-09-09 | 2011-12-27 | At&T Intellectual Property Ii, L.P. | Automatic detection, summarization and reporting of business intelligence highlights from automated dialog systems |
DE102004056164A1 (en) * | 2004-11-18 | 2006-05-24 | Deutsche Telekom Ag | Method for dialogue control and dialog system operating thereafter |
US7562117B2 (en) | 2005-09-09 | 2009-07-14 | Outland Research, Llc | System, method and computer program product for collaborative broadcast media |
US20070189544A1 (en) | 2005-01-15 | 2007-08-16 | Outland Research, Llc | Ambient sound responsive media player |
US20060184800A1 (en) * | 2005-02-16 | 2006-08-17 | Outland Research, Llc | Method and apparatus for using age and/or gender recognition techniques to customize a user interface |
KR100678212B1 (en) * | 2005-03-11 | 2007-02-02 | 삼성전자주식회사 | Method for controlling information of emotion in wireless terminal |
US7995717B2 (en) | 2005-05-18 | 2011-08-09 | Mattersight Corporation | Method and system for analyzing separated voice data of a telephonic communication between a customer and a contact center by applying a psychological behavioral model thereto |
US8094803B2 (en) | 2005-05-18 | 2012-01-10 | Mattersight Corporation | Method and system for analyzing separated voice data of a telephonic communication between a customer and a contact center by applying a psychological behavioral model thereto |
US8094790B2 (en) | 2005-05-18 | 2012-01-10 | Mattersight Corporation | Method and software for training a customer service representative by analysis of a telephonic interaction between a customer and a contact center |
US7912720B1 (en) * | 2005-07-20 | 2011-03-22 | At&T Intellectual Property Ii, L.P. | System and method for building emotional machines |
US20080040110A1 (en) * | 2005-08-08 | 2008-02-14 | Nice Systems Ltd. | Apparatus and Methods for the Detection of Emotions in Audio Interactions |
US20070038633A1 (en) * | 2005-08-10 | 2007-02-15 | International Business Machines Corporation | Method and system for executing procedures in mixed-initiative mode |
US8122259B2 (en) | 2005-09-01 | 2012-02-21 | Bricom Technologies Ltd | Systems and algorithms for stateless biometric recognition |
US20140125455A1 (en) * | 2005-09-01 | 2014-05-08 | Memphis Technologies, Inc. | Systems and algorithms for classification of user based on their personal features |
US8645985B2 (en) * | 2005-09-15 | 2014-02-04 | Sony Computer Entertainment Inc. | System and method for detecting user attention |
US8616973B2 (en) * | 2005-09-15 | 2013-12-31 | Sony Computer Entertainment Inc. | System and method for control by audible device |
US8176101B2 (en) | 2006-02-07 | 2012-05-08 | Google Inc. | Collaborative rejection of media for physical establishments |
US7917148B2 (en) | 2005-09-23 | 2011-03-29 | Outland Research, Llc | Social musical media rating system and method for localized establishments |
US20070121873A1 (en) * | 2005-11-18 | 2007-05-31 | Medlin Jennifer P | Methods, systems, and products for managing communications |
EP2109097B1 (en) * | 2005-11-25 | 2014-03-19 | Swisscom AG | A method for personalization of a service |
US7396990B2 (en) | 2005-12-09 | 2008-07-08 | Microsoft Corporation | Automatic music mood detection |
US7773731B2 (en) * | 2005-12-14 | 2010-08-10 | At&T Intellectual Property I, L. P. | Methods, systems, and products for dynamically-changing IVR architectures |
US7577664B2 (en) * | 2005-12-16 | 2009-08-18 | At&T Intellectual Property I, L.P. | Methods, systems, and products for searching interactive menu prompting system architectures |
US7552098B1 (en) | 2005-12-30 | 2009-06-23 | At&T Corporation | Methods to distribute multi-class classification learning on several processors |
US20070158128A1 (en) * | 2006-01-11 | 2007-07-12 | International Business Machines Corporation | Controlling driver behavior and motor vehicle restriction control |
US8160233B2 (en) | 2006-02-22 | 2012-04-17 | Verint Americas Inc. | System and method for detecting and displaying business transactions |
US8112306B2 (en) | 2006-02-22 | 2012-02-07 | Verint Americas, Inc. | System and method for facilitating triggers and workflows in workforce optimization |
US8670552B2 (en) | 2006-02-22 | 2014-03-11 | Verint Systems, Inc. | System and method for integrated display of multiple types of call agent data |
US8112298B2 (en) | 2006-02-22 | 2012-02-07 | Verint Americas, Inc. | Systems and methods for workforce optimization |
US8108237B2 (en) | 2006-02-22 | 2012-01-31 | Verint Americas, Inc. | Systems for integrating contact center monitoring, training and scheduling |
US7864946B1 (en) | 2006-02-22 | 2011-01-04 | Verint Americas Inc. | Systems and methods for scheduling call center agents using quality data and correlation-based discovery |
US8117064B2 (en) | 2006-02-22 | 2012-02-14 | Verint Americas, Inc. | Systems and methods for workforce optimization and analytics |
US9129290B2 (en) | 2006-02-22 | 2015-09-08 | 24/7 Customer, Inc. | Apparatus and method for predicting customer behavior |
US7853006B1 (en) | 2006-02-22 | 2010-12-14 | Verint Americas Inc. | Systems and methods for scheduling call center agents using quality data and correlation-based discovery |
US7599861B2 (en) | 2006-03-02 | 2009-10-06 | Convergys Customer Management Group, Inc. | System and method for closed loop decisionmaking in an automated care system |
US7983910B2 (en) * | 2006-03-03 | 2011-07-19 | International Business Machines Corporation | Communicating across voice and text channels with emotion preservation |
US8050392B2 (en) * | 2006-03-17 | 2011-11-01 | At&T Intellectual Property I, L.P. | Methods systems, and products for processing responses in prompting systems |
US7961856B2 (en) * | 2006-03-17 | 2011-06-14 | At&T Intellectual Property I, L. P. | Methods, systems, and products for processing responses in prompting systems |
JP4745094B2 (en) * | 2006-03-20 | 2011-08-10 | 富士通株式会社 | Clustering system, clustering method, clustering program, and attribute estimation system using clustering system |
US7734783B1 (en) | 2006-03-21 | 2010-06-08 | Verint Americas Inc. | Systems and methods for determining allocations for distributed multi-site contact centers |
US8126134B1 (en) | 2006-03-30 | 2012-02-28 | Verint Americas, Inc. | Systems and methods for scheduling of outbound agents |
US7852994B1 (en) | 2006-03-31 | 2010-12-14 | Verint Americas Inc. | Systems and methods for recording audio |
US8000465B2 (en) | 2006-03-31 | 2011-08-16 | Verint Americas, Inc. | Systems and methods for endpoint recording using gateways |
US7826608B1 (en) | 2006-03-31 | 2010-11-02 | Verint Americas Inc. | Systems and methods for calculating workforce staffing statistics |
US7672746B1 (en) | 2006-03-31 | 2010-03-02 | Verint Americas Inc. | Systems and methods for automatic scheduling of a workforce |
US8204056B2 (en) | 2006-03-31 | 2012-06-19 | Verint Americas, Inc. | Systems and methods for endpoint recording using a media application server |
US7995612B2 (en) | 2006-03-31 | 2011-08-09 | Verint Americas, Inc. | Systems and methods for capturing communication signals [32-bit or 128-bit addresses] |
US7680264B2 (en) | 2006-03-31 | 2010-03-16 | Verint Americas Inc. | Systems and methods for endpoint recording using a conference bridge |
US7701972B1 (en) | 2006-03-31 | 2010-04-20 | Verint Americas Inc. | Internet protocol analyzing |
US8442033B2 (en) * | 2006-03-31 | 2013-05-14 | Verint Americas, Inc. | Distributed voice over internet protocol recording |
US7792278B2 (en) | 2006-03-31 | 2010-09-07 | Verint Americas Inc. | Integration of contact center surveys |
US7822018B2 (en) | 2006-03-31 | 2010-10-26 | Verint Americas Inc. | Duplicate media stream |
US8254262B1 (en) | 2006-03-31 | 2012-08-28 | Verint Americas, Inc. | Passive recording and load balancing |
US7774854B1 (en) | 2006-03-31 | 2010-08-10 | Verint Americas Inc. | Systems and methods for protecting information |
US8130938B2 (en) | 2006-03-31 | 2012-03-06 | Verint Americas, Inc. | Systems and methods for endpoint recording using recorders |
US8594313B2 (en) | 2006-03-31 | 2013-11-26 | Verint Systems, Inc. | Systems and methods for endpoint recording using phones |
US8155275B1 (en) | 2006-04-03 | 2012-04-10 | Verint Americas, Inc. | Systems and methods for managing alarms from recorders |
US20070244751A1 (en) * | 2006-04-17 | 2007-10-18 | Gary Zalewski | Using visual environment to select ads on game platform |
US20070255630A1 (en) * | 2006-04-17 | 2007-11-01 | Gary Zalewski | System and method for using user's visual environment to select advertising |
US20070243930A1 (en) * | 2006-04-12 | 2007-10-18 | Gary Zalewski | System and method for using user's audio environment to select advertising |
US8331549B2 (en) | 2006-05-01 | 2012-12-11 | Verint Americas Inc. | System and method for integrated workforce and quality management |
US8396732B1 (en) | 2006-05-08 | 2013-03-12 | Verint Americas Inc. | System and method for integrated workforce and analytics |
US7817795B2 (en) | 2006-05-10 | 2010-10-19 | Verint Americas, Inc. | Systems and methods for data synchronization in a customer center |
US20080059177A1 (en) * | 2006-05-19 | 2008-03-06 | Jamey Poirier | Enhancement of simultaneous multi-user real-time speech recognition system |
US7809663B1 (en) | 2006-05-22 | 2010-10-05 | Convergys Cmg Utah, Inc. | System and method for supporting the utilization of machine language |
US8379830B1 (en) | 2006-05-22 | 2013-02-19 | Convergys Customer Management Delaware Llc | System and method for automated customer service with contingent live interaction |
US7660406B2 (en) | 2006-06-27 | 2010-02-09 | Verint Americas Inc. | Systems and methods for integrating outsourcers |
US7660407B2 (en) | 2006-06-27 | 2010-02-09 | Verint Americas Inc. | Systems and methods for scheduling contact center agents |
US7660307B2 (en) | 2006-06-29 | 2010-02-09 | Verint Americas Inc. | Systems and methods for providing recording as a network service |
US7903568B2 (en) | 2006-06-29 | 2011-03-08 | Verint Americas Inc. | Systems and methods for providing recording as a network service |
US7848524B2 (en) | 2006-06-30 | 2010-12-07 | Verint Americas Inc. | Systems and methods for a secure recording environment |
US7953621B2 (en) | 2006-06-30 | 2011-05-31 | Verint Americas Inc. | Systems and methods for displaying agent activity exceptions |
US7966397B2 (en) | 2006-06-30 | 2011-06-21 | Verint Americas Inc. | Distributive data capture |
US7769176B2 (en) | 2006-06-30 | 2010-08-03 | Verint Americas Inc. | Systems and methods for a secure recording environment |
US8131578B2 (en) | 2006-06-30 | 2012-03-06 | Verint Americas Inc. | Systems and methods for automatic scheduling of a workforce |
US7881471B2 (en) | 2006-06-30 | 2011-02-01 | Verint Systems Inc. | Systems and methods for recording an encrypted interaction |
US7853800B2 (en) | 2006-06-30 | 2010-12-14 | Verint Americas Inc. | Systems and methods for a secure recording environment |
US20080010067A1 (en) * | 2006-07-07 | 2008-01-10 | Chaudhari Upendra V | Target specific data filter to speed processing |
JP2008022493A (en) * | 2006-07-14 | 2008-01-31 | Fujitsu Ltd | Reception support system and its program |
US20080027725A1 (en) * | 2006-07-26 | 2008-01-31 | Microsoft Corporation | Automatic Accent Detection With Limited Manually Labeled Data |
US20080086690A1 (en) * | 2006-09-21 | 2008-04-10 | Ashish Verma | Method and System for Hybrid Call Handling |
US7930314B2 (en) | 2006-09-28 | 2011-04-19 | Verint Americas Inc. | Systems and methods for storing and searching data in a customer center environment |
US7953750B1 (en) | 2006-09-28 | 2011-05-31 | Verint Americas, Inc. | Systems and methods for storing and searching data in a customer center environment |
US7570755B2 (en) | 2006-09-29 | 2009-08-04 | Verint Americas Inc. | Routine communication sessions for recording |
US7899178B2 (en) | 2006-09-29 | 2011-03-01 | Verint Americas Inc. | Recording invocation of communication sessions |
US7885813B2 (en) | 2006-09-29 | 2011-02-08 | Verint Systems Inc. | Systems and methods for analyzing communication sessions |
US8837697B2 (en) | 2006-09-29 | 2014-09-16 | Verint Americas Inc. | Call control presence and recording |
US7899176B1 (en) | 2006-09-29 | 2011-03-01 | Verint Americas Inc. | Systems and methods for discovering customer center information |
US7965828B2 (en) | 2006-09-29 | 2011-06-21 | Verint Americas Inc. | Call control presence |
US7920482B2 (en) | 2006-09-29 | 2011-04-05 | Verint Americas Inc. | Systems and methods for monitoring information corresponding to communication sessions |
US8068602B1 (en) | 2006-09-29 | 2011-11-29 | Verint Americas, Inc. | Systems and methods for recording using virtual machines |
US7991613B2 (en) | 2006-09-29 | 2011-08-02 | Verint Americas Inc. | Analyzing audio components and generating text with integrated additional session information |
US7873156B1 (en) | 2006-09-29 | 2011-01-18 | Verint Americas Inc. | Systems and methods for analyzing contact center interactions |
US7881216B2 (en) | 2006-09-29 | 2011-02-01 | Verint Systems Inc. | Systems and methods for analyzing communication sessions using fragments |
US8199886B2 (en) | 2006-09-29 | 2012-06-12 | Verint Americas, Inc. | Call control recording |
US8645179B2 (en) | 2006-09-29 | 2014-02-04 | Verint Americas Inc. | Systems and methods of partial shift swapping |
US7752043B2 (en) | 2006-09-29 | 2010-07-06 | Verint Americas Inc. | Multi-pass speech analytics |
US8005676B2 (en) | 2006-09-29 | 2011-08-23 | Verint Americas, Inc. | Speech analysis using statistical learning |
US8130926B2 (en) | 2006-12-08 | 2012-03-06 | Verint Americas, Inc. | Systems and methods for recording data |
US8280011B2 (en) | 2006-12-08 | 2012-10-02 | Verint Americas, Inc. | Recording in a distributed environment |
US8130925B2 (en) | 2006-12-08 | 2012-03-06 | Verint Americas, Inc. | Systems and methods for recording |
DE102006055864A1 (en) * | 2006-11-22 | 2008-05-29 | Deutsche Telekom Ag | Dialogue adaptation and dialogue system for implementation |
US20100217591A1 (en) * | 2007-01-09 | 2010-08-26 | Avraham Shpigel | Vowel recognition system and method in speech to text applictions |
CN101242452B (en) | 2007-02-05 | 2013-01-23 | 国际商业机器公司 | Method and system for automatic generation and provision of sound document |
US8542802B2 (en) | 2007-02-15 | 2013-09-24 | Global Tel*Link Corporation | System and method for three-way call detection |
US20110022395A1 (en) * | 2007-02-15 | 2011-01-27 | Noise Free Wireless Inc. | Machine for Emotion Detection (MED) in a communications device |
US20080201158A1 (en) | 2007-02-15 | 2008-08-21 | Johnson Mark D | System and method for visitation management in a controlled-access environment |
US8886537B2 (en) * | 2007-03-20 | 2014-11-11 | Nuance Communications, Inc. | Method and system for text-to-speech synthesis with personalized voice |
JP4838351B2 (en) * | 2007-03-29 | 2011-12-14 | パナソニック株式会社 | Keyword extractor |
US8170184B2 (en) | 2007-03-30 | 2012-05-01 | Verint Americas, Inc. | Systems and methods for recording resource association in a recording environment |
US8023639B2 (en) | 2007-03-30 | 2011-09-20 | Mattersight Corporation | Method and system determining the complexity of a telephonic communication received by a contact center |
US9106737B2 (en) | 2007-03-30 | 2015-08-11 | Verint Americas, Inc. | Systems and methods for recording resource association for recording |
US8718262B2 (en) | 2007-03-30 | 2014-05-06 | Mattersight Corporation | Method and system for automatically routing a telephonic communication base on analytic attributes associated with prior telephonic communication |
US8743730B2 (en) | 2007-03-30 | 2014-06-03 | Verint Americas Inc. | Systems and methods for recording resource association for a communications environment |
US8437465B1 (en) | 2007-03-30 | 2013-05-07 | Verint Americas, Inc. | Systems and methods for capturing communications data |
US7869586B2 (en) | 2007-03-30 | 2011-01-11 | Eloyalty Corporation | Method and system for aggregating and analyzing data relating to a plurality of interactions between a customer and a contact center and generating business process analytics |
US8315901B2 (en) | 2007-05-30 | 2012-11-20 | Verint Systems Inc. | Systems and methods of automatically scheduling a workforce |
US7949526B2 (en) * | 2007-06-04 | 2011-05-24 | Microsoft Corporation | Voice aware demographic personalization |
GB2451907B (en) * | 2007-08-17 | 2010-11-03 | Fluency Voice Technology Ltd | Device for modifying and improving the behaviour of speech recognition systems |
US8312379B2 (en) * | 2007-08-22 | 2012-11-13 | International Business Machines Corporation | Methods, systems, and computer program products for editing using an interface |
US8260619B1 (en) | 2008-08-22 | 2012-09-04 | Convergys Cmg Utah, Inc. | Method and system for creating natural language understanding grammars |
US10419611B2 (en) | 2007-09-28 | 2019-09-17 | Mattersight Corporation | System and methods for determining trends in electronic communications |
JP5171962B2 (en) * | 2007-10-11 | 2013-03-27 | 本田技研工業株式会社 | Text classification with knowledge transfer from heterogeneous datasets |
FR2923319B1 (en) * | 2007-11-06 | 2012-11-16 | Alcatel Lucent | DEVICE AND METHOD FOR OBTAINING CONTEXTS OF USERS OF COMMUNICATION TERMINALS FROM AUDIO SIGNALS CAPTURED IN THEIR ENVIRONMENT |
US8126723B1 (en) | 2007-12-19 | 2012-02-28 | Convergys Cmg Utah, Inc. | System and method for improving tuning using caller provided satisfaction scores |
CN101241699B (en) * | 2008-03-14 | 2012-07-18 | 北京交通大学 | A speaker identification method for remote Chinese teaching |
US7475344B1 (en) | 2008-05-04 | 2009-01-06 | International Business Machines Corporation | Genders-usage assistant for composition of electronic documents, emails, or letters |
US8401155B1 (en) | 2008-05-23 | 2013-03-19 | Verint Americas, Inc. | Systems and methods for secure recording in a customer center environment |
CA2665014C (en) | 2008-05-23 | 2020-05-26 | Accenture Global Services Gmbh | Recognition processing of a plurality of streaming voice signals for determination of responsive action thereto |
CA2665055C (en) * | 2008-05-23 | 2018-03-06 | Accenture Global Services Gmbh | Treatment processing of a plurality of streaming voice signals for determination of responsive action thereto |
CA2665009C (en) * | 2008-05-23 | 2018-11-27 | Accenture Global Services Gmbh | System for handling a plurality of streaming voice signals for determination of responsive action thereto |
US8219397B2 (en) * | 2008-06-10 | 2012-07-10 | Nuance Communications, Inc. | Data processing system for autonomously building speech identification and tagging data |
EP2172895A1 (en) * | 2008-10-02 | 2010-04-07 | Vodafone Holding GmbH | Providing information within the scope of a voice communication connection |
CA2685779A1 (en) * | 2008-11-19 | 2010-05-19 | David N. Fernandes | Automated sound segment selection method and system |
US9225838B2 (en) | 2009-02-12 | 2015-12-29 | Value-Added Communications, Inc. | System and method for detecting three-way call circumvention attempts |
US8630726B2 (en) | 2009-02-12 | 2014-01-14 | Value-Added Communications, Inc. | System and method for detecting three-way call circumvention attempts |
US8719016B1 (en) | 2009-04-07 | 2014-05-06 | Verint Americas Inc. | Speech analytics system and system and method for determining structured speech |
US20110044447A1 (en) * | 2009-08-21 | 2011-02-24 | Nexidia Inc. | Trend discovery in audio signals |
US9438741B2 (en) * | 2009-09-30 | 2016-09-06 | Nuance Communications, Inc. | Spoken tags for telecom web platforms in a social network |
US10115065B1 (en) | 2009-10-30 | 2018-10-30 | Verint Americas Inc. | Systems and methods for automatic scheduling of a workforce |
US20110276326A1 (en) * | 2010-05-06 | 2011-11-10 | Motorola, Inc. | Method and system for operational improvements in dispatch console systems in a multi-source environment |
US8417530B1 (en) * | 2010-08-20 | 2013-04-09 | Google Inc. | Accent-influenced search results |
US20120155663A1 (en) * | 2010-12-16 | 2012-06-21 | Nice Systems Ltd. | Fast speaker hunting in lawful interception systems |
US8769009B2 (en) | 2011-02-18 | 2014-07-01 | International Business Machines Corporation | Virtual communication techniques |
JP5250066B2 (en) * | 2011-03-04 | 2013-07-31 | 東芝テック株式会社 | Information processing apparatus and program |
US8798995B1 (en) | 2011-09-23 | 2014-08-05 | Amazon Technologies, Inc. | Key word determinations from voice data |
US8825533B2 (en) | 2012-02-01 | 2014-09-02 | International Business Machines Corporation | Intelligent dialogue amongst competitive user applications |
CN103377432A (en) * | 2012-04-16 | 2013-10-30 | 殷程 | Intelligent customer service marketing analysis system |
WO2013184667A1 (en) | 2012-06-05 | 2013-12-12 | Rank Miner, Inc. | System, method and apparatus for voice analytics of recorded audio |
CN102802114B (en) * | 2012-06-20 | 2015-02-18 | 北京语言大学 | Method and system for screening seat by using voices |
US8914285B2 (en) * | 2012-07-17 | 2014-12-16 | Nice-Systems Ltd | Predicting a sales success probability score from a distance vector between speech of a customer and speech of an organization representative |
US9245428B2 (en) | 2012-08-02 | 2016-01-26 | Immersion Corporation | Systems and methods for haptic remote control gaming |
US9564125B2 (en) | 2012-11-13 | 2017-02-07 | GM Global Technology Operations LLC | Methods and systems for adapting a speech system based on user characteristics |
US9601111B2 (en) * | 2012-11-13 | 2017-03-21 | GM Global Technology Operations LLC | Methods and systems for adapting speech systems |
US9507755B1 (en) | 2012-11-20 | 2016-11-29 | Micro Strategy Incorporated | Selecting content for presentation |
US9105042B2 (en) | 2013-02-07 | 2015-08-11 | Verizon Patent And Licensing Inc. | Customer sentiment analysis using recorded conversation |
US9734819B2 (en) | 2013-02-21 | 2017-08-15 | Google Technology Holdings LLC | Recognizing accented speech |
US9191510B2 (en) | 2013-03-14 | 2015-11-17 | Mattersight Corporation | Methods and system for analyzing multichannel electronic communication data |
US20150287410A1 (en) * | 2013-03-15 | 2015-10-08 | Google Inc. | Speech and semantic parsing for content selection |
CN103310788B (en) * | 2013-05-23 | 2016-03-16 | 北京云知声信息技术有限公司 | A kind of voice information identification method and system |
US20140358538A1 (en) * | 2013-05-28 | 2014-12-04 | GM Global Technology Operations LLC | Methods and systems for shaping dialog of speech systems |
US9215510B2 (en) | 2013-12-06 | 2015-12-15 | Rovi Guides, Inc. | Systems and methods for automatically tagging a media asset based on verbal input and playback adjustments |
CN103680518A (en) * | 2013-12-20 | 2014-03-26 | 上海电机学院 | Voice gender recognition method and system based on virtual instrument technology |
CN103778917B (en) * | 2014-01-10 | 2017-01-04 | 厦门快商通信息技术有限公司 | A kind of System and method for that detection identity is pretended to be in phone satisfaction investigation |
US9363378B1 (en) | 2014-03-19 | 2016-06-07 | Noble Systems Corporation | Processing stored voice messages to identify non-semantic message characteristics |
CN107003723A (en) * | 2014-10-21 | 2017-08-01 | 罗伯特·博世有限公司 | For the response selection in conversational system and the method and system of the automation of composition |
CN105744090A (en) | 2014-12-09 | 2016-07-06 | 阿里巴巴集团控股有限公司 | Voice information processing method and device |
US9722965B2 (en) * | 2015-01-29 | 2017-08-01 | International Business Machines Corporation | Smartphone indicator for conversation nonproductivity |
US9552810B2 (en) | 2015-03-31 | 2017-01-24 | International Business Machines Corporation | Customizable and individualized speech recognition settings interface for users with language accents |
WO2016209888A1 (en) | 2015-06-22 | 2016-12-29 | Rita Singh | Processing speech signals in voice-based profiling |
CN105206269A (en) * | 2015-08-14 | 2015-12-30 | 百度在线网络技术(北京)有限公司 | Voice processing method and device |
US10706873B2 (en) * | 2015-09-18 | 2020-07-07 | Sri International | Real-time speaker state analytics platform |
CN105513597B (en) * | 2015-12-30 | 2018-07-10 | 百度在线网络技术(北京)有限公司 | Voiceprint processing method and processing device |
US10572961B2 (en) | 2016-03-15 | 2020-02-25 | Global Tel*Link Corporation | Detection and prevention of inmate to inmate message relay |
US9609121B1 (en) | 2016-04-07 | 2017-03-28 | Global Tel*Link Corporation | System and method for third party monitoring of voice and video calls |
US10915819B2 (en) | 2016-07-01 | 2021-02-09 | International Business Machines Corporation | Automatic real-time identification and presentation of analogies to clarify a concept |
US20180018973A1 (en) | 2016-07-15 | 2018-01-18 | Google Inc. | Speaker verification |
CN107886955B (en) * | 2016-09-29 | 2021-10-26 | 百度在线网络技术(北京)有限公司 | Identity recognition method, device and equipment of voice conversation sample |
CN106534598A (en) * | 2016-10-28 | 2017-03-22 | 广东亿迅科技有限公司 | Calling platform queuing system based on emotion recognition and implementation method thereof |
US10096319B1 (en) * | 2017-03-13 | 2018-10-09 | Amazon Technologies, Inc. | Voice-based determination of physical and emotional characteristics of users |
US10027797B1 (en) | 2017-05-10 | 2018-07-17 | Global Tel*Link Corporation | Alarm control for inmate call monitoring |
US10225396B2 (en) | 2017-05-18 | 2019-03-05 | Global Tel*Link Corporation | Third party monitoring of a activity within a monitoring platform |
US10860786B2 (en) | 2017-06-01 | 2020-12-08 | Global Tel*Link Corporation | System and method for analyzing and investigating communication data from a controlled environment |
US9930088B1 (en) | 2017-06-22 | 2018-03-27 | Global Tel*Link Corporation | Utilizing VoIP codec negotiation during a controlled environment call |
JP6863179B2 (en) * | 2017-08-29 | 2021-04-21 | 沖電気工業株式会社 | Call center system, call center device, dialogue method, and its program with customer complaint detection function |
CN107919137A (en) * | 2017-10-25 | 2018-04-17 | 平安普惠企业管理有限公司 | The long-range measures and procedures for the examination and approval, device, equipment and readable storage medium storing program for executing |
US10135977B1 (en) * | 2017-11-24 | 2018-11-20 | Nice Ltd. | Systems and methods for optimization of interactive voice recognition systems |
EP3576084B1 (en) | 2018-05-29 | 2020-09-30 | Christoph Neumann | Efficient dialog design |
US20190385711A1 (en) | 2018-06-19 | 2019-12-19 | Ellipsis Health, Inc. | Systems and methods for mental health assessment |
EP3811245A4 (en) | 2018-06-19 | 2022-03-09 | Ellipsis Health, Inc. | Systems and methods for mental health assessment |
CN109147800A (en) * | 2018-08-30 | 2019-01-04 | 百度在线网络技术(北京)有限公司 | Answer method and device |
CN109036436A (en) * | 2018-09-18 | 2018-12-18 | 广州势必可赢网络科技有限公司 | A kind of voice print database method for building up, method for recognizing sound-groove, apparatus and system |
US11195507B2 (en) * | 2018-10-04 | 2021-12-07 | Rovi Guides, Inc. | Translating between spoken languages with emotion in audio and video media streams |
US10770072B2 (en) | 2018-12-10 | 2020-09-08 | International Business Machines Corporation | Cognitive triggering of human interaction strategies to facilitate collaboration, productivity, and learning |
US11152005B2 (en) * | 2019-09-11 | 2021-10-19 | VIQ Solutions Inc. | Parallel processing framework for voice to text digital media |
CN110648670B (en) * | 2019-10-22 | 2021-11-26 | 中信银行股份有限公司 | Fraud identification method and device, electronic equipment and computer-readable storage medium |
CN113257225B (en) * | 2021-05-31 | 2021-11-02 | 之江实验室 | Emotional voice synthesis method and system fusing vocabulary and phoneme pronunciation characteristics |
EP4202738A1 (en) * | 2021-12-22 | 2023-06-28 | Deutsche Telekom AG | User identification using voice input |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4093821A (en) * | 1977-06-14 | 1978-06-06 | John Decatur Williamson | Speech analyzer for analyzing pitch or frequency perturbations in individual speech pattern to determine the emotional state of the person |
DE69328275T2 (en) * | 1992-06-18 | 2000-09-28 | Seiko Epson Corp | Speech recognition system |
IL108401A (en) * | 1994-01-21 | 1996-12-05 | Hashavshevet Manufacture 1988 | Method and apparatus for indicating the emotional state of a person |
US6052441A (en) * | 1995-01-11 | 2000-04-18 | Fujitsu Limited | Voice response service apparatus |
US5918222A (en) * | 1995-03-17 | 1999-06-29 | Kabushiki Kaisha Toshiba | Information disclosing apparatus and multi-modal information input/output system |
DE69622439T2 (en) * | 1995-12-04 | 2002-11-14 | Jared C Bernstein | METHOD AND DEVICE FOR DETERMINING COMBINED INFORMATION FROM VOICE SIGNALS FOR ADAPTIVE INTERACTION IN TEACHING AND EXAMINATION |
US5895447A (en) | 1996-02-02 | 1999-04-20 | International Business Machines Corporation | Speech recognition using thresholded speaker class model selection or model adaptation |
US6026397A (en) * | 1996-05-22 | 2000-02-15 | Electronic Data Systems Corporation | Data analysis system and method |
US5915001A (en) * | 1996-11-14 | 1999-06-22 | Vois Corporation | System and method for providing and using universally accessible voice and speech data files |
WO1998031007A2 (en) * | 1997-01-09 | 1998-07-16 | Koninklijke Philips Electronics N.V. | Method and apparatus for executing a human-machine dialogue in the form of two-sided speech as based on a modular dialogue structure |
US5897616A (en) * | 1997-06-11 | 1999-04-27 | International Business Machines Corporation | Apparatus and methods for speaker verification/identification/classification employing non-acoustic and/or acoustic models and databases |
US6014647A (en) * | 1997-07-08 | 2000-01-11 | Nizzari; Marcia M. | Customer interaction tracking |
US6151601A (en) * | 1997-11-12 | 2000-11-21 | Ncr Corporation | Computer architecture and method for collecting, analyzing and/or transforming internet and/or electronic commerce data for storage into a data storage area |
JP3886024B2 (en) * | 1997-11-19 | 2007-02-28 | 富士通株式会社 | Voice recognition apparatus and information processing apparatus using the same |
1999
- 1999-08-10 US US09/371,400 patent/US6665644B1/en not_active Expired - Lifetime
2000
- 2000-06-13 CA CA002311439A patent/CA2311439C/en not_active Expired - Lifetime
- 2000-07-28 EP EP00306483A patent/EP1076329B1/en not_active Expired - Lifetime
- 2000-07-28 DE DE60030920T patent/DE60030920T2/en not_active Expired - Lifetime
- 2000-07-28 AT AT00306483T patent/ATE341071T1/en not_active IP Right Cessation
- 2000-08-08 CN CNB001227025A patent/CN1157710C/en not_active Expired - Lifetime
Also Published As
Publication number | Publication date |
---|---|
EP1076329A3 (en) | 2003-10-01 |
DE60030920D1 (en) | 2006-11-09 |
CA2311439A1 (en) | 2001-02-10 |
CN1157710C (en) | 2004-07-14 |
DE60030920T2 (en) | 2007-04-05 |
EP1076329B1 (en) | 2006-09-27 |
CN1283843A (en) | 2001-02-14 |
US6665644B1 (en) | 2003-12-16 |
EP1076329A2 (en) | 2001-02-14 |
ATE341071T1 (en) | 2006-10-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CA2311439C (en) | Conversational data mining | |
CN109767791B (en) | Voice emotion recognition and application system for call center calls | |
US8676586B2 (en) | Method and apparatus for interaction or discourse analytics | |
US7725318B2 (en) | System and method for improving the accuracy of audio searching | |
CN106294774A (en) | User individual data processing method based on dialogue service and device | |
CN112885332A (en) | Voice quality inspection method, system and storage medium | |
EP0549265A2 (en) | Neural network-based speech token recognition system and method | |
CN107993665A (en) | Spokesman role determines method, intelligent meeting method and system in multi-conference scene | |
CN112102850B (en) | Emotion recognition processing method and device, medium and electronic equipment | |
Gupta et al. | Two-stream emotion recognition for call center monitoring | |
CN111899740A (en) | Voice recognition system crowdsourcing test case generation method based on test requirements | |
Lee et al. | On natural language call routing | |
CN109325236A (en) | The method of service robot Auditory Perception kinsfolk's diet information | |
Priego-Valverde et al. | “Cheese!”: a Corpus of Face-to-face French Interactions. A Case Study for Analyzing Smiling and Conversational Humor | |
Scholten et al. | Learning to recognise words using visually grounded speech | |
Qadri et al. | A critical insight into multi-languages speech emotion databases | |
Schuller et al. | Ten recent trends in computational paralinguistics | |
Jia et al. | A deep learning system for sentiment analysis of service calls | |
KR102407055B1 (en) | Apparatus and method for measuring dialogue quality index through natural language processing after speech recognition | |
Casale et al. | Analysis of robustness of attributes selection applied to speech emotion recognition | |
Lin et al. | Phoneme-less hierarchical accent classification | |
Lee et al. | A study on natural language call routing | |
Blouin et al. | A study on the automatic detection and characterization of emotion in a voice service context. | |
Samad et al. | Performance Evaluation of Learning Classifiers of Children Emotions using Feature Combinations in the Presence of Noise | |
CN113990288B (en) | Method for automatically generating and deploying voice synthesis model by voice customer service |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
EEER | Examination request | ||
MKEX | Expiry | Effective date: 20200613