WO2006113409A2 - Method, system, and program product for measuring audio video synchronization using lip and teeth characteristics - Google Patents
Method, system, and program product for measuring audio video synchronization using lip and teeth characteristics
- Publication number
- WO2006113409A2 (PCT/US2006/014023)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- audio
- video
- information
- analyzing
- sounds
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/04—Synchronising
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/233—Processing of audio elementary streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
- H04N21/23418—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/242—Synchronization processes, e.g. processing of PCR [Program Clock References]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/06—Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
- G10L21/10—Transforming into visible information
- G10L2021/105—Synthesis of the lips movements from speech, e.g. for talking heads
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/44—Receiver circuitry for the reception of television signals according to analogue transmission standards
- H04N5/60—Receiver circuitry for the reception of television signals according to analogue transmission standards for the sound signals
- H04N5/602—Receiver circuitry for the reception of television signals according to analogue transmission standards for the sound signals for digital sound signals
Definitions
- the invention relates to the creation, manipulation, transmission, storage, etc., and especially the synchronization, of multi-media entertainment, educational and other programming having at least video and associated information.
- Typical examples of such programming are television and movie programs. Often these programs include a visual or video portion, an audible or audio portion, and may also include one or more various data type portions. Typical data type portions include closed captioning, narrative descriptions for the blind, additional program information data such as web sites and further information directives and various metadata included in compressed (such as for example MPEG and JPEG) systems.
- the video and associated signal programs are produced, operated on, stored or conveyed in a manner such that the synchronization of various ones of the aforementioned audio, video and/or data is affected.
- the synchronization of audio and video is commonly known as lip sync.
- One aspect of multi-media programming is maintaining audio and video synchronization in audio-visual presentations, such as television programs, for example to prevent annoyances to the viewers, to facilitate further operations with the program or to facilitate analysis of the program.
- US Patent 5,572,261 describes the use of actual mouth images in the video signal to predict what syllables are being spoken and compare that information to sounds in the associated audio signal to measure the relative synchronization. Unfortunately when there are no images of the mouth, there is no ability to determine which syllables are being spoken.
- an audio signal may correspond to one or more of a plurality of video signals, and it is desired to determine which.
- Patents 5,572,261, 5,530,483 and 5,751,368 describe operations without any inspection or response to the video signal images. Consequently the applicability of the descriptions of the patents is limited to particular systems where various video timing information, etc. is utilized.
- Patents 5,530,483 and 5,751,368 deal with measuring video delays and identifying video signal by inspection of the images carried in the video signal, but do not make any comparison or other inspection of video and audio signals.
- Patent 5,572,261 teaches the use of actual mouth images in the video signal and sounds in the associated audio signal to measure the relative synchronization.
- U.S. Patent 5,572,261 describes a mode of operation of detecting the occurrence of mouth sounds in both the lip images and the audio.
- Hershey et al. noted, in particular, that "[i]t is interesting that the synchrony is shared by some parts, such as the eyes, that do not directly contribute to the sound, but contribute to the communication nonetheless." More particularly, Hershey et al. noted that these parts of the face, including the lips, contribute to the communication as well. There was no suggestion by Hershey and Movellan that their algorithms could measure synchronization or perform any of the other features of the invention; indeed, they specifically said that such parts do not directly contribute to the sound. In this reference, the algorithms merely identified who was speaking based on the movement or non-movement of features.
- U.S. Patent 5,387,943 of Silver describes a method that requires that the mouth be identified by an operator. And, like U.S. Patent No. 5,572,261 discussed above, it utilizes video lip movements. In either of these references, only mere lip movement is considered. No other characteristic of the lips or other facial features, such as the shape of the lips, is considered in either of these disclosed methods. In particular, the spatial lip shape is not detected or considered in either of these references, just the movement, opened or closed.
- Perceptual aspects of the human voice such as pitch, loudness, timbre and timing
- the invention provides for directly comparing images conveyed in the video portion of a signal to characteristics in an associated signal, such as an audio signal. More particularly, there is disclosed a method, system, and program product for measuring audio video synchronization that is independent of the particular characteristics of the speaker, whether it be a deep toned speaker such as a large man, or a high pitch toned speaker, such as a small woman.
- the invention is directed in one embodiment to measure the shape of the lips to consider the vowel and other tones created by such shape. Unlike conventional approaches that consider mere movement, opened or closed, the invention considers the shape and movement of the lips, providing substantially improved accuracy of audio and video synchronization of spoken words by video characters.
- the invention considers the shape and may also consider movement of the lips. Furthermore, the invention provides a method for determining different spoken sounds by determining whether teeth are present between the open lips, such as when the letters "v" or "s", for example, are pronounced. A system configured according to the invention can thus reduce or remove one or more of the effects of different speaker related voice characteristics.
- MuEv is the contraction of Mutual Event, to mean an event occurring in an image, signal or data which is unique enough that it may be accompanied by another MuEv in an associated signal.
- Two such MuEvs are, for example, audio and video MuEvs, where a certain video quality (or sequence) corresponds to a unique and matching audio event.
- the invention provides for directly comparing images conveyed in the video portion of a signal to characteristics in an associated signal, such as an audio signal. More particularly, there is disclosed a method, system, and program product for measuring audio video synchronization in a manner that is independent from a speaker's personal voice characteristics.
- Audio and Video MuEvs are calculated from the audio and video information, and the audio and video information is classified into vowel sounds including, but not limited to, AA, EE, OO (capital double letters signifying the sounds of vowels a, e and o respectively); the letters "s", "v", "z" and "f", i.e., closed mouth shapes where teeth are present; the letters "p", "b", "m", i.e., closed mouth shapes where teeth are not present; silence; and other unclassified phonemes.
- This information is used to determine and associate a dominant audio class with one or more corresponding video frames. Matching locations are determined, and the offset of video and audio is determined.
- the sound EE may be identified as occurring in the audio information and matched to a corresponding image characteristic like lips forming a shape associated with speaking the vowel EE (a video MuEv) with the relative timing thereof being measured or otherwise utilized to determine or correct a lip sync error.
- the invention provides for directly comparing images conveyed in the video portion of a signal to characteristics in an associated signal, such as an audio signal. More particularly, there is disclosed a method, system, and program product for measuring audio video synchronization. This is done by first acquiring the data into an audio video synchronization system by receiving audio video information. Data acquisition is performed in a manner such that the time of the data acquisition may be later utilized in respect to determining relative audio and video timing. In this regard it is preferred that audio and video data be captured at the same time and be stored in memory at known locations so that it is possible to recall from memory audio and video which were initially time coincident simply by reference to such known memory location. Such recall from memory may be simultaneous for audio and video or as needed to facilitate processing. Other methods of data acquisition, storage and recall may be utilized however and may be tailored to specific applications of the invention. For example data may be analyzed as it is captured without intermediate storage.
- One aspect of the invention is a method for measuring audio video synchronization. The method comprises the steps of first receiving a video portion and an associated audio portion of, for example, a television program; analyzing the audio portion to locate the presence of particular phonemes therein; and also analyzing the video portion to locate therein the presence of particular visemes. This is followed by analyzing the phonemes and the visemes to determine the relative timing of related phonemes and visemes and to locate MuEvs.
- Another aspect of the invention is a method for measuring audio video synchronization by receiving video and associated audio information, analyzing the audio information to locate the presence of particular sounds, analyzing the video information to locate the presence of lip shapes corresponding to the formation of particular sounds, and comparing the location of particular sounds with the location of corresponding lip shapes to determine the relative timing of audio and video, e.g., via MuEvs.
- a further aspect of the invention is directed to a system and method for particularly analyzing the inner lip region.
- a process is provided that accurately extracts and examines the lip region.
- a narrow strip on the central portion of the lips is analyzed to estimate the percentage of lips (upper and lower), teeth and open space between teeth.
- the process accurately detects closed lips, wide open mouth and all teeth and lips.
- a further aspect of the invention is a method for measuring audio video synchronization, comprising the steps of receiving a video portion and an associated audio portion of a television program, analyzing the audio portion to locate the presence of particular vowel sounds while analyzing the video portion to locate the presence of lip shapes corresponding to uttering particular vowel sounds, and analyzing the presence and/or location of the located vowel sounds against the location of the corresponding lip shapes to determine the relative timing thereof.
- the invention further analyzes the audio portion for personal voice characteristics that are unique to a speaker and filters them out.
- an audio representation of the spoken voice related to a given video frame can be substantially standardized, where the personal characteristics of a speaker's voice are substantially filtered out.
- the invention provides methods, systems, and program products for identifying and locating MuEvs.
- MuEv is the contraction of Mutual EVent, meaning an event occurring in an image, signal or data which is unique enough that it may be accompanied by another MuEv in an associated signal. Accordingly, an image MuEv may have a probability of matching a MuEv in an associated signal. For example, with respect to a bat hitting a baseball, the crack of the bat in the audio signal is a MuEv, the swing of the bat is a MuEv, and the ball instantly changing direction is also a MuEv. Clearly each MuEv has a probability of matching the others in time.
- the detection of a video MuEv may be accomplished by looking for motion, and in particular quick motion in one or a few limited areas of the image while the rest of the image is static, i.e. the pitcher throwing the ball and the batter swinging at the ball.
- the crack of the bat may be detected by looking for short, percussive sounds which are isolated in time from other short percussive sounds.
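- By way of a non-limiting illustration, the following sketch shows one way such a percussive audio MuEv detector might be written; the thresholds, function name and the assumption of a NumPy sample array are illustrative, not part of the disclosure:

```python
import numpy as np

def percussive_muevs(audio, sample_rate, frame_ms=10, rise_ratio=4.0, min_gap_s=0.5):
    """Flag frames whose energy jumps well above the local average and is
    isolated in time from the previous detection -- a rough stand-in for
    the 'crack of the bat' style audio MuEv described above."""
    frame = int(sample_rate * frame_ms / 1000)
    n_frames = len(audio) // frame
    energy = np.array([np.sum(audio[i * frame:(i + 1) * frame] ** 2)
                       for i in range(n_frames)])
    local_avg = np.convolve(energy, np.ones(21) / 21, mode="same") + 1e-12
    events, last_t = [], -np.inf
    for i, e in enumerate(energy):
        t = i * frame / sample_rate
        if e > rise_ratio * local_avg[i] and t - last_t >= min_gap_s:
            events.append(t)   # time in seconds of a candidate percussive MuEv
            last_t = t
    return events
```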
- FIG 1 is an overview of a system for carrying out the method of the invention.
- FIG 2 shows a diagram of the invention with images conveyed by a video signal and associated information conveyed by an associated signal and a synchronization output.
- FIG 3 shows a diagram of the invention as used with a video signal conveying images and an audio signal conveying associated information.
- FIG 4 is a flow chart illustrating the "Data Acquisition Phase", also referred to as an "A/V MuEv Acquisition and Calibration Phase", of the method of the invention.
- FIG 5 is a flow chart illustrating the "Audio Analysis Phase" of the method of the invention.
- FIG 6 is a flow chart illustrating the Video Analysis of the method of the invention.
- FIG 7 is a flow chart illustrating the derivation and calculation of the Audio MuEv, also referred to as a Glottal Pulse.
- FIG 8 is a flow chart illustrating the "Test Phase" of the method of the invention.
- FIG 9 is a flow chart illustrating the characteristics of the Audio MuEv also referred to as a Glottal Pulse.
- FIG 10 is a flow chart illustrating the process for removing the personal voice characteristics from an audio portion of an audio/video presentation according to the invention.
- the preferred embodiment of the invention has an image input, an image mutual event identifier which provides image MuEvs, and an associated information input, an associated information mutual event identifier which provides associated information MuEvs.
- the image MuEvs and associated information MuEvs are suitably coupled to a comparison operation which compares the two types of MuEvs to determine their relative timing.
- MuEvs may be labeled in regard to the method of conveying images or associated information, or may be labeled in regard to the nature of the images or associated information.
- video MuEv, brightness MuEv, red MuEv, chroma MuEv and luma MuEv are some types of image MuEvs; audio MuEv, data MuEv, weight MuEv, speed MuEv and temperature MuEv are some types of associated MuEvs which may be commonly utilized.
- Figure 1 shows the preferred embodiment of the invention wherein video conveys the images and an associated signal conveys the associated information.
- Figure 2 has video input 1, mutual event identifier 3 with MuEv output 5, associated signal input 2, mutual event identifier 4 with MuEv output 6, comparison 7 with output 8.
- video signal 1 is coupled to an image MuEv identifier 3 which operates to compare a plurality of image frames of video to identify the movement (if present) of elements within the image conveyed by the video signal.
- the computation of motion vectors commonly utilized with video compression, such as in MPEG compression, is useful for this function. It is useful to discard motion vectors which indicate only small amounts of motion and use only motion vectors indicating significant motion, on the order of 5% of the picture height or more. When such movement is detected, it is inspected in relation to the rest of the video signal movement to determine if it is an event which is likely to have a corresponding MuEv in the associated signal.
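- As a minimal sketch of the motion-vector screening just described (assuming motion vectors are available as per-block (dx, dy) displacements; names and the localized-motion test are illustrative):

```python
import numpy as np

def significant_motion_count(motion_vectors, picture_height):
    """Count motion vectors whose displacement is at least 5% of the picture
    height, discarding small-motion vectors as suggested above.
    motion_vectors: array of shape (N, 2) holding per-block (dx, dy)."""
    mv = np.asarray(motion_vectors, dtype=float)
    magnitudes = np.hypot(mv[:, 0], mv[:, 1])
    return int(np.sum(magnitudes >= 0.05 * picture_height))

def is_motion_muev_frame(motion_vectors, picture_height, max_moving_frac=0.05):
    """A frame is a motion-MuEv candidate when a small, localized set of
    blocks moves significantly while the rest of the image stays static."""
    moving = significant_motion_count(motion_vectors, picture_height)
    return 1 <= moving <= max_moving_frac * len(motion_vectors)
```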
- a motion based video MuEv detection is used only as a fallback when none of the other described methods, such as lip shape for example, is available due to the particular video content.
- the reason is that if lip shape detection is available, it is preferred over motion detection (and also over the lip motion method of the '261 patent discussed above) because it is much more accurate, owing to the greater ability to match particular sounds (AA, OO, EE for example) rather than just motion. This is because strictly motion based detection can be fooled by different sounds that are generated with the same motion.
- lip shape detection can be performed on a single frame, whereas motion based detection requires a plurality of frames.
- a MuEv output is generated at 5 indicating the presence of the MuEv(s) within the video field or frame(s), in this example where there is movement that is likely to have a corresponding MuEv in the associated signal.
- it is preferred that a binary number be output for each frame, with the number indicating the number of MuEvs, i.e., small region elements which moved in that frame relative to the previous frame while the remaining portion of the frame remained relatively static.
- the associated signal 2 is coupled to a mutual event identifier 4 which is configured to identify the occurrence of associated signal MuEvs within the associated signal.
- a MuEv output is provided at 6.
- the MuEv output is preferred to be a binary number indicating the number of MuEvs which have occurred within a contiguous segment of the associated signal 2, and in particular within a segment corresponding in length to the field or frame period of the video signal 1 which is utilized for outputting the movement signal number 5.
- This time period may be coupled from movement identifier 3 to MuEv identifier 4 via suitable coupling 9 as will be known to persons of ordinary skill in the art from the description herein.
- video 1 may be coupled directly to MuEv identifier 4 for this and other purposes as will be known from these present teachings.
- a signal is indicated as the preferred method of conveying the associated information to the associated information MuEv identifier 4
- other types of associated information conveyances such as files, clips, data, etc. may be utilized as the operation of the invention is not restricted to the particular manner in which the associated information is conveyed.
- the associated information is also known as the associated signal, owing to the preferred use of a signal for conveyance.
- the associated information MuEvs are also known as associated signal MuEvs. The detection of MuEvs in the associated signal will depend in large part on the nature of the associated signal.
- For example, data which is provided by or in response to a device which is likely present in the image, such as data coming from the customer input to a teller machine, would be a good MuEv. Audio characteristics which are likely correlated with motion are good MuEvs, as discussed below.
- the use of changes within particular regions of the associated signal, changes in the signal envelope, changes in the information, frequency or energy content of the signal and other changes in properties of the signal may be utilized as well, either alone or in combination, to generate MuEvs. More details of identification of MuEvs in particular signal types will be provided below in respect to the detailed embodiments of the invention.
- a MuEv output is presented at 5 and a MuEv output is presented at 6.
- the image MuEv output, also known in this preferred embodiment as a video MuEv owing to the use of video as the method of conveying images, and the associated signal MuEv output are suitably coupled to comparison 7, which operates to determine the best match, on a sliding time scale, of the two outputs.
- the comparison is preferred to be a correlation which determines the best match between the two signals and the relative time therebetween.
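- A minimal sketch of such a sliding-time-scale correlation over per-frame MuEv counts might look as follows; the normalization and lag convention are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

def estimate_av_offset(video_counts, audio_counts, max_lag_frames):
    """Correlate per-frame video MuEv counts against audio MuEv counts on a
    sliding time scale and return the best-matching lag in frames (positive
    lag meaning the audio events trail the video events)."""
    v = np.asarray(video_counts, dtype=float)
    a = np.asarray(audio_counts, dtype=float)
    v = (v - v.mean()) / (v.std() + 1e-12)   # normalize so scores are comparable
    a = (a - a.mean()) / (a.std() + 1e-12)
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag_frames, max_lag_frames + 1):
        if lag >= 0:
            x, y = v, a[lag:]       # pair v[i] with a[i + lag]
        else:
            x, y = v[-lag:], a      # pair v[i - lag] with a[i]
        n = min(len(x), len(y))
        if n < 2:
            continue
        score = float(np.dot(x[:n], y[:n])) / n
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```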
- AVSync (Audio Video Sync) detection
- MuEvs such as vowel sounds, silence, and consonant sounds are utilized, including, preferably, at least three vowel sounds and silence. Exemplary of the vowel sounds are the three vowel sounds /AA/, /EE/ and /OO/.
- the process described herein assumes speaker independence in its final implementation.
- the first phase is an initial data acquisition phase, also referred to as an Audio/Video MuEv Acquisition and Calibration Phase shown generally in FIG 4.
- in the initial data acquisition phase, experimental data is used to create decision boundaries and establish segmented audio regions for phonemes, that is, Audio MuEvs /AA/, /OO/, /EE/.
- the methodology is not limited to only three vowels; it can be expanded to include other vowels or syllables, such as the "lip-biting" "V" and "F", etc.
- positions of these vowels are identified in the audio and video streams. By analyzing the vowel position in the audio and the detected vowel in the corresponding video frame, audio-video synchronicity is estimated.
- Audio MuEv classification is based on Glottal Pulse analysis. In Glottal Pulse analysis, shown and described in detail in FIG 5, audio samples are collected and glottal pulses are calculated from audio samples in non-silence zones. For each glottal pulse period, the Mean and the Second and Third Moments are computed. The moments are centralized and normalized around the mean, and are plotted as a scattergram in Figure 6(b), discussed below. Decision boundaries, which separate most of the vowel classes, are drawn and stored as parameters for audio classification.
- the lip region for each video frame is extracted employing a face detector and lip tracker.
- the intensity values are preferably normalized to remove any uneven lighting effects.
- the lip region is divided into sub-regions, typically three sub-regions - inner, outer and difference region.
- the inner region is formed by removing about 25% of the pixels from all four sides of the outer lip region.
- the difference of the outer lip-region and the inner region is considered a difference region.
- Mean and standard deviation of all three regions are calculated. The mean/standard deviation of these regions is considered as a video measure of spoken vowels, thus forming a corresponding Video MuEv.
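- A minimal sketch of these region statistics, assuming the lip region has already been extracted and lighting-normalized as described above (the 25% trim follows the text; the function name and dictionary layout are illustrative):

```python
import numpy as np

def lip_region_features(lip_patch):
    """Mean/std features for the outer lip region, an inner region formed by
    trimming about 25% of the pixels from each of the four sides, and the
    'difference' ring between them, per the description above.
    lip_patch: 2-D array of (lighting-normalized) gray-level intensities."""
    h, w = lip_patch.shape
    dy, dx = int(0.25 * h), int(0.25 * w)
    inner = lip_patch[dy:h - dy, dx:w - dx]
    ring_mask = np.ones(lip_patch.shape, dtype=bool)
    ring_mask[dy:h - dy, dx:w - dx] = False    # keep only the outer ring
    ring = lip_patch[ring_mask]
    return {
        "outer": (float(lip_patch.mean()), float(lip_patch.std())),
        "inner": (float(inner.mean()), float(inner.std())),
        "difference": (float(ring.mean()), float(ring.std())),
    }
```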
- Video MuEv is substantially based on the outer, inner and difference regions which in turn are based substantially on lip shape, rather than mere lip movement.
- a system configured with this method of finding Video MuEvs is capable of finding more MuEvs than a conventional system, that is, typically a strictly motion based system. For example, a lip shape corresponding to a speaker's vowel sound of "EE" can be identified in each frame in which the shape is present. By comparison, a system that uses mere lip movement to determine an EE sound would take several frames, since redundant measuring of the motion of the lips over those several frames would be needed to establish which sound the lips are making.
- the shape of the lips substantially reduces the number of frames needed to determine the sound that the speaker is making. Also, the invention provides particular teachings of the manner in which the shape of the lips may be discerned by a system. These teachings may be utilized to provide substantially faster identification of the sound that the lips are making and higher accuracy of alignment.
- the detection phase is shown and described in greater detail in FIG 7.
- One possible implementation of the detection phase, shown in Figure 7, is to process the test data frame by frame. A large number of samples, e.g., about 450 audio samples or more, are taken as the audio window. Each audio window having more than some fraction, for example 80%, of non-silence data is processed to calculate an audio MuEv or GP (glottal pulse). The audio features are computed for Audio MuEv or GP samples. The average spectrum values over a plurality of audio frames, for example over 10 or more consecutive audio frames with a 10% shift, are used for this purpose.
- a dominant audio class in a video frame is determined and associated to a video frame to define a MuEv. This is accomplished by locating matching locations, and estimating offset of audio and video.
- the step of acquiring data in an audio video synchronization system with input audio video information is as shown in FIG 4.
- Data acquisition includes the steps of receiving audio video information 201, separately extracting the audio information and the video information 203, analyzing the audio information 205 and the video information 207, and recovering audio and video analysis data there from.
- the audio and video data is stored 209 and recycled.
- Analyzing the data includes drawing scatter diagrams of audio moments from the audio data 211, drawing an audio decision boundary and storing the resulting audio decision data 213, drawing scatter diagrams of video moments from the video data 215, and drawing a video decision boundary 217 and storing the resulting video decision data 219.
- the audio information is analyzed, for example by a method such as is shown in FIG 5.
- This method includes the steps of receiving an audio stream 301 until the fraction of captured audio samples reaches a threshold 303. If the fraction of captured audio reaches the threshold, the audio MuEv or glottal pulse of the captured audio samples is determined 307. The next step is calculating a Fast Fourier Transform (or Discrete Cosine Transform, or DCT) for sets of successive audio data of the size of the audio MuEvs or glottal pulses within a shift 309. This is done by calculating an average frequency spectrum of the Fast Fourier Transforms (or DCT) 311.
- the detected audio statistics 313 include one or more of the centralized and normalized M1 (mean), M2BAR (2nd Moment), and M3BAR (3rd Moment), where "BAR" means logical "not". This is discussed and detailed further below.
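- The audio-side flow of FIG 5 might be sketched as follows, assuming the silence threshold, the 80% non-silence fraction, and the 10-frame/10%-shift averaging given above; the function name and the RMS-based silence test are illustrative assumptions:

```python
import numpy as np

def window_average_spectrum(window, frame_len, n_frames=10, shift_frac=0.10,
                            silence_rms=1e-3, min_voiced_frac=0.8):
    """Sketch of the FIG 5 flow: skip windows that are mostly silence, then
    average FFT magnitude spectra over n_frames frames, each offset by 10%
    of a frame, before the moment statistics are computed."""
    frames = [window[i:i + frame_len]
              for i in range(0, len(window) - frame_len + 1, frame_len)]
    voiced = [f for f in frames if np.sqrt(np.mean(f ** 2)) > silence_rms]
    if not frames or len(voiced) < min_voiced_frac * len(frames):
        return None                       # mostly silence: skip this window
    shift = max(1, int(shift_frac * frame_len))
    spectra = [np.abs(np.fft.rfft(window[k * shift:k * shift + frame_len]))
               for k in range(n_frames) if k * shift + frame_len <= len(window)]
    if not spectra:
        return None
    return np.mean(spectra, axis=0)       # average spectrum for this window
```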
- the analysis of video information is as shown in FIG 6(a) by a method that includes the steps of receiving a video stream and obtaining a video frame from the video stream 401, finding a lip region of a face in the video frame 403, and, if the video frame is a silence frame, receiving a subsequent video frame 405.
- the inner and outer lip regions of the face are defined 407, the mean and variance of the inner and outer lip regions of the face are calculated 409, and the width and height of the lips are calculated 411.
- This method provides spatially based MuEvs that are not motion dependent. Again note that all of this spatially based information may be derived from a single frame, or even a single field, of video. Thus the potential of quickly finding many spatially based video MuEvs is substantially increased, as compared to a conventional motion based (temporal) analysis of lip movement. That is not to say, however, that movement based MuEvs are not useful, and they may be utilized alone or in combination with the spatially based MuEvs if desired.
- the video features are returned and the next frame is received.
- in FIG. 6(b), an illustration of a scatter diagram 600 showing vowels and matching mouth shapes is presented.
- mouth shapes 602a, 602b and 602c of a speaker are illustrated.
- the different mouth shapes illustrated correspond to different vowel sounds. Their corresponding sounds can be plotted on scatter diagram 600.
- the Y axis is the Y component of the moment based measure
- the X axis is the X component of the moment based measure.
- the mouth shape of speaker 602a makes the /AA/ vowel sound as shown, and the scatter diagram output of the sound can be seen by the points on the scatter diagram 604a.
- the mouth shape is open, as is the case when the /AA/ vowel is spoken.
- mouth shape 602b outputs the vowel sound /OO/, and the output of the sound is illustrated in the scatter points 604b.
- the mouth is open, but the shape is different for the /OO/ sound than the prior illustrated /AA/ sound.
- the different mouth shapes correspond to the different sounds, including vowels and other sounds such as /V/, /F/, /S/, /ZZ/, and many other sounds.
- Shape 602c corresponds to the /EE/ vowel, and the scatter diagram illustrates the corresponding points 604c, which are in different locations on the diagram than those of the /AA/ and /OO/ sounds.
- the illustration shows how a scatter diagram can define the different sounds according to the moment based measure, and also shows how distinctive the different sounds and corresponding mouth shapes are.
- This method includes the steps of receiving a stream of audio and video information 601, retrieving individual audio and video information 603, analyzing the audio 605 and video information 613, and classifying the audio 607, which includes /AA/, /EE/, /OO/, /M/, /P/, /B/, /V/, /S/ and other sounds, and the video information 615, which includes /AA/, /EE/, /OO/, /M/, /P/, /B/, /V/, /S/ and other sounds.
- Different sounds may be utilized in this process, and the invention may be practiced utilizing different sounds. Those skilled in the art will understand that, given this specification, different sounds can be utilized in order to fit a particular desired level of performance versus complexity without departing from the invention.
- the illustrations show that the sounds classified in the audio analysis and the video analysis are the same. It is possible in different situations, however, that they may be different. While different sounds than those suggested could be used, they would typically be the same for both sides. In one embodiment, it may be useful to use a larger (overlapping) set of different sounds for one (either audio or video) than for the other due to ease or difficulty of processing.
- a system may use /AA/, /EE/, /OO/, /M/, /P/, /B/, /V/, /S/, but if the audio is noisy or distorted, or for some other reason related to the application, might only use /AA/, /EE/, and /OO/.
- Video where there is no presence of a head might use two, one or none for the duration of no head.
- Video with lots of talking heads might initially use a small set while it identifies which head is the one corresponding to the sound (i.e. which head has the microphone).
- a smaller set may be used to speed acquisition, followed by use of a larger set to facilitate accuracy after initial acquisition. This smaller set/larger set approach could take place with both audio and video, or either one. This is followed by filtering the audio 609 and video information 617 to remove randomly occurring classes, associating the most dominant audio classes to corresponding video frames 611, finding matching locations 619, and estimating an asynchronous offset 621.
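- A minimal sketch of this classification filtering and offset estimation, with illustrative names throughout and a simple isolated-label filter standing in for the removal of randomly occurring classes:

```python
from collections import Counter

def dominant_class_per_frame(window_labels, windows_per_frame):
    """Associate the most frequent audio class label in each video-frame
    period with that frame (the 'dominant audio class')."""
    return [Counter(window_labels[i:i + windows_per_frame]).most_common(1)[0][0]
            for i in range(0, len(window_labels), windows_per_frame)]

def drop_isolated_labels(labels, background="other"):
    """Replace single-frame labels that match neither neighbour -- a simple
    stand-in for filtering out randomly occurring classes."""
    out = list(labels)
    for i in range(1, len(labels) - 1):
        if labels[i] != labels[i - 1] and labels[i] != labels[i + 1]:
            out[i] = background
    return out

def best_label_offset(audio_labels, video_labels, max_lag):
    """Pick the frame offset maximizing agreement between the audio-derived
    and video-derived class sequences."""
    def agreement(lag):
        pairs = list(zip(audio_labels[max(lag, 0):], video_labels[max(-lag, 0):]))
        return sum(a == v for a, v in pairs) / max(len(pairs), 1)
    return max(range(-max_lag, max_lag + 1), key=agreement)
```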
- the audio and video information is classified into vowel sounds including at least /AA/, /EE/ and /OO/.
- a further aspect of our invention is a system for carrying out the above described method of measuring audio video synchronization. This is done by a method comprising an Initial A/V MuEv Acquisition and Calibration Phase of an audio video synchronization system, thus establishing a correlation of related Audio and Video MuEvs, and an Analysis Phase which involves taking input audio video information, analyzing the audio information, analyzing the video information, calculating Audio MuEvs and Video MuEvs from the audio and video information, and determining and associating a dominant audio class in a video frame, locating matching locations, and estimating the offset of audio and video.
- a further aspect of our invention is a program product comprising computer readable code for measuring audio video synchronization. This is done by a method comprising an Initial A/V MuEv Acquisition and Calibration Phase of an audio video synchronization system, thus establishing a correlation of related Audio and Video MuEvs, and an Analysis Phase which involves taking input audio video information, analyzing the audio information, analyzing the video information, calculating Audio MuEvs and Video MuEvs from the audio and video information, and determining and associating a dominant audio class in a video frame, locating matching locations, and estimating the offset of audio and video.
- the invention may be implemented, for example, by having the various means of receiving video signals and associated signals, identifying Audio-visual events and comparing video signal and associated signal Audio-visual events to determine relative timing as a software application (as an operating system element), a dedicated processor, or a dedicated processor with dedicated code.
- the software executes a sequence of machine-readable instructions, which can also be referred to as code. These instructions may reside in various types of signal-bearing media.
- one aspect of the invention concerns a program product, comprising a signal-bearing medium or signal- bearing media tangibly embodying a program of machine-readable instructions executable by a digital processing apparatus to perform a method for receiving video signals and associated signals, identifying Audio-visual events and comparing video signal and associated signal Audio-visual events to determine relative timing.
- This signal-bearing medium may comprise, for example, memory in a server.
- the memory in the server may be non- volatile storage, a data disc, or even memory on a vendor server for downloading to a processor for installation.
- the instructions may be embodied in a signal-bearing medium such as the optical data storage disc.
- the instructions may be stored on any of a variety of machine-readable data storage mediums or media, which may include, for example, a "hard drive", a RAID array, a RAMAC, a magnetic data storage diskette (such as a floppy disk), magnetic tape, digital optical tape, RAM, ROM, EPROM, EEPROM, flash memory, lattice and 3 dimensional array type optical storage, magneto-optical storage, paper punch cards, or any other suitable signal-bearing media including transmission media such as digital and/or analog communications links, which may be electrical, optical, and/or wireless.
- the machine-readable instructions may comprise software object code, compiled from a language such as "C++".
- program code may, for example, be compressed, encrypted, or both, and may include executable files, script files and wizards for installation, as in Zip files and cab files.
- machine-readable instructions or code residing in or on signal-bearing media include all of the above means of delivery.
- Audio MuEv Glottal Pulse Analysis.
- the method, system, and program product described is based on glottal pulse analysis.
- the concept of glottal pulse arises from the shortcomings of other voice analysis and conversion methods.
- the majority of prior art voice conversion methods deal mostly with the spectral features of voice.
- a shortcoming of spectral analysis is that the voice's source characteristics cannot be entirely manipulated in the spectral domain.
- the voice's source characteristics affect the voice quality of speech, defining whether a voice will have a modal (normal), pressed, breathy, creaky, harsh or whispery quality.
- the quality of voice is affected by the shape, length, thickness, mass and tension of the vocal folds, and by the volume and frequency of the pulse flow.
- a complete voice conversion method needs to include a mapping of the source characteristics.
- the voice quality characteristics (as related to the glottal pulse) are much more obvious in the time domain than in the frequency domain.
- One method of obtaining the glottal pulse begins by deriving an estimate of the shape of the glottal pulse in the time domain. The estimate of the glottal pulse improves the source and the vocal tract deconvolution and the accuracy of formant estimation and mapping.
- the laryngeal parameters are used to describe the glottal pulse.
- the parameters are based on the LF (Liljencrants/Fant) model illustrated in FIG 9.
- GCI glottal closure instant
- the LF model parameters are obtained from an iterative application of a dynamic time alignment method to an estimate of the glottal pulse sequence.
- the initial estimate of the glottal pulse is obtained via an LP inverse filter.
- the estimate of the parameters of LP model is based on a pitch synchronous method using periods of zero-excitation coinciding with the close phase of a glottal pulse cycle.
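- A minimal sketch of such an LP inverse filter, using standard autocorrelation-based linear prediction (the model order and the use of SciPy are illustrative assumptions; the patent's pitch-synchronous, closed-phase refinement is not shown):

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def glottal_residual(speech, order=18):
    """Estimate the glottal excitation by LP inverse filtering: fit an
    all-pole vocal-tract model from the autocorrelation sequence, then pass
    the speech through the inverse FIR filter A(z)."""
    x = np.asarray(speech, dtype=float)
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    a = solve_toeplitz(r[:order], r[1:order + 1])   # LP normal equations
    inverse_fir = np.concatenate(([1.0], -a))       # A(z) = 1 - sum a_k z^-k
    return lfilter(inverse_fir, [1.0], x)           # residual ~ glottal derivative
```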
- the parameterization process can be divided into two stages:
- (a) Initial estimation of the LF model parameters. An initial estimate of each parameter is obtained from analysis of an initial estimate of the excitation sequence.
- the parameter Te corresponds to the instant when the glottal derivative signal reaches its local minimum.
- the parameter AV is the magnitude of the signal at this instant.
- the parameter Tp can be estimated as the first zero crossing to the left of Te.
- the parameter Tc can be found as the first sample, to the right of Te, smaller than a certain preset threshold value.
- the parameter To can be estimated as the instant to the left of Tp when the signal is lower than a certain threshold value, and is constrained by the value of the open quotient.
- Ta is estimated from the magnitude of the normalized spectrum (normalized by AV) during the closing phase.
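- Stage (a) might be sketched as follows, under the assumption that one cycle of the estimated glottal derivative is available as an array; sample indices stand in for times, the 5% threshold is an illustrative preset, and To/Ta and the stage (b) refinement are omitted:

```python
import numpy as np

def lf_initial_estimates(gd, threshold_frac=0.05):
    """Read initial LF-model parameters off one cycle of the glottal
    derivative gd, per stage (a) above."""
    te = int(np.argmin(gd))                # instant of the local minimum
    av = float(abs(gd[te]))                # magnitude at that instant
    tp = te                                # first zero crossing left of Te
    while tp > 0 and not (gd[tp - 1] >= 0 > gd[tp]):
        tp -= 1
    tc = te + 1                            # first sample right of Te below threshold
    while tc < len(gd) - 1 and abs(gd[tc]) >= threshold_frac * av:
        tc += 1
    return {"Te": te, "AV": av, "Tp": tp, "Tc": tc}
```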
- (b) Constrained non-linear optimization of the parameters.
- a dynamic time warping (DTW) method is employed. DTW time-aligns a synthetically generated glottal pulse with the one obtained through the inverse filtering.
- the aligned signal is a smoother version of the modeled signal, with its timing properties undistorted, but with no short term or other time fluctuations present in the synthetic signal.
- the technique is used iteratively, as the aligned signal can replace the estimated glottal pulse as the new template from which to estimate the LF parameters.
- an audio synchronization method in another embodiment, provides an audio output that is substantially independent of a given speaker's personal characteristics. Once the output is generated, it is substantially similar for any number of speakers, regardless of any individual speaker characteristics. According to the invention, an audio/video system so configured can reduce or remove one or more of the effects of different speaker related voice characteristics.
- Analysis is the methodological examination of information or data, as will be known to the person of ordinary skill in the art from the teachings herein, including calculation and logical decisions, and is preferred to be (but is not limited to) observation from which a decision may be made.
- Calculation is computation, ciphering, reckoning, estimation or evaluation of information or data by mathematics, as will be known to the person of ordinary skill in the art from the teachings herein, and is preferred (but not required) to produce a logical or numerical output.
- the person of ordinary skill will be able to implement appropriate analysis and/or calculation suitable to practice the invention in a form suitable for a particular application from the teachings herein.
- the most important perceptual aspects of the human voice are pitch, loudness, timbre and timing (related to tempo and rhythm). These characteristics are usually considered to be more or less independent of one another, and they are considered to be related to the acoustic signal's fundamental frequency f0, amplitude, spectral envelope and time variation, respectively.
- f0 is determined by individual body resonance (chest, throat, mouth cavity) and the length of one's vocal cords. Pitch information is localized in the lower frequency spectrum of one's voice.
- the novel methodology concentrates on assessing one's voice characteristics in the frequency domain, then eliminating the first few harmonics, or the entire lower frequency band. The result leaves the essence, or the harmonic spectra, of the individual intelligent sound, or phoneme, produced by the human speaking apparatus.
- the output is an audio output that is independent of a speaker's personal characteristics.
- moments of the Fourier Transform (or DCT) and Audio Normalization are used to eliminate dependency on amplitude and time variations, thus further enhancing the voice recognition methodology.
- let f_i be the i-th harmonic of the Fourier Transform (or DCT), and let n be the number of samples with respect to 10 ms of data.
- the index i is scaled so that it covers the full frequency range. In this case, only m (corresponding to 6 KHz) spectrum values are used out of n.
- with the spectrum normalized so that the f_i sum to 1, the mean is m_1 = Σ_{i=1}^{m} i·f_i, and the k-th central moment (for k > 1) is defined as m_k = Σ_{i=1}^{m} (i − m_1)^k · f_i.
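- Assuming the standard weighted central-moment reading of the definition above (the formula as reconstructed here), the moments might be computed as follows; all names are illustrative:

```python
import numpy as np

def spectral_moments(f, ks=(2, 3)):
    """Centralized, normalized moments per the definition above: the f_i are
    first normalized to sum to 1, m1 is the spectrum-weighted mean index,
    and the k-th central moment averages (i - m1)^k under those weights."""
    f = np.asarray(f, dtype=float)
    w = f / (f.sum() + 1e-12)              # normalize the spectrum to weights
    i = np.arange(1, len(f) + 1)
    m1 = float(np.sum(i * w))
    return m1, {k: float(np.sum((i - m1) ** k * w)) for k in ks}
```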
- the process of one embodiment of a method according to the invention is illustrated in Figure 10, beginning at Step 1000.
- the process begins at Step 1002, where an audio sample is retrieved, for example, 10 milliseconds in this step, and the DFT and amplitude are computed in Step 1004.
- in Step 1006 the audio pointer is shifted by an incremental value, 0.5 milliseconds in this example, from the start of the last frame of the sample from Step 1002. This loop is repeated a predetermined number of times, 10 cycles in this example, drawing on the storage 1018 containing audio data having phonemes. After the loop has repeated 10 times, the process proceeds to Step 1008, where the spectrum values are averaged and scaled by taking the cube root.
- Step 1010 the DC value, the first harmonic and the second harmonic are dropped. Also, the spectrum values corresponding to more than a predetermined frequency, 16 kilohertz in this example, are dropped as well.
- in Step 1012 the normalized, centralized moments are calculated for M1, M2BAR, M3BAR, M20, M23 and M24.
- in Step 1014, M1 is scaled by 1,000 and the other moments are scaled by 1,000,000.
- Step 1016 the audio pointer is shifted by a predetermined amount of time, 9 milliseconds in this example, from the start of the first audio frame of the initial audio frames from Steps 1002 through 1008.
- Step 1020 the moments for other phonemes are calculated.
- Step 1022 the moment features are segmented.
- moments of the Fourier Transform (or DCT) of 10 ms of audio are considered as phoneme features.
- the Fourier Transforms (or DCTs) for 9 more sets are calculated by shifting the samples by 10%.
- the average of the spectra of these Fourier Transform (or DCT) coefficients is used for calculating moment features.
- the first three spectrum components are dropped while calculating moments.
- the next set of audio samples are taken with 10% overlap.
- the moments are then scaled and plotted pair-wise. The segmentation allows plotting on the x/y plot in two-dimensional moment space.
- lip shape and mouth shape are distinguishable.
- lip shape is lips only whereas mouth shape includes lips and other shapes, such as for example mouth cavity, teeth and other mouth characteristics.
- a wide open mouth can be classified as /AA/; closed lips with no teeth present as /M/, /P/, /B/; and, when teeth are present, as /V/, /EE/, /F/, /Z/, /ZZ/ (like in pizza) and /S/.
- the correspondence with mouth shape and sound can be established.
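- That correspondence might be encoded, purely illustratively, as a lookup from coarse mouth-shape observations to candidate sound classes (the groupings follow the text; the feature encoding is an assumption):

```python
# Illustrative encoding of the correspondence described above; the class
# groupings follow the text, the (openness, teeth) feature keys are assumed.
MOUTH_SHAPE_TO_SOUNDS = {
    ("open", "wide"):             ["/AA/"],
    ("closed", "no_teeth"):       ["/M/", "/P/", "/B/"],
    ("closed", "teeth_visible"):  ["/V/", "/EE/", "/F/", "/Z/", "/ZZ/", "/S/"],
}
```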
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Geometry (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Television Signal Processing For Recording (AREA)
- Measurement Of Mechanical Vibrations Or Ultrasonic Waves (AREA)
- Image Analysis (AREA)
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP06750137A EP1969858A2 (en) | 2004-05-14 | 2006-04-13 | Method, system, and program product for measuring audio video synchronization using lip and teeth characteristics |
GB0622592A GB2440384B (en) | 2005-04-13 | 2006-04-13 | Method,system and program product for measuring audio video synchronization using lip and teeth characteristics |
CA002566844A CA2566844A1 (en) | 2005-04-13 | 2006-04-13 | Method, system, and program product for measuring audio video synchronization using lip and teeth characteristics |
AU2006235990A AU2006235990A1 (en) | 2005-04-13 | 2006-04-13 | Method, system, and program product for measuring audio video synchronization using lip and teeth characteristics |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2005/012588 WO2005115014A2 (en) | 2004-05-14 | 2005-04-13 | Method, system, and program product for measuring audio video synchronization |
USPCT/US05/12588 | 2005-04-13 | ||
USPCT/US05/41623 | 2005-11-16 | ||
PCT/US2005/041623 WO2007035183A2 (en) | 2005-04-13 | 2005-11-16 | Method, system, and program product for measuring audio video synchronization independent of speaker characteristics |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2006113409A2 true WO2006113409A2 (en) | 2006-10-26 |
WO2006113409A3 WO2006113409A3 (en) | 2007-06-07 |
Family
ID=37115719
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2006/014023 WO2006113409A2 (en) | 2004-05-14 | 2006-04-13 | Method, system, and program product for measuring audio video synchronization using lip and teeth characteristics |
Country Status (3)
Country | Link |
---|---|
CA (1) | CA2566844A1 (en) |
GB (1) | GB2438691A (en) |
WO (1) | WO2006113409A2 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2009024442A2 (en) * | 2007-08-22 | 2009-02-26 | Siemens Aktiengesellschaft | Method for synchronizing media data streams |
EP3079564B1 (en) * | 2013-12-12 | 2024-03-20 | L'oreal | Process for evaluation of at least one facial clinical sign |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110750152B (en) * | 2019-09-11 | 2023-08-29 | 云知声智能科技股份有限公司 | Man-machine interaction method and system based on lip actions |
CN111081270B (en) * | 2019-12-19 | 2021-06-01 | 大连即时智能科技有限公司 | Real-time audio-driven virtual character mouth shape synchronous control method |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4313135A (en) * | 1980-07-28 | 1982-01-26 | Cooper J Carl | Method and apparatus for preserving or restoring audio to video synchronization |
US4769845A (en) * | 1986-04-10 | 1988-09-06 | Kabushiki Kaisha Carrylab | Method of recognizing speech using a lip image |
US5387943A (en) * | 1992-12-21 | 1995-02-07 | Tektronix, Inc. | Semiautomatic lip sync recovery system |
US5572261A (en) * | 1995-06-07 | 1996-11-05 | Cooper; J. Carl | Automatic audio to video timing measurement device and method |
US5880788A (en) * | 1996-03-25 | 1999-03-09 | Interval Research Corporation | Automated synchronization of video image sequences to new soundtracks |
US5920842A (en) * | 1994-10-12 | 1999-07-06 | Pixel Instruments | Signal synchronization |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4975960A (en) * | 1985-06-03 | 1990-12-04 | Petajan Eric D | Electronic facial tracking and detection system and method and apparatus for automated speech recognition |
US6829018B2 (en) * | 2001-09-17 | 2004-12-07 | Koninklijke Philips Electronics N.V. | Three-dimensional sound creation assisted by visual information |
2005
- 2005-11-16 GB GB0622589A patent/GB2438691A/en not_active Withdrawn
2006
- 2006-04-13 CA CA002566844A patent/CA2566844A1/en not_active Abandoned
- 2006-04-13 WO PCT/US2006/014023 patent/WO2006113409A2/en active Application Filing
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4313135A (en) * | 1980-07-28 | 1982-01-26 | Cooper J Carl | Method and apparatus for preserving or restoring audio to video synchronization |
US4313135B1 (en) * | 1980-07-28 | 1996-01-02 | J Carl Cooper | Method and apparatus for preserving or restoring audio to video |
US4769845A (en) * | 1986-04-10 | 1988-09-06 | Kabushiki Kaisha Carrylab | Method of recognizing speech using a lip image |
US5387943A (en) * | 1992-12-21 | 1995-02-07 | Tektronix, Inc. | Semiautomatic lip sync recovery system |
US5920842A (en) * | 1994-10-12 | 1999-07-06 | Pixel Instruments | Signal synchronization |
US5572261A (en) * | 1995-06-07 | 1996-11-05 | Cooper; J. Carl | Automatic audio to video timing measurement device and method |
US5880788A (en) * | 1996-03-25 | 1999-03-09 | Interval Research Corporation | Automated synchronization of video image sequences to new soundtracks |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2009024442A2 (en) * | 2007-08-22 | 2009-02-26 | Siemens Aktiengesellschaft | Method for synchronizing media data streams |
WO2009024442A3 (en) * | 2007-08-22 | 2009-04-23 | Siemens Ag | Method for synchronizing media data streams |
EP3079564B1 (en) * | 2013-12-12 | 2024-03-20 | L'oreal | Process for evaluation of at least one facial clinical sign |
Also Published As
Publication number | Publication date |
---|---|
GB0622589D0 (en) | 2007-02-21 |
CA2566844A1 (en) | 2006-10-26 |
WO2006113409A3 (en) | 2007-06-07 |
GB2438691A (en) | 2007-12-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10397646B2 (en) | Method, system, and program product for measuring audio video synchronization using lip and teeth characteristics | |
US20080111887A1 (en) | Method, system, and program product for measuring audio video synchronization independent of speaker characteristics | |
WO2007035183A2 (en) | Method, system, and program product for measuring audio video synchronization independent of speaker characteristics | |
US20070153125A1 (en) | Method, system, and program product for measuring audio video synchronization | |
US6219640B1 (en) | Methods and apparatus for audio-visual speaker recognition and utterance verification | |
US8200061B2 (en) | Signal processing apparatus and method thereof | |
CN112037788B (en) | Voice correction fusion method | |
US7046300B2 (en) | Assessing consistency between facial motion and speech signals in video | |
Chetty | Biometric liveness checking using multimodal fuzzy fusion | |
Halperin et al. | Dynamic temporal alignment of speech to lips | |
EP1569200A1 (en) | Identification of the presence of speech in digital audio data | |
Argones Rua et al. | Audio-visual speech asynchrony detection using co-inertia analysis and coupled hidden markov models | |
CN108597501A (en) | A kind of audio-visual speech model based on residual error network and bidirectional valve controlled cycling element | |
WO2006113409A2 (en) | Method, system, and program product for measuring audio video synchronization using lip and teeth characteristics | |
CN113920560A (en) | Method, device and equipment for identifying identity of multi-modal speaker | |
Perez-Freire et al. | A multimedia approach for audio segmentation in TV broadcast news | |
JP2009278202A (en) | Video editing device, its method, program, and computer-readable recording medium | |
JPH10187182A (en) | Method and device for video classification | |
AU2006235990A8 (en) | Method, system, and program product for measuring audio video synchronization using lip and teeth characteristics | |
JP4849630B2 (en) | Utterance content identification device and personal identification device | |
Chetty | Biometric liveness detection based on cross modal fusion | |
El-Sallam et al. | Correlation based speech-video synchronization | |
Gurban | Multimodal feature extraction and fusion for audio-visual speech recognition | |
CN117437935A (en) | Audio-assisted depth fake face video detection method, system and equipment | |
TWI385646B (en) | Video and audio editing system, method and electronic device using same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | WWE | Wipo information: entry into national phase | Ref document number: 200680021184.3; Country of ref document: CN |
| | ENP | Entry into the national phase | Ref document number: 0622592; Country of ref document: GB; Kind code of ref document: A; Free format text: PCT FILING DATE = 20060413 |
| | WWE | Wipo information: entry into national phase | Ref document number: 2006235990; Country of ref document: AU; Ref document number: 0622592.4; Country of ref document: GB |
| | WWE | Wipo information: entry into national phase | Ref document number: 2566844; Country of ref document: CA |
| | WWE | Wipo information: entry into national phase | Ref document number: 1432/MUMNP/2006; Country of ref document: IN |
| | WWP | Wipo information: published in national office | Ref document number: 2006235990; Country of ref document: AU |
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | NENP | Non-entry into the national phase | Ref country code: RU |
| | WWE | Wipo information: entry into national phase | Ref document number: 2006750137; Country of ref document: EP |