WO2014116826A1 - Mobile, neurally-assisted personal assistant

Info

Publication number
WO2014116826A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
user
mobile device
stimuli
brain activity
Application number
PCT/US2014/012750
Other languages
French (fr)
Inventor
David Jangraw
Paul Sajda
Original Assignee
The Trustees Of Columbia University In The City Of New York
Application filed by The Trustees Of Columbia University In The City Of New York
Publication of WO2014116826A1

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/012 - Head tracking input arrangements
    • G06F 3/013 - Eye tracking input arrangements
    • G06F 3/015 - Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B 5/16 - Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B 5/24 - Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B 5/316 - Modalities, i.e. specific diagnostic methods
    • A61B 5/369 - Electroencephalography [EEG]
    • A61B 5/377 - Electroencephalography [EEG] using evoked responses

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Surgery (AREA)
  • Psychiatry (AREA)
  • Psychology (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Developmental Disabilities (AREA)
  • Educational Technology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Social Psychology (AREA)
  • Dermatology (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

A mobile device for monitoring a user's reaction to one or more stimuli and providing feedback to the user related to the stimuli is provided. The mobile system has an electroencephalograph, configured to be worn in proximity to the user's head, sense brain activity of the user, and generate corresponding brain activity information. A processing arrangement can receive brain activity information, compare such information to stimuli information, and generate feedback information for the user.

Description

MOBILE, NEURALLY-ASSISTED PERSONAL ASSISTANT
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of priority of Provisional Application Nos. 61/756,223, filed January 24, 2013, and 61/887,577, filed October 7, 2013, which are hereby incorporated by reference in their entireties.
GRANT INFORMATION
This invention was made with government support under Grant No. W911NF-11-1-0219 awarded by the Army Research Office and Grant No. W911NF-10-2-0022 awarded by the Army Research Laboratory. The U.S. government has certain rights in this invention.
BACKGROUND
The disclosed subject matter relates to systems and methods for detecting a user's subjective interest and delivery of information to the user based on objects, subjects or conditions that evoked the user's interest.
Electroencephalography (EEG) can measure neural activity non-invasively and unobtrusively. As such, EEG systems, also called electroencephalographs, can detect certain brain signals without a physical response from the user. Exemplary techniques for detecting interest of a user, for example using EEG, are described in U.S. Patent No. 7,835,787 and U.S. Patent Application Publication No. 2012/0089552, each of which is incorporated by reference herein in its entirety.
SUMMARY
The disclosed subject matter provides mobile devices for monitoring a user's reaction to one or more stimuli and providing feedback to the user related to the stimuli. In an exemplary embodiment, a device includes an electroencephalograph, configured to be worn in proximity to the user's head, sense brain activity of the user, and generate corresponding brain activity information. The device further includes a processing arrangement, coupled to the electroencephalograph, and configured to receive the brain activity information from the electroencephalograph. The device can also include at least a first sensor, coupled to the processing arrangement, and configured to sense the one or more stimuli proximate to the user and to provide stimuli information to the processing arrangement. The device can also include a user interface, coupled to the processing arrangement, and configured to receive the feedback information and to present corresponding human interpretable information to the user.
In some embodiments, the processing arrangement can be further adapted to identify a relationship between the stimuli information and the brain activity information. The brain activity information can include the user's subjective interest. The stimulus can include sight, sound, taste, touch, and/or smell. The first sensor can be a video camera, audio recording device, molecular sensor, global positioning system (GPS), accelerometer, or skin-conductance sensor.
In some embodiments, the mobile device can include a second sensor, coupled to the processing arrangement, and configured to sense one or more physical actions of the user and to provide action information to the processing arrangement. The processor can be further adapted to compare the brain activity information, the stimuli information, and the action information and generate feedback. The second sensor can be a video camera, audio recording device, molecular sensor, GPS, accelerometer, or skin-conductance sensor. The mobile device can be wearable.
In some embodiments, the mobile device can include a tool, coupled to the processing arrangement, and configured to sense one or more reactions of the user and to provide reaction information to the processing arrangement. The processor can be further adapted to compare the brain activity information, the stimuli information, and the reaction information and generate feedback information therefrom. The tool can be a computer vision, eye tracking, and/or head tracking tool. In some embodiments, the human interpretable information can be audio, visual, and/or tactile information. In some embodiments, the user interface can be a mobile phone, a head-mounted display, or an audio assistant. In some embodiments, the mobile device can have a storage device, configured to receive the feedback information from the processor and store the feedback information. The processor can include an Internet connection for retrieving Internet information, and the processor can be adapted to use the Internet information to generate the feedback information.
In another aspect of the disclosed subject matter, methods of providing a user with feedback related to a stimulus are provided. In one example, a method includes generating brain activity information, e.g., using an electroencephalograph that is configured to be worn in proximity to the user's head and sense brain activity of the user. The method also includes generating stimuli information using a first sensor, generating feedback information using a processor, and presenting human interpretable information to the user.
In some embodiments, the method further includes generating action information using a second sensor. Determining feedback information can further include comparing the brain activity information, the stimuli information, and the action information. In some embodiments, the method further includes generating reaction information using a tool. Determining feedback information can further include comparing the brain activity information, the stimuli information, and the reaction information.
The accompanying drawings, which are incorporated in and constitute part of this specification, are included to illustrate and provide a further understanding of the method and system of the disclosed subject matter. Together with the description, the drawings serve to explain the principles of the disclosed subject matter.
BRIEF DESCRIPTION OF THE FIGURES
Figure 1 is an image of a user wearing an embodiment of the mobile device in accordance with the disclosed subject matter.
Figure 2 is an image of a user wearing an embodiment of the mobile device in accordance with the disclosed subject matter.
Figure 3 is a flow chart of a method of providing a user with feedback related to a stimulus in accordance with the disclosed subject matter.
Figure 4 is a block diagram of a system in accordance with the disclosed subject matter.
Figure 5 is a block diagram of an exemplary embodiment of a system in accordance with the disclosed subject matter.
Figure 6 is an image of an exemplary display in accordance with the disclosed subject matter.
Figure 7 is a block diagram of an exemplary embodiment of a system in accordance with the disclosed subject matter.
Figure 8 is a block diagram of an exemplary embodiment of a system in accordance with the disclosed subject matter.
Figure 9 is a block diagram of an exemplary embodiment of a system in accordance with the disclosed subject matter.
Figure 10 is a block diagram of a system in accordance with the disclosed subject matter.
DETAILED DESCRIPTION
The methods and systems presented herein can be used for a mobile device that monitors a user's reaction to one or more stimuli and provides feedback to the user related to the stimuli. In some embodiments, a mobile device can be configured to detect the user's subjective interest using EEG and deliver information to the user based on the objects, subjects, or conditions that evoke the user's interest. The user's interest can depend on the sensory context that the user is experiencing, such as sights, sounds, smells, touch, and taste, and on the user's preferences. The device can detect both interest and context and combine these elements for use in applications including, for example and without limitation, video diaries and social networking. As such, the device can provide assistance to the user without the user needing to specifically request it.
For purposes of illustration and not limitation, Figs. 1 and 2 show images of a user 1 wearing the mobile device 100 of the disclosed subject matter. The mobile device 100 includes an electroencephalograph 10. The electroencephalograph 10 is worn in proximity to the user's head. As shown in Figs. 1 and 2, the electroencephalograph can be worn as a hat or a skull cap. Alternatively or in addition, the electroencephalograph can be applied like a tattoo, using flexible electrodes. The electrodes can be either dry electrodes or wet electrodes (e.g., Ag/AgCl electrodes used with an electrolytic gel). In some embodiments, the entire mobile device 100 can be wearable (Fig. 1). The electroencephalograph 10 can detect certain brain signals of user 1 continuously and noninvasively. When the user 1 encounters a stimulus 5, the electroencephalograph 10 can detect the resulting brain waves and generate brain activity information. As shown in Fig. 1, stimulus 5 is a visual stimulus. It will be appreciated that the term stimulus includes any suitable stimulus, including sight, sound, smell, touch, or taste stimuli. In some embodiments, the user's brain activity information is related to the user's subjective interest. In some embodiments, the brain activity information can be measured through an amplitude-based measure of interest, which can be temporally specific and therefore connected to specific elements of the user's rapidly changing context. Alternatively or in addition, power and/or phase measures can be used.
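The patent does not prescribe how these amplitude, power, or phase measures are computed; one standard way to obtain such per-sample measures from a single filtered EEG channel is via the analytic signal, as in the following illustrative Python sketch (function and variable names are ours, not the patent's):

```python
import numpy as np
from scipy.signal import hilbert

def amplitude_power_phase(x):
    """Per-sample amplitude, power, and phase of one (band-limited) EEG
    channel, computed from the analytic signal. Any of these can serve
    as the basis for a temporally specific interest measure."""
    z = hilbert(x)            # analytic signal of the real-valued trace
    amplitude = np.abs(z)     # instantaneous amplitude (envelope)
    power = amplitude ** 2    # instantaneous power
    phase = np.angle(z)       # instantaneous phase in radians
    return amplitude, power, phase
```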
The mobile device 100 also includes a processing arrangement, for example, a processor 20. The processor 20 is coupled to the electroencephalograph 10 and is configured to receive brain activity information therefrom. The connection between the processor and the electroencephalograph can be wired or wireless, such as Bluetooth or a wireless local area network, for example Wi-Fi. As shown in Fig. 1, the processor 20 can be coupled to the hat or skull cap supporting the electroencephalograph 10. However, the processor 20 can be separate from the wearable portion of the device, as shown in Fig. 2. In such embodiments the processor 20 can be connected to the mobile device 100 wirelessly, for example, by Bluetooth technology. In some embodiments, computation can be done in the cloud.
The mobile device 100 includes at least one sensor, which is shown in Fig. 1 as camera 30. The camera can be mounted on the eyeglasses. It can have a wide field of view, a narrow field of view, or both. The camera view can be controlled through eye gaze via a controller from a mobile eye tracker. The sensor is configured to sense the stimuli 5 and provide stimuli information to the processing arrangement 20. If the sensor is a camera, it can send images or video to the processor, which can select frames or parts of the video based on evoked neural or ocular responses. The sensor is not limited to being a camera, and can be a variety of other sensors, including, but not limited to, audio recording device(s), for example a microphone 31, accelerometer(s) 32, GPS 33, skin conductance meter(s) 34, or molecular sensor(s) 35, all illustrated in Fig. 2. The skin conductance meter(s) 34 can be used to sense anxiety or, for example, galvanic skin response, which can indicate a user's arousal. The molecular sensor(s) 35 can be used to sense, for example, odors. As shown, some of these devices are separate from the hat, but they can be incorporated into the hat and worn by user 1.
The processor 20 is adapted to compare the brain activity information generated by the electroencephalograph 10 and the stimuli information generated by the sensor 30 and generate feedback information therefrom. For example, the processor 20 can identify a relationship between the brain activity information and the stimuli information. The relationship can be temporal: the processor can, for example, determine that the brain activity information spiked at the same time the stimuli information was received. In some embodiments, the data can be synchronized through a parallel port pulse sent from a sensor to the electroencephalograph every few seconds, for example, every two seconds. The times at which the parallel port pulses are sent and received can be used to synchronize the data. The relationship can indicate a user's interest in a stimulus. Alternatively, it can indicate a user's dislike of a stimulus, or other reactions, for example, fear of a stimulus.
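As an illustration of this synchronization step, the shared pulse times can be used to fit a linear mapping between the two device clocks. The sketch below assumes pulse timestamps are already available on both sides; all names and the simulated drift are hypothetical:

```python
import numpy as np

def align_clocks(pulse_times_sensor, pulse_times_eeg):
    """Fit a linear mapping from the sensor clock to the EEG clock using
    the send and receive times of the shared sync pulses, so events
    stamped on one clock can be located on the other."""
    slope, offset = np.polyfit(pulse_times_sensor, pulse_times_eeg, 1)
    return lambda t: slope * np.asarray(t) + offset

# Simulated example: pulses sent every two seconds on the sensor clock,
# received on an EEG clock with a small offset and drift.
sensor_pulses = np.arange(0.0, 60.0, 2.0)
eeg_pulses = 1.5 + 1.0001 * sensor_pulses
to_eeg_time = align_clocks(sensor_pulses, eeg_pulses)

# A stimulus stamped at t = 12.3 s on the sensor clock, located in EEG time:
print(to_eeg_time(12.3))
```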
The feedback information is received by a user interface. The user interface can then present the feedback information to the user 1 as human interpretable information. The user interface can be, for example, a head-mounted display 40. The head-mounted display 40 can be worn by the user like a pair of glasses and visually display information to the user. In some embodiments, one or more sensors, for example camera 30 or microphone 31, can be coupled to the glasses and worn by user 1. The user interface can additionally or alternatively be the interface of a mobile phone, tablet, personal computer, audio assistant application, or other equivalent computing interface. In some embodiments, the human interpretable information can be visual information. In some embodiments, it can be audio information. In some embodiments, it can be tactile information, for example, a vibration or shock.
In some embodiments, the mobile device 100 can include a second sensor, which can be, for example, a camera 30, audio recording device 31, accelerometer 32, GPS 33, skin conductance meter 34, or molecular sensor 35. The second sensor can be configured to sense one or more physical actions of the user and can provide action information to the processor 20. The action can be any action by the user; in some embodiments, it can be a reaction to a stimulus.
For example, the action can relate to being in a certain location, and a GPS 33 can be used to determine the location of the user 1. As another example, the action can be sweating, due to anxiety or excitement, and skin conductance meter 34 can be used to sense the sweating. The processor can further be adapted to compare the brain activity information, the stimuli information, and the action information, and generate feedback information therefrom. For example, the comparison can be based on timing. Mobile devices with more than two sensors are also contemplated, with each sensor providing information to the processor.
In some embodiments, the mobile device can further include a tool, which can be, for example, computer vision 50, eye tracking 51, or head tracking 52. The tools can be configured to sense one or more reactions of the user and can provide reaction information to the processor 20. The reaction information can be used to trace the user's interest back to the stimulus or context that evoked it. For example, if the tool is an eye tracking device 51, it can sense where the user 1 was looking at a given time. The processor 20 can compare where the user was looking with stimuli information from the sensor (e.g., camera 30) and brain activity information from the electroencephalograph 10. The processor can generate feedback information based on a correlation between when user 1 became interested in something (gleaned from brain activity information, for example by identifying a spike in brain activity information), what was in the user's visual field (gleaned from stimuli information), and where the user was looking (gleaned from reaction information). Mobile devices with multiple tools are also contemplated, with each tool providing information to the processor. As shown in Fig. 2, the tools are coupled to the glasses worn by user 1; however, the tools can be coupled to the hat or can be separate entities.
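A minimal sketch of this correlation step, assuming interest-spike times, gaze samples, and a frame-lookup function are already available in a common time base (all names are ours, not the patent's):

```python
import numpy as np

def objects_of_interest(spike_times, gaze_samples, frame_at, window=0.5):
    """For each detected interest spike, find the nearest gaze sample and
    pair the gaze position with the camera frame from that moment.

    spike_times  : 1-D array of interest-spike times (s)
    gaze_samples : array of rows (t, x, y) from the eye tracker
    frame_at     : callable mapping a time to the camera frame recorded then
    window       : maximum spike-to-gaze time difference to accept (s)
    """
    hits = []
    times = gaze_samples[:, 0]
    for t in spike_times:
        i = int(np.argmin(np.abs(times - t)))   # nearest gaze sample
        if abs(times[i] - t) <= window:
            _, x, y = gaze_samples[i]
            hits.append({"time": t, "gaze": (x, y), "frame": frame_at(t)})
    return hits
```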
In some embodiments, the mobile device 100 can include a storage device 60. The storage device 60 can be configured to receive feedback information from the processor 20 and store the feedback information. The storage device 60 can be contained within the same housing as processor 20, as shown in figure 1. In some embodiments the storage device 60 can be separate from the processor 20. In such embodiments, the storage device 60 can be connected to the mobile device 100 wirelessly, for example by blue tooth technology. In some embodiments the storage device 60 need not be carried by the user 1.
Referring now to Fig. 3, for the purposes of illustration and not limitation, in some embodiments there is provided a method of providing a user with feedback related to a stimulus. The method can include generating brain activity information (200) using an electroencephalograph.
The electroencephalograph can be worn in proximity to the user's head and sense brain activity of the user, and generate corresponding brain activity information. The method can also include generating stimuli information (210) using a first sensor. The sensor can be configured to sense one or more stimuli proximate the user, and generate corresponding stimuli information. The method can include generating feedback information (240) using a processor. The processor can be adapted to compare the brain activity information and stimuli information and generate feedback information therefrom. The method can also include presenting human interpretable information (250) to the user using a user interface. The user interface can be configured to receive the feedback information from the processor. It will be appreciated that the embodiments of the mobile device 100 described above can be used to perform the methods described here and below.
In some embodiments the method can further include generating action information (220) using a second sensor. The second sensor can be configured to sense one or more physical actions of the user, and generate corresponding action information. Generating feedback information can include comparing brain activity information, stimuli information, and action information. In some embodiments the method can further include generating reaction information (230) using a tool. The tool can be configured to sense one or more reactions of the user, and can generate corresponding reaction information. Generating feedback information can include comparing brain activity information, stimuli information, and reaction information. In some embodiments the method can include all four of generating stimuli information (210), generating brain activity information (200), generating action information (220), and generating reaction information (230). Generating stimulus information and action information can include using multiple sensors. Generating reaction information can include using more than one tool. The process of generating feedback information can include comparing any and all information received by the processor.
Fig. 4 shows, for the purpose of illustration and not limitation, a block diagram of an exemplary embodiment of the disclosed subject matter. The diagram shows that the system can include multiple sensors and a tool. An EEG system (301) (abbreviated as EEG in Figs. 4-9), for example an electroencephalograph, can be configured for brain information sensing. Sensor 1 can be configured for environment sensing (303); sensor 2 can be configured for context sensing (304); and tool 1 can be configured for user state sensing (302). The EEG can detect brain information, which is sent to the processor and can be used to determine that the user's interest has been piqued (305). Sensor 1 can detect stimuli information, which is sent to the processor and is used for object detection (306). Information about the object (306) and information from tool 1 (302) can be used to identify an object of focus (307). Together with the interest detection (305), the processor can then identify an object of interest (308). In parallel, information from sensor 2 (304), object detection information (306), and information from sensor 1 (302) can be used to determine a user's context (309). The user's context (309) and information about the object of interest (308) can be combined in various applications (310) to provide feedback information, which is provided to the user interface (311). In some embodiments, the device can be configured to determine the user's interest in a subject or stimulus while the user is active, and quickly enough to allow the device to deliver pertinent information to the user about the subject, as well as direct the user to other locations, subjects, or stimuli that might also be of interest. Figs. 5-8 illustrate specific applications of the exemplary embodiments of the disclosed subject matter.
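Before turning to those applications, the Fig. 4 data flow can be made concrete in code, with each numbered stage supplied as an interchangeable callable; every name below is illustrative rather than taken from the patent:

```python
from dataclasses import dataclass
from typing import Any, Callable, List, Optional

@dataclass
class AssistantPipeline:
    """One pass through the Fig. 4 block diagram, with each numbered
    stage supplied as a callable."""
    detect_interest: Callable[[Any], bool]          # EEG -> interest piqued? (305)
    detect_objects: Callable[[Any], List[Any]]      # sensor 1 frame -> objects (306)
    object_of_focus: Callable[[List[Any], Any], Optional[Any]]  # + tool 1 (307)
    infer_context: Callable[[Any, List[Any]], Any]  # sensor 2 + objects (309)
    application: Callable[[Any, Any], str]          # object + context -> feedback (310)

    def step(self, eeg_chunk, frame, tool_state, context_signal) -> Optional[str]:
        objects = self.detect_objects(frame)
        focus = self.object_of_focus(objects, tool_state)
        if focus is None or not self.detect_interest(eeg_chunk):
            return None                             # no object of interest (308)
        context = self.infer_context(context_signal, objects)
        return self.application(focus, context)     # feedback for the UI (311)
```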
Referring to Figs. 5 and 6, for the purpose of illustration and not limitation, one embodiment of the disclosed subject matter includes a shopping assistant. The shopping assistant includes an EEG (301), which can detect brain information and send it to the processor for interest detection (305). The application can also include a camera (313) for sensor 1, which can capture what the user is seeing. Sensor 2 can be a GPS device (314), which can provide information to the processor to determine where the user is located, for example, at a store on 34th Street (318). An eye tracker tool (312) can also be included, which can be used to determine where the user is looking. Information from the tool (312) and sensor 1 (313) can be combined to determine what the user is looking at, for example, a dress (316). Combined with the interest detection (305), the processor can determine that it is a dress that the user likes (317). This information and the location information (318) can be combined in a shopping application, for example, Google Shopping (319), and the device can provide the user with information regarding the price of the dress at nearby shops (320).
Fig. 6 shows an example display. In the example shown in Fig. 6, the object of interest was a video game controller. The device outputs what caught the user's interest, what the device is, and information about pricing at a nearby store. The output can further include a map, indicating how to get to the store, and the user interface receives input from the user regarding the information provided.
Referring to Fig. 7, for the purpose of illustration and not limitation, one embodiment of the disclosed subject matter includes a music social networking application. The application includes an EEG (301), which can detect brain information and send it to the processor for interest detection (305). The application can also include a microphone (321), which can receive audio stimulus information. That information can be sent to the processor, where an application (e.g., Shazam (322)) can be used to determine a song that the user is listening to (323). The processor can further determine that this is a song the user likes (324). A GPS (314) is also included, which can send information to the processor to determine the user's location (325). The information can be run through a social media network, for example, Facebook (326), and the output can ask the user if he or she would like to post the song to his or her profile (327).
Referring to Fig. 8, for the purpose of illustration and not limitation, another embodiment of the disclosed subject matter includes an addiction monitor. The monitor includes an EEG (301), which can detect brain information and send it to the processor for interest detection (305). A camera (313) can be used for object detection (306) to detect stimuli (329), and, combined with the interest information, the processor can determine that the user wants a drink (330). The GPS (314) and skin conductance (328) devices can provide information to the processor to determine that the user is at a bar and nervous (331). This information can be input into a texting application (332), and the user's sponsor can be notified (333), either with user interaction or automatically.
While Figs. 5-8 show specific applications of the disclosed subject matter, any information delivery based on the user's interest and context is contemplated. Other exemplary embodiments include applications like personal video diaries, sharing information with friends, or self-reflection and behavior modification. In other embodiments, devices can be used for assisting those suffering from mental disorders. For example, one embodiment can include a memory assistant for Alzheimer's patients. Facial recognition can be performed on faces of interest and used to prompt the user with the name of each person. Alternatively, or in addition, a video diary can be created, stored, and reviewed at the end of the day to reinforce new memories. In another embodiment, for Attention Deficit Disorder sufferers, the device can be used to detect rapid interest shifts and remind the user to focus. In another embodiment, shown in Fig. 9 for the purpose of illustration and not limitation, the device includes an EEG (301) which can detect brain information and send it to the processor for interest detection (305). The device can also include sensor 1 (304), which can detect information about a stimulus or object, and sensor 2 (334), which can detect information about the user's focus. The processor can determine an object that the user is focused on (307), and can determine if it is an object of interest (308). Information about the object (306 and 335) can further be used to determine context (309), for example, a location of the user. The context (309) and object of interest (308) can be used in various applications (310), which can provide feedback through a user interface (336).
In another exemplary embodiment, a device can employ a user's naturally evoked EEG, eye position, and pupil dilation to construct a hybrid classifier capable of distinguishing objects of interest from distractors. The system can use a computer vision (CV) graph to reject anomalies in the hybrid classifier's predictions and find other, visually similar objects. Accordingly, naturally evoked neural and ocular signals can be simultaneously exploited and integrated with environmental data to enable augmented search and navigation in a hybrid brain-computer interface (hBCI) application.
The processor of the device can receive data from the electroencephalograph and an eye tracker. The data can be synchronized using a parallel port pulse sent from the eye tracker computer to the EEG amplifier every few seconds, for example, every two seconds. The times at which the parallel port pulses are sent and received can then be used to align the eye tracker data with the EEG data.
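One way such pulse-based synchronization could be realized, for the purpose of illustration, is to fit a linear mapping between the two device clocks from the pulse send and receive times; the timestamp values below are hypothetical, and no particular eye tracker or amplifier API is assumed.

```python
import numpy as np

# Hypothetical pulse times: sent on the eye tracker's clock and received
# on the EEG amplifier's clock, one pulse every two seconds.
sent_eye = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
received_eeg = np.array([12.301, 14.302, 16.300, 18.303, 20.301])

# Fit a linear clock model (offset plus slow drift) between the devices.
slope, offset = np.polyfit(sent_eye, received_eeg, 1)

def eye_to_eeg_time(t_eye: float) -> float:
    """Map an eye tracker timestamp onto the EEG amplifier's clock."""
    return slope * t_eye + offset

# Align, e.g., a fixation onset logged by the eye tracker at t = 5.437 s.
print(eye_to_eeg_time(5.437))
```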
The processor can use MATLAB (The MathWorks, Inc., Natick, MA) to process the eye position and pupillometry data. The processor can use the EEGLAB toolbox to process the EEG data. The EEG data can be band-pass filtered from 0.5 to 100 Hz, notch filtered at 60 Hz, and downsampled to 250 Hz. All blocks can be concatenated, and excessively noisy channels can be removed.
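A minimal sketch of that preprocessing chain, using Python and SciPy in place of the MATLAB/EEGLAB tools named above, follows; the raw sampling rate, filter orders, and notch quality factor are assumptions not specified in the disclosure.

```python
import numpy as np
from scipy import signal

FS_RAW, FS_OUT = 1000, 250   # assumed raw rate; 250 Hz target from the text

def preprocess_eeg(eeg: np.ndarray, fs: int = FS_RAW) -> np.ndarray:
    """Band-pass 0.5-100 Hz, notch at 60 Hz, then downsample to 250 Hz."""
    # Zero-phase fourth-order Butterworth band-pass
    b, a = signal.butter(4, [0.5, 100.0], btype="bandpass", fs=fs)
    eeg = signal.filtfilt(b, a, eeg, axis=-1)
    # IIR notch to suppress 60 Hz line noise
    b, a = signal.iirnotch(60.0, Q=30.0, fs=fs)
    eeg = signal.filtfilt(b, a, eeg, axis=-1)
    # Polyphase resampling to the target rate
    return signal.resample_poly(eeg, FS_OUT, fs, axis=-1)

raw = np.random.randn(64, 10 * FS_RAW)   # channels x samples placeholder
clean = preprocess_eeg(raw)              # -> shape (64, 10 * FS_OUT)
```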
A hierarchical classifier can be used to accommodate multiple modalities. The EEG data from a period of time can be separated into a discrete number of bins. A set of "within-bin" weights across the individual components can be determined for each bin using Fisher Linear Discriminant Analysis (FLDA). Pupil dilation and dwell time data can be processed similarly to the EEG data. The pupil dilation data can be binned and FLDA can be applied to create a discriminant value whose "sign" is the same as the EEG data's (so that targets > distractors). The dwell time data can also be passed through FLDA. The scale of each EEG, pupil dilation, and dwell time feature can be normalized by dividing by the standard deviation of that feature across all evaluation trials.
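The within-bin level can be sketched as follows, with scikit-learn's linear discriminant analysis standing in for FLDA; the bin count, the averaging of samples within each bin, and fitting and scoring on the same trials are simplifying assumptions (in practice the weights would be learned on training trials and applied to held-out trials).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def within_bin_scores(epochs: np.ndarray, labels: np.ndarray,
                      n_bins: int = 10) -> np.ndarray:
    """epochs: (trials, channels, samples); labels: 1 = target, 0 = distractor.
    Returns (trials, n_bins) within-bin interest scores."""
    trials, _, samples = epochs.shape
    scores = np.empty((trials, n_bins))
    for i, idx in enumerate(np.array_split(np.arange(samples), n_bins)):
        x = epochs[:, :, idx].mean(axis=2)       # one value per channel per trial
        lda = LinearDiscriminantAnalysis().fit(x, labels)
        s = lda.decision_function(x)
        if s[labels == 1].mean() < s[labels == 0].mean():
            s = -s                               # enforce targets > distractors
        scores[:, i] = s
    return scores / scores.std(axis=0)           # per-feature normalization
```

Binned pupil dilation and dwell time features could be passed through the same routine and appended as additional columns.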
A forward model for each EEG bin can be calculated to examine the scalp topography of the EEG data contributing to the discriminating components. This forward model can be viewed as a scalp map and interpreted as the coupling between the discriminating component and the original EEG recording.
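For a linear discriminant, a common estimator of such a forward model (following, e.g., Parra et al.; whether the device uses exactly this estimator is an assumption) projects the discriminant output y = Xw back onto the electrodes:

```python
import numpy as np

def forward_model(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """x: (trials, channels) bin-averaged EEG; w: (channels,) FLDA weights.
    Returns the (channels,) scalp projection a = X^T y / (y^T y), i.e. the
    coupling between the discriminating component and each electrode."""
    y = x @ w
    return (x.T @ y) / (y @ y)
```

Plotted over the electrode montage, the returned vector is the scalp map described above.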
Second-level feature vectors can be classified by deriving "cross-bin" weights using logistic regression (LR), which maximizes the conditional log likelihood of the correct class. These weights can be applied to the within-bin interest scores from a separate set of testing trials to get a single "cross-bin interest score" for each trial.
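A sketch of this second level with scikit-learn follows; the random score matrices are placeholders for the within-bin outputs above, and the default regularization settings are an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Placeholder second-level features: within-bin EEG scores with the pupil
# and dwell time scores appended as extra columns (trials x features).
train_x, train_y = rng.normal(size=(200, 12)), rng.integers(0, 2, 200)
test_x = rng.normal(size=(50, 12))

# "Cross-bin" weights maximizing the conditional log likelihood
clf = LogisticRegression().fit(train_x, train_y)

# One cross-bin interest score per held-out testing trial
cross_bin_scores = clf.decision_function(test_x)
```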
Using the identified cross-bin interest score associated with various stimuli, the processor can further identify other objects or stimuli related to the stimuli of interest, using a CV graph.
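The disclosure does not fix a particular graph algorithm; one plausible realization, shown below, propagates interest scores over a visual-similarity graph in a random-walk fashion, which both surfaces visually similar objects and damps isolated, likely anomalous predictions. The cosine-similarity affinities and the damping parameter are assumptions.

```python
import numpy as np

def propagate_interest(features: np.ndarray, seed_scores: np.ndarray,
                       alpha: float = 0.85, n_iter: int = 50) -> np.ndarray:
    """features: (objects, d) CV descriptors; seed_scores: (objects,)
    hybrid classifier predictions. Returns graph-smoothed interest scores."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = np.clip(f @ f.T, 0.0, None)        # cosine-similarity affinities
    np.fill_diagonal(w, 0.0)
    w /= w.sum(axis=1, keepdims=True)      # row-stochastic transition matrix
    scores = seed_scores.astype(float)
    for _ in range(n_iter):
        scores = alpha * (w @ scores) + (1 - alpha) * seed_scores
    return scores
```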
While the disclosed subject matter is described herein in terms of certain exemplary embodiments, those skilled in the art will recognize that various modifications and improvements can be made to the disclosed subject matter without departing from the scope thereof. Moreover, although individual features of one embodiment of the disclosed subject matter can be discussed herein, or shown in the drawing of one of the embodiments and not in another embodiment, it should be apparent that individual features of one embodiment can be combined with one or more features of another embodiment or features from a plurality of embodiments. Thus, the foregoing description of specific embodiments of the disclosed subject matter has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosed subject matter to those
embodiments disclosed.

Claims

1. A mobile device for monitoring a user's reaction to one or more stimuli and providing feedback to the user related to the stimuli, comprising:
an electroencephalograph, configured to be worn in proximity to the user's head, sense brain activity of the user, and generate corresponding brain activity information;
a processing arrangement, coupled to the electroencephalograph, and configured to receive the brain activity information from the electroencephalograph;
a first sensor, coupled to the processing arrangement, and configured to sense the one or more stimuli proximate to the user and to provide stimuli information to the processing arrangement;
wherein the processing arrangement is adapted to compare the brain activity information and the stimuli information and generate feedback information therefrom; and
a user interface, coupled to the processing arrangement, configured to receive the feedback information therefrom and to present corresponding human interpretable information to the user.
2. The mobile device of claim 1, wherein the processing arrangement is further adapted to identify a relationship between the stimuli information and the brain activity information.
3. The mobile device of claim 1, wherein the brain activity information comprises the user's subjective interest.
4. The mobile device of claim 1, wherein the one or more stimuli are selected from the group consisting of a sight, sound, taste, touch, and smell.
5. The mobile device of claim 1, wherein the first sensor is selected from the group consisting of a video camera, audio recording device, molecular sensor, global positioning system (GPS), accelerometer, and skin-conductance sensor.
6. The mobile device of claim 1, further comprising a second sensor, coupled to the processing arrangement, and configured to sense one or more physical reactions of the user and to provide reaction information to the processing arrangement.
7. The mobile device of claim 6, wherein the processing arrangement is further adapted to compare the brain activity information, the stimuli information, and the reaction information and generate feedback information therefrom.
8. The mobile device of claim 6, wherein the second sensor is selected from the group consisting of a video camera, audio recording device, molecular sensor, global positioning system (GPS), accelerometer, and skin-conductance sensor.
9. The mobile device of claim 1, wherein the mobile device is wearable.
10. The mobile device of claim 1, further comprising at least one tool, coupled to the processing arrangement, and configured to sense one or more reactions of the user and to provide reaction information to the processing arrangement.
11. The mobile device of claim 10, wherein the processing arrangement is further adapted to compare the brain activity information, the stimuli information, and the reaction information and generate feedback information therefrom.
12. The mobile device of claim 10, wherein the at least one tool is selected from the group consisting of computer vision, eye tracking, and head tracking.
13. The mobile device of claim 1, wherein the human interpretable information comprises audio, visual, and/or tactile information.
14. The mobile device of claim 1, wherein the user interface comprises a mobile phone.
15. The mobile device of claim 1, wherein the user interface comprises a head-mounted display.
16. The mobile device of claim 1, wherein the user interface comprises an audio-assistant application.
17. The mobile device of claim 1, further comprising a storage device, configured to receive the feedback information from the processing arrangement and store the feedback information.
18. The mobile device of claim 1, wherein the processing arrangement further comprises an Internet connection for retrieving Internet information and the processing arrangement is further adapted to use the Internet information to generate the feedback information.
19. A method of providing a user with feedback related to a stimulus, comprising:
generating brain activity information using an electroencephalograph configured to be worn in proximity to the user's head and to sense brain activity of the user;
generating stimuli information using a first sensor configured to sense one or more stimuli proximate the user;
generating feedback information using a processor adapted to compare the brain activity information and the stimuli information and to generate feedback information therefrom; and
presenting human interpretable information to the user using a user interface configured to receive the feedback information from the processor.
20. The method of claim 19, further comprising generating action information using a second sensor configured to sense one or more physical actions of the user and to generate corresponding action information; and
wherein determining feedback information further comprises comparing the brain activity information, the stimuli information, and the action information.
21. The method of claim 19, further comprising generating reaction information using a tool configured to sense one or more reactions of the user and to generate corresponding reaction information; and
wherein determining feedback information further comprises comparing the brain activity information, the stimuli information, and the reaction information.
PCT/US2014/012750 2013-01-24 2014-01-23 Mobile, neurally-assisted personal assistant WO2014116826A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201361756223P 2013-01-24 2013-01-24
US61/756,223 2013-01-24
US201361887577P 2013-10-07 2013-10-07
US61/887,577 2013-10-07

Publications (1)

Publication Number Publication Date
WO2014116826A1 true WO2014116826A1 (en) 2014-07-31

Family

ID=51228031

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2014/012750 WO2014116826A1 (en) 2013-01-24 2014-01-23 Mobile, neurally-assisted personal assistant

Country Status (1)

Country Link
WO (1) WO2014116826A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060277474A1 (en) * 1998-12-18 2006-12-07 Tangis Corporation Automated selection of appropriate information based on a computer user's context
US20070265507A1 (en) * 2006-03-13 2007-11-15 Imotions Emotion Technology Aps Visual attention and emotional response detection and display system
US20090025023A1 (en) * 2007-06-06 2009-01-22 Neurofocus Inc. Multi-market program and commercial response monitoring system using neuro-response measurements
US20110161163A1 (en) * 2009-12-30 2011-06-30 Clear Channel Management Services, Inc. Wearable advertising ratings methods and systems
US20120293407A1 (en) * 2011-05-19 2012-11-22 Samsung Electronics Co. Ltd. Head mounted display device and image display control method therefor
US20120327116A1 (en) * 2011-06-23 2012-12-27 Microsoft Corporation Total field of view classification for head-mounted display

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10419655B2 (en) 2015-04-27 2019-09-17 Snap-Aid Patents Ltd. Estimating and using relative head pose and camera field-of-view
US10594916B2 (en) 2015-04-27 2020-03-17 Snap-Aid Patents Ltd. Estimating and using relative head pose and camera field-of-view
US11019246B2 (en) 2015-04-27 2021-05-25 Snap-Aid Patents Ltd. Estimating and using relative head pose and camera field-of-view
WO2017016941A1 (en) * 2015-07-29 2017-02-02 Koninklijke Philips N.V. Wearable device, method and computer program product
EP3672478A1 (en) * 2017-08-23 2020-07-01 Neurable Inc. Brain-computer interface with high-speed eye tracking features
CN111629653A (en) * 2017-08-23 2020-09-04 神经股份有限公司 Brain-computer interface with high speed eye tracking features
EP3672478A4 (en) * 2017-08-23 2021-05-19 Neurable Inc. Brain-computer interface with high-speed eye tracking features
US11269414B2 (en) 2017-08-23 2022-03-08 Neurable Inc. Brain-computer interface with high-speed eye tracking features
US11366517B2 (en) 2018-09-21 2022-06-21 Neurable Inc. Human-computer interface using high-speed and accurate tracking of user interactions
EP4166067A1 (en) * 2021-10-15 2023-04-19 Mentalista On-board device for synchronised collection of brain waves and environmental data
US11972049B2 (en) 2022-01-31 2024-04-30 Neurable Inc. Brain-computer interface with high-speed eye tracking features

Similar Documents

Publication Publication Date Title
US10799140B2 (en) System and method for instructing a behavior change in a user
US8676230B2 (en) Bio signal based mobile device applications
Greene et al. A survey of affective computing for stress detection: Evaluating technologies in stress detection for better health
Pasluosta et al. An emerging era in the management of Parkinson's disease: wearable technologies and the internet of things
KR102471442B1 (en) A system and a method for generating stress level and stress resilience level information for an individual
Healey et al. Out of the lab and into the fray: Towards modeling emotion in everyday life
US20130245396A1 (en) Mental state analysis using wearable-camera devices
JP2006525829A (en) Intelligent deception verification system
CN108348181A (en) Method and system for monitoring and improving attention
WO2014138925A1 (en) Wearable computing apparatus and method
KR20130130803A (en) Dry sensor eeg/emg and motion sensing system for seizure detection and monitoring
JP2009508553A (en) System and method for determining human emotion by analyzing eyeball properties
JP2015229040A (en) Emotion analysis system, emotion analysis method, and emotion analysis program
WO2014116826A1 (en) Mobile, neurally-assisted personal assistant
Wascher et al. Neuroergonomics on the go: An evaluation of the potential of mobile EEG for workplace assessment and design
EP3302257B1 (en) System for supporting an elderly, frail and/or diseased person
Rahman et al. Automated detection approaches to autism spectrum disorder based on human activity analysis: A review
US20200402641A1 (en) Systems and methods for capturing and presenting life moment information for subjects with cognitive impairment
Prendinger et al. Symmetric multimodality revisited: Unveiling users' physiological activity
Nalepa et al. AfCAI systems: Affective Computing with Context Awareness for Ambient Intelligence. Research proposal.
WO2023037714A1 (en) Information processing system, information processing method and computer program product
Ruotsalo et al. Affective Relevance
Hegazy et al. Developing an affective working companion utilising GSR data
Wache Implicit human-computer interaction: two complementary approaches
CN117137499A (en) Consciousness state analysis method and device based on multi-scale feature fusion

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14743743

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14743743

Country of ref document: EP

Kind code of ref document: A1