WO2011092549A1 - Method and apparatus for assigning a feature class value - Google Patents

Method and apparatus for assigning a feature class value

Info

Publication number
WO2011092549A1
Authority
WO
WIPO (PCT)
Prior art keywords
sensor
perception
correlation
data signals
data set
Prior art date
Application number
PCT/IB2010/050363
Other languages
French (fr)
Inventor
Sunil Sivadas
Original Assignee
Nokia Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Corporation
Priority to PCT/IB2010/050363
Publication of WO2011092549A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/033: Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F 3/038: Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00: Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/038: Indexing scheme relating to G06F3/038
    • G06F 2203/0381: Multimodal input, i.e. interface arrangements enabling the user to issue commands by simultaneous use of input devices of different nature, e.g. voice plus gesture on digitizer

Definitions

  • the present invention relates to apparatus for human computer interaction.
  • the invention further relates to, but is not limited to, apparatus for human computer interaction in mobile devices.
  • HCI (human computer interaction) is employed in desktop apparatus such as office personal computers (desktop PCs) and in mobile apparatus such as telephones, portable computers, and media and audio players.
  • the typical user interacts with other humans using speech, gestures and other forms of communication (including non-verbal signals) in a full-duplex mode.
  • person-to-person communication, whether verbal or non-verbal, occurs in two directions simultaneously.
  • These parallel communications are interpreted in real time based on context, sensory perception and cognition.
  • WIMP: window-icon-menu-pointing
  • the user interface has to be designed to be multi-modal and multi-sensory; in other words, it is capable of being operated in various modes and furthermore has multiple sensors to detect different user inputs, which allows the user interface to "sense" various contexts or loads on the user.
  • the different input and output modalities may allow the user interface to choose the optimal modality of a user interface for a given context without direct selection from the user.
  • An example of adapting the user interface based on context is, for example, to change the volume of the ring tone or selection of a vibration function in a mobile communications device. By sensing the ambient audio volume level from the microphone, the ring tone volume can be adjusted so that the user is able to hear the mobile communications device when called. Similarly a further example is the enabling or disabling of a speech-to-text function based once again on the detected volume of ambient noise.
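  • A minimal sketch of the ring-tone adaptation described above, in Python; the dBFS thresholds and the assumption that microphone samples arrive as a normalised numpy buffer are illustrative, not taken from the patent:

        import numpy as np

        def ring_volume_from_ambient(mic_samples, quiet_dbfs=-50.0, loud_dbfs=-10.0):
            """Map ambient microphone level to a ring-tone volume in [0, 1]."""
            rms = np.sqrt(np.mean(np.square(mic_samples))) + 1e-12
            level_dbfs = 20.0 * np.log10(rms)
            # Louder surroundings -> louder ring tone, clamped to [0, 1].
            scale = (level_dbfs - quiet_dbfs) / (loud_dbfs - quiet_dbfs)
            return float(np.clip(scale, 0.0, 1.0))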
  • the context based user interface may be designed to disable the speech-to-text function if the ambient audio volume level detected from the microphone is particularly high and therefore likely to have many errors.
  • the user interface may open up a map and routing application as well as an internet browser page automatically directed to the local railway station timetable page rather than the user having to manually find the application and then search the internet for the relevant timetable information.
  • a potential application area for context sensitive user interfacing is in gaming.
  • the level of interest or difficulty in the game may be adapted to attempt to keep the player "in the zone". Typically this is attempted only to "level" the difficulty so that the player does not complete the game too quickly.
  • the level of anxiety or boredom of the player may be detected using various bio-sensors.
  • using a brain-computer interface (BCI) to monitor the bio-signals, together with other sensors such as heart rate (using an electrocardiogram, ECG), blood pressure, posture and muscle movement (using a surface electromyogram, sEMG), the user may be monitored directly and the game controlled by changing the level or difficulty of the game being played.
  • This invention proceeds from the consideration that, where multiple sensors provide user interface inputs that are correlated in a given context, a well designed pre-processing operation on the sensor data streams, which are then jointly analysed, allows a correlation between the various sensor relationships to be determined. Furthermore, by effectively ordering the results of the analysis, specific features may be extracted and used to assist the control and interfacing of the apparatus. Embodiments of the present invention aim to address the above problem.
  • a method comprising: capturing at least two data signals from at least two sensors; determining a correlation data set for the at least two data signals; and assigning a feature class value dependent on the correlation data set.
  • Determining the correlation data set for the at least two data signals may further comprise: aligning the at least two data signals; projecting at least one of the at least two data signals onto a first basis vector to generate a first sample space; projecting a further of the at least two data signals onto a second basis vector to generate a second sample space; wherein the first and the second basis vectors are chosen to maximise the correlation data set defined by the two sample spaces.
  • Determining the correlation data set for the at least two data signals may comprise determining a kernel canonical correlation for the at least two data signals.
  • Determining the correlation data set for the at least two data signals may comprise: sorting the correlation data set in order of magnitude of correlation; and discarding at least one dimension of the data set dependent on the magnitude of correlation.
  • the method may further comprise performing a context related operation dependent on the feature class value.
  • the context related operation may comprise setting a computer game difficulty level.
  • Assigning the feature class value dependent on the correlation data may comprise performing a training operation to associate a feature class to the correlation data set.
  • Capturing at least two data signals from at least two sensors may comprise: receiving from a first sensor type at least one data signal stream; and receiving from a second sensor type at least one further data signal stream.
  • the method may further comprise filtering the at least two data signals prior to determining a correlation data set for the at least two data signals.
  • the at least two sensors may comprise at least two of: camera sensor for vision perception; microphone sensor for hearing perception; temperature sensor for temperature perception; humidity sensor for humidity perception; tactile sensor for touch and/or pressure perception; electromyography sensor for muscle movement perception; chemical sensor for smell perception; compass for direction perception; satellite positioning sensor for location perception; gyroscope/accelerometer for orientation perception; antenna sensor for signal strength perception; battery sensor for power perception; electroencephalograph for brain activity perception; and pulse monitor, blood pressure sensor, electrocardiograph for heart activity perception.
  • an apparatus comprising at least one processor and at least one memory including computer program code the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: capturing at least two data signals from at least two sensors; determining a correlation data set for the at least two data signals; and assigning a feature class value dependent on the correlation data set.
  • the apparatus caused to at least perform determining a correlation data set for the at least two data signals is preferably further caused to perform: aligning the at least two data signals; projecting at least one of the at least two data signals onto a first basis vector to generate a first sample space; projecting a further of the at least two data signals onto a second basis vector to generate a second sample space; and wherein the first and the second basis vectors are chosen to maximise the correlation data set defined by the two sample spaces.
  • the apparatus caused to at least perform determining a correlation data set for the at least two data signals is preferably further caused to perform determining a kernel canonical correlation for the at least two data signals.
  • the apparatus caused to at least perform determining a correlation data set for the at least two data signals is preferably further caused to perform: sorting the correlation data set in order of magnitude of correlation; and discarding at least one dimension of the data set dependent on the magnitude of correlation.
  • the apparatus may be further caused to perform a context related operation dependent on the feature class value.
  • the context related operation preferably comprises setting a computer game difficulty level.
  • the apparatus caused to at least perform assigning a feature class value dependent on the correlation data is preferably further caused to perform a training operation to associate a feature class to the correlation data set.
  • the apparatus caused to at least perform capturing at least two data signals from at least two sensors is preferably caused to perform: receiving from a first sensor type at least one data signal stream; and receiving from a second sensor type at least one further data signal stream.
  • the apparatus may be caused to at least further perform filtering the at least two data signals prior to determining a correlation data set for the at least two data signals.
  • the at least two sensors preferably comprises at least two of: camera sensor for vision perception; microphone sensor for hearing perception; temperature sensor for temperature perception; humidity sensor for humidity perception; tactile sensor for touch and/or pressure perception; electromyography sensor for muscle movement perception; chemical sensor for smell perception; compass for direction perception; satellite positioning sensor for location perception; gyroscope/accelerometer for orientation perception; antenna sensor for signal strength perception; battery sensor for power perception; electroencephalograph for brain activity perception; and pulse monitor, blood pressure sensor, electrocardiograph for heart activity perception.
  • an apparatus comprising: a signal processor configured to capture at least two data signals from at least two sensors; a feature extractor configured to determine a correlation data set for the at least two data signals; and a feature classifier configured to assign a feature class value dependent on the correlation data set.
  • the feature extractor may comprise a buffer configured to align the at least two data signals; and an eigenanalyser configured to project at least one of the at least two data signals onto a first basis vector to generate a first sample space; and project a further of the at least two data signals onto a second basis vector to generate a second sample space; wherein the first and the second basis vectors are chosen to maximise the correlation data set defined by the two sample spaces.
  • the feature extractor may further comprise a kernel canonical correlator configured to perform a kernel canonical correlation on the at least two data signals.
  • the feature extractor may further comprise a correlation sorter configured to sort the correlation data set in order of magnitude of correlation; and discard at least one dimension of the data set dependent on the magnitude of correlation.
  • the apparatus may comprise a resource controller configured to perform a context related operation dependent on the feature class value.
  • the context related operation may comprise setting a computer game difficulty level.
  • the feature classifier may comprise a feature trainer configured to perform a training operation to associate a feature class to the correlation data set.
  • the signal processor may be configured to receive from a first sensor type at least one data signal stream; and receive from a second sensor type at least one further data signal stream.
  • the signal processor may comprise a filter configured to filter the at least two data signals prior to determining a correlation data set for the at least two data signals.
  • the at least two sensors may comprise at least two of: camera sensor for vision perception; microphone sensor for hearing perception; temperature sensor for temperature perception; humidity sensor for humidity perception; tactile sensor for touch and/or pressure perception; electromyography sensor for muscle movement perception; chemical sensor for smell perception; compass for direction perception; satellite positioning sensor for location perception; gyroscope/accelerometer for orientation perception; antenna sensor for signal strength perception; battery sensor for power perception; electroencephalograph for brain activity perception; and pulse monitor, blood pressure sensor, electrocardiograph for heart activity perception.
  • an apparatus comprising: signal capture means configured to capture at least two data signals from at least two sensors; signal processing means configured to determine a correlation data set for the at least two data signals; and feature classifying means configured to assign a feature class value dependent on the correlation data set.
  • a computer-readable medium encoded with instructions that, when executed by a computer, perform: capturing at least two data signals from at least two sensors; determining a correlation data set for the at least two data signals; and assigning a feature class value dependent on the correlation data set.
  • An electronic device may comprise apparatus as described above.
  • a chipset may comprise apparatus as described above.
  • Figure 1 shows schematically an electronic device configured to implement embodiments of the application
  • Figure 2 shows schematically a context sensitive user interface configuration which may be employed in embodiments of the application
  • Figure 3 shows a table of example sensors which may be employed in some embodiments of the application
  • Figure 4 shows a table of example variables and sensed levels as may be employed in some embodiments
  • Figure 5 shows schematically the context sensitive block from the context sensitive user interface shown in Figure 2 in further detail
  • Figures 6a, 6b and 6c show flow diagrams of the operations of the context sensitive block according to some embodiments
  • Figure 7 shows schematically apparatus configured to implement a multi-sensor context sensitive user interface in a gaming application according to some embodiments
  • Figure 8 shows a flow diagram of the operation of the multi-sensor context sensitive user interface apparatus in a gaming application shown in Figure 7;
  • Figure 9 shows a schematic view of a Trainer/Classifier as shown in Figure 5 in further detail.
  • FIG. 1 shows a schematic block diagram of an exemplary electronic device 10 or apparatus, which may incorporate enhanced signal to noise context sensitive user interface performance components and methods.
  • the apparatus 10 may for example be a mobile terminal or user equipment for a wireless communication system.
  • the electronic device may be any audio player, such as an mp3 player or media (mp4) player, equipped with suitable sensors as described below.
  • the apparatus 10 may be any suitable computing equipment requiring user input, for example games consoles, personal computers, personal digital assistants, or any apparatus with displays and/or input devices.
  • the apparatus 10 in some embodiments comprises a processor 21.
  • the processor 21 may be configured to execute various program codes.
  • the implemented program codes may comprise a context driven user interface enhancement code.
  • the implemented program codes 23 may be stored for example in the memory 22 for retrieval by the processor 21 whenever needed.
  • the memory 22 could further provide a section 24 for storing data, for example data that has been processed in accordance with the embodiments.
  • the context driven user interface enhancement code may in embodiments be implemented at least partially in hardware or firmware.
  • the processor 21 may comprise an audio subsystem 14 which may comprise an audio output subsystem where in some embodiments the processor 21 is linked via a digital-to-analogue converter (DAC) 32 to a speaker 33.
  • the digital to analogue converter (DAC) 32 may be any suitable converter.
  • the speaker 33 may for example be any suitable audio transducer equipment suitable for producing acoustic waves for the user's ears generated from the electronic audio signal output from the DAC 32.
  • the speaker 33 may in some embodiments be a headset or ear worn speaker (EWS) and may be connected to the electronic device 10 via a headphone connector.
  • the speaker 33 may comprise the DAC 32.
  • the speaker 33 may connect to the electronic device 10 wirelessly, for example by using a low power radio frequency connection such as the Bluetooth A2DP profile.
  • the apparatus 10 audio subsystem may further comprise an audio input subsystem which in some embodiments further comprises at least two microphones in a microphone array for inputting or capturing acoustic waves and outputting audio or speech signals to be processed according to embodiments of the application.
  • These audio or speech signals may according to some embodiments be transmitted to other electronic devices via the transceiver 13 or may be stored in the data section 24 of the memory 22 for later processing.
  • a corresponding program code or hardware to control the capture of audio signals using the at least two microphones may be activated to this end by the user via a user interface 15.
  • the apparatus audio input subsystem in such embodiments may further comprise an analogue-to-digital converter (ADC) configured to convert the input analogue audio signals from the microphone array into digital audio signals and provide the digital audio signals to the processor 21.
  • the apparatus 10 may further comprise a display subsystem 32.
  • the display subsystem 32 may comprise any suitable display technology, for example a touch screen display, LCD (liquid crystal display), organic light emitting diode (OLED) display, or an output to a separate display.
  • the processor 21 is further linked to a transceiver (TX/RX) 13, to a user interface (UI) 15 and to a memory 22.
  • the user interface 15 may enable a user to input commands or data to the apparatus 10. Any suitable input technology may be employed by the apparatus 10. It would be understood for example the apparatus in some embodiments may employ at least one of a keypad, keyboard, mouse, trackball, touch screen, joystick and wireless controller to provide inputs to the apparatus 10.
  • the user interface may in some embodiments be any suitable combination of input and display technology; for example a touch screen display suitable for both receiving inputs from the user and displaying information to the user may be implemented. Examples include capacitive, resistive and multi-touch displays configured to react to objects in contact with or neighbouring the display surface.
  • the transceiver 13 may be any suitable communication technology and be configured to enable communication with other electronic devices, for example via a wireless communication network.
  • the apparatus may comprise sensors or a sensor bank 16.
  • the sensor bank 16 receives information about the environment in which the apparatus 10 is operating and about the operator of the apparatus, and passes this information to the processor 21 in order to affect the processing performed by the processor 21, for example the processing of the audio signal.
  • the sensor bank 16 may in some embodiments comprise at least one of the following set of sensors, as shown in Figure 3. It would be understood that sensors other than those shown in Figure 3 may also be implemented as part of the sensor bank.
  • Figure 3 is divided into three columns.
  • the first column is the "perception" 201
  • the second column is the sensor 203 associated with and monitoring the perception 201
  • the third column 205 is the modality associated with the sensor and the perception.
  • the perception of vision may be sensed by a camera or camera module which has as its modality a visual mode of detection.
  • the sensor bank 16 may thus in some embodiments comprise a camera module.
  • the camera module may in some embodiments comprise at least one camera having a lens for focusing an image on to a digital image capture means such as a charge-coupled device (CCD).
  • the digital image capture means may be any suitable image capturing device such as a complementary metal oxide semiconductor (CMOS) image sensor.
  • the camera module further comprises in some embodiments a flash lamp for illuminating an object before capturing an image of the object.
  • the flash lamp is in such embodiments linked to a camera processor for controlling the operation of the flash lamp.
  • the camera may be configured to perform infra-red and near infra-red sensing for low ambient light sensing.
  • the at least one camera may be also linked to the camera processor for processing signals received from the at least one camera before passing the processed image to the processor.
  • the camera processor may be linked to a local camera memory which may store program codes for the camera processor to execute when capturing an image.
  • the local camera memory may be used in some embodiments as a buffer for storing the captured image before and during local processing.
  • the camera processor and the camera memory are implemented within the processor 21 and memory 22 respectively.
  • the camera module may in some embodiments be configured to determine the position of the apparatus 10 with regards to the user by capturing images of the user from the device and determining an approximate position or orientation relative to the user.
  • the camera module 101 may comprise more than one camera capturing images at the same time at slightly different positions or orientations.
  • the camera module may be configured to monitor the user, for example monitor eye movement, or pupil dilation or some other physiological function.
  • the camera module may in some embodiments be further configured to perform facial recognition on the captured images and therefore may estimate the position of the mouth of the detected face.
  • the estimation of the direction or orientation between the apparatus and the mouth of the user may be applied when the phone is used in a hands-free mode of operation, or in an audio-video conference mode of operation where the camera image information may be used in some embodiments both as images to be transmitted and also to locate the user speaking, to improve the signal to noise ratio for the detection of emotion from the mouth shape.
  • the perception of hearing may be detected via the microphone or microphone array sensor as discussed with reference to the audio input subsystem and has an auditory modality.
  • the perception of temperature may be detected in some embodiments by the apparatus sensor bank 16 comprising a temperature sensor such as a thermister, a thermocouple or any suitable temperature sensing apparatus.
  • the perception of humidity may be detected in some embodiments by the apparatus sensor bank 16 comprising a humidity sensor.
  • the perception of touch or pressure may in some embodiments be detected by the apparatus sensor bank 16 comprising at least one tactile sensor.
  • the tactile sensor may be implemented as a touch screen and some embodiments as part of the display subsystem.
  • the perception of touch or pressure has a modality of tactile nature.
  • the perception of muscle movement may be detected or monitored in some embodiments by the sensor bank 16 comprising at least one electromyographical (EMG) sensor. It would be appreciated that in some embodiments surface EMG (sEMG) sensors are used as they are less invasive.
  • the perception of smell may be detected in some embodiments by the sensor bank 16 comprising smell sensors such as chemical detectors configured to detect specific organic or non-organic chemicals. This perception of smell is associated with an olfactory mode of determination.
  • the perception of direction may be detected in some embodiments by the sensor bank 16 comprising a compass such as a solid state compass.
  • the perception of location may be detected in some embodiments by the sensor bank 16 comprising a satellite positioning estimator or other locating device apparatus.
  • the perception of orientation may be detected by the sensor bank 16 comprising in some embodiments sensors such as gyroscopes or accelerometers.
  • the sensor bank 16 comprises a position/orientation sensor which may be a gravity sensor configured to output the electronic device's orientation with respect to the vertical axis.
  • the gravity sensor for example may be implemented as an array of mercury switches set at various angles to the vertical with the output of the switches indicating the angle of the electronic device with respect to the vertical axis.
  • the position/orientation sensors may be implemented, as described previously, as a satellite position system such as a global positioning system (GPS) sensor whereby a receiver is able to estimate the position of the apparatus from receiving timing data from orbiting satellites.
  • the GPS information may be used to derive orientation and movement data by comparing the estimated position of the receiver at two time instances.
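  • As an illustration of deriving movement data from two position fixes, the following Python sketch computes speed and bearing from consecutive GPS coordinates; the haversine and forward-azimuth formulas are standard choices and the function name is hypothetical, not the patent's:

        import math

        def speed_and_bearing(lat1, lon1, lat2, lon2, dt_s):
            """Speed (m/s) and bearing (degrees) from two GPS fixes dt_s seconds apart."""
            R = 6371000.0  # mean Earth radius in metres
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dp = math.radians(lat2 - lat1)
            dl = math.radians(lon2 - lon1)
            # Haversine great-circle distance.
            a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
            dist = 2.0 * R * math.asin(math.sqrt(a))
            # Forward azimuth from the first fix towards the second.
            y = math.sin(dl) * math.cos(p2)
            x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
            bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
            return dist / dt_s, bearing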
  • the sensor bank 16 further comprises a position/motion sensor in the form of a step counter.
  • a step counter may in some embodiments detect the motion of the apparatus when mounted on the user as the apparatus rhythmically moves up and down as the user walks. The periodicity of the steps may itself be used to produce an estimate of the speed of motion of the user in some embodiments, as sketched below.
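  • A minimal sketch of such a periodicity-based estimate, assuming the vertical accelerometer samples are available as a numpy array over a few seconds; the walking-rate search window is an illustrative assumption:

        import numpy as np

        def step_rate_hz(vertical_accel, fs):
            """Estimate step frequency from the periodicity of vertical acceleration."""
            x = vertical_accel - np.mean(vertical_accel)
            ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # autocorrelation, lag >= 0
            lo, hi = int(fs / 3.0), int(fs / 0.5)              # 0.5-3 steps per second
            lag = lo + int(np.argmax(ac[lo:hi]))
            return fs / lag

    Multiplying the estimated rate by an assumed stride length then yields the speed estimate mentioned above.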
  • the step counter in combination with a known location estimate and an orientation estimate may be implemented as part of a location sensor outputting location estimates.
  • the step counter may be implemented as a gravity sensor.
  • the sensor bank 16 may comprise at least one accelerometer configured to determine any change in motion of the apparatus.
  • the position/orientation sensor may comprise a capacitive sensor capable of determining an approximate distance from the apparatus to the user's head when the user is operating the apparatus. Thus as the apparatus is moved in position relative to the head this motion is detected.
  • the perception of signal strength may be detected by the sensor bank 16 in some embodiments comprising a signal power or signal strength estimator monitoring a signal received via an antenna and a receiver.
  • the perception of power may be detected by the sensor bank 16 in some embodiments comprising a battery sensor.
  • the instantaneous power consumption of the apparatus may be monitored, and in some embodiments the current battery charge level estimated.
  • an expected battery life before recharge/replacement may be determined in some embodiments.
  • the perception of brain activity may be detected in some embodiments by the sensor bank 16 comprising an electroencephalogram (EEG) sensor. Any suitable EEG sensor configuration may be employed.
  • the perception of heart activity may be detected in some embodiments by the sensor bank 16 comprising at least one of a blood pressure sensor, a pulse monitor or electrocardiogram sensor (ECG).
  • In Figure 4 some examples of context variables, which may be employed in embodiments and which describe monitored characteristics of the apparatus, are shown.
  • the examples shown in Figure 4 are divided into two columns.
  • the first column shows the device context variables 301 and the second column shows the sensed level labels 303 associated with the device context variables 301.
  • a memory device context variable 305 may have sensed labels for Phone.Freespace, which identifies the amount of free memory available in the apparatus memory in total, Card.Freespace, which identifies the amount of free memory available on any memory card inserted in the apparatus, and RAM.Freespace, which identifies the amount of free memory in the apparatus random access memory. Furthermore the memory 305 device context variable 301 may have sensed labels Backup.Latest and Restore.Latest, which identify the date-time stamp of the latest backup, indicating when the last user data backup was carried out, and the latest restore point of the operating system respectively.
  • FIG. 4 shows device context variables 301 for Phone Settings variables 307 where in some embodiments there may be sensed labels such as Internet.Available which indicates if there are settings for internet connection on the apparatus, SMS.Available which indicates if there are settings for Short Message Service applications, Email.Available which indicates if there are settings for email applications, GPRS.Available which indicates if there are settings for General Packet Radio Service applications, and WAP.Available which indicates if there are settings for Wireless Access Protocol applications.
  • the apparatus context variables 301 may in some embodiments comprise FileExist variables which may have sensed labels Created, Deleted and Exists. The Created sense label indicates when the file was first saved or later amended, Deleted indicates when (and whether) the file was deleted, and Exists indicates whether or not the file currently exists.
  • a device context variable 301 called Process 311 which has a sense label of Crash which indicates whether the current process is in action or has crashed.
  • a device context variable 301 called Application which may in some embodiments have sense labels called Installed."ApplicationName", indicating the application name as used or displayed by the apparatus, and Installed.Widsets and Installed.Ngage, which indicate the installed component libraries used or available.
  • In some embodiments the device context variables 301 comprise Messages 315, which includes sensed labels such as SMS.Sent, which indicates when (and if) an SMS has been correctly sent, and SMS.Failed, which indicates when (and if) an SMS message has failed to be sent.
  • the message variable may comprise in some embodiments other sense labels such as MMS.Sent and MMS.Failed, which are similar to the SMS labels but directed to multimedia message service messages.
  • in some embodiments there are sense labels 303 Email.Sent and Email.Failed which carry similar information but relate specifically to electronic mail messages.
  • Some further embodiments may comprise a device context variable 301 called NetworkInfo 317 which may have a sense label CellId identifying the current cell identification value within which the apparatus is communicating.
  • the apparatus 10 may in some embodiments comprise a context variable 301 called NetworkTraffic 319 which may have labels called Downlink.Bandwidth and Uplink.Bandwidth which contain values associated with the downlink and uplink bandwidth the apparatus is currently experiencing.
  • some embodiments may comprise a device context variable 301 called GPS 321 which may have sense labels called GPS.Coordinate, which indicates the apparatus' current location according to the GPS positioning estimate, and GPS.SignalStrength, which indicates the GPS signal strength to provide a confidence estimate of how accurate the GPS location estimate is.
  • a context variable 301 called Battery 323 which has sense labels called Level, Charger and UsagePattern, which indicate respectively the battery's current level or current power consumption level, the detection of whether or not a charger is in use, and a power usage pattern which could predict the battery life.
  • a device context variable 301 called Application.State 325 which may have sense labels called Foreground.Application, indicating a current application being run in the "foreground", Background.Application, indicating an application being run in the background, and furthermore a Foreground.View label indicating what view is to be displayed in the foreground for the application currently being run in the foreground.
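  • To illustrate how the Figure 4 variables and sense labels might be held in software, a hypothetical Python snapshot follows; the label names are from the text, but the container structure and values are invented:

        # Hypothetical snapshot of the Figure 4 context variables.
        context = {
            "Memory": {"Phone.Freespace": 512_000_000,
                       "Card.Freespace": 2_000_000_000,
                       "RAM.Freespace": 48_000_000,
                       "Backup.Latest": "2010-01-12T09:30:00"},
            "PhoneSettings": {"Internet.Available": True, "SMS.Available": True},
            "NetworkTraffic": {"Downlink.Bandwidth": 1_200_000, "Uplink.Bandwidth": 384_000},
            "GPS": {"GPS.Coordinate": (60.17, 24.94), "GPS.SignalStrength": 0.8},
            "Battery": {"Level": 0.62, "Charger": False},
            "Application.State": {"Foreground.Application": "Maps"},
        }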
  • the processor 21 comprises in some embodiments the user interface configuration processor 102, which is configured to communicate with a context processor 100.
  • the context processor 100 in some embodiments further comprises a context engine 103 which is configured to receive information from the context sensing block 101 and output data to the resource control block 105 and further may communicate with the user interface configuration processor 102.
  • the context sensing block 101 is configured in some embodiments to receive sensor information from the sensor bank 16 and communicate sensed context information to the context engine 103.
  • the resource control block 105 in some embodiments is configured to both input and output resource control information from the resources 33; in other words it controls the apparatus in light of the sensed information.
  • the sensors or sensor bank 16 thus in some embodiments provide information to the context sensing block 101, where the context sensing block 101 may be configured in some embodiments to process the sensor information and output a specific context label which may be generated (as described in further detail later) from a combination of the sensed information.
  • the context engine 103 in some embodiments may, dependent on the context sense label parameter or value, determine an action or operation to be carried out (or stop an action from occurring). For example, as will be described later, the context engine may control the difficulty level of a computer game.
  • the context engine 103 may interact with user interface configuration processor 102 which is configured to generate or recall configuration details required by the context engine in order to control the interaction.
  • the user interface configuration processor 102 may comprise information relating to the various levels of difficulty and game parameter information relating to the levels of difficulty which are recalled by the context engine 103.
  • the context engine 103 may further in embodiments be configured to output to the resource control block 105 specific information for controlling resources such as memory, display memory and audio memory.
  • the resource control block 105 may receive this resource control information and specifically drive the resources using this information.
  • the resource control block controls the display and audio subsystem memory and configuration.
  • the resources, for example the audio and video subsystems then are controlled to display to the user specific information which is context driven.
  • the context sensing block 101 comprises a pre-processor 403 and feature extractor/classifier 405.
  • In some embodiments there is a pre-processor 403 allocated to each of the sensors, so that each sensor is associated with a pre-processor which outputs pre-processed sensor data to at least one feature extractor/classifier 405.
  • the pre-processor 403 may handle more than one type of sensor data.
  • the pre-processor comprises a pre-processor controller 410 configured to configure the pre-processor 403 to handle each of the types of data and output a specific sensor data type of a predefined variable configuration to the feature extractor/classifier 405.
  • the pre-processor 403 as shown in Figure 5 may comprise components such as a digitiser 411, a time-to-frequency domain converter 413, and a sub-band filter 415.
  • the digitiser 411 may be configured to receive the sensor information and convert an analogue sensor information input to a digital format. Any suitable analogue to digital conversion may be implemented within the digitiser 411. Furthermore the digitiser may be configured to be controlled by the pre-processor controller 410 in order to output sensor information in a form acceptable for further pre-processing.
  • the time to frequency domain converter 413 may in some embodiments comprise a Fast Fourier Transform (FFT), a Discrete Fourier Transform (DFT), a Discrete Cosine Transform (DCT), a Modified Discrete Cosine Transform (MDCT) or any suitable time-to-frequency domain converter.
  • the time-to-frequency domain converter thus in these embodiments is configured to receive time domain information, for example digitised electroencephalographic (EEG) sensor signal data and output frequency coefficient values representing the relative frequency components of the EEG signals.
  • the pre-processor controller 410 may furthermore control the time to frequency domain converter 413 in terms of the length of sample period, the number of samples input, the number of frequency outputs, etc.
  • the sub-band filter 415 may be configured to receive the frequency coefficient values from the time to frequency domain converter and output specific sub-band activity labels derived from the frequency coefficient values.
  • the sub-band output from the sub-band filter 415 may in some embodiments be configured to be contiguous, in other words the frequency range output by the sub-band filter may represent a continuous frequency spectrum, or in some other embodiments be overlapping or separate from each other.
  • in some embodiments the sub-bands may be linear and uniform in bandwidth, whereas in other embodiments the sub-band outputs may be non-linear or non-uniform in bandwidth; for example, for auditory data the output from the sub-band filter 415 may be based on a psychoacoustic model.
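  • A compact Python sketch of the digitiser/FFT/sub-band chain just described; the Hann window and summation-based power estimate are illustrative choices rather than the patent's specification:

        import numpy as np

        def subband_powers(samples, fs, bands):
            """One digitised frame -> FFT -> per-sub-band power estimates.

            bands: iterable of (low_hz, high_hz) sub-band edges.
            """
            frame = samples * np.hanning(len(samples))   # reduce spectral leakage
            spectrum = np.abs(np.fft.rfft(frame)) ** 2   # power spectrum
            freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
            return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                             for lo, hi in bands])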
  • the output of the pre-processor 403 may be passed to the feature extractor/classifier 405.
  • the feature extractor/classifier 405 may comprise: a sample space buffer 421 configured to receive pre-processed data prior to analysis; a basis vector selector/Eigenanalyzer 425 which in some embodiments receives the sample space buffer data and outputs a series of basis vectors reflecting correlation between the sensor type input data; a canonical correlation sorter/reducer 427 which is configured in some embodiments to receive the information relating to the correlation of the data sets and sort, reorder, or reduce the sensor information; and a trainer/classifier 429 which in some embodiments is configured to train or classify the correlation data to indicate a specific context label.
  • the feature extractor/classifier may comprise a controller 420 which is configured in some embodiments to control the sample space buffer 421, the basis vector selector/Eigenanalyzer 425, the canonical correlation sorter/reducer 427 and the trainer/classifier 429.
  • Correlations in some embodiments can be computed between psychophysiological signals and device/application status. For example, if an application crashes or the phone rings, the psychophysiological signals indicate a response corresponding to the user registering or ignoring the event.
  • FIG. 6a shows an overview of the example operation.
  • the surface electromyography (sEMG) sensor captures the muscle movement electrical data which may be in the form of at least one sensed muscle movement sensor location and outputs this information to the pre-processor.
  • the operation of sensing the muscle movement is shown in Figure 6a by step 501 a.
  • the pre-processor 403 may as shown in Figure 5 comprise a digitiser 411 configured to receive the surface EMG analogue data and digitise this data.
  • the output digitised sEMG signal may be passed to a sEMG Fast Fourier Transformer (FFT) 413.
  • the Time-to-Frequency domain conversion of the sEMG in this example may be a FFT.
  • the time-to-frequency domain converter may comprise a short time period Fourier Transformer or wavelet packet transformer in order to output a frequency domain version of the sEMG data.
  • the operation of performing a FFT on the sEMG data is shown in Figure 6b by step 555.
  • the frequency domain sEMG data may then be sub-band filtered or processed to generate a power estimation for sub-bands of the FFT sEMG data.
  • the power estimation of FFT sEMG data is shown in Figure 6b by step 557.
  • each of the sensed positions may output at least one frequency band power estimation value for the sEMG for a specific period which may indicate locations and timings of specific muscle activity.
  • the pre-processing of the sEMG is shown in the overview figure of Figure 6a by the step 503a.
  • the pre-processed sEMG data, in other words an estimation of the power of each of the sensed locations for at least one frequency sub-band, may then be output in embodiments of the application to the feature extractor/classifier 405.
  • the brain activity perception is indicated by an electroencephalogram (EEG) signal, in other words the electrical activity of the surface of the scalp is monitored to determine the electrical activity of the cortical area.
  • the EEG sensors in some embodiments generate EEG signals dependent on the electrical fields detected and output this analogue EEG data for each of the sensors to the pre-processor 403.
  • the generation/capture of EEG data streams is shown in the overview figure of Figure 6a by step 501b and in the more detailed pre-processing operations figure of Figure 6b by step 561.
  • the pre-processor may in some embodiments receive the EEG signals and digitise them using a digitiser 411.
  • the output digital EEG signals may then be output to a further Fast Fourier Transformer (or any suitable time-to-frequency domain converter) 413.
  • the operation of digitising the EEG signals is shown in Figure 6b by step 563.
  • the time-to-frequency domain converter or in this example the further Fast Fourier Transformer 413 may then in some embodiments convert the time domain EEG signals from each of the sensors to a frequency domain representation of these signals. As described previously, any suitable time to frequency domain conversion may be chosen or implemented within the time to frequency domain converter 413.
  • the operation of applying a FFT to the EEG signals to generate frequency domain EEG coefficients is shown in Figure 6b by step 565.
  • the frequency domain EEG coefficients are then passed to the sub-band filter 415.
  • the sub-band filter may in some embodiments then perform a sub-band analysis of each frequency domain coefficient to output values for brain activity in "known" frequency ranges.
  • the filtered sub-band coefficient values may then be output to the feature extractor/classifier for each of the sensed locations.
  • the sub-band filtering of the EEG data is shown in Figure 6b by step 567.
  • the electrodes may be positioned according to a system in which a headset houses a plurality of electrodes configured to detect electrical fields at the surface of the scalp when the headset is worn.
  • the EEG signals detected by the headset may have a range of characteristics, but for the purposes of illustration typical characteristics are as follows: amplitude 10 to 4000 microvolts, frequency range 0.16 to 256 Hz, and sampling rate 128 to 2048 Hz.
  • the data samples may be further conditioned by the pre-processor in some embodiments to reduce possible noise including external interference introduced in signal collection, storage and retrieval.
  • the pre-processor filters the captured EEG signals using a notch filter to remove power line signal frequencies at about 50 to 60 Hz and using a low pass filter configured to remove high frequency noise originating from switching circuits within the EEG acquisition hardware. Furthermore in some embodiments, the pre-processor applies a further filtering of the captured EEG signal by using a high pass filter to remove DC components.
  • the EEG samples may be furthermore divided into equal length time segments within longer epochs where, for example there are seven time segments of equal duration within an epoch (however in other embodiments, the number and length of time segments may be altered). Furthermore in some embodiments the time segments may not be of equal duration and may or may not overlap within an epoch.
  • each epoch in some embodiments may vary dynamically depending on events in the detection system. However, in general an epoch is selected to be sufficiently long that a change in mental state, if one occurs, can be reliably determined or detected.
  • the EEG signal may be pre-processed into a differential domain that approximates the first derivative of the EEG.
  • seven frequency bands may be output from the sub-band filter for each of the sensors with the following frequency ranges: δ (2 to 4 Hz), θ (4 to 8 Hz), α1 (8 to 10 Hz), α2 (10 to 13 Hz), β1 (13 to 20 Hz), β2 (20 to 30 Hz) and γ (30 to 45 Hz).
  • the power of each of these frequency bands is calculated within the pre-processor.
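  • The conditioning and band-power steps described above might look as follows in Python with scipy; the filter orders, notch Q and Welch parameters are assumptions, while the band edges follow the text:

        import numpy as np
        from scipy import signal

        # Band edges (Hz) as named in the text.
        BANDS = {"delta": (2, 4), "theta": (4, 8), "alpha1": (8, 10),
                 "alpha2": (10, 13), "beta1": (13, 20), "beta2": (20, 30),
                 "gamma": (30, 45)}

        def eeg_band_powers(eeg, fs=256.0):
            """Condition one EEG channel and return the power in each band."""
            b, a = signal.iirnotch(w0=50.0, Q=30.0, fs=fs)      # mains interference
            x = signal.filtfilt(b, a, eeg)
            b, a = signal.butter(4, 45.0, btype="low", fs=fs)   # switching-circuit noise
            x = signal.filtfilt(b, a, x)
            b, a = signal.butter(2, 0.16, btype="high", fs=fs)  # DC components
            x = signal.filtfilt(b, a, x)
            f, pxx = signal.welch(x, fs=fs, nperseg=min(len(x), 512))
            df = f[1] - f[0]
            return {name: float(pxx[(f >= lo) & (f < hi)].sum() * df)
                    for name, (lo, hi) in BANDS.items()}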
  • the feature extractor/classifier 405 receives in the sample-space buffer 421 the multivariate data, for example the data associated with the surface EMG power coefficient at each location monitored, and the EEG pre-processed frequency data.
  • the sample space buffer 421 may be configured in some embodiments to synchronise the data received from the pre-processor from each of the different sensor types. For example the sample space buffer 421 may carry out an interpolation or decimation of one of the data signals received.
  • the reception of the multivariate data is shown in Figure 6c by step 571.
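  • A minimal sketch of the sample space buffer's alignment step, assuming each stream arrives with its own timestamps; linear interpolation onto a common timebase stands in for whatever interpolation/decimation scheme an implementation would choose:

        import numpy as np

        def align_streams(t_a, x_a, t_b, x_b, fs_out):
            """Resample two asynchronously sampled streams onto a shared timebase."""
            t0 = max(t_a[0], t_b[0])                  # overlapping interval only
            t1 = min(t_a[-1], t_b[-1])
            t = np.arange(t0, t1, 1.0 / fs_out)
            return t, np.interp(t, t_a, x_a), np.interp(t, t_b, x_b)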
  • the basis vector selector/Eigenanalyzer 425 thus receives the synchronised data from each sensor type at the same time and performs analysis to attempt to find the basis vectors such that the correlations of the projections of the variables into the basis vectors are maximised.
  • denote the multidimensional variable for the pre-processed muscle movement (sEMG) sensor data as X and the multidimensional variable for the brain activity (EEG) signal as Y.
  • consider a sample set S = ((x_1, y_1), ..., (x_n, y_n)), where (x_1, y_1) is the first instance sample of (X, Y) and (x_n, y_n) is the n-th instance of (X, Y).
  • the Eigenanalyzer 425 in some embodiments then defines a new set of coordinates by projecting X onto the basis vector w_x and similarly projecting Y onto the basis vector w_y.
  • the basis vector selector 425 may in some embodiments find w_x and w_y to maximise the correlation between the two projections. In some embodiments this may be carried out with respect to the following equation:

        ρ = max over (w_x, w_y) of corr(X w_x, Y w_y) = (w_x^T C_xy w_y) / sqrt((w_x^T C_xx w_x)(w_y^T C_yy w_y))

  • the values of w_x and w_y may for example be obtained by solving the eigenvalue equations:

        C_xx^{-1} C_xy C_yy^{-1} C_yx w_x = ρ^2 w_x and C_yy^{-1} C_yx C_xx^{-1} C_xy w_y = ρ^2 w_y
  • C_xx and C_yy are the autocovariance matrices of x and y respectively.
  • C_xy and C_yx are the cross-covariance matrices between x and y.
  • the selection of the basis vectors w_x, w_y to optimise the correlation is shown in Figure 6c by step 573.
  • the selected basis vectors may then be passed to the canonical correlation sorter/reducer 427 where the canonical correlations are sorted to find the most and least correlated dimensions.
  • the canonical correlation sorter/reducer 427 performs a dimensionality reduction by discarding the least correlated dimensions.
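  • Pulling the eigenanalysis and the sorter/reducer together, a hedged numpy sketch follows; the ridge regularisation and the route via a single non-symmetric eigenproblem are implementation choices, not the patent's prescription:

        import numpy as np

        def cca(X, Y, keep, reg=1e-6):
            """Find basis vectors w_x, w_y maximising corr(X w_x, Y w_y).

            X, Y: (n_samples, dims) pre-processed sensor matrices, e.g. sEMG and EEG.
            Returns the 'keep' most correlated projection pairs, discarding the rest.
            """
            X = X - X.mean(axis=0)
            Y = Y - Y.mean(axis=0)
            n = len(X)
            Cxx = X.T @ X / n + reg * np.eye(X.shape[1])  # autocovariance of x
            Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])  # autocovariance of y
            Cxy = X.T @ Y / n                             # cross-covariance
            # Eigenvalue equation: Cxx^-1 Cxy Cyy^-1 Cyx w_x = rho^2 w_x
            M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
            evals, Wx = np.linalg.eig(M)
            order = np.argsort(-evals.real)               # sort by correlation magnitude
            rho = np.sqrt(np.clip(evals.real[order[:keep]], 0.0, 1.0))
            Wx = Wx[:, order[:keep]].real
            Wy = np.linalg.solve(Cyy, Cxy.T @ Wx)         # w_y derived from w_x
            Wy /= np.linalg.norm(Wy, axis=0, keepdims=True)
            return rho, Wx, Wy

    Sorting the eigenvalues and keeping only the top projections implements the sorter/reducer's dimensionality reduction described above.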
  • in some embodiments a kernel canonical correlation analysis (KCCA) may be performed on the at least two data signals.
  • the Eigenanalysis may be applied only to some of the dimension variables of the sensed values.
  • not all of the EEG frequency bands may be input to the Eigenanalyzer 425.
  • in some embodiments some EEG frequency sub-bands (for example the δ and θ bands) are not processed and only the low frequency α and high frequency β sub-band data are processed.
  • some regions of the face may not be input to the Eigenanalyzer 425.
  • data from the camera relating to the region of the eyes and lips may be important for emotion recognition and may be processed in the Eigenanalyzer 425, whilst data relating to the portions of the face associated with the nose and ears may be discarded prior to input.
  • the trainer/classifier 429 receives the sorted basis vectors/values and applies these values.
  • the trainer/classifier 429 determines whether or not the apparatus and the context sensing block 101 is operating in a training mode of operation or a classifying mode of operation.
  • the selection of a training/classifying mode of operation is shown in Figure 6c by step 577.
  • when the trainer/classifier 429 operates in a training mode of operation, the data is linked or associated with a feature in order to determine the most/least correlated dimensionality sensor data which is associated with the feature.
  • the training may be carried out using any suitable training operation such as a Gaussian mixture model (GMM) or an artificial neural network (ANN) training operation in order to train specific target classes for the feature being input at that particular time.
  • the training, or linking of a feature with the most/least correlated dimensions of the combined sensor data, is shown in Figure 6c by step 579. If the trainer/classifier 429 is operating in a classifying mode of operation, the trainer/classifier 429 is configured to input the most/least correlated dimensions to attempt to determine a feature class output from the trained classifier.
  • the training operation may initially comprise operating the system where the user is at various levels of fatigue or stress (or simulated fatigue or stress) to produce trained target classes.
  • These feature labels may be recalled or determined by the trainer/classifier 429 when the sensors produce similar signals; in such embodiments it operates in a classifier mode of operation where the output of the feature extractor/classifier 405 identifies the current fatigue or stress level of the user based on the EEG and EMG information from the sensors.
  • the operation of "data mining" the previous trained database to determine the feature class is shown in Figure 6c by step 578.
  • the overall training or classifying feature operation is shown in the summary Figure 6a as step 507.
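  • As one way to realise the GMM-based train/classify modes mentioned above, a sketch using scikit-learn follows; the class structure, component count and method names are assumptions for illustration:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        class GmmFeatureClassifier:
            """One GMM per feature class; classification picks the best-scoring class."""

            def __init__(self, n_components=4):
                self.n_components = n_components
                self.models = {}

            def train(self, features, label):
                # features: (n_samples, n_dims) correlated-dimension vectors for one class.
                self.models[label] = GaussianMixture(self.n_components).fit(features)

            def classify(self, feature):
                scores = {label: m.score_samples(np.atleast_2d(feature))[0]
                          for label, m in self.models.items()}
                return max(scores, key=scores.get)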
  • the human player 721 is shown monitored by a sensor bank 16 comprising a brain interface sensor 701, a camera for gaze tracking located near to the eye 703, a muscle movement sensor (sEMG) 705, and a skin conductance (galvanic skin response, GSR) sensor 707.
  • the multi-sensor feature classifier 751 and learning processor 735 may be configured to provide the functionality of the pre-processor 403, the Eigenanalyzer 425 and the canonical correlation sorter/reducer 427 (in the multi-sensor feature classifier 751) and the trainer/classifier 429 (in the learning processor 735).
  • the multi-sensor feature classifier 751 receives the sensor data and performs a pre-processing on the sensor data according to a suitable pre- processing operation (for example the EEG and sEMG sensor data may be processed in a manner described previously).
  • the pre-processing of the sensor data is shown in Figure 8 by step 603. Furthermore the multi-sensor feature classifier 751 may perform a pre-feature classification analysis to output selected eigenanalysis vectors.
  • the eigenanalysis outputs may be passed to the learning processor 735 which may also receive the output of a performance evaluator 737.
  • the performance evaluator 737 generates in some embodiments a game play evaluation parameter, for example indicating the speed at which the user is completing a specific level of the game, in order that the output sensor values may be trained against a game evaluation score.
  • the evaluation of game play for example is shown in Figure 8 by step 604. Furthermore the evaluation of the performance or game play within the performance evaluator 737 may be determined in embodiments based on information passed from the user interface and game processor 731. The generation or reception of game play data is shown in Figure 8 by step 602. Thus for example the user interface may output a points score to the performance evaluator, which in some embodiments may determine how quickly the player is increasing the score and output feature labels such as "too easy", "too difficult" and "just right".
  • the learning processor 735 in some embodiments may train or correlate the game play performance parameter label passed from the performance evaluator 737 against the multi-sensor feature classifier parameters passed by the multi-sensor feature classifier 751.
  • the operation of the performance evaluator 737 and the learning processor 735 are carried out only during the training phase of the system and may be disabled or bypassed when the learning processor 735 determines that it is sufficiently well trained.
  • the learning processor may in some embodiments be configured to output the feature most similar to the learned sensory inputs. Therefore if the learning processor determines sensor values which are similar to those experienced at the learnt "too easy" level, the learning processor may output a "too easy" value on a GameDifficulty label.
  • the output of the learning processor/multi-sensor feature classifier is then passed in some embodiments to the adaption processor 733.
  • the adaptation processor 733 may in some embodiments of the application determine whether or not the game play needs to be changed.
  • the feature parameter output may indicate, based on the sensor information, that the human player is fatigued or stressed at a specific level and is likely to give up, or that the player is bored and the difficulty is too low to keep the player sufficiently interested in playing the game, in which case the adaption processor determines that the game level difficulty needs to be changed.
  • the operation of determining whether or not the game needs to be changed is shown in Figure 8 by step 609.
  • the operation passes back to the receiving of sensor data and optionally receiving game play data.
  • the adaption processor 733 may furthermore determine which change is required in the game. For example where the game play has been indicated as too stressful, the game play can be simplified, or where the game play is determined to be too simple, the difficulty level can be increased.
  • the adaption for changing the game is then applied by the adaption processor which passes control information to the game processor 731.
  • the application of the adaption to the game is shown in Figure 8 by operation 613.
  • the multi-sensor feature computation may thus be used to attempt to determine a psychophysiological signal, for example the galvanic skin response (GSR).
  • the EEG, for example, may also detect balance, attention and other mental states.
  • the EMG, for example, may otherwise detect gestures and muscle tension.
  • the camera, and thus the gaze tracking sensor, may detect eye movements which denote tension, liking or dislike.
  • the combination of all of these may be used in embodiments of the application to produce a robust estimation of the user's mental and physical state by combining the measurements from each of the psychophysiological signals.
  • the learning processor produces an initial data set by the game processor applying initial stimuli such as those provided by the international affective picture system (IAPS) and international affective digital sound (IADS).
  • real tasks may also be performed to determine the user's physical and mental state and, using these real tasks, an initial training phase may be determined.
  • streams may be grouped to form heterogeneous streams, for example EEG and eye movement in one stream and sEMG and GSR data in the other stream.
  • the groupings may be decided or determined based on which combination provides a maximum correlation.
  • a further example of a trainer/classifier is one wherein the trainer/classifier is configured to classify/train the feature using a committee of lightweight classifiers.
  • the trainer/classifier 429 comprises more than one classifier configured to receive the feature vector output from the canonical correlation sorter/reducer 427.
  • the trainer/classifier comprises a first classifier (Classifier 1) 901, a second classifier (Classifier 2) 903 and a third classifier (Classifier 3) 905.
  • the classifiers 901, 903 and 905 may each be any suitable known classifier, such as a Gaussian Mixture Model (GMM) classifier.
  • Each of the classifiers on receiving the feature vectors is configured to output a feature value to a Fusion Filtering processor 907.
  • the trainer/classifier in some embodiments comprises a Fusion Filtering processor 907.
  • the Fusion Filtering processor 907 is configured, on receiving the feature values, to carry out at least one of a fusion action and a filtering action.
  • a Fusion action is one where the Fusion Filtering processor 907 is configured to output the feature value for which there is a majority voting decision.
  • a Filtering action is one where the Fusion Filtering processor 907 is configured to output the feature value which has been median filtered.
  • the output of the trainer/classifier is thus the feature value output by the Fusion Filtering processor.
  • the inventor has produced a cognitive load feature determination where the feature vector inputs are determined from EEG and heart rate monitoring.
  • the Classifiers may output the feature (cognitive load) as being high or low, which is then Fusion Filtered to output a cognitive load state sequence, shown in Figure 9 as HHHHHLLLLL, where H indicates a high cognitive load and L indicates a low cognitive load (a code sketch of this committee fusion and filtering follows this list).
  • user equipment may comprise a user interface such as described in embodiments above.
  • the terms apparatus, electronic device and user equipment are intended to cover any suitable type of wireless user equipment, such as mobile telephones, portable data processing devices or portable web browsers.
  • the various embodiments of the invention may be implemented in hardware or special purpose circuits, software, logic or any combination thereof.
  • some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto.
  • While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
  • an apparatus comprising: a signal processor configured to capture at least two data signals from at least two sensors; a feature extractor configured to determine a correlation data set for the at least two data signals; and a feature classifier configured to assign a feature class value dependent on the correlation data set.
  • the feature extractor may comprise a buffer configured to align the at least two data signals; and an eigenanalyser configured to project at least one of the at least two data signals onto a first basis vector to generate a first sample space; and project a further of the at least two data signals onto a second basis vector to generate a second sample space; wherein the first and the second basis vectors are chosen to maximise the correlation data set defined by the two sample spaces.
  • the feature extractor may further comprise a kernel canonical correlator configured to perform a kernel canonical correlation on the at least two data signals.
  • the feature extractor may further comprise a correlation sorter configured to sort the correlation data set in order of magnitude of correlation; and discard at least one dimension of the data set dependent on the magnitude of correlation.
  • the apparatus may comprise a resource controller configured to perform a context related operation dependent on the feature class value.
  • the context related operation may comprise setting a computer game difficulty level.
  • the feature classifier may comprise a feature trainer configured to perform a training operation to associate a feature class to the correlation data set.
  • the signal processor may be configured to receive from a first sensor type at least one data signal stream; and receive from a second sensor type at least one further data signal stream.
  • the signal processor may comprise a filter configured to filter the at least two data signals prior to determining a correlation data set for the at least two data signals.
  • the embodiments of this invention may be implemented by computer software executable by a data processor of the mobile device, such as in the processor entity, or by hardware, or by a combination of software and hardware. Further in this regard it should be noted that any blocks of the logic flow as in the Figures may represent program steps, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions.
  • the software may be stored on such physical media as memory chips or memory blocks implemented within the processor, magnetic media such as hard disks or floppy disks, and optical media such as, for example, DVD, the data variants thereof, and CD.
  • a computer-readable medium encoded with instructions that, when executed by a computer perform: capturing at least two data signals from at least two sensors; determining a correlation data set for the at least two data signals; and assigning a feature class value dependent on the correlation data set.
  • the memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory.
  • the data processors may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASIC), gate level circuits and processors based on multi-core processor architecture, as non-limiting examples.
  • Embodiments of the inventions may be practiced in various components such as integrated circuit modules.
  • the design of integrated circuits is by and large a highly automated process. Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate. Programs, such as those provided by Synopsys, Inc. of Mountain View, California and Cadence Design, of San Jose, California automatically route conductors and locate components on a semiconductor chip using well established rules of design as well as libraries of pre-stored design modules.
  • the resultant design in a standardized electronic format (e.g., Opus, GDSII, or the like) may be transmitted to a semiconductor fabrication facility or "fab" for fabrication.
  • the term 'circuitry' refers to all of the following:
  • (a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry);
  • (b) combinations of circuits and software (and/or firmware), such as: (i) a combination of processor(s) or (ii) portions of processor(s)/software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions; and
  • (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
  • this definition of 'circuitry' applies to all uses of this term in this application, including any claims.
  • the term 'circuitry' would also cover an implementation of merely a processor (or multiple processors) or portion of a processor and its (or their) accompanying software and/or firmware.
  • the term 'circuitry' would also cover, for example and if applicable to the particular claim element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, or other network device.
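
The committee classification with fusion and filtering described above can be pictured with the following minimal sketch; the classifiers, thresholds and median window length are illustrative assumptions, not the application's specified implementation.

```python
import numpy as np
from collections import Counter
from scipy.signal import medfilt

def fuse_majority(votes):
    """Fusion action: output the feature value backed by a majority vote."""
    return Counter(votes).most_common(1)[0][0]

# Hypothetical committee of three lightweight classifiers, each mapping a
# feature vector to a high ('H') or low ('L') cognitive load decision.
committee = [lambda fv, t=t: 'H' if fv.mean() > t else 'L'
             for t in (0.45, 0.50, 0.55)]

frames = [np.random.rand(8) for _ in range(10)]   # stand-in feature vectors
fused = [fuse_majority([clf(fv) for clf in committee]) for fv in frames]

# Filtering action: median filter the fused state sequence (H=1, L=0).
numeric = np.array([1 if v == 'H' else 0 for v in fused])
smoothed = medfilt(numeric, kernel_size=5).astype(int)
print(''.join('H' if v else 'L' for v in smoothed))  # e.g. HHHHHLLLLL
```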

Abstract

An apparatus comprising at least one processor and at least one memory including computer program code the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: capturing at least two data signals from at least two sensors; determining a correlation data set for the at least two data signals; and assigning a feature class value dependent on the correlation data set.

Description

Method and apparatus for assigning a feature class value
The present invention relates to apparatus for human computer interaction. The invention further relates to, but is not limited to, apparatus for human computer interaction in mobile devices.
Electronic apparatus such as mobile phones, portable computers, and audio and media players have become part of our daily life. Interaction with these apparatus has become as common as interaction with other humans. As such there has been significant research in the field of human computer interaction (HCI) where apparatus is designed to be more easily used.
Furthermore the interaction with the digital world has moved from the desktop, such as office personal computers (or desktop PC), to mobile apparatus such as telephones, portable computers, and media and audio players.
The typical user interacts with other humans using speech, gestures and other forms of communication (including non-verbal signals) in a full-duplex mode. In other words person-to-person communication, whether verbal or non-verbal, occurs in two directions simultaneously. These parallel communications are interpreted in real time based on context, sensory perception and cognition.
However traditional Human Computer Interaction (via user interfaces) is typically designed to work in a half duplex mode; in other words the user may interact with the computer or apparatus by receiving information from the apparatus, then inputting a response to the apparatus, which in turn prompts the apparatus to display a response to the input and so on. A practical half-duplex user interface is the common window-icon-menu-pointing (WIMP) interface, for example the WIMP interfaces used as part of desktop operating systems.
Further research in human computer interaction has been aimed at designing user interfaces to minimise the mechanics of manipulation and cognitive load between intent and the execution of that intent, in other words designing a user interface in such a way that the user may focus on the task at hand and not on the technology for specifying the task. An example of this has been the improvement of file manipulation and selection tasks using icons rather than typed commands. In previous user interfaces, significant knowledge of the system was expected. For example the opening of a folder or directory in early desktop operating systems required knowledge of specific text commands which had to be entered in a syntactically correct order whereas more modern user interfaces only require the user to "click" on a visual representation of a folder to open that folder or directory.
Recently "human centred" user interface designs for human computer interaction (HCI) has been emerging as an alternative to previous "computer centred" user interface designs to attempt to further simplify the mechanics of manipulation and cognitive load between intent and the execution of that intent. Research into human centred user interface designs focuses on the use of human communication and context models to design the user interface to be more naturally operated. Computer centred designed user interface on the other hand expect the user to adapt to the new technology. A context sensitive user interface for example is one which attempts to seamlessly adapt to various user based context, for example applications may respond to loadings placed on the user such as task, physical, cognitive and social loadings to run the device. To allow this to occur the user interface has to be designed to be multi-modal and multi-sensory, in other words is capable of being operated in various modes and furthermore has multiple sensors to detect different user inputs which allows the user interface to "sense" various contexts or loads on the user.
The different input and output modalities may allow the user interface to choose the optimal modality of a user interface for a given context without direct selection from the user. An example of adapting the user interface based on context is, for example, to change the volume of the ring tone or selection of a vibration function in a mobile communications device. By sensing the ambient audio volume level from the microphone, the ring tone volume can be adjusted so that the user is able to hear the mobile communications device when called. Similarly a further example is the enabling or disabling of a speech-to-text function based once again on the detected volume of ambient noise. Thus the context based user interface may be designed to disable the speech-to-text function if the ambient audio volume level detected from the microphone is particularly high and therefore likely to have many errors.
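As a toy illustration of such a context rule, the sketch below maps an ambient level measured by the microphone to ring tone settings; the thresholds, units and function name are assumptions made for illustration only, not values taken from the application.

```python
def adapt_ring_settings(ambient_db: float) -> dict:
    """Map an ambient audio level (dB SPL) to ring tone settings.

    Thresholds are illustrative assumptions.
    """
    if ambient_db > 80:   # noisy environment: loud ring plus vibration
        return {"ring_volume": 10, "vibrate": True, "speech_to_text": False}
    if ambient_db < 35:   # quiet environment: a soft ring suffices
        return {"ring_volume": 3, "vibrate": False, "speech_to_text": True}
    return {"ring_volume": 6, "vibrate": False, "speech_to_text": True}

print(adapt_ring_settings(85))  # {'ring_volume': 10, 'vibrate': True, ...}
```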
Further research has been made into context sensitive user interface design to attempt to reduce the cognitive load on the user by guiding the attention of the user to only the resources required to complete the current task. For example where the task is finding a railway station and timetable for a next train automatically when the user is lost and running late in an unfamiliar city, the user interface may open up a map and routing application as well as an internet browser page automatically directed to the local railway station timetable page rather than the user having to manually find the application and then search the internet for the relevant timetable information.
A potential application area for context sensitive user interfacing is in gaming. For example the level of interest or difficulty in the game may be adapted to try to keep the player "in the zone". Typically this is attempted only to "level" the difficulty so that the player does not complete the game too quickly. However in some applications the level of anxiety or boredom of the player may be detected using various bio-sensors. For example using a brain-computer interface (BCI) to monitor the bio-signals and other sensors such as heart rate (using an electrocardiogram ECG), blood pressure, posture and muscle movement (using a surface electromyogram sEMG), the user may be monitored directly and the game controlled by changing the level or difficulty of the game being played. A reference to games that take into account biofeedback can be found in "Affective Videogames and Modes of Affective Gaming: Assist Me, Challenge Me, Emote Me", by K. M. Gilleade, A. Dix, J. Allanson, in Proceedings of DiGRA 2005 Conference: Changing Views - Worlds in Play. However there is significant difficulty in the design of these user interfaces, and in particular practical context model user interface designs are limited because of the inherent complexity of sensing one or only a few sensors and applying each to a limited context. Such systems are very susceptible to any noise which affects one particular sensor or sensor type, as this may cause substantial errors in user interface interaction. For example a muscle monitoring gaming control interface would, depending on the electrode contact quality, be susceptible to electrical noise which may render the game difficult if not impossible to play as the difficulty changed suddenly in response to the electrode noise.
Furthermore, whilst there has been research on assessing the cognitive context of the user using psychophysiological signals and their application to task adaption, where more than one sensor or sensor variable is used these approaches typically apply a uniform weighting to the sensors or sensor variables. For example the feature extraction or computation stage in many of the research examples uses analysis performed on the concatenated data stream vector from the sensors, or in some examples analysis performed on individual data stream vectors which are then concatenated afterwards. The uniform weighting approach, although simple, is also liable to produce errors and requires valuable processing capacity monitoring data, and in particular monitoring non-significant sensor data, during interface control operations. For example, when combining speech and facial images for an emotion recognition application not all of the regions of the face are equally important for detecting an emotion. Image data from the whole of the face being processed would lead to inefficient analysis as the equal weighting applied to the image data reflecting the ears would be wasted processing.
This invention proceeds from the consideration that, when using multiple sensors to analyse user interface inputs that are correlated and given a context, by using a well designed pre-processing operation on such sensor data streams, which are then jointly analysed, a correlation between various relationships of sensors can be determined. Furthermore by effectively ordering the results of the analysis, specific features may be extracted and used to assist the control and interfacing of the apparatus. Embodiments of the present invention aim to address the above problem.
There is provided according to a first aspect of the invention a method comprising: capturing at least two data signals from at least two sensors; determining a correlation data set for the at least two data signals; assigning a feature class value dependent on the correlation data set.
Determining the correlation data set for the at least two data signals may further comprise: aligning the at least two data signals; projecting at least one of the at least two data signals onto a first basis vector to generate a first sample space; projecting a further of the at least two data signals onto a second basis vector to generate a second sample space; wherein the first and the second basis vectors are chosen to maximise the correlation data set defined by the two sample spaces.
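Written out as the classical canonical correlation objective (a standard formulation given for clarity; the notation below is not quoted from the application), the two basis vectors are chosen to maximise the correlation between the projected sample spaces:

```latex
\rho \;=\; \max_{w_x,\,w_y}\;
\frac{w_x^{\top} C_{xy}\, w_y}
     {\sqrt{w_x^{\top} C_{xx}\, w_x}\,\sqrt{w_y^{\top} C_{yy}\, w_y}}
```

where C_xx and C_yy denote the covariance matrices of the two aligned data signals and C_xy their cross-covariance; the maximising w_x and w_y form the first pair of basis vectors, with further pairs obtained under orthogonality constraints.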
Determining the correlation data set for the at least two data signals may comprise determining a kernel canonical correlation for the at least two data signals.
Determining the correlation data set for the at least two data signals may comprise: sorting the correlation data set in order of magnitude of correlation; and discarding at least one dimension of the data set dependent on the magnitude of correlation.
The method may further comprise performing a context related operation dependent on the feature class value.
The context related operation may comprise setting a computer game difficulty level. Assigning the feature class value dependent on the correlation data may comprise performing a training operation to associate a feature class to the correlation data set. Capturing at least two data signals from at least two sensors may comprise: receiving from a first sensor type at least one data signal stream; and receiving from a second sensor type at least one further data signal stream.
The method may further comprise filtering the at least two data signals prior to determining a correlation data set for the at least two data signals.
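If the pre-filtering is, say, a band-pass operation (an assumption; the application leaves the filter type open), it might be sketched as follows, with illustrative sampling rate and band edges:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def pre_filter(signal: np.ndarray, fs: float, lo: float, hi: float,
               order: int = 4) -> np.ndarray:
    """Band-pass filter one sensor data stream before correlation analysis."""
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

# e.g. retain 20-450 Hz of an sEMG stream sampled at 1 kHz (assumed values)
filtered = pre_filter(np.random.randn(4096), fs=1000.0, lo=20.0, hi=450.0)
```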
The at least two sensors may comprise at least two of: camera sensor for vision perception; microphone sensor for hearing perception; temperature sensor for temperature perception; humidity sensor for humidity perception; tactile sensor for touch and/or pressure perception; electromyography sensor for muscle movement perception; chemical sensor for smell perception; compass for direction perception; satellite positioning sensor for location perception; gyroscope/accelerometer for orientation perception; antenna sensor for signal strength perception; battery sensor for power perception; electroencephalograph for brain activity perception; and pulse monitor, blood pressure sensor, electrocardiograph for heart activity perception.
According to a second aspect of the invention there is provided an apparatus comprising at least one processor and at least one memory including computer program code the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: capturing at least two data signals from at least two sensors; determining a correlation data set for the at least two data signals; and assigning a feature class value dependent on the correlation data set.
The apparatus caused to at least perform determining a correlation data set for the at least two data signals is preferably further caused to perform: aligning the at least two data signals; projecting at least one of the at least two data signals onto a first basis vector to generate a first sample space; projecting a further of the at least two data signals onto a second basis vector to generate a second sample space; and wherein the first and the second basis vectors are chosen to maximise the correlation data set defined by the two sample spaces.
The apparatus caused to at least perform determining a correlation data set for the at least two data signals is preferably further caused to perform determining a kernel canonical correlation for the at least two data signals.
The apparatus caused to at least perform determining a correlation data set for the at least two data signals is preferably further caused to perform: sorting the correlation data set in order of magnitude of correlation; and discarding at least one dimension of the data set dependent on the magnitude of correlation.
The apparatus may be further caused to perform a context related operation dependent on the feature class value.
The context related operation preferably comprises setting a computer game difficulty level.
The apparatus caused to at least perform assigning a feature class value dependent on the correlation data is preferably further caused to perform a training operation to associate a feature class to the correlation data set.
The apparatus caused to at least perform capturing at least two data signals from at least two sensors is preferably caused to perform: receiving from a first sensor type at least one data signal stream; and receiving from a second sensor type at least one further data signal stream. The apparatus may be caused to at least further perform filtering the at least two data signals prior to determining a correlation data set for the at least two data signals. The at least two sensors preferably comprises at least two of: camera sensor for vision perception; microphone sensor for hearing perception; temperature sensor for temperature perception; humidity sensor for humidity perception; tactile sensor for touch and/or pressure perception; electromyography sensor for muscle movement perception; chemical sensor for smell perception; compass for direction perception; satellite positioning sensor for location perception; gyroscope/accelerometer for orientation perception; antenna sensor for signal strength perception; battery sensor for power perception; electroencephalograph for brain activity perception; and pulse monitor, blood pressure sensor, electrocardiograph for heart activity perception.
According to a third aspect of the invention there is provided an apparatus comprising: a signal processor configured to capture at least two data signals from at least two sensors; a feature extractor configured to determine a correlation data set for the at least two data signals; and a feature classifier configured to assign a feature class value dependent on the correlation data set.
The feature extractor may comprise a buffer configured to align the at least two data signals; and an eigenanalyser configured to project at least one of the at least two data signals onto a first basis vector to generate a first sample space; and project a further of the at least two data signals onto a second basis vector to generate a second sample space; wherein the first and the second basis vectors are chosen to maximise the correlation data set defined by the two sample spaces.
The feature extractor may further comprise a kernel canonical correlator configured to perform a kernel canonical correlation on the at least two data signals. The feature extractor may further comprise a correlation sorter configured to sort the correlation data set in order of magnitude of correlation; and discard at least one dimension of the data set dependent on the magnitude of correlation. The apparatus may comprise a resource controller configured to perform a context related operation dependent on the feature class value.
The context related operation may comprise setting a computer game difficulty level.
The feature classifier may comprise a feature trainer configured to perform a training operation to associate a feature class to the correlation data set.
The signal processor may be configured to receive from a first sensor type at least one data signal stream; and receive from a second sensor type at least one further data signal stream.
The signal processor may comprise a filter configured to filter the at least two data signals prior to determining a correlation data set for the at least two data signals.
The at least two sensors may comprise at least two of: camera sensor for vision perception; microphone sensor for hearing perception; temperature sensor for temperature perception; humidity sensor for humidity perception; tactile sensor for touch and/or pressure perception; electromyography sensor for muscle movement perception; chemical sensor for smell perception; compass for direction perception; satellite positioning sensor for location perception; gyroscope/accelerometer for orientation perception; antenna sensor for signal strength perception; battery sensor for power perception; electroencephalograph for brain activity perception; and pulse monitor, blood pressure sensor, electrocardiograph for heart activity perception. According to a fourth aspect of the invention there is provided an apparatus comprising: signal capture means configured to capture at least two data signals from at least two sensors; signal processing means configured to determine a correlation data set for the at least two data signals; and feature classifying means configured to assign a feature class value dependent on the correlation data set.
According to a fifth aspect of the invention there is provided a computer-readable medium encoded with instructions that, when executed by a computer perform: capturing at least two data signals from at least two sensors; determining a correlation data set for the at least two data signals; and assigning a feature class value dependent on the correlation data set.
An electronic device may comprise apparatus as described above. A chipset may comprise apparatus as described above.
Brief Description of Drawings
For better understanding of the present invention, reference will now be made by way of example to the accompanying drawings in which:
Figure 1 shows schematically an electronic device configured to implement embodiments of the application;
Figure 2 shows schematically a context sensitive user interface configuration which may employed in embodiments of the application;
Figure 3 shows a table of example sensors which may be employed in some embodiments of the application;
Figure 4 shows a table of example variables and sensed levels as may be employed in some embodiments;
Figure 5 shows schematically the context sensitive block from the context sensitive user interface shown in Figure 2 in further detail; Figures 6a, 6b and 6c show flow diagrams of the operations of the context sensitive block according to some embodiments;
Figure 7 shows schematically apparatus configured to implement a multi-sensor context sensitive user interface in a gaming application according to some embodiments;
Figure 8 shows a flow diagram of the operation of the multi-sensor context sensitive user interface in a gaming application shown in Figure 7; and
Figure 9 shows a schematic view of a Trainer/Classifier as shown in Figure 5 in further detail.
The following describes apparatus and methods for the provision of noise insensitive context driven user interface configuration (in other words improving signal to noise in context sensitive operations in human computer interaction). In this regard reference is first made to Figure 1 which shows a schematic block diagram of an exemplary electronic device 10 or apparatus, which may incorporate enhanced signal to noise context sensitive user interface performance components and methods. The apparatus 10 may for example be a mobile terminal or user equipment for a wireless communication system. In other embodiments the electronic device may be any audio player, such as an mp3 player or media (mp4) player, equipped with suitable sensors as described below. In other embodiments the apparatus 10 may be any suitable computing equipment requiring user input, for example games consoles, personal computers, personal digital assistants, or any apparatus with displays and/or input devices.
The apparatus 10 in some embodiments comprises a processor 21. The processor 21 may be configured to execute various program codes. The implemented program codes may comprise a context driven user interface enhancement code. The implemented program codes 23 may be stored for example in the memory 22 for retrieval by the processor 21 whenever needed. The memory 22 could further provide a section 24 for storing data, for example data that has been processed in accordance with the embodiments.
The context driven user interface enhancement code may in embodiments be implemented at least partially in hardware or firmware.
The processor 21 may comprise an audio subsystem 14 which may comprise an audio output subsystem where in some embodiments the processor 21 is linked via a digital-to-analogue converter (DAC) 32 to a speaker 33.
The digital to analogue converter (DAC) 32 may be any suitable converter. The speaker 33 may for example be any suitable audio transducer equipment suitable for producing acoustic waves for the user's ears generated from the electronic audio signal output from the DAC 32. The speaker 33 in some embodiments may be a headset or ear worn speaker (EWS) and may be connected to the electronic device 10 via a headphone connector. In some embodiments the speaker 33 may comprise the DAC 32. Furthermore in some embodiments the speaker 33 may connect to the electronic device 10 wirelessly, for example by using a low power radio frequency connection such as demonstrated by the Bluetooth A2DP profile. The apparatus 10 audio subsystem may further comprise an audio input subsystem which in some embodiments further comprises at least two microphones in a microphone array for inputting or capturing acoustic waves and outputting audio or speech signals to be processed according to embodiments of the application. These audio or speech signals may according to some embodiments be transmitted to other electronic devices via the transceiver 13 or may be stored in the data section 24 of the memory 22 for later processing. A corresponding program code or hardware to control the capture of audio signals using the at least two microphones may be activated to this end by the user via a user interface 15. The apparatus audio input subsystem in such embodiments may further comprise an analogue-to-digital converter (ADC) configured to convert the input analogue audio signals from the microphone array into digital audio signals and provide the digital audio signals to the processor 21.
The apparatus 10 may further comprise a display subsystem 32. The display subsystem 32 may comprise any suitable display technology, for example a touch screen display, LCD (liquid crystal display), organic light emitting diode (OLED) display, or an output to a separate display.
The processor 21 is further linked to a transceiver (TX/RX) 13, to a user interface (UI) 15 and to a memory 22.
The user interface 15 may enable a user to input commands or data to the apparatus 10. Any suitable input technology may be employed by the apparatus 10. It would be understood, for example, that the apparatus in some embodiments may employ at least one of a keypad, keyboard, mouse, trackball, touch screen, joystick and wireless controller to provide inputs to the apparatus 10. The user interface may in some embodiments be any suitable combination of input and display technology, for example a touch screen display suitable for both receiving inputs from the user and displaying information to the user may be implemented. Examples of such displays are capacitive, resistive and multi-touch displays configured to react to objects in contact with and neighbouring the display surface.
The transceiver 13 may be any suitable communication technology and be configured to enable communication with other electronic devices, for example via a wireless communication network.
Furthermore the apparatus may comprise sensors or a sensor bank 16. The sensor bank 16 receives information about the environment and the operator of the apparatus in which the apparatus 10 is operating and passes this information to the processor 21 in order to affect the processing of the audio signal and in particular to affect the processor 21. The sensor bank 16 may in some embodiments comprise at least one of the following set of sensors, as shown in Figure 3. It would be understood that sensors other than those shown in Figure 3 may also be implemented as part of the sensor bank.
Figure 3 is divided into three columns. The first column is the "perception" 201 , the second column is the sensor 203 associated with and monitoring the perception 201 and the third column 205 is the modality associated with the sensor and the perception. Thus, for example, the perception of vision may be sensed by a camera or camera module which has as its modality a visual mode of detection.
The sensor bank 16 may thus in some embodiments comprise a camera module. The camera module may in some embodiments comprise at least one camera having a lens for focusing an image on to a digital image capture means such as a charge coupled device (CCD). In other embodiments the digital image capture means may be any suitable image capturing device such as a complementary metal oxide semiconductor (CMOS) image sensor. The camera module further comprises in some embodiments a flash lamp for illuminating an object before capturing an image of the object. The flash lamp is in such embodiments linked to a camera processor for controlling the operation of the flash lamp. In other embodiments the camera may be configured to perform infra-red and near infra-red sensing for low ambient light sensing. The at least one camera may also be linked to the camera processor for processing signals received from the at least one camera before passing the processed image to the processor. The camera processor may be linked to a local camera memory which may store program codes for the camera processor to execute when capturing an image. Furthermore the local camera memory may be used in some embodiments as a buffer for storing the captured image before and during local processing. In some embodiments the camera processor and the camera memory are implemented within the processor 21 and memory 22 respectively. The camera module may in some embodiments be configured to determine the position of the apparatus 10 with regards to the user by capturing images of the user from the device and determining an approximate position or orientation relative to the user. In some embodiments for example, the camera module 101 may comprise more than one camera capturing images at the same time at slightly different positions or orientations. In some other embodiments the camera module may be configured to monitor the user, for example monitor eye movement, or pupil dilation or some other physiological function.
The camera module may in some embodiments be further configured to perform facial recognition on the captured images and therefore may estimate the position of the mouth of the detected face. The estimation of the direction or orientation from the apparatus to the mouth of the user may be applied when the phone is used in a hands-free mode of operation, or in an audio-video conference mode of operation, where the camera image information may be used in some embodiments both as images to be transmitted and also to locate the user speaking in order to improve the signal to noise ratio for the detection of emotion from the mouth shape. The perception of hearing may be detected via the microphone or microphone array sensor as discussed with reference to the audio input subsystem and has an auditory modality.
The perception of temperature may be detected in some embodiments by the apparatus sensor bank 16 comprising a temperature sensor such as a thermister, a thermocouple or any suitable temperature sensing apparatus.
The perception of humidity may be detected in some embodiments by the apparatus sensor bank 16 comprising a humidity sensor.
The perception of touch or pressure may in some embodiments be detected by the apparatus sensor bank 16 comprising at least one tactile sensor. In some embodiments the tactile sensor may be implemented as a touch screen and in some embodiments as part of the display subsystem. The perception of touch or pressure has a modality of tactile nature. The perception of muscle movement may in some embodiments be detected or monitored by the sensor bank 16 comprising at least one electromyographical sensor (EMG sensor). It would be appreciated that in some embodiments surface EMG (sEMG) sensors are used as they are less invasive. The perception of smell may be detected in some embodiments by the sensor bank 16 comprising smell sensors such as chemical detectors configured to detect specific organic or non-organic chemicals. This perception of smell is associated with an olfactory mode of determination. The perception of direction may be detected in some embodiments by the sensor bank 16 comprising a compass such as a solid state compass.
The perception of location may be detected in some embodiments by the sensor bank 16 comprising a satellite positioning estimator or other locating device apparatus.
The perception of orientation may be detected by the sensor bank 16 comprising in some embodiments sensors such as gyroscopes or accelerometers. In some embodiments the sensor bank 16 comprises a position/orientation sensor which may be a gravity sensor configured to output the electronic device's orientation with respect to the vertical axis. The gravity sensor for example may be implemented as an array of mercury switches set at various angles to the vertical with the output of the switches indicating the angle of the electronic device with respect to the vertical axis. In some embodiments the position/orientation sensors may be implemented, as described previously, as a satellite position system such as a global positioning system (GPS) sensor whereby a receiver is able to estimate the position of the apparatus from receiving timing data from orbiting satellites. Furthermore in some embodiments the GPS information may be used to derive orientation and movement data by comparing the estimated position of the receiver at two time instances.
In some embodiments the sensor bank 16 further comprises a position/motion sensor in the form of a step counter. A step counter may in some embodiments detect the motion of the apparatus when mounted on the user as the apparatus rhythmically moves up and down as the user walks. The periodicity of the steps may itself be used to produce an estimate of the speed of motion of the user in some embodiments. In other embodiments the step counter in combination with a known location estimate and an orientation estimate may be implemented as part of a location sensor outputting location estimates. In some embodiments the step counter may be implemented as a gravity sensor. In some further embodiments of the application, the sensor bank 16 may comprise at least one accelerometer configured to determine any change in motion of the apparatus.
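A minimal sketch of the speed-from-periodicity idea, assuming hypothetical step timestamps and an assumed stride length (neither is specified by the application):

```python
import numpy as np

def estimate_speed(step_times_s, stride_m=0.75):
    """Estimate walking speed (m/s) from step timestamps via mean step period."""
    periods = np.diff(step_times_s)      # seconds between successive steps
    step_rate = 1.0 / periods.mean()     # steps per second
    return step_rate * stride_m

print(estimate_speed([0.0, 0.55, 1.12, 1.65, 2.21]))  # ~1.36 m/s
```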
In some other embodiments, the position/orientation sensor may comprise a capacitive sensor capable of determining an approximate distance from the apparatus to the user's head when the user is operating the apparatus. Thus as the apparatus is moved in position relative to the head this motion is detected.
The perception of signal strength may be detected by the sensor bank 16 in some embodiments comprising a signal power or signal strength estimator monitoring a signal received via an antenna and a receiver. The perception of power may be detected by the sensor bank 16 in some embodiments comprising a battery sensor. Thus in some embodiments the instantaneous power consumption of the apparatus may be monitored, and in some embodiments the current battery charge level estimated. Thus an expected battery life before recharge/replacement may be determined in some embodiments. The perception of brain activity may be detected in some embodiments by the sensor bank 16 comprising an electroencephalogram sensor (EEG). Any suitable EEG sensor configuration may be employed.
The perception of heart activity may be detected in some embodiments by the sensor bank 16 comprising at least one of a blood pressure sensor, a pulse monitor or electrocardiogram sensor (ECG).
It is to be understood again that the structure of the electronic device 10 could be supplemented and varied in many ways.
With respect to Figure 4, some examples of context variables which may be employed in embodiments, and which describe monitored characteristics of the apparatus, are shown. The examples shown in Figure 4 are divided into two columns. The first column shows the device context variables 301 and the second column shows the sensed level labels 303 associated with the device context variables 301.
For example a memory device context variable 305 may have sensed labels for Phone.Freespace which identifies the amount of free memory available in the apparatus memory in total, Card.Freespace which identifies the amount of free memory available in any memory card inserted in the apparatus, and RAM.Freespace which identifies the amount of free memory in the apparatus random access memory. Furthermore the memory 305 device context variable 301 may have sensed labels for the Backup.Latest and Restore.Latest labels, which identify the date time stamp for the latest backup indicating when the last user data information backup was carried out and also the latest restore point of the operating system respectively. Furthermore Figure 4 shows device context variables 301 for Phone Settings variables 307 where in some embodiments there may be sensed labels such as Internet.Available which indicates if there are settings for internet connection on the apparatus, SMS.Available which indicates if there are settings for Short Message Service applications, Email.Available which indicates if there are settings for email applications, GPRS.Available which indicates if there are settings for General Packet Radio Service applications, and WAP.Available which indicates if there are settings for Wireless Access Protocol applications. Furthermore as shown in Figure 4 the apparatus context variables 301 may in some embodiments comprise FileExist variables which may have sensed labels for Created, Deleted and Exists. The Created sense label indicates when the file was first saved or later amended. Deleted indicates when the file was deleted or if it has been deleted. Exists indicates whether or not the file currently exists.
As also shown in Figure 4, in some embodiments there may be a device context variable 301 called Process 311 which has a sense label of Crash which indicates whether or not the current process is in action or has crashed. In some embodiments there may be a device context variable 301 called Application which may in some embodiments have sense labels called Installed."ApplicationName" indicating the application name as used or displayed by the apparatus, and Installed.Widsets and Installed.Ngage which indicate the installed component libraries used or available.
In some further embodiments there may be device context variables 301 called Messages 315 which include sensed labels called SMS.Sent which indicates when (and if) the SMS has been correctly sent, and SMS.Failed which indicates when (and if) the SMS message has failed to be sent. Similarly the message variable may comprise in some embodiments other sense labels such as MMS.Sent and MMS.Failed which are similar to the SMS labels but directed to multimedia message service messages. Furthermore in some embodiments there may be sense labels 303 of Email.Sent and Email.Failed which carry similar information but relate specifically to electronic mail messages.
Some further embodiments may comprise a device context variable 301 called NetworkInfo 317 which may have a sense label CellId identifying the current cell identification value within which the apparatus is communicating.
The apparatus 10 may in some embodiments comprise a context variable 301 called NetworkTraffic 319 which may have labels called Downlink.Bandwidth and Uplink.Bandwidth which contain values associated with the current downlink and uplink bandwidth the apparatus is currently experiencing.
As shown in Figure 4, some embodiments may comprise a device context variable 301 called GPS 321 which may have sense labels called GPS.Coordinate which indicates the apparatus current location according to the GPS positioning estimate, and GPS.SignalStrength which indicates the GPS signal strength to provide a confidence estimate of how accurate the GPS location estimate is.
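The sensed labels of Figure 4 can be pictured as a nested key/value store; the sketch below is purely illustrative of the naming scheme, with made-up values, and is not an API of the apparatus:

```python
# Illustrative snapshot of device context variables and their sensed labels.
device_context = {
    "Memory": {"Phone.Freespace": 512_000, "Card.Freespace": 2_048_000,
               "RAM.Freespace": 64_000, "Backup.Latest": "2010-01-12T09:30"},
    "NetworkInfo": {"CellId": "310-260-1234"},
    "NetworkTraffic": {"Downlink.Bandwidth": 384, "Uplink.Bandwidth": 64},
    "GPS": {"GPS.Coordinate": (60.17, 24.94), "GPS.SignalStrength": 0.8},
    "Battery": {"Level": 0.65, "Charger": False},
}

def sensed(variable: str, label: str):
    """Look up one sensed label, e.g. sensed('GPS', 'GPS.SignalStrength')."""
    return device_context[variable][label]
```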
In some further embodiments there may be a context variable 301 called Battery 323 which has sense labels called Level, Charger and UsagePattern which indicate respectively the battery current level or current power consumption level, the detection of whether or not a charger is in use, and a power usage pattern which could predict the battery life. Furthermore in some embodiments there may be a device context variable 301 called Application.State 325 which may have sense labels called Foreground.Application indicating a current application being run in the "foreground", Background.Application indicating an application being run in the background, and furthermore a Foreground.View label indicating what view is to be displayed in the foreground for the application currently being run in the foreground. It would be appreciated that the schematic structures described in Figures 2, 5, 6a and 7 and the method steps in Figures 6b, 6c and 8 represent only a part of the operation of a complete context sensitive user interface comprising some embodiments as exemplarily shown implemented in the electronic device shown in Figure 1.
With respect to Figure 2, a schematic overview of components which may form at least part of a context sensitive user interface in the apparatus embodiments shown in Figure 1 is described in further detail. The processor 21 comprises in some embodiments the user interface configuration processor 102, which is configured to communicate with a context processor 100. The context processor 100 in some embodiments further comprises a context engine 103 which is configured to receive information from the context sensing block 101 and output data to the resource control block 105, and further may communicate with the user interface configuration processor 102. Furthermore the context sensing block 101 is configured in some embodiments to receive sensor information from the sensor bank 16 and communicate sensed context information to the context engine 103.
The resource control block 105 in some embodiments is configured to both input and output resource control information from the resources 33, in other words to control the apparatus in light of the sensed information.
The sensors or sensor bank 16 thus in some embodiments provide information to the context sensing block 101, where the context sensing block 101 may be configured in some embodiments to process the sensor information and output a specific context label which may be generated (as described in further detail later) from a combination of the sensed information. The context engine 103 in some embodiments may, dependent on the context sense label parameter or value, determine an action or operation to be carried out (or stop an action from occurring). For example, as will be described later, the context engine may control the difficulty level of a computer game. The context engine 103 may interact with the user interface configuration processor 102 which is configured to generate or recall configuration details required by the context engine in order to control the interaction. Using the computer game example discussed above, the user interface configuration processor 102 may comprise information relating to the various levels of difficulty and game parameter information relating to the levels of difficulty which are recalled by the context engine 103. The context engine 103 may further in embodiments be configured to output to the resource control block 105 specific information for controlling resources such as memory, display memory and audio memory. The resource control block 105 may receive this resource control information and specifically drive the resources using this information. For example using the gaming example, the resource control block controls the display and audio subsystem memory and configuration. The resources, for example the audio and video subsystems, are then controlled to display to the user specific information which is context driven.
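The sensor bank to context sensing to context engine to resource control flow might be outlined, under assumed class and method names chosen for illustration only, roughly as:

```python
class ContextSensing:
    """Turns raw sensor data into a context label (e.g. 'too_easy')."""
    def classify(self, sensor_data: dict) -> str:
        # pre-process, extract correlated features, classify (details elsewhere)
        return "too_easy"  # placeholder decision

class ContextEngine:
    """Maps a context label to an action using UI configuration data."""
    def __init__(self, ui_config: dict):
        self.ui_config = ui_config
    def decide(self, label: str) -> dict:
        return self.ui_config.get(label, {})

class ResourceControl:
    """Applies the decided configuration to the device resources."""
    def apply(self, config: dict) -> None:
        print("applying resource configuration:", config)

# Hypothetical wiring for the game difficulty example:
engine = ContextEngine({"too_easy": {"difficulty": "+1"},
                        "too_difficult": {"difficulty": "-1"}})
label = ContextSensing().classify({"eeg": [], "semg": []})
ResourceControl().apply(engine.decide(label))
```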
With respect to Figure 5, a schematic view of the context sensing block 101 is shown in further detail. The context sensing block 101 is shown connected to the sensor bank 16. Furthermore the context sensing block 101 is further connected to the context engine 103. The context sensing block 101 comprises a pre-processor 403 and feature extractor/classifier 405. In some embodiments there may be a pre-processor 403 allocated to each of the sensors so that each sensor is associated with a pre-processor which outputs pre-processed sensor data to at least one feature extractor/classifier 405. In some embodiments, there may be more than one feature extractor/classifier 405, each feature extractor/classifier handling at least two of the different pre-processed sensor data types.
In some embodiments the pre-processor 403 may handle more than one type of sensor data. In such embodiments the pre-processor comprises a pre-processor controller 410 configured to configure the pre-processor 403 to handle each of the types of data and output a specific sensor data type of a predefined variable configuration to the feature extractor/classifier 405. The pre-processor 403 as shown in Figure 5 may comprise components such as a digitiser 411, a time-to-frequency domain converter 413, and a sub-band filter 415.
The digitiser 411 may be configured to receive the sensor information and convert an analogue sensor information input to a digital format. Any suitable analogue to digital conversion may be implemented within the digitiser 411. Furthermore the digitiser may be configured to be controlled by the pre-processor controller 410 in order to output sensor information in a form acceptable for further pre-processing. The time to frequency domain converter 413 may in some embodiments comprise a Fast Fourier Transform (FFT), a Discrete Fourier Transform (DFT), a Discrete Cosine Transform (DCT), a Modified Discrete Cosine Transform (MDCT) or any suitable time-to-frequency domain converter. The time-to-frequency domain converter thus in these embodiments is configured to receive time domain information, for example digitised electroencephalographic (EEG) sensor signal data, and output frequency coefficient values representing the relative frequency components of the EEG signals.
In some embodiments the pre-processor controller 410 may furthermore control the time to frequency domain converter 413 in terms of the length of sample period, the number of samples input, the number of frequency outputs, etc.
The sub-band filter 415 may be configured to receive from the time to frequency domain converter the frequency coefficient values and output from the frequency coefficient values specific sub-band activity labels. The sub-band output from the sub-band filter 415 may in some embodiments be configured to be contiguous, in other words the frequency range output by the sub-band filter may represent a continuous frequency spectrum, or in some other embodiments be overlapping or separate from each other. In some embodiments, the sub-bands may be linear and uniform in bandwidth whereas in other embodiments the sub-bands output may be non-linear or non-uniform in bandwidth; for example in auditory data the output from the sub-band filter 415 may be based on a psychoacoustic model. The output of the pre-processor 403 may be passed to the feature extractor/classifier 405. The feature extractor/classifier 405 may comprise a sample space buffer 421 configured to receive pre-processed data prior to analysis, a basis vector selector/Eigenanalyzer 425 which in some embodiments receives the sample space buffer data and outputs a series of basis vectors reflecting correlation between the sensor type input data, a canonical correlation sorter/reducer 427 which is configured in some embodiments to receive the information relating to the correlation of the data sets and sort, reorder, or reduce the sensor information, and the trainer/classifier 429 which in some embodiments is configured to train or classify the correlation data to indicate a specific context label. Furthermore the feature extractor/classifier may comprise a controller 420 which is configured in some embodiments to control the sample space buffer 421, the basis vector selector/Eigenanalyzer 425, the canonical correlation sorter/reducer 427 and the trainer/classifier 429.
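A compact sketch of the eigenanalysis and canonical correlation sorting/reduction stages, using scikit-learn's CCA as a stand-in (the application does not mandate any particular library, and the dimensions and discard threshold below are assumptions):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Aligned sample spaces from two sensor streams (rows = time windows).
X = np.random.randn(200, 16)   # e.g. sEMG sub-band powers per window
Y = np.random.randn(200, 8)    # e.g. EEG sub-band powers per window

cca = CCA(n_components=8)
Xc, Yc = cca.fit_transform(X, Y)   # project onto paired basis vectors

# Correlation magnitude of each canonical dimension pair.
corrs = np.array([np.corrcoef(Xc[:, i], Yc[:, i])[0, 1] for i in range(8)])

# Sort by magnitude of correlation and discard weakly correlated dimensions.
order = np.argsort(-np.abs(corrs))
keep = order[np.abs(corrs[order]) > 0.3]   # illustrative threshold
features = np.hstack([Xc[:, keep], Yc[:, keep]])  # input to trainer/classifier
```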
Correlations in some embodiments can be computed between psychophysiological signals and device/application status. For example, if an application crashes or the phone rings, the psychophysiological signals indicate a response corresponding to the user registering or ignoring the event.
The operation of the context sensing block for an example in which muscle movement and brain activity are sensed is described in further detail with respect to Figures 6a, 6b and 6c. Figure 6a, for example, shows an overview of the example operation. The surface electromyography (sEMG) sensor captures the muscle movement electrical data, which may be in the form of at least one sensed muscle movement sensor location, and outputs this information to the pre-processor. The operation of sensing the muscle movement is shown in Figure 6a by step 501a. The pre-processor 403 may as shown in Figure 5 comprise a digitiser 411 configured to receive the surface EMG analogue data and digitise this data. The output digitised sEMG signal may be passed to a sEMG Fast Fourier Transformer (FFT) 413. The operation of digitising the sEMG data is shown in Figure 6b by step 553.
The time-to-frequency domain conversion of the sEMG in this example may be a FFT. However in other embodiments the time-to-frequency domain converter may comprise a short-time Fourier transformer or wavelet packet transformer in order to output a frequency domain version of the sEMG data. The operation of performing a FFT on the sEMG data is shown in Figure 6b by step 555.
Furthermore in some embodiments the frequency domain sEMG data may then be sub-band filtered or processed to generate a power estimation for sub-bands of the FFT sEMG data. The power estimation of FFT sEMG data is shown in Figure 6b by step 557.
Thus in some embodiments each of the sensed positions may output at least one frequency band power estimation value for the sEMG for a specific period which may indicate locations and timings of specific muscle activity.
The pre-processing of the sEMG is shown in the overview figure of Figure 6a by the step 503a. The pre-processed sEMG data, in other words an estimation of power of each of the sensed locations for at least one frequency sub-band may then be output in embodiments of the application to the feature extractor/classifier 405.
Furthermore, this example is described with respect to further sensor information from brain activity. In this example, the brain activity perception is indicated by an electroencephalogram (EEG) signal; in other words the electrical activity at the surface of the scalp is monitored to determine the electrical activity of the cortical area. The EEG sensors in some embodiments generate EEG signals dependent on the electrical fields detected and output this analogue EEG data for each of the sensors to the pre-processor 403. The generation/capture of EEG data streams is shown in the overview figure of Figure 6a by step 501b and in the more detailed pre-processing operations figure of Figure 6b by step 561.
The pre-processor may in some embodiments receive the EEG signals and digitise them using the digitiser 411. The output digital EEG signals may then be output to a further Fast Fourier Transformer (or any suitable time-to-frequency domain converter) 413. The operation of digitising the EEG signals is shown in Figure 6b by step 563.
The time-to-frequency domain converter, or in this example the further Fast Fourier Transformer 413, may then in some embodiments convert the time domain EEG signals from each of the sensors to a frequency domain representation of these signals. As described previously, any suitable time-to-frequency domain conversion may be chosen or implemented within the time-to-frequency domain converter 413. The operation of applying a FFT to the EEG signals to generate frequency domain EEG coefficients is shown in Figure 6b by step 565. The frequency domain EEG coefficients are then passed to the sub-band filter 415. The sub-band filter may in some embodiments then perform a sub-band analysis of each frequency domain coefficient to output values for brain activity in "known" frequency ranges. The filtered sub-band coefficient values may then be output to the feature extractor/classifier for each of the sensed locations. The sub-band filtering of the EEG data is shown in Figure 6b by step 567.
With respect to the EEG sensed data, the electrodes may be positioned according to a system in which a headset locates a plurality of electrodes configured to detect electrical fields at the surface of the scalp when the headset is applied. The EEG signals detected by the headset may have a range of characteristics, but for the purposes of illustration typical characteristics are as follows: amplitude 10 to 4000 microvolts, frequency range 0.16 to 256 Hz and sampling rate 128 to 2048 Hz. The data samples may be further conditioned by the pre-processor in some embodiments to reduce possible noise, including external interference introduced in signal collection, storage and retrieval. For example in some embodiments the pre-processor filters the captured EEG signals using a notch filter to remove power line signal frequencies at about 50 to 60 Hz and using a low pass filter configured to remove high frequency noise originating from switching circuits within the EEG acquisition hardware. Furthermore in some embodiments, the pre-processor applies a further filtering of the captured EEG signal by using a high pass filter to remove DC components.

The EEG samples may furthermore be divided into equal length time segments within longer epochs where, for example, there are seven time segments of equal duration within an epoch (however in other embodiments the number and length of time segments may be altered). Furthermore in some embodiments the time segments may not be of equal duration and may or may not overlap within an epoch. The length of each epoch in some embodiments may vary dynamically depending on events in the detection system. However, in general an epoch is selected to be sufficiently long that a change in mental state, if one occurs, can be reliably determined or detected. In some embodiments the EEG signal may be pre-processed into a differential domain that approximates the first derivative of the EEG signal.
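A sketch of the conditioning described above, assuming the scipy signal-processing library; the filter orders, the 45 Hz low-pass cut-off and the 256 Hz sampling rate are illustrative assumptions:

    import numpy as np
    from scipy.signal import butter, filtfilt, iirnotch

    def condition_eeg(x, fs=256.0, mains_hz=50.0):
        """Illustrative conditioning of one captured EEG channel."""
        b, a = iirnotch(w0=mains_hz, Q=30.0, fs=fs)    # remove power-line frequency
        x = filtfilt(b, a, x)
        b, a = butter(4, 45.0, btype='low', fs=fs)     # remove switching-circuit noise
        x = filtfilt(b, a, x)
        b, a = butter(2, 0.16, btype='high', fs=fs)    # remove DC components
        return filtfilt(b, a, x)

    def segment_epoch(x, n_segments=7):
        """Divide one epoch into n_segments equal-duration, non-overlapping segments."""
        seg_len = len(x) // n_segments
        return x[:seg_len * n_segments].reshape(n_segments, seg_len)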
In some embodiments seven frequency bands may be output from the sub-band filter for each of the sensors with the following frequency ranges: δ (2 to 4 Hz), θ (4 to 8 Hz), α1 (8 to 10 Hz), α2 (10 to 13 Hz), β1 (13 to 20 Hz), β2 (20 to 30 Hz) and γ (30 to 45 Hz). In some embodiments the power of each of these frequency bands is calculated within the pre-processor.

The operation of the feature extractor/classifier 405 is shown in further detail with respect to the overview figure of Figure 6a and the detailed operations shown in Figure 6c. The feature extractor/classifier 405 in some embodiments receives in the sample space buffer 421 the multivariate data, for example the data associated with the surface EMG power coefficient at each location monitored, and the EEG pre-processed frequency data. The sample space buffer 421 may be configured in some embodiments to synchronise the data received from the pre-processor from each of the different sensor types. For example the sample space buffer 421 may carry out an interpolation or decimation of one of the data signals received. The reception of the multivariate data is shown in Figure 6c by step 571.
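Purely for illustration, the seven sub-bands above may be expressed as band edges, and the sample space buffer's alignment may be sketched as a simple nearest-neighbour resampling; the subband_powers helper sketched earlier would then be applied per electrode. All names here are illustrative assumptions:

    import numpy as np

    # The seven EEG sub-bands above as (low_hz, high_hz) edges
    EEG_BANDS = [(2, 4), (4, 8), (8, 10), (10, 13), (13, 20), (20, 30), (30, 45)]

    def synchronise(a, b):
        """Sample space buffer alignment: resample two numpy streams to a
        common length by decimating the faster one (nearest-neighbour)."""
        n = min(len(a), len(b))
        take = lambda x: x[np.linspace(0, len(x) - 1, n).round().astype(int)]
        return take(a), take(b)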
The basis vector selector/Eigenanalyzer 425 thus receives the synchronised data from each sensor type at the same time and performs an analysis to attempt to find basis vectors such that the correlations of the projections of the variables onto the basis vectors are maximised.
It is possible to denote the multidimensional variable for the pre-processed muscle movement sensor data (sEMG) as X and the multidimensional variable for the brain activity (EEG) signal as Y. Furthermore it is possible to define a sample set S = ((x_1, y_1), ..., (x_n, y_n)), where (x_1, y_1) is the first instance sample of (X, Y) and (x_n, y_n) is the nth instance of (X, Y). Furthermore, denoting the muscle movement (sEMG) sensor sample set as S_x = (x_1, ..., x_n) and the brain activity (EEG) pre-processed sample data as S_y = (y_1, ..., y_n), both of the multivariate sample spaces are then defined.
The Eigenanalyzer 425 in some embodiments then defines a new set of coordinates by projecting X onto the basis vector w_x and similarly projecting Y onto the basis vector w_y. The sets of these coordinates are given by the sample spaces S_{x,w_x} = (⟨w_x, x_1⟩, ..., ⟨w_x, x_n⟩) and S_{y,w_y} = (⟨w_y, y_1⟩, ..., ⟨w_y, y_n⟩). The basis vector selector 425 may in some embodiments find w_x and w_y to maximise the correlation between the two projected vectors. In some embodiments this may be carried out with respect to the following equation:

    ρ = max_{w_x, w_y} (w_x^T C_xy w_y) / sqrt((w_x^T C_xx w_x)(w_y^T C_yy w_y))

where ρ is the correlation coefficient between the projected x and y forms. Maximising ρ with respect to w_x and w_y results in the maximum canonical correlation.

The values of w_x and w_y may, for example, be obtained by solving the eigenvalue equations

    C_xx^{-1} C_xy C_yy^{-1} C_yx w_x = ρ² w_x
    C_yy^{-1} C_yx C_xx^{-1} C_xy w_y = ρ² w_y
C_xx and C_yy are the autocovariance matrices of x and y respectively, and C_xy and C_yx are the crosscovariance matrices between x and y. The selection of the basis vectors w_x, w_y to optimise the correlation is shown in Figure 6c by step 573. The selected basis vectors may then be passed to the canonical correlation sorter/reducer 427 where the canonical correlations are sorted to find the most and least correlated dimensions.
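A compact numpy sketch of this eigenvalue solution follows; the small regularisation term reg is an assumption added for numerical stability and is not part of the description above:

    import numpy as np

    def cca_basis(X, Y, reg=1e-6):
        """Solve C_xx^-1 C_xy C_yy^-1 C_yx w_x = rho^2 w_x for the CCA bases.

        X : (n, dx) pre-processed sEMG samples; Y : (n, dy) pre-processed EEG samples.
        Returns the canonical correlations sorted in descending order and the bases.
        """
        Xc, Yc = X - X.mean(0), Y - Y.mean(0)
        n = X.shape[0]
        Cxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])   # autocovariance of x
        Cyy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])   # autocovariance of y
        Cxy = Xc.T @ Yc / n                              # crosscovariance of x and y
        M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
        evals, Wx = np.linalg.eig(M)
        order = np.argsort(-evals.real)
        rho = np.sqrt(np.clip(evals.real[order], 0.0, 1.0))
        Wx = Wx.real[:, order]
        Wy = np.linalg.solve(Cyy, Cxy.T) @ Wx            # w_y ∝ C_yy^-1 C_yx w_x
        Wy /= np.linalg.norm(Wy, axis=0, keepdims=True)
        return rho, Wx, Wy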
In some embodiments, the canonical correlation sorter/reducer 427 performs a dimensionality reduction by discarding the least correlated dimensions. To overcome the linearity restriction of canonical correlation analysis, in some embodiments kernel canonical correlation analysis (KCCA) may be used. KCCA is discussed in detail, for example, in "Canonical Correlation Analysis: An Overview with Application to Learning Methods" by Hardoon et al., Neural Computation, Vol. 16, Issue 12 (Dec 2004), pp. 2639-2664.
The application of the selected basis vectors and the sorting (and possible reduction) is shown in Figure 6c by step 575.
In some embodiments the Eigenanalysis may be applied only to some of the dimensions or variables of the sensed values. For example in some embodiments not all of the EEG frequency bands may be input to the Eigenanalyzer 425: the γ frequency sub-band of the EEG may not be processed and only the low frequency α and high frequency β sub-band data processed. In other embodiments, such as facial recognition, some regions of the face may not be input to the Eigenanalyzer 425. For example, data from the camera relating to the region of the eyes and lips may be important for emotion recognition and processed in the Eigenanalyzer 425, whilst data relating to the portions of the face associated with the nose and ears may be discarded prior to inputting.
The trainer/classifier 429 in some embodiments receives the sorted basis vectors/values and applies these values in either a training or a classifying operation.
The trainer/classifier 429 in some embodiments determines whether the apparatus and the context sensing block 101 are operating in a training mode of operation or a classifying mode of operation. The selection of a training/classifying mode of operation is shown in Figure 6c by step 577.
When the trainer/classifier 429 operates in a training mode of operation, the data is linked or associated with a feature in order to determine the most/least correlated dimensionality sensor data which is associated with the feature. The training may be carried out using any suitable training operation, such as a Gaussian mixture model (GMM) or an artificial neural network (ANN) training operation, in order to train specific target classes for the feature being input at that particular time. The training, or linking of a feature with the most/least correlated dimensions of the combined sensor data, is shown in Figure 6c by step 579. If the trainer/classifier 429 is operating in a classifying mode of operation, the trainer/classifier 429 is configured to input the most/least correlated dimensions to attempt to determine a feature class output from the trained classifier.
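As one possible, non-limiting realisation of these two modes, the following sketch uses per-class Gaussian mixture models from scikit-learn; the class name and component count are illustrative assumptions:

    from sklearn.mixture import GaussianMixture

    class TrainerClassifier:
        """Sketch of the two modes: train target classes, then classify new data."""

        def __init__(self, n_components=4):
            self.n_components = n_components
            self.models = {}                    # feature class label -> fitted GMM

        def train(self, correlated_features, label):
            """Training mode: associate the most correlated dimensions with a label."""
            gmm = GaussianMixture(n_components=self.n_components, random_state=0)
            self.models[label] = gmm.fit(correlated_features)

        def classify(self, correlated_features):
            """Classifying mode: return the trained class best explaining the input."""
            return max(self.models,
                       key=lambda label: self.models[label].score(correlated_features))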
For example in the combined EEG/sEMG example described above, the training operation may initially comprise operating the system while the user is at various levels of fatigue or stress (or simulated fatigue or stress) to produce trained target classes. These feature labels may be recalled or determined by the trainer/classifier 429 when the sensors produce similar signals; in such embodiments the trainer/classifier operates in a classifier mode of operation where the output of the feature extractor/classifier 405 identifies the current fatigue or stress level of the user based on the EEG and sEMG information from the sensors. The operation of "data mining" the previously trained database to determine the feature class is shown in Figure 6c by step 578. The overall training or classifying feature operation is shown in the summary Figure 6a as step 507.
With respect to Figures 7 and 8, an example of an embodiment of the application is shown in further detail with respect to a multimodal and multi-sensory human computer interface gaming application.
With respect to Figure 7, the human player 721 is shown monitored by a sensor bank 16 comprising a brain interface sensor 701 , a camera for gaze tracking located near to the eye 703, a muscle movement sensor (sEMG) 705, and a skin conductance sensor (Galvanic Skin Response GSR sensor) 707. Each of these sensors may output a multivariate output signal to the multi-sensor feature classifier 751 in some embodiments. The generation of the sensor data is shown in Figure 8 by step 601.
The multi-sensor feature classifier 751 and learning processor 735 may be configured to provide the functionality of the pre-processor 403, the Eigenanalyzer 425 and the canonical correlation sorter/reducer 427 (in the multi-sensor feature classifier 751) and the trainer/classifier 429 (in the learning processor 735). Thus in some embodiments the multi-sensor feature classifier 751 receives the sensor data and performs a pre-processing on the sensor data according to a suitable pre-processing operation (for example the EEG and sEMG sensor data may be processed in the manner described previously).
The pre-processing of the sensor data is shown in Figure 8 by step 603. Furthermore the multi-sensor feature classifier 751 may perform a pre-feature classification analysis to output selected eigenanalysis vectors. The eigenanalysis outputs may be passed to the learning processor 735 which may also receive the output of a performance evaluator 737. The performance evaluator 737 generates in some embodiments a game play evaluation parameter, for example indicating the speed at which the user is completing a specific level of the game, in order that the output sensor values may be trained against a "game evaluation score".
The evaluation of game play, for example, is shown in Figure 8 by step 604. Furthermore the evaluation of the performance or game play within the performance evaluator 737 may be determined in embodiments based on information passed from the user interface and game processor 731. The generation or reception of game play data is shown in Figure 8 by step 602. Thus for example the user interface may output a points score to the performance evaluator, which in some embodiments may determine how quickly the player is increasing the score and output feature labels such as "too easy", "too difficult" and "just right".

The learning processor 735 in some embodiments may train or correlate the game play performance parameter label passed from the performance evaluator 737 against the multi-sensor feature classifier parameters passed by the multi-sensor feature classifier 751. In some embodiments, the operation of the performance evaluator 737 and the learning processor 735 is carried out only during the training phase of the system and may be disabled or bypassed when the learning processor 735 determines that it is sufficiently well trained. When not in training mode the learning processor may in some embodiments be configured to output the feature most similar to the learned sensory inputs. Therefore if the learning processor determines sensor values which are similar to those experienced at the learnt "too easy" level, the learning processor may output a "too easy" value on a GameDifficulty label.

The output of the learning processor/multi-sensor feature classifier is then passed in some embodiments to the adaptation processor 733. The adaptation processor 733 may in some embodiments of the application determine whether or not the game play needs to be changed. For example the feature parameter output may indicate, based on the sensor information, that the human player is fatigued or stressed at a specific level and is likely to give up, or that the player is bored and the difficulty is too low to keep the player sufficiently interested in playing the game, in which case the adaptation processor determines that the game level difficulty needs to be changed. The operation of determining whether or not the game needs to be changed is shown in Figure 8 by step 609.
Where the adaptation processor 733 determines that the game does not need to be changed, the operation passes back to the receiving of sensor data and optionally receiving game play data. Where the adaptation processor determines that the game is to be changed, the adaptation processor 733 may furthermore determine which change is required in the game. For example where the game play has been indicated to be too stressful, the game play can be simplified, or where the game play is determined to be too simple, the difficulty level can be increased.
The determination of the change in the game operation is shown in Figure 8 by step 611.
Furthermore the adaptation for changing the game is then applied by the adaptation processor, which passes control information to the game processor 731. The application of the adaptation to the game is shown in Figure 8 by operation 613.
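An illustrative, hypothetical mapping from the classified feature label to a difficulty change might be sketched as follows (the label strings and step size are assumptions):

    def adapt_difficulty(feature_label, difficulty, step=1):
        """Sketch of the adaptation processor's decision, per the labels above."""
        if feature_label in ('too difficult', 'fatigued', 'stressed'):
            return max(difficulty - step, 0)   # simplify the game play
        if feature_label in ('too easy', 'bored'):
            return difficulty + step           # increase the difficulty level
        return difficulty                      # 'just right': no change required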
The multi-sensor feature computation may thus be used to attempt to determine a psychophysiological signal. For example the galvanic skin response (GSR) provides a signal which is related to arousal and stress. The EEG, for example, may also indicate balance, attention and other mental states. The sEMG, for example, may detect gestures and muscle tension. The camera, and thus the gaze tracking sensor, may detect eye movements which denote attention, like or dislike. As such, the combination of all of these may be used in embodiments of the application to produce a robust estimation of the user's mental and physical state by combining the measurements from each of the psychophysiological signals. In some embodiments the learning processor produces an initial data set by the game processor applying initial stimuli such as those provided by the international affective picture system (IAPS) and international affective digitized sounds (IADS). In some other embodiments, real tasks may also be performed to determine the user's physical and mental state and, using these real tasks, an initial training phase may be carried out.
In some embodiments, where computational resources are at a premium and multi-sensor streams may not be processed pairwise using the Eigenanalyzer, the streams may be grouped to form heterogeneous streams, for example EEG and eye movement in one stream and sEMG and GSR data in the other stream. The groupings may be decided or determined based on which combination provides a maximum correlation. Although the above examples describe embodiments of the invention operating within an electronic device 10 or apparatus, it would be appreciated that the invention as described above may be implemented as part of any apparatus.
With respect to Figure 9 a further example of a trainer/classifier is shown, wherein the trainer/classifier is configured to classify/train the feature using a committee of lightweight classifiers. In these embodiments the trainer/classifier 429 comprises more than one classifier configured to receive the feature vector output from the canonical correlation sorter/reducer 427. In the example shown in Figure 9, the trainer/classifier comprises a first classifier (Classifier 1) 901, a second classifier (Classifier 2) 903 and a third classifier (Classifier 3) 905. However it would be appreciated that there may be more or fewer than 3 classifiers. Each of the classifiers 901, 903 and 905 may be any known classifier, such as a Gaussian Mixture Model (GMM) classifier, a support vector classifier or a Parzen window classifier. Each of the classifiers, on receiving the feature vectors, is configured to output a feature value to a Fusion Filtering processor 907. The Fusion Filtering processor 907 is configured, on receiving the feature values, to carry out at least one of a fusion and a filtering action. A fusion action is one where the Fusion Filtering processor 907 is configured to output the feature value where there is a majority voting decision. A filtering action is one where the Fusion Filtering processor 907 is configured to output the feature value which has been median filtered.
The output of the trainer/classifier is thus the feature value output by the Fusion Filtering processor.
For example, using such a classifier the inventor has produced a cognitive load feature determination where the feature vector inputs are determined from EEG and heart rate monitoring. In such embodiments the classifiers may output the feature (cognitive load) as being high or low, which is then fusion filtered to output a cognitive load state sequence, shown in Figure 9 as HHHHHLLLLL, where H indicates a high cognitive load and L indicates a low cognitive load.
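The fusion and filtering actions might be sketched as follows for a binary high/low ('H'/'L') feature value stream; the median window width is an assumption:

    import numpy as np
    from collections import Counter

    def fusion(committee_votes):
        """Fusion action: majority vote over the committee, e.g. ['H','H','L'] -> 'H'."""
        return Counter(committee_votes).most_common(1)[0][0]

    def filtering(state_sequence, width=5):
        """Filtering action: median-filter an H/L state sequence such as 'HLHHHLLHLL'."""
        x = np.array([1 if s == 'H' else 0 for s in state_sequence])
        padded = np.pad(x, width // 2, mode='edge')
        smoothed = [int(np.median(padded[i:i + width])) for i in range(len(x))]
        return ''.join('H' if v else 'L' for v in smoothed)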
Thus user equipment may comprise a user interface such as described in embodiments above.
It shall be appreciated that the terms apparatus, electronic device and user equipment are intended to cover any suitable type of wireless user equipment, such as mobile telephones, portable data processing devices or portable web browsers.
In general, the various embodiments of the invention may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. For example, some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto. While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
Thus in at least one embodiment there is an apparatus comprising: a signal processor configured to capture at least two data signals from at least two sensors; a feature extractor configured to determine a correlation data set for the at least two data signals; and a feature classifier configured to assign a feature class value dependent on the correlation data set. Furthermore the feature extractor may comprise a buffer configured to align the at least two data signals; and an eigenanalyser configured to project at least one of the at least two data signals onto a first basis vector to generate a first sample space; and project a further of the at least two data signals onto a second basis vector to generate a second sample space; wherein the first and the second basis vectors are chosen to maximise the correlation data set defined by the two sample spaces.
The feature extractor may further comprise a kernel canonical correlator configured to perform a kernel canonical correlation on the at least two data signals.
The feature extractor may further comprise a correlation sorter configured to sort the correlation data set in order of magnitude of correlation; and discard at least one dimension of the data set dependent on the magnitude of correlation.
The apparatus may comprise a resource controller configured to perform a context related operation dependent on the feature class value.
The context related operation may comprise setting a computer game difficulty level.
The feature classifier may comprise a feature trainer configured to perform a training operation to associate a feature class to the correlation data set. The signal processor may be configured to receive from a first sensor type at least one data signal stream; and receive from a second sensor type at least one further data signal stream.
The signal processor may comprise a filter configured to filter the at least two data signals prior to determining a correlation data set for the at least two data signals. The embodiments of this invention may be implemented by computer software executable by a data processor of the mobile device, such as in the processor entity, or by hardware, or by a combination of software and hardware. Further in this regard it should be noted that any blocks of the logic flow as in the Figures may represent program steps, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions. The software may be stored on such physical media as memory chips or memory blocks implemented within the processor, magnetic media such as hard disks or floppy disks, and optical media such as, for example, DVD and the data variants thereof, and CD.
In at least one embodiment there is a computer-readable medium encoded with instructions that, when executed by a computer, perform: capturing at least two data signals from at least two sensors; determining a correlation data set for the at least two data signals; and assigning a feature class value dependent on the correlation data set.
The memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory. The data processors may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASIC), gate level circuits and processors based on multi-core processor architecture, as non-limiting examples.
Embodiments of the inventions may be practiced in various components such as integrated circuit modules. The design of integrated circuits is by and large a highly automated process. Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate. Programs, such as those provided by Synopsys, Inc. of Mountain View, California and Cadence Design Systems, of San Jose, California, automatically route conductors and locate components on a semiconductor chip using well established rules of design as well as libraries of pre-stored design modules. Once the design for a semiconductor circuit has been completed, the resultant design, in a standardized electronic format (e.g., Opus, GDSII, or the like) may be transmitted to a semiconductor fabrication facility or "fab" for fabrication. As used in this application, the term 'circuitry' refers to all of the following:
(a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry) and
(b) to combinations of circuits and software (and/or firmware), such as: (i) to a combination of processor(s) or (ii) to portions of processor(s)/software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions and
(c) to circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
This definition of 'circuitry' applies to all uses of this term in this application, including any claims. As a further example, as used in this application, the term 'circuitry' would also cover an implementation of merely a processor (or multiple processors) or a portion of a processor and its (or their) accompanying software and/or firmware. The term 'circuitry' would also cover, for example and if applicable to the particular claim element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, or other network device. The foregoing description has provided by way of exemplary and non-limiting examples a full and informative description of the exemplary embodiment of this invention. However, various modifications and adaptations may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings and the appended claims. However, all such and similar modifications of the teachings of this invention will still fall within the scope of this invention as defined in the appended claims.

Claims

CLAIMS:
1. A method comprising:
capturing at least two data signals from at least two sensors;
determining a correlation data set for the at least two data signals;
assigning a feature class value dependent on the correlation data set.
2. The method as claimed in claim 1, wherein determining a correlation data set for the at least two data signals further comprises:
aligning the at least two data signals;
projecting at least one of the at least two data signals onto a first basis vector to generate a first sample space;
projecting a further of the at least two data signals onto a second basis vector to generate a second sample space; wherein the first and the second basis vectors are chosen to maximise the correlation data set defined by the two sample spaces.
3. The method as claimed in claims 1 and 2, wherein determining a correlation data set for the at least two data signals comprises determining a kernel canonical correlation for the at least two data signals.
4. The method as claimed in claims 1 to 3, wherein determining a correlation data set for the at least two data signals comprises:
sorting the correlation data set in order of magnitude of correlation; and discarding at least one dimension of the data set dependent on the magnitude of correlation.
5. The method as claimed in claims 1 to 4, further comprising performing a context related operation dependent on the feature class value.
6. The method as claimed in claim 5, wherein the context related operation comprises setting a computer game difficulty level.
7. The method as claimed in claims 1 to 6, wherein assigning a feature class value dependent on the correlation data comprises performing a training operation to associate a feature class to the correlation data set.
8. The method as claimed in claims 1 to 7, wherein capturing at least two data signals from at least two sensors comprises:
receiving from a first sensor type at least one data signal stream; and receiving from a second sensor type at least one further data signal stream.
9. The method as claimed in claims 1 to 8, further comprising filtering the at least two data signals prior to determining a correlation data set for the at least two data signals.
10. The method as claimed in claims 1 to 9, wherein the at least two sensors comprises at least two of:
camera sensor for vision perception;
microphone sensor for hearing perception;
temperature sensor for temperature perception;
humidity sensor for humidity perception;
tactile sensor for touch and/or pressure perception;
electromyography sensor for muscle movement perception;
chemical sensor for smell perception;
compass for direction perception;
satellite positioning sensor for location perception;
gyroscope/accelerometer for orientation perception;
antenna sensor for signal strength perception;
battery sensor for power perception;
electroencephalograph for brain activity perception; and
pulse monitor, blood pressure sensor, electrocardiograph for heart activity perception.
11. An apparatus comprising at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform:
capturing at least two data signals from at least two sensors;
determining a correlation data set for the at least two data signals;
assigning a feature class value dependent on the correlation data set.
12. The apparatus as claimed in claim 11, wherein the apparatus caused to at least perform determining a correlation data set for the at least two data signals is further caused to perform:
aligning the at least two data signals;
projecting at least one of the at least two data signals onto a first basis vector to generate a first sample space;
projecting a further of the at least two data signals onto a second basis vector to generate a second sample space; wherein the first and the second basis vectors are chosen to maximise the correlation data set defined by the two sample spaces.
13. The apparatus as claimed in claims 11 and 12, wherein the apparatus caused to at least perform determining a correlation data set for the at least two data signals is further caused to perform determining a kernel canonical correlation for the at least two data signals.
14. The apparatus as claimed in claims 1 1 to 13, wherein the apparatus caused to at least perform determining a correlation data set for the at least two data signals is further caused to perform:
sorting the correlation data set in order of magnitude of correlation; and discarding at least one dimension of the data set dependent on the magnitude of correlation.
15. The apparatus as claimed in claims 11 to 14, further caused to perform a context related operation dependent on the feature class value.
16. The apparatus as claimed in claim 15, wherein the context related operation comprises setting a computer game difficulty level.
17. The apparatus as claimed in claims 11 to 16, wherein the apparatus caused to at least perform assigning a feature class value dependent on the correlation data is further caused to perform a training operation to associate a feature class to the correlation data set.
18. The apparatus as claimed in claims 11 to 17, wherein the apparatus caused to at least perform capturing at least two data signals from at least two sensors is caused to perform:
receiving from a first sensor type at least one data signal stream; and receiving from a second sensor type at least one further data signal stream.
19. The apparatus as claimed in claims 11 to 18, caused to at least further perform filtering the at least two data signals prior to determining a correlation data set for the at least two data signals.
20. The apparatus as claimed in claims 11 to 19, wherein the at least two sensors comprises at least two of:
camera sensor for vision perception;
microphone sensor for hearing perception;
temperature sensor for temperature perception;
humidity sensor for humidity perception;
tactile sensor for touch and/or pressure perception;
electromyography sensor for muscle movement perception;
chemical sensor for smell perception;
compass for direction perception;
satellite positioning sensor for location perception;
gyroscope/accelerometer for orientation perception;
antenna sensor for signal strength perception;
battery sensor for power perception;
electroencephalograph for brain activity perception; and
pulse monitor, blood pressure sensor, electrocardiograph for heart activity perception.
PCT/IB2010/050363 2010-01-27 2010-01-27 Method and apparatus for assigning a feature class value WO2011092549A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IB2010/050363 WO2011092549A1 (en) 2010-01-27 2010-01-27 Method and apparatus for assigning a feature class value

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IB2010/050363 WO2011092549A1 (en) 2010-01-27 2010-01-27 Method and apparatus for assigning a feature class value

Publications (1)

Publication Number Publication Date
WO2011092549A1 true WO2011092549A1 (en) 2011-08-04

Family

ID=44318718

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2010/050363 WO2011092549A1 (en) 2010-01-27 2010-01-27 Method and apparatus for assigning a feature class value

Country Status (1)

Country Link
WO (1) WO2011092549A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103018181A (en) * 2012-12-14 2013-04-03 江苏大学 Soft measurement method based on correlation analysis and ELM neural network
WO2014074268A1 (en) * 2012-11-07 2014-05-15 Sensor Platforms, Inc. Selecting feature types to extract based on pre-classification of sensor measurements
WO2014205767A1 (en) * 2013-06-28 2014-12-31 Verizon Patent And Licensing Inc. Human-computer interaction using wearable device
WO2015006525A1 (en) 2013-07-12 2015-01-15 Facebook, Inc. Multi-sensor hand detection
US9622687B2 (en) 2013-09-05 2017-04-18 Qualcomm Incorporated Half step frequency feature for reliable motion classification
US9668048B2 (en) 2015-01-30 2017-05-30 Knowles Electronics, Llc Contextual switching of microphones
US9807725B1 (en) 2014-04-10 2017-10-31 Knowles Electronics, Llc Determining a spatial relationship between different user contexts
CN109497996A (en) * 2018-11-07 2019-03-22 太原理工大学 A kind of the complex network building and analysis method of micro- state EEG temporal signatures
US10275685B2 (en) 2014-12-22 2019-04-30 Dolby Laboratories Licensing Corporation Projection-based audio object extraction from audio content
TWI714688B (en) * 2015-12-22 2021-01-01 美商微晶片科技公司 System and method for reducing noise in a sensor system
EP3766553A1 (en) * 2019-07-19 2021-01-20 Sony Interactive Entertainment Inc. User interaction selection method and apparatus

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060080025A1 (en) * 2004-10-05 2006-04-13 Junmin Wang Fuel property-adaptive engine control system with on-board fuel classifier

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060080025A1 (en) * 2004-10-05 2006-04-13 Junmin Wang Fuel property-adaptive engine control system with on-board fuel classifier

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CORREA N.M. ET AL: "Fusion of FMRI, SMRI, and EEG Data Using Canonical Correlation Analysis", ACOUSTICS, SPEECH AND SIGNAL PROCESSING, 2009, IEEE INTERNATIONAL CONFERENCE ON, 19 April 2009 (2009-04-19), pages 385 - 388, XP031459247 *
LIU C. ET AL: "Dynamic Difficulty Adjustment in Computer Games Through Real-Time Anxiety-Based Affective Feedback", INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION, vol. 25, no. 6, 1 August 2009 (2009-08-01), pages 506 - 529, XP003028143 *
MELZER T. ET AL: "Appearance Models Based on Kernel Canonical Correlation Analysis", PATTERN RECOGNITION, vol. 36, September 2003 (2003-09-01), pages 1961 - 1971, XP004429545 *
YANNAKAKIS G.N. ET AL: "Game adaptivity impact on affective physical interaction", AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION AND WORKSHOPS, 2009. ACII 2009. 3RD INTERNATIONAL CONFERENCE, 2009, pages 1 - 6, XP031577684 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014074268A1 (en) * 2012-11-07 2014-05-15 Sensor Platforms, Inc. Selecting feature types to extract based on pre-classification of sensor measurements
CN103018181A (en) * 2012-12-14 2013-04-03 江苏大学 Soft measurement method based on correlation analysis and ELM neural network
WO2014205767A1 (en) * 2013-06-28 2014-12-31 Verizon Patent And Licensing Inc. Human-computer interaction using wearable device
US9864428B2 (en) 2013-06-28 2018-01-09 Verizon Patent And Licensing Inc. Human computer interaction using wearable device
WO2015006525A1 (en) 2013-07-12 2015-01-15 Facebook, Inc. Multi-sensor hand detection
EP3020251A4 (en) * 2013-07-12 2017-02-22 Facebook, Inc. Multi-sensor hand detection
US9622687B2 (en) 2013-09-05 2017-04-18 Qualcomm Incorporated Half step frequency feature for reliable motion classification
US9807725B1 (en) 2014-04-10 2017-10-31 Knowles Electronics, Llc Determining a spatial relationship between different user contexts
US10275685B2 (en) 2014-12-22 2019-04-30 Dolby Laboratories Licensing Corporation Projection-based audio object extraction from audio content
US9668048B2 (en) 2015-01-30 2017-05-30 Knowles Electronics, Llc Contextual switching of microphones
TWI714688B (en) * 2015-12-22 2021-01-01 美商微晶片科技公司 System and method for reducing noise in a sensor system
CN109497996A (en) * 2018-11-07 2019-03-22 太原理工大学 A kind of the complex network building and analysis method of micro- state EEG temporal signatures
EP3766553A1 (en) * 2019-07-19 2021-01-20 Sony Interactive Entertainment Inc. User interaction selection method and apparatus
US11772000B2 (en) 2019-07-19 2023-10-03 Sony Interactive Entertainment Inc. User interaction selection method and apparatus

Similar Documents

Publication Publication Date Title
WO2011092549A1 (en) Method and apparatus for assigning a feature class value
US9720515B2 (en) Method and apparatus for a gesture controlled interface for wearable devices
US11389084B2 (en) Electronic device and method of controlling same
CN112789577B (en) Neuromuscular text input, writing and drawing in augmented reality systems
US11166104B2 (en) Detecting use of a wearable device
JP2021072136A (en) Methods and devices for combining muscle activity sensor signals and inertial sensor signals for gesture-based control
CN112739254A (en) Neuromuscular control of augmented reality systems
US20200275895A1 (en) Methods and apparatus for unsupervised one-shot machine learning for classification of human gestures and estimation of applied forces
RU2601152C2 (en) Device, method and computer program to provide information to user
CN117687477A (en) Method and apparatus for a gesture control interface of a wearable device
Zhang et al. Recognizing hand gestures with pressure-sensor-based motion sensing
US10860104B2 (en) Augmented reality controllers and related methods
CN112566553A (en) Real-time spike detection and identification
US20220291753A1 (en) Spatial Gesture Recognition using Inputs from Different Devices to Control a Computing Device
Jiang et al. Development of a real-time hand gesture recognition wristband based on sEMG and IMU sensing
EP4098182A1 (en) Machine-learning based gesture recognition with framework for adding user-customized gestures
Aggelides et al. A gesture recognition approach to classifying allergic rhinitis gestures using wrist-worn devices: a multidisciplinary case study
US20230280835A1 (en) System including a device for personalized hand gesture monitoring
Kawamoto et al. A dataset for electromyography-based dactylology recognition
JP2022522402A (en) Methods and equipment for unsupervised machine learning for gesture classification and estimation of applied force
Wu et al. Wearable computers for sign language recognition
Gupta On the selection of number of sensors for a wearable sign language recognition system
Zhao et al. Emotion recognition based on weighted kernel support vector machine using wearable inertial sensors
CN111580666B (en) Equipment control method, electronic equipment, equipment control system and storage medium
Elmatary et al. Intelligent Sign Multi-Language Real-Time Prediction System with Effective Data Preprocessing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10844498

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10844498

Country of ref document: EP

Kind code of ref document: A1