WO2014058357A1 - Methods and apparatus for providing contextually relevant data in augmented reality - Google Patents

Methods and apparatus for providing contextually relevant data in augmented reality

Info

Publication number
WO2014058357A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
user
vehicle
location
sensor
Application number
PCT/SE2012/051072
Other languages
French (fr)
Inventor
Joakim Söderberg
Original Assignee
Telefonaktiebolaget L M Ericsson (Publ)
Application filed by Telefonaktiebolaget L M Ericsson (Publ) filed Critical Telefonaktiebolaget L M Ericsson (Publ)
Priority to PCT/SE2012/051072 priority Critical patent/WO2014058357A1/en
Publication of WO2014058357A1 publication Critical patent/WO2014058357A1/en

Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/017 Head mounted
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9537 Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0101 Head-up displays characterised by optical features
    • G02B2027/0138 Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0101 Head-up displays characterised by optical features
    • G02B2027/014 Head-up displays characterised by optical features comprising information/image processing systems
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0179 Display position adjusting means not related to the information to be displayed
    • G02B2027/0187 Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye


Abstract

A method for presenting data in an augmented reality environment includes receiving data from at least one sensor associated with a user, enriching the received sensor data with pre-stored data corresponding to the user to obtain data that is contextually relevant for the user, and providing the contextually relevant data to the user via at least one sensory medium associated with an augmented reality user apparatus.

Description

METHODS AND APPARATUS FOR PROVIDING CONTEXTUALLY RELEVANT DATA IN AUGMENTED REALITY
TECHNICAL FIELD
[0001] The present invention relates generally to augmented reality apparatus and, more particularly, to methods and apparatus for providing contextually relevant data to users of such apparatus.
BACKGROUND
[0002] Virtual reality technology is known. It provides computer-simulated environments and is used, for example, in gaming or in training simulations such as those for pilots. Users can typically experience being in different and sometimes remote environments, which can be places or situations in the real world or an imaginary one. Users can wear a head mounted display to experience the visually simulated environment, which can be highly visual and three dimensional. Speakers or headphones may also provide sounds to enhance the experience or make it more realistic.
[0003] An extension of virtual reality technology is augmented reality. Virtual reality immerses users in the virtual world and isolates them from their real-world surroundings, at least from a visual and audio point of view. Augmented reality, by contrast, provides users with the ability to impose virtual objects onto the real world.
[0004] Augmented reality can also be implemented by a head mounted display which projects information or data to the user. Headphones or speakers may also be used. A camera, a projector and a microphone may also provide augmented reality functionality to a user. Real world data surrounding the user can be captured (by the camera for example) and processed to identify the location of the user or the objects within the user's field of view. The identified location or objects can then be used to retrieve data that is relevant to that particular location or object, and this data can be provided to the user.
[0005] While augmented reality technology provides data that is relevant to a particular object or a particular location, it does not utilize contextual analysis in providing the data. It is, therefore, desirable to provide improved augmented reality methods, systems and apparatus.
SUMMARY
[0006] It should be emphasized that the terms "comprises" and "comprising", when used in this specification, are taken to specify the presence of stated features, integers, steps or components; but the use of these terms does not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof.
[0007] According to exemplary embodiments, users of augmented reality apparatus are provided with data that is contextually relevant to the user and to the user's location and/or situation. Several advantages are realized. The information that is provided to a user is specific to the user and to his or her particular location/situation, which makes the information very convenient to the user. The information provided to the user is not generic; in other words, different users in the same location are provided with information that is "customized" to their preferences, habits, etc.
[0008] Exemplary embodiments provide relevant information to individuals that may be impaired due to age, memory loss, illness, etc. Exemplary embodiments may be extended to other situations, such as when a person is driving a car or in assisting with disaster management.
[0009] In accordance with an exemplary embodiment, a method for presenting data in an augmented reality environment is disclosed. The method comprises the steps of: receiving data from at least one sensor associated with a user; enriching the received sensor data with pre-stored data corresponding to the user to obtain data that is contextually relevant for the user; and providing the contextually relevant data to the user via at least one sensory medium associated with an augmented reality user apparatus.
[0010] In accordance with another exemplary embodiment, a network node is disclosed. The network node comprises: a receiving means for receiving data from at least one sensor associated with a user; a processor for enriching the received sensor data with pre-stored data corresponding to the user to obtain data that is contextually relevant for the user; a memory for storing the received sensor data, the pre-stored data and the enriched data; and a transmitting means for transmitting the enriched data to an augmented reality apparatus associated with the user.
[0011] In accordance with a further exemplary embodiment, a device is disclosed. The device comprises: a plurality of sensors for identifying a location of a user of the device; a transmitter for transmitting data associated with an identification of the user and sensor data associated with the user location; a receiver for receiving enriched data wherein the enriched data results from filtering the transmitted data with pre-determined user data; and a projector for displaying the received enriched data.
[0012] In accordance with yet another exemplary embodiment, a computer program is disclosed. The computer program comprises computer readable program modules which when run on a network node causes the network node to: receive data from at least one sensor associated with a user; enrich the received sensor data with pre-stored data corresponding to the user to obtain data that is contextually relevant for the user; and present the contextually relevant data to the user via at least one sensory medium associated with an augmented reality user apparatus.
[0013] In accordance with a yet further exemplary embodiment, a vehicle is disclosed. The vehicle comprises: at least one user communication device having a plurality of sensors; a plurality of sensors associated with the vehicle for detecting data corresponding to the vehicle, wherein the data detected from the sensors associated with the vehicle and from the sensors associated with the at least one user communication device includes at least one of: the vehicle location, environmental parameters in a vicinity of the vehicle and objects within a field of view of the vehicle; and an augmented reality apparatus including a projecting means for providing data that is contextually relevant to the vehicle, the contextually relevant data resulting from processing the detected data and filtering the processed data with pre-stored vehicle data.
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] The objects and advantages of the invention will be understood by reading the following detailed description in conjunction with the drawings in which:
[0015] FIG. 1A illustrates a system in accordance with exemplary embodiments;
[0016] FIG. 1B illustrates an exemplary communication device of the system of FIG. 1A having a plurality of sensors;
[0017] FIG. 1C illustrates an exemplary augmented reality apparatus of the system of FIG. 1A;
[0018] FIGs. 2A and 2B illustrate methods in accordance with exemplary embodiments;
[0019] FIG. 3 illustrates a node in accordance with exemplary embodiments;
[0020] FIG. 4 illustrates a device in accordance with exemplary embodiments; and
[0021] FIG. 5 illustrates a vehicle in accordance with exemplary embodiments.
DETAILED DESCRIPTION
[0022] The various features of the invention will now be described with reference to the figures, in which like parts are identified with the same reference characters or numerals.
[0023] The various aspects of the invention will now be described in greater detail in connection with a number of exemplary embodiments. To facilitate an understanding of the invention, many aspects of the invention are described in terms of sequences of actions to be performed by elements of a computer system or other hardware capable of executing programmed instructions. It will be recognized that in each of the embodiments, the various actions could be performed by specialized circuits (e.g., analog and/or discrete logic gates interconnected to perform a specialized function), by one or more processors programmed with a suitable set of instructions, or by a combination of both. The term "circuitry configured to" perform one or more described actions is used herein to refer to any such embodiment (i.e., one or more specialized circuits and/or one or more programmed processors).
[0024] Moreover, the invention can additionally be considered to be embodied entirely within any form of computer readable carrier, such as solid-state memory, magnetic disk, or optical disk containing an appropriate set of computer instructions that would cause a processor to carry out the techniques described herein. Thus, the various aspects of the invention may be embodied in many different forms, and all such forms are contemplated to be within the scope of the invention. For each of the various aspects of the invention, any such form of embodiments as described above may be referred to herein as "logic configured to" perform a described action, or alternatively as "logic that" performs a described action.
[0025] According to exemplary embodiments, users of augmented reality apparatus are provided with data that is contextually relevant to the user and user location and/or situation. Mobile devices such as smart phones are currently being equipped with sensors. The number and functionality of these sensors is ever increasing. These sensors can be used to detect, inter alia, location and environmental parameters (including weather related parameters) as well as objects and/or sounds in the user's vicinity. A smart phone associated with a user can therefore be used to gather information about the user's location, actions/activities and identity.
[0026] Location sensors may provide the geographic co-ordinates of a user's location for example. The user's identity can be determined from the user's mobile device identity. Data from image capturing sensors (such as cameras for example) can be utilized to determine objects within the user's field of view. Audio capturing sensors (such as microphones for example) can also be used to determine the user's surroundings. Objects can include public places, buildings, bridges, trees, mountains, billboards, street signs, letters, numbers, etc.
[0027] The objects detected from image data analysis can be used to identify a user's location, especially with respect to well-known and easily recognizable landmarks such as the Eiffel Tower, the Golden Gate Bridge or the Statue of Liberty, for example. Other local landmarks can also be recognized based on location co-ordinates. Image and/or audio recognition can be combined with location co-ordinates to more accurately determine the user's surroundings. The data received from the user, or from sensors associated with the user, may be compared with pre-existing information to determine the user's location (and identity).
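The disclosure contains no source code, so the following Python sketch is purely illustrative of one way this fusion step could work: an image-recognition hit is accepted only if it agrees with the reported GPS fix. The landmark registry, function names and the 5 km threshold are all hypothetical.

```python
import math

# Hypothetical registry of recognizable landmarks: name -> (latitude, longitude)
LANDMARKS = {
    "eiffel_tower": (48.8584, 2.2945),
    "golden_gate_bridge": (37.8199, -122.4783),
    "statue_of_liberty": (40.6892, -74.0445),
}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def confirm_location(recognized_landmark, gps_fix, max_km=5.0):
    """Accept an image-recognition hit only when it agrees with the GPS fix."""
    if recognized_landmark not in LANDMARKS:
        return None
    lat, lon = LANDMARKS[recognized_landmark]
    if haversine_km(lat, lon, *gps_fix) <= max_km:
        return recognized_landmark
    return None  # recognition and co-ordinates disagree; fall back to GPS alone

print(confirm_location("eiffel_tower", (48.86, 2.29)))  # -> eiffel_tower
```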
[0028] Upon determining the user's location, data relating to the user location can be provided to the user. For example, if the sensor data indicates that the user is near a particular museum (or looking at a museum in case of image data analysis), data relating to that museum can be provided to the user. The source of the information may be a database associated with the particular museum. The museum data may include information about the hours of operation, cost of tickets, discounts, list of current exhibits, etc.
[0029] In accordance with preferred exemplary embodiments, the information provided to the user may be customized to user preferences or situations. Referring to the museum example cited above, the museum data presented to the user may be filtered by a set of pre-stored user preferences. The museum exhibits may include exhibits related to paintings, photography and sculpture. A user may have a preference for paintings and this preference may be pre-stored. The information presented to the user then would highlight exhibit information related to paintings in this example - that is, information relating to the list of current exhibits has been filtered by the user's preference for paintings. The information presented to the user is therefore specific to the user reflecting his or her preferences, condition, etc.
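No code appears in the original disclosure; as a minimal sketch of the preference filtering just described (the record layout, names and fallback behavior are assumptions):

```python
def filter_by_preferences(exhibits, preferences):
    """Keep exhibit entries whose category matches a pre-stored preference."""
    preferred = [e for e in exhibits if e["category"] in preferences]
    # Fall back to the unfiltered list so a user with no matches still sees data.
    return preferred or exhibits

exhibits = [
    {"title": "Impressionist Masters", "category": "paintings"},
    {"title": "Street Photography 1960-1980", "category": "photography"},
    {"title": "Modern Bronzes", "category": "sculpture"},
]
user_preferences = {"paintings"}  # pre-stored in the user database

print(filter_by_preferences(exhibits, user_preferences))
# -> [{'title': 'Impressionist Masters', 'category': 'paintings'}]
```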
[0030] In another exemplary scenario, a user may be in front of an ATM and take out his or her debit card. Data from location sensors may associate the location with a bank. Image sensor data may identify the ATM as it comes into the field of view of the user, and may also identify a debit card as it is held up to the image sensor. Information corresponding to the debit card (which is user specific) may be retrieved and presented to the user. The information may be the PIN code for the debit card. Again, the information presented to the user is specific to the user and reflects the user's preferences, etc.
[0031] In a further exemplary scenario, a user may be in an area where environmental sensors detect the presence of a particular element or compound in the atmosphere. Sensors can also detect the levels of these elements/compounds. The presence of a particular element may not be of concern to everyone, but at certain levels it may affect a user who suffers from allergies or asthma, for example. As the user travels within this area, he or she may be presented not only with information about the area but also with warnings or advisories regarding the detected elements/compounds. The user may also be provided with instructions on how to handle the situation.
[0032] Exemplary embodiments also perform a situation or contextual analysis to predict or estimate the user's intentions, such as whether the user is visiting the museum to view exhibits or looking at the debit card to withdraw money in the examples highlighted above. The prediction or estimation may be based on past user activity, which can also be part of the pre-stored user data.
[0033] Exemplary embodiments may be described with reference to a system such as system 100 of FIG. 1A. System 100 includes a (user) communication device 110 and an augmented reality (AR) apparatus 120 both associated with a user. Communication device 110 may be a mobile communication device such as a smartphone. Communication device 110 may include a plurality of sensors 115 (only one such sensor is illustrated). AR apparatus 120 may include a projection means for superimposing data onto objects within a user's field of view. System 100 also includes a network node 130 and data sources 140. The user communication device 110 and AR apparatus 120 can communicate with network node 130 over a network such as a communication network.
[0034] Network node 130 can include a plurality of functional modules or components for communicating, processing and storing data. Network node 130, for example, can include a communication interface 132 for communicating data with communication device 110 and AR apparatus 120. A data extraction module 134 can extract data from the sensor data received from the communication device 110. Extraction may include identifying the type of data that is being received from the sensors. Based on the format of (or parameters within) the received data, the data extraction module may determine that the received data is location co-ordinate data, temperature data, environmental data, image data, etc. In the case of image data, the received data may be analyzed to detect or determine the object in the image.
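One way to picture the extraction module's type determination is as a dispatch on each payload's declared kind, sketched below in Python. The record shapes, the `SensorReading` type and the `detect_objects` stub are invented for illustration; the patent specifies behavior, not code.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class SensorReading:
    kind: str       # e.g. "location", "environment", "image"
    payload: Any

def detect_objects(image_bytes) -> list:
    return []  # placeholder for an image-analysis step

def extract(reading: SensorReading) -> dict:
    """Normalize raw sensor payloads into typed records ready for enrichment."""
    if reading.kind == "location":
        lat, lon = reading.payload
        return {"type": "location", "lat": lat, "lon": lon}
    if reading.kind == "environment":
        return {"type": "environment", **reading.payload}
    if reading.kind == "image":
        return {"type": "image", "objects": detect_objects(reading.payload)}
    raise ValueError(f"unknown sensor kind: {reading.kind}")

print(extract(SensorReading("location", (59.3293, 18.0686))))
# -> {'type': 'location', 'lat': 59.3293, 'lon': 18.0686}
```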
[0035] A data enrichment module 136 can enrich the extracted data. Enrichment may include comparing the extracted data such as location co-ordinates with data from other sources 140 to determine a particular user location for example. Data from sources 140 can include weather related data, public service announcements, traffic conditions, information for objects and areas in the user's vicinity (such as the museum example above), etc. The location as determined can be used to obtain, for example, weather related data for that particular location that can be provided to the user. Node 130 can also communicate with data sources 140 via the communication interface 132.
[0036] According to exemplary embodiments, enrichment can further include filtering the extracted data with user preferences. Referring to the museum example described above, if the user is determined to be near a museum, then information about the museum may be obtained (from data source 140 for example). The museum information may be filtered by applying the user's preference for paintings. The user is then provided with exhibit information only for paintings.
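The enrichment of paragraphs [0035] and [0036] can be read as a join against per-location data sources followed by preference filtering. A hedged sketch, in which the source tables, place identifier and field names are invented:

```python
# Hypothetical per-location data sources keyed by a resolved place identifier.
WEATHER = {"stockholm_center": {"temp_c": 7, "condition": "rain"}}
ANNOUNCEMENTS = {"stockholm_center": ["Road work on Vasagatan until Friday"]}

def enrich(extracted, place_id):
    """Attach data from other sources (weather, public notices) to a reading."""
    return {
        **extracted,
        "place": place_id,
        "weather": WEATHER.get(place_id),
        "announcements": ANNOUNCEMENTS.get(place_id, []),
    }

reading = {"type": "location", "lat": 59.3293, "lon": 18.0686}
print(enrich(reading, "stockholm_center"))
```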
[0037] The user preferences and settings may be stored in user database 135. The enriched data may be provided to the communication device 110 or AR apparatus 120 via communication interface 132. While not illustrated, node 130 can also include additional processors, memory, etc. For example, node 130 can include another memory for storing at least one of: received sensor data, extracted data, enriched data, data from other sources, etc.
[0038] Information in the user database may include, for example, user preferences, user characteristics such as health records, and user account information such as banking, credit cards, e-mail, etc. The nature and extent of this information may be limited by the user's willingness to provide it.
[0039] Communication device 110 can include a plurality of sensors 115 as illustrated in FIG. 1B. These sensors can include, but are not necessarily limited to, an image sensor or image capturing means such as a still or movie camera, a location sensor (LOC), an environmental sensor (ENV), a microphone (MIC), etc. Each of these sensors is labeled with the same reference numeral for simplicity.
[0040] An augmented reality apparatus is illustrated in FIG. 1C. Augmented reality apparatus 120 can include, but is not limited to, a camera (image capturing means), a speaker (SPR) (audio reproducing means) and a projector (image projection means), each of which is labeled as 125. While not illustrated, apparatus 120 could also include a display area. Apparatus 120 can be a head mounted display.
[0041] A method in accordance with exemplary embodiments may be described with reference to FIG. 2A. In method 200, data from at least one sensor associated with a user is received at 210. The sensor(s) can be included or integrated within a mobile communication device such as device 110 of system 100. The received sensor data is enriched with pre-stored data corresponding to the user to obtain data that is contextually relevant for the user at 220.
[0042] The data that is contextually relevant to the user is provided to the user via at least one sensory medium of an augmented reality user apparatus at 230. The sensory medium can be video, audio or tactile. In the case of video, the data can be projected onto objects within the user's field of view. This could include projecting the data onto a display within augmented reality apparatus 120. Data could also be projected onto objects being viewed by the user, using projector 125 for example. In the case of audio, an audio message may be provided to the user via speaker 125.
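Step 230's choice of sensory medium can be pictured as a simple dispatch. In the sketch below the apparatus interface (`project`, `announce`, `vibrate`) is an invented stand-in, not an API from the disclosure:

```python
class ARApparatus:
    """Stub for AR apparatus 120; real hardware would sit behind these calls."""
    def project(self, data): print(f"[display] {data}")
    def announce(self, data): print(f"[speaker] {data}")
    def vibrate(self): print("[haptics] alert pattern")

def present(data, medium, apparatus):
    """Route contextually relevant data to one sensory medium (step 230)."""
    if medium == "video":
        apparatus.project(data)      # onto the display or a viewed object
    elif medium == "audio":
        apparatus.announce(data)     # via the speaker
    elif medium == "tactile":
        apparatus.vibrate()          # e.g. an alert for warnings
    else:
        raise ValueError(f"unsupported medium: {medium}")

present("Paintings wing: floor 2", "video", ARApparatus())
```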
[0043] The data enriching step 220 of FIG. 2A is illustrated in further detail in FIG. 2B. A user can be identified at 221. The location of the identified user can be determined at 222. Location can be determined from geographic co-ordinates as provided by sensors associated with the user or can result from evaluation of images of user surroundings captured by sensors within the user communication device 110 or user apparatus 120.
[0044] The user action (or intention) can be estimated at 223. As described in a particular example above, a user's intention of using a debit card may be estimated. The user intention can also be determined from past user activity, etc. Pre-stored user data corresponding to the determined user identity can be retrieved at 224. Referring to the debit card example, the PIN code for the user is retrieved.
[0045] Data corresponding to the determined location can be retrieved at 225. Data for a location can, for example, be museum information as described above. The retrieved data can be filtered with the user data at 226. Referring to the museum example, the information is filtered with the user's preference for paintings. The filtered data is presented to the user at 227 (analogous to step 230 of FIG. 2A). The data can be projected on a display associated with the user (or with the AR apparatus), announced via a speaker, or projected onto an object by the projector.
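Steps 221 through 227 compose naturally into a pipeline. The sketch below mirrors that ordering with one-line stubs; every function and record shape is hypothetical, since the patent defines the steps, not their implementation.

```python
def identify_user(d): return d["device_id"]                    # step 221
def determine_location(d): return d["place"]                   # step 222
def estimate_action(d, user): return d.get("action", "view")   # step 223

def enrich_and_present(sensor_data, user_db, location_db, apparatus):
    user_id = identify_user(sensor_data)
    user_data = user_db[user_id]                               # step 224
    location = determine_location(sensor_data)
    location_data = location_db.get(location, {})              # step 225
    estimate_action(sensor_data, user_data)  # estimated intent could refine the filter
    filtered = {k: v for k, v in location_data.items()
                if k in user_data.get("preferences", ())}      # step 226
    apparatus.project(filtered or location_data)               # step 227

class Display:
    def project(self, data): print(f"[AR display] {data}")

enrich_and_present(
    {"device_id": "u1", "place": "museum"},
    {"u1": {"preferences": {"paintings"}}},
    {"museum": {"paintings": "Impressionist wing, floor 2",
                "sculpture": "Closed for renovation"}},
    Display(),
)
# -> [AR display] {'paintings': 'Impressionist wing, floor 2'}
```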
[0046] A node in accordance with exemplary embodiments is illustrated in FIG. 3. Node 300 may be located on a network such as a radio network, a public network, a private network or a combination thereof. Node 300 may include communication interfaces 310 (for receiving data) and 340 (for transmitting data), a processor 320 and a computer readable medium 330 in the form of a memory. The communication interfaces, the processor and the computer readable medium may all be interconnected via bus 350.
[0047] Node 300 may communicate with a mobile communication device 110 and AR apparatus 120 associated with a user, as well as with data source(s) 140, via at least one of the communication interfaces 310 and 340. Processor 320 may be a plurality of processors incorporating the processing functionality of the data extraction module and the data enrichment module. Node 300 may receive sensor data from sensors such as sensors 115 associated with mobile communication device 110. The received sensor data can be stored within memory 330.
[0048] Processor 320 can extract data from the received sensor data as described above with respect to FIG. 1A. Processor 320 can perform image analysis to identify an object within the user's field of view. In enriching the data, processor 320 can, for example, identify the user location by comparing received location co-ordinates to a pre-stored lookup table that identifies or associates a location based on the received location co-ordinates. Processor 320 can further enrich the data by applying the user preferences. The user preferences can be pre-stored in memory or they can also be obtained from the user device in real-time. The enriched data can also be stored within memory 330. Node can provide the enriched data to the user via communication interface 340.
[0049] In order for processor 320 to be able to perform the steps illustrated in FIGs. 2A and 2B, memory 330 comprises a computer program (CP) 335 with instructions which, when executed by the (one or more) processors 320, cause node 300 to perform all or some of those steps.
[0050] According to exemplary embodiments, a device may incorporate the functionality of mobile communication device 110 and AR apparatus 120 of FIG. 1A. Referring to FIG. 4, device 400 includes a plurality of sensors 410 for detecting data corresponding to a user of the device. The detected data can include at least one of: the user location, environmental parameters in a vicinity of the user and objects within a field of view of the user. Device 400 also includes a transmitter 420 for transmitting the detected data. The data may be transmitted to a network node for example.
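Device 400's round trip (sense, transmit via transmitter 420, then receive enriched data via receiver 430 and project via projecting means 440, per this paragraph and the next) could be sketched as follows. The transport and projector interfaces are invented stand-ins, with a loopback transport taking the place of the network node:

```python
import json

class LoopbackTransport:
    """Stands in for the link to the network node; echoes a stub enrichment."""
    def send(self, message):
        self._last = message
    def receive(self):
        readings = json.loads(self._last)
        # A real node would extract, enrich and filter here (FIGs. 2A and 2B).
        return json.dumps({"readings": readings, "enriched": True})

class ConsoleProjector:
    def show(self, data):
        print(f"[projecting means 440] {data}")

class Device:
    """Sketch of device 400: sensors feed readings in, enriched data comes out."""
    def __init__(self, transport, projector):
        self.transport = transport    # transmitter 420 / receiver 430
        self.projector = projector    # projecting means 440

    def report_and_display(self, readings):
        self.transport.send(json.dumps(readings))         # transmit detected data
        enriched = json.loads(self.transport.receive())   # receive enriched data
        self.projector.show(enriched)                     # project to the user

Device(LoopbackTransport(), ConsoleProjector()).report_and_display(
    {"lat": 59.33, "lon": 18.07})
```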
[0051] The network node may enrich the data as described above with respect to FIGs. 1 to 3. Device 400 further includes a receiver 430 for receiving the enriched data. The enriched data results from processing the transmitted data and filtering the processed data with pre-stored user data. Device 400 includes a projecting means 440 for projecting the received enriched data. Device 400 also includes a processor 450, memory 460 and bus 470, the functionality of each of which is known and not described further herein.
[0052] Exemplary embodiments as described above may be implemented within a vehicle. A vehicle could be, but is not limited to: a motorcycle, a car, a truck, a bus, a boat, a ship, a train or an airplane. A plurality of sensors can be associated with a vehicle. These sensors can be similar to those associated with a user communication device. The vehicle sensors could also supplement (or substitute for) sensors associated with a communication device of a user associated with or traveling in the vehicle. The augmented reality apparatus can also be associated with the vehicle. Vehicle preferences may be used to filter data from the sensors once it has been processed.
[0053] A vehicle according to exemplary embodiments is illustrated in FIG. 5. Vehicle 500 can have an augmented reality apparatus 520 associated therewith. In addition, vehicle 500 can also include a user communication device 510 associated with an occupant of the vehicle. In the case of a car, the user can be the driver or owner of the car. In the case of a common carrier such as a bus, train or plane, the user can be a passenger or operator of the common carrier.
[0054] In some embodiments, the AR apparatus 520 can have a camera 525 attached to an external portion of the vehicle. The camera can be attached to an internal portion of the vehicle in some embodiments. The projector 527 of AR apparatus 520 can be attached to an internal portion of the vehicle. A plurality of sensors 550 can also be associated with vehicle 500 (only one such sensor is illustrated). In the case of a vehicle traveling on a road, the sensors can detect one or more of: road conditions, traffic density, objects near the road, environmental conditions, location and the like. Some or similar conditions can also be detected on or near a railroad, a shipping channel, in the air, etc.
[0055] A plurality of sensors 515 can be associated with the user device 510 (only one such sensor is illustrated). Some examples of the sensors associated with the user device have been described above. In some embodiments, vehicle 500 need not have user devices. Vehicle 500 can also include a communication means for communicating the data from the sensors to a network node and for receiving enriched data from the network node.
[0056] Data from sensors 515, 550 and images and the like detected by camera 525 can be processed in the manner described above to provide user(s) within vehicle 500 with contextually relevant data. The data may be contextually relevant to the vehicle in some embodiments. Vehicle preferences can be used to filter the data. The data from vehicle 500 can be processed by a network node connected to the vehicle via a communication network.
[0057] In some embodiments, the data can be processed within the vehicle, and the vehicle preferences can be stored within the vehicle. In the case of a car, for example, the user or vehicle preferences can be stored within a memory device associated with the car. The car can be connected to a network to obtain the data from other sources. The user device can be utilized to provide data that is specific to the user associated with that particular user device.
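Filtering with pre-stored vehicle data might, for instance, turn generic road data into vehicle-specific warnings. A sketch with invented fields, comparing vehicle height against posted bridge clearances:

```python
VEHICLE_PREFS = {"vehicle_type": "truck", "height_m": 4.2}  # pre-stored vehicle data

def filter_for_vehicle(road_items, prefs):
    """Keep road data relevant to this vehicle, flagging low clearances."""
    relevant = []
    for item in road_items:
        clearance = item.get("clearance_m")
        if clearance is None:
            relevant.append(item)  # generic item (traffic, weather): keep as-is
        elif clearance < prefs["height_m"]:
            relevant.append({**item, "warning": "insufficient clearance"})
        # bridges the vehicle clears easily are dropped as irrelevant
    return relevant

road_items = [
    {"kind": "bridge", "clearance_m": 3.8},
    {"kind": "bridge", "clearance_m": 6.0},
    {"kind": "traffic", "note": "congestion ahead"},
]
print(filter_for_vehicle(road_items, VEHICLE_PREFS))
# -> the 3.8 m bridge is flagged with a warning; the 6.0 m bridge is dropped
```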
[0058] The data provided can also be contextually relevant to the particular vehicle 500 from which the data was gathered. The data may be provided by projector 527 onto a display 560 connected to AR apparatus 520. An audio amplification means such as a speaker can also be included within the vehicle for presenting the data (in an audio format). In a common carrier, multiple displays may be provided with each display being available to one user or passenger.
[0059] Several advantages are realized by the exemplary embodiments as described. The information that is provided to a user is specific to the user and to his or her particular location/situation, which makes the information very convenient to the user. The information provided to the user is not generic; in other words, different users in the same location are provided with information that is "customized" to their preferences, habits, etc.
[0060] Exemplary embodiments provide relevant information to individuals that may be impaired due to age, memory loss, illness, etc. Exemplary embodiments may be extended to other situations, such as when a person is driving a car or in assisting with disaster management.
[0061] The invention has been described with reference to particular embodiments. However, it will be readily apparent to those skilled in the art that it is possible to embody the invention in specific forms other than those of the embodiments described above. The described embodiments are merely illustrative and should not be considered restrictive in any way.
[0062] The scope of the invention is given by the appended claims, rather than the preceding description, and all variations and equivalents which fall within the range of the claims are intended to be embraced therein.

Claims

What is claimed is:
1. A method (200) for presenting data in an augmented reality environment, the method comprising the steps of:
receiving (210) data from at least one sensor associated with a user;
enriching (220) the received sensor data with pre-stored data corresponding to the user to obtain data that is contextually relevant for the user; and
providing (230) the contextually relevant data to the user via at least one sensory medium associated with an augmented reality user apparatus.
2. The method of claim 1, further comprising:
detecting data by the at least one sensor, wherein the detection includes capturing images of objects being viewed by the user.
3. The method of claim 1, further comprising:
detecting data by the at least one sensor, wherein the detection includes capturing images of objects in a vicinity of the user.
4. The method of claim 1, further comprising:
detecting data by the at least one sensor, wherein the detection includes capturing audio data from a vicinity of the user.
5. The method of claim 1, further comprising:
detecting data by the at least one sensor, wherein the detection includes capturing audio data perceivable to the user.
6. The method of claim 1, further comprising:
detecting data by the at least one sensor, wherein the detection includes a location of the sensor.
7. The method of claim 1, further comprising:
detecting data by the at least one sensor, wherein the detection includes measurement of an environmental parameter.
8. The method of claim 1, further comprising:
enriching data from the sensors with data from sources including a public service entity.
9. The method of claim 1, wherein the enriching of the data comprises:
identifying the user;
determining a location of the user;
estimating an action of the user;
retrieving pre-stored user data corresponding to the determined user identity;
retrieving data corresponding to the determined location;
filtering the retrieved data corresponding to the location with data corresponding to the user identity; and
submitting the filtered data to the user.
10. The method of claim 9, wherein the determining of the user location comprises: determining geographic co-ordinates of the user location by a location sensor associated with the user.
11. The method of claim 9, wherein the determining of the user location comprises: evaluating at least one image captured by an image capturing sensor associated with the user.
12. The method of claim 1, wherein the providing of the data to the user comprises: projecting the relevant data on a display within a visual range of an eye of the user, the display being included in a user wearable apparatus.
13. The method of claim 1, wherein the providing of the data to the user comprises: projecting the relevant data as a visual signal onto a surface of an object being viewed by the user.
14. The method of claim 1, wherein the presenting of the data to the user comprises: announcing the relevant data to the user via an audio means, the audio means being included in a user wearable apparatus.
15. A network node (300) comprising:
a receiving means (310) for receiving data from at least one sensor associated with a user;
a processor (320) for enriching the received sensor data with pre-stored data corresponding to the user to obtain data that is contextually relevant for the user;
a memory (330) for storing the received sensor data, the pre-stored data and the enriched data; and
a transmitting means (340) for transmitting the enriched data to an augmented reality apparatus associated with the user.
16. The network node of claim 15, wherein the receiving means is further for receiving data from the user associated sensor in real time.
17. The network node of claim 15, wherein the receiving means is further for receiving data from at least a second data source.
18. The network node of claim 17, wherein the receiving means is further for receiving data from the at least second data source in at least one of real time and at a pre-determined frequency.
19. The network node of claim 15, wherein the processor is further for processing the received sensor data to determine an identity and a location of the user.
20. The network node of claim 19, wherein the processor is further for retrieving data corresponding to the user location from the at least second data source.
21. The network node of claim 20, wherein the processor is further for determining a format of the data transmitted to the user, the format being at least one of an audio signal and a video signal.
22. A device (400) comprising:
a plurality of sensors (410) for detecting data corresponding to a user of the device, the detected data including at least one of: a location of the user, environmental parameters in a vicinity of the user and objects within a field of view of the user;
a transmitter (420) for transmitting the detected data;
a receiver (430) for receiving enriched data wherein the enriched data results from processing the transmitted data and filtering the processed data with pre-stored user data; and
a projector (440) for projecting the received enriched data.
23. The device of claim 22, wherein the plurality of sensors include a geographic location sensor for sensing geographic coordinates associated with a physical location of the user.
24. The device of claim 22, wherein the plurality of sensors include a camera for capturing images of objects visually perceived by the user.
25. The device of claim 24, wherein the camera is a one of a still shot camera and a video camera.
26. The device of claim 22, wherein the plurality of sensors include a microphone.
27. The device of claim 22, further comprising an audio reproducing means for announcing the enriched data to the wearer of the user apparatus.
28. The device of claim 22, wherein the projector displays the enriched data in a display area of the user apparatus that is visible exclusively to the user.
29. The device of claim 22, wherein the projector displays the enriched data onto an object within a field of view of the user.
30. A non-transitory computer readable medium storing instructions (335) that, when executed, cause a network node (300) to:
receive data from at least one sensor associated with a user;
enrich the received sensor data with pre-stored data corresponding to the user to obtain data that is contextually relevant for the user; and
present the contextually relevant data to the user via at least one sensory medium associated with an augmented reality user apparatus.
31. A vehicle (500) comprising:
at least one user communication device (510) having a plurality of sensors;
a plurality of sensors (550) associated with the vehicle for detecting data corresponding to the vehicle, wherein the data detected from the sensors associated with the vehicle and from the sensors associated with the at least one user communication device includes at least one of: the vehicle location, environmental parameters in a vicinity of the vehicle and objects within a field of view of the vehicle; and
an augmented reality apparatus (520) including a projecting means for providing data that is contextually relevant to the vehicle, the contextually relevant data resulting from processing the detected data and filtering the processed data with pre-stored vehicle data.
32. The vehicle of claim 31, wherein the augmented reality apparatus includes an image capturing means attached to an external portion of the vehicle.
33. The vehicle of claim 31, further comprising:
processing means for processing the sensor data, the processing comprising:
identifying the vehicle;
determining a location of the vehicle;
retrieving pre-stored vehicle data corresponding to the determined vehicle identity;
retrieving data corresponding to the determined location;
filtering the retrieved data corresponding to the location with data corresponding to the vehicle identity.
34. The vehicle of claim 31, further comprising:
communication means for transmitting the data from the sensors and for receiving the contextually relevant data.
35. The vehicle of claim 31, further comprising a display connected to the augmented reality apparatus for displaying data from the projector.
36. The vehicle of claim 31, wherein the vehicle can be one of a motorcycle, a car, a truck, a bus, a boat, a train, a ship and a plane.
PCT/SE2012/051072 2012-10-08 2012-10-08 Methods and apparatus for providing contextually relevant data in augmented reality WO2014058357A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/SE2012/051072 WO2014058357A1 (en) 2012-10-08 2012-10-08 Methods and apparatus for providing contextually relevant data in augmented reality


Publications (1)

Publication Number Publication Date
WO2014058357A1 (en) 2014-04-17

Family

ID=50477688

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2012/051072 WO2014058357A1 (en) 2012-10-08 2012-10-08 Methods and apparatus for providing contextually relevant data in augmented reality

Country Status (1)

Country Link
WO (1) WO2014058357A1 (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8131118B1 (en) * 2008-01-31 2012-03-06 Google Inc. Inferring locations from an image
US20110098056A1 (en) * 2009-10-28 2011-04-28 Rhoads Geoffrey B Intuitive computing methods and systems
US20110161076A1 (en) * 2009-12-31 2011-06-30 Davis Bruce L Intuitive Computing Methods and Systems
US20110164163A1 (en) * 2010-01-05 2011-07-07 Apple Inc. Synchronized, interactive augmented reality displays for multifunction devices
EP2400733A1 (en) * 2010-06-28 2011-12-28 Lg Electronics Inc. Mobile terminal for displaying augmented-reality information
EP2466258A1 (en) * 2010-12-15 2012-06-20 The Boeing Company Methods and systems for augmented navigation
US20120203799A1 (en) * 2011-02-08 2012-08-09 Autonomy Corporation Ltd System to augment a visual data stream with user-specific content
US20120224060A1 (en) * 2011-02-10 2012-09-06 Integrated Night Vision Systems Inc. Reducing Driver Distraction Using a Heads-Up Display

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
AJANKI A ET AL.: "Contextual information access with Augmented Reality", MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP), 2010 IEEE INTERNATIONAL WORKSHOP, 29 August 2010 (2010-08-29), pages 95 - 100, XP031765933 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210233539A1 (en) * 2018-10-15 2021-07-29 Orcam Technologies Ltd. Using voice and visual signatures to identify objects
US11430216B2 (en) 2018-10-22 2022-08-30 Hewlett-Packard Development Company, L.P. Displaying data related to objects in images


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 12886193
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 12886193
    Country of ref document: EP
    Kind code of ref document: A1