WO2017149520A1 - Vision device for augmented reality - Google Patents

Vision device for augmented reality

Info

Publication number
WO2017149520A1
Authority
WO
WIPO (PCT)
Prior art keywords
display
images
vision device
environment
camera
Prior art date
2016-03-04
Application number
PCT/IB2017/051284
Other languages
French (fr)
Inventor
Massimo SPAGGIARI
Original Assignee
Spaggiari Massimo
Priority date
2016-03-04
Filing date
2017-03-06
Publication date
Application filed by Spaggiari Massimo
Publication of WO2017149520A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0172Head mounted characterised by optical features
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0129Head-up displays characterised by optical features comprising devices for correcting parallax
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0138Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems


Abstract

A vision device (1) for augmented reality configured to associate virtual contents with objects arranged in an environment, comprising a CPU (2) to which are connected a first display (3) and a second display (4); a first camera (5) adapted to shoot first images of said environment and to transmit them to the first display (3) by means of the CPU (2); and at least one second camera (6a, 6b) adapted to shoot images of objects in said environment and to transmit them to the CPU (2), which is adapted to extract virtual contents related to said objects from a database and to transmit them to the second display (4); wherein synchronization means are provided in the CPU (2) to synchronize the virtual contents of the objects with the images of the environment, so that the virtual contents are always placed at a respective object contained in the images of the environment.

Description

VISION DEVICE FOR AUGMENTED REALITY
Field of the invention
The present invention relates to a device for augmented reality.
Prior art
Vision devices for augmented reality are known, such as head mounted displays, for example. Such vision devices are provided with a screen, e.g. a display, on which virtual images are reproduced. One of the main disadvantages of such devices is that the user sees the real world through the screen, and the virtual images reproduced on the screen are simply superimposed on the user's natural vision. In this manner, the parallax error is considerable and it is very difficult, if not impossible, to align specific virtual images with specific real objects arranged in the real world seen by the user.
The known vision devices for augmented reality can be improved.
Summary of the invention
An object of the present invention is to provide a vision device for augmented reality which makes it possible to position virtual contents without parallax errors with respect to the real environment, in particular with respect to the object with which they are associated.
The present invention achieves this and other objects which will be apparent in light of the present description by providing a vision device for augmented reality according to claim 1 configured to associate virtual contents with objects arranged in an environment surrounding the vision device, wherein said virtual contents are stored in a database,
the vision device comprising an electronic control unit adapted to interface with said database and connected to: a first display; a second display, different from the first display; a first camera, adapted to shoot first images of said environment and to transmit said first images to the first display by means of the electronic control unit; at least one second camera, different from the at least one first camera, adapted to shoot second images of at least one object in said environment and to transmit them to the electronic control unit, which is adapted to extract virtual contents related to said at least one object from the database and to transmit them to the second display; wherein the second display is at least partially transparent;
wherein the first display and the second display are arranged in stacked order; wherein synchronization means are provided in said electronic control unit to synchronize the virtual contents of the at least one object with the first images so that they are always placed at said at least one object contained in said first images.
According to one aspect, the invention also provides a method for associating virtual contents with objects arranged in an environment by means of such vision device, comprising the following steps:
- providing a database in which virtual contents related to said objects are stored;
- shooting first images of said environment by means of the first camera and transmitting them to the electronic control unit;
- shooting second images of at least one object in said environment by means of the at least one second camera and transmitting them to the electronic control unit;
- extracting virtual contents related to the at least one object from the database by means of the electronic control unit;
- synchronizing the virtual contents of the at least one object with the first images, by means of the synchronization means, so as to transmit the first images to the first display and said virtual contents to the second display so that the virtual contents are always placed at said at least one object contained in said first images.
The vision device of the invention is advantageously applied to industrial control and workflow management systems and to sectors such as occupational safety, public security, cultural heritage, design, maintenance and health care.
Preferably, the device of the invention is of the total immersion type.
According to an embodiment, the vision device is a head mounted display, in particular a helmet. Such helmet can be used as personal protection equipment or PPE.
According to an advantageous aspect, the vision device of the invention can be easily integrated in existing helmets.
In particular, the vision device of the invention can be easily integrated in existing PPE, e.g. a helmet. In particular, a communication bus adapted to allow easy integration in PPE is provided. The bus has housings which allow the insertion of the elements or sensors chosen by the user for the particular scene.
Preferably, the device of the invention comprises a TETRA sensor. In this manner, the device can be used, for example, as a helmet by the police or by the personnel of an industrial system, or as an emergency helmet by fire fighters. The TETRA sensor can be used in particular in emergency situations: for example, when many communication networks, such as cellular networks, fail at once, TETRA networks always remain active. The TETRA network also allows group calls, making it possible for remote users to reach the crews in motion at the same time with simultaneous calls. A helmet with a TETRA sensor can also be used in industrial systems in which TETRA networks are present.
Furthermore, the helmet of the invention preferably comprises a WiMax communication device, which allows communications in industrial systems also when transmission antennas are not in line of sight (NLOS).
A further communication means preferably comprised in the helmet of the invention is Wi-Fi Mesh, which also allows the connection between different devices according to the invention. Advantageously, the Mesh channel transmission frequency is 5 GHz, which does not cause interference with any Wi-Fi networks present.
The dependent claims describe preferred embodiments of the invention.
Brief description of the figures
Further features and advantages of the present invention will be more apparent in light of the detailed description of preferred, but not exclusive embodiments of a vision device for augmented reality.
The description will be provided by way of non-limitative example, with reference to the accompanying drawing, also provided by way of non-limitative example, in which:
Fig. 1 shows a partially exploded diagrammatic view of an embodiment of the device of the invention.
Detailed description of a preferred embodiment of the invention
In general, the invention provides a vision device 1 for augmented reality configured to associate virtual contents with objects arranged in an environment surrounding the vision device, wherein said virtual contents are stored in a database,
the vision device 1 comprising an electronic control unit 2 adapted to interface with said database and connected to:
- a first display 3;
- a second display 4, different from the first display 3;
- a first camera 5, adapted to shoot first images of said environment and to transmit said first images to the first display 3 by means of the electronic control unit 2;
- at least one second camera 6a, 6b, different from the at least one first camera 5, adapted to shoot second images of at least one object in said environment and to transmit them to the electronic control unit 2, which is adapted to extract virtual contents related to said at least one object from the database and to transmit them to the second display 4;
wherein the second display 4 is at least partially transparent;
wherein the first display 3 and the second display 4 are arranged in stacked order; wherein synchronization means are provided in said electronic control unit 2 to synchronize the virtual contents of the at least one object with the first images so that they are always placed at said at least one object contained in said first images.
Typically, display 3 and display 4 are either aligned or stacked or superimposed. In other words, display 3 and display 4 are arranged in stacked order. By way of non-limitative example only, one face of the display 3 is substantially parallel or at least partially parallel to a face of the display 4.
Preferably, the vision device 1 is configured so that the display 3 is distal from the eyes of a user, while the display 4 is proximal to the eyes of a user. More in detail, the display 3 and the display 4 are aligned and positioned so that when in use the display 3 is distal from the user's eyes and the display 4 is proximal to the user's eyes. In general, display 3, or "scene display", and display 4, or "virtual display", are advantageously mutually distinct elements. Preferably, display 3 and display 4 are mutually physically separate. More in particular, display 3 and display 4 are mutually distanced and preferably there is no intermediate element between display 3 and display 4.
In particular, display 3 and display 4 are not two layers which belong to the same display.
Preferably, the scene display 3 and the virtual display 4 are adapted to receive mutually distinct data flows.
Typically, display 4 is at least partially transparent, so that the virtual contents can be viewed either superimposed on or near one or more corresponding objects, e.g. one corresponding object. Typically, said one or more objects, e.g. one object, are "real" objects shot by the camera 5 and viewed on the display 3.
Preferably, the first display 3 is also at least partially transparent. Preferably, when the display 3 is at least partially transparent, a totally or partially opaque element, or in other terms a filter, is provided. Said element is associated with the display 3 and covers its surface. In this manner, it is possible to provide a more immersive augmented reality.
Preferably, the first camera and the at least one second camera are configured to substantially shoot the same scene.
With reference to Fig. 1, a vision device 1 for augmented reality is provided. In this embodiment, the vision device is configured as a head mounted display.
The device 1 comprises a helmet 8, possibly provided with one or more fixing means 9 to constrain the device 1 to the head of a user. The vision device of the invention can thus also be provided in the form of a helmet 8.
An electronic control unit 2, or CPU, is provided, to which are connected, by means of a communication bus: a first display 3, also called scene display; a second display 4, also called virtual content display, or simply virtual display; a first camera 5, also called scene camera; and two second cameras 6a, 6b, also called tracking cameras.
Preferably, the scene camera 5 and the two tracking cameras 6a, 6b are either integrated in or fixed to the helmet 8, in particular at a front part thereof. Furthermore, the scene display 3 and the virtual display 4 are preferably substantially visors, arranged at the front part of the helmet 8. The CPU 2 and the communication bus are preferably either integrated in or fixed onto the helmet 8, e.g. laterally.
The scene camera 5 is configured to shoot the images of an environment surrounding the vision device, also named scene. In particular, the images are preferably continuous shots. By way of example, the images are shot at between 10 and 24 fps. The images shot by the scene camera 5 are sent to the CPU 2. In turn, the CPU 2 transmits the shot, and possibly processed, images to the scene display 3. Such transmission is preferably continuous, and thus a stream of transmitted images, or video stream, occurs. Therefore, the user who wears the device 1, by watching the scene display 3, sees the surrounding environment as framed by the scene camera 5.
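By way of illustration only, the capture-and-forward loop just described could be sketched as follows in C++ (the language in which the communication bus is stated to be written), here using OpenCV for camera access. The device index, the target frame rate and the window name are assumptions for the sketch, not details taken from the patent.

```cpp
#include <opencv2/opencv.hpp>

#include <chrono>
#include <thread>

int main() {
    const double kTargetFps = 24.0;  // illustrative choice within the stated 10-24 fps range
    const auto kFramePeriod =
        std::chrono::milliseconds(static_cast<long>(1000.0 / kTargetFps));

    cv::VideoCapture sceneCamera(0);  // scene camera 5; device index 0 is an assumption
    if (!sceneCamera.isOpened()) return 1;

    cv::Mat frame;
    while (sceneCamera.read(frame)) {        // shoot first images of the environment
        // Any processing by the CPU 2 would happen here before display.
        cv::imshow("scene display", frame);  // stand-in for the scene display 3
        if (cv::waitKey(1) == 27) break;     // ESC stops the stream
        std::this_thread::sleep_for(kFramePeriod);  // crude pacing to the target rate
    }
    return 0;
}
```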
The two tracking cameras 6a, 6b and CPU 2 cooperate to recognize at least one object, typically a plurality of objects, in the scene. The objects to be recognized are those with which one or more virtual contents are associated.
Preferably, the objects are recognized by means of an interface of the device 1 with a database (not shown) on a computer medium, e.g. a mass storage device. The database can be connected by wire to the CPU 2 or can be connected to the CPU by means of a wireless connection, e.g. Wi-Fi. According to a variant, indeed, device 1 also comprises a Wi-Fi antenna. Information related to the objects to be recognized is stored in the database. For example, the information can be related to the shape of the object, such as one or more images of the object to be recognized, or, when it is desired to recognize measurement instruments, the information can be related to the basic values of the instruments, such as full-scale, extreme values and digital numeric values.
Such information can be generated by means of a previous mapping of the scene and of the objects with which virtual contents are desired to be associated.
The CPU 2 is configured to recognize each object framed by the tracking cameras 6a, 6b, comparing it with the information on the objects contained in the database. The recognition is preferably implemented by means of one or more software programs implemented on the CPU.
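The patent does not mandate a specific comparison algorithm, so the following minimal sketch stands in with normalized template matching from OpenCV, assuming one stored reference image per object; the function name and the score threshold are illustrative.

```cpp
#include <opencv2/opencv.hpp>

// Returns true when the stored reference image of an object is found in the
// tracking-camera frame with a score above `threshold`; the top-left corner
// of the best match is written to `location`.
bool recognizeObject(const cv::Mat& frame, const cv::Mat& referenceImage,
                     double threshold, cv::Point& location) {
    CV_Assert(frame.cols >= referenceImage.cols &&
              frame.rows >= referenceImage.rows);
    cv::Mat scores;
    cv::matchTemplate(frame, referenceImage, scores, cv::TM_CCOEFF_NORMED);
    double bestScore = 0.0;
    cv::minMaxLoc(scores, nullptr, &bestScore, nullptr, &location);
    return bestScore >= threshold;  // e.g. 0.8, tuned per object and lighting
}
```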
As previously mentioned, virtual contents are associated with each object. Such virtual contents are pre-stored in the database and may be sent to the device 1, in particular to the CPU 2, remotely. In all cases, the CPU 2 can extract the contents from the database, which may be the same database where the object recognition information is stored or a different database. The CPU 2 is also adapted to associate the virtual contents with the real objects.
Non-limitative examples of virtual contents are: images, texts, numeric values, position coordinates, audio etc.
By means of the CPU 2, the virtual contents are synchronized with the images of the scene and then transmitted by the CPU 2 to the virtual display 4.
According to the invention, the images of the scene and the virtual contents are synchronized before being transmitted to the respective displays. Synchronization means are provided, in particular one or more software programs implemented on the CPU, configured to perform this synchronization.
Preferably, the synchronization occurs by comparing the image stream of the scene with the virtual contents and then adapting the virtual contents to the image stream of the scene. For example, the comparison may occur by means of pixel mapping, in particular by comparing sample groups or zones of pixels of the image stream of the scene and of the virtual contents.
The virtual contents may be adapted to the image stream of the scene, for example by copying one or more frames of the scene image stream onto the virtual layer.
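A sketch of this zone-based comparison and frame-copy adaptation might look as follows, under assumptions not taken from the patent: 8-bit grayscale frames of equal size, a 4x4 grid of 16-pixel sample zones, and a 0.9 agreement threshold.

```cpp
#include <opencv2/opencv.hpp>

// Mean per-pixel agreement (1 = identical) over a sparse grid of sample
// zones. Assumes 8-bit grayscale images of identical size, zonesPerSide >= 2.
double zoneAgreement(const cv::Mat& scene, const cv::Mat& virtualLayer,
                     int zonesPerSide = 4, int zoneSize = 16) {
    double total = 0.0;
    for (int zy = 0; zy < zonesPerSide; ++zy) {
        for (int zx = 0; zx < zonesPerSide; ++zx) {
            const int x = (scene.cols - zoneSize) * zx / (zonesPerSide - 1);
            const int y = (scene.rows - zoneSize) * zy / (zonesPerSide - 1);
            const cv::Rect zone(x, y, zoneSize, zoneSize);
            cv::Mat diff;
            cv::absdiff(scene(zone), virtualLayer(zone), diff);
            total += 1.0 - cv::mean(diff)[0] / 255.0;
        }
    }
    return total / (zonesPerSide * zonesPerSide);
}

// When the sampled zones diverge too much, re-seat the virtual layer by
// copying the current scene frame onto it, as the description suggests.
void synchronize(const cv::Mat& sceneFrame, cv::Mat& virtualLayer) {
    if (zoneAgreement(sceneFrame, virtualLayer) < 0.9)  // assumed threshold
        sceneFrame.copyTo(virtualLayer);
}
```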
By way of non-limitative example only, known software adapted to perform the synchronization are VLC Player and VIDYO, preferably used individually.
Advantageously, by virtue of the invention, the virtual contents are arranged in specific positions of the scene images and, in particular, the virtual contents are always arranged at the respective objects contained in the scene images. Substantially, the virtual contents are aligned to the observer's point of view of the scene. This occurs even if the user moves in the environment.
The problem of parallax error between scene and virtual contents is thus solved. In this embodiment, the scene display 3 and the virtual display 4 are of the transparent OLED type. Preferably, the images of the scene are not transmitted to the first display until an object with which one or more virtual contents are associated is recognized. When the images of the scene are not transmitted, the scene display 3 remains transparent. Preferably, after a predetermined time in which no object is recognized, e.g. comprised between 5 seconds and 10 seconds, the scene display 3 returns to being transparent, i.e. no images are transmitted to it.
Preferably, the scene display 3 is configured to allow a binocular stereoscopic vision, in particular up to 127°, e.g. comprised between 0 and 127°. Sub-ranges of such range may also be provided. In other words, the size of the scene display is such that when a user wears the device 1, the scene display 3 is in the user's range of vision. By way of non-limitative example only, when the scene display is designed to allow a 60° vision, it is sized so that its ends are respectively at +30° and -30° with respect to the standard line of sight (a well-known reference). The same applies to the other angles of the aforesaid range from 0 to 127°, as well as to the range described below referring to the virtual display.
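As a worked example of this sizing rule, assuming the visor sits at a known distance from the eye, simple trigonometry gives the width the display must have to span a given angle of vision:

```cpp
#include <cmath>
#include <cstdio>

constexpr double kPi = 3.14159265358979323846;

// Width a display must have to span `fovDeg` of vision when mounted
// `eyeDistanceMm` from the eye: its edges sit at +/- fovDeg/2 around the
// standard line of sight.
double displayWidthMm(double eyeDistanceMm, double fovDeg) {
    return 2.0 * eyeDistanceMm * std::tan(fovDeg / 2.0 * kPi / 180.0);
}

int main() {
    // 60 degree vision (+30/-30) with the visor assumed ~60 mm from the eye:
    std::printf("required width: %.1f mm\n", displayWidthMm(60.0, 60.0));  // ~69.3 mm
    return 0;
}
```

The eye-to-visor distance of 60 mm is an illustrative assumption; the same formula covers any angle in the 0 to 127° range.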
Preferably, the virtual display 4 is formed by two sectors 4a and 4b, or parts, mutually distinct and preferably also mutually separate. The device 1 and the sectors 4a, 4b are shaped so that each sector is substantially in front of each eye of the wearer of the device 1. Preferably, each sector allows a vision comprised between 0 and 60°, so that the second display 4 is configured to allow a binocular stereoscopic vision. Sub-ranges of such range may also be provided. Each tracking camera 6a, 6b is configured to transmit virtual contents to a respective sector 4a, 4b by means of the CPU.
According to a variant, there is provided only one tracking camera. In this case, the virtual display can have only one sector or two sectors.
Preferably, the vision device 1 also comprises a gyroscope and an accelerometer connected to the CPU by means of the communication bus. The gyroscope and the accelerometer are adapted to respectively determine an orientation and a displacement of the vision device 1 in the surrounding environment or in space in general. Furthermore, the device 1 preferably also comprises a GPS positioning device connected to the CPU 2 and adapted to perform a localization of the vision device 1 in the surrounding environment or in space in general. Typically, the gyroscope, accelerometer and the GPS device cooperate with one another.
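The patent does not specify how these sensors cooperate; one common scheme would be a complementary filter, sketched below for a single axis, which blends the gyroscope's integrated rate (smooth but drifting) with the accelerometer's gravity-derived angle (noisy but drift-free). All names, the axis convention and the blend factor are assumptions.

```cpp
#include <cmath>

constexpr double kRadToDeg = 180.0 / 3.14159265358979323846;

struct ImuSample {
    double gyroPitchRateDegS;  // angular rate about the pitch axis, deg/s
    double ax, az;             // accelerometer components, any consistent unit
    double dtS;                // time since the previous sample, seconds
};

// One complementary-filter step: integrate the gyro, then pull the estimate
// toward the accelerometer's gravity-derived pitch. An `alpha` close to 1
// trusts the gyro more between accelerometer corrections.
double fusePitchDeg(double previousPitchDeg, const ImuSample& s,
                    double alpha = 0.98) {
    const double gyroPitch = previousPitchDeg + s.gyroPitchRateDegS * s.dtS;
    const double accelPitch = std::atan2(s.ax, s.az) * kRadToDeg;
    return alpha * gyroPitch + (1.0 - alpha) * accelPitch;
}
```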
Preferably, device 1 comprises, either additionally or alternatively to the GPS positioning system, a third camera 7 configured to recognize the environment, e.g. a room, where the user wearing the device 1 is located. The recognition occurs preferably by performing a preliminary mapping of one or more environments and storing information on such environments in the database. The third camera 7 is particularly useful when the GPS signal is weak or missing.
Furthermore, such camera 7 preferably allows images to be taken at 360°, and it is also preferable that camera 7 is arranged on the top of the helmet 8.
Preferably, the device also comprises a fourth camera (not shown) configured to allow a remote reading of measuring devices or instruments arranged in the environment. Non-limitative examples of measuring devices are pressure gauges, valves or other devices provided with digital and/or analogue displays.
Variants of the device also comprise one or more of the following elements connected to the communication bus:
- a TETRA sensor,
- an atmospheric pressure sensor,
- a temperature sensor,
- a man-down sensor,
- a WiMAX communication device,
- a Wi-Fi Mesh communication device,
- an RFID reader which interacts, for example, with RFID tags associated with each object to which the virtual contents are assigned,
- an electronic nose,
- a perfume dispenser,
- a thermal camera.
Preferably, the communication bus can be customized according to the elements which are connected to it. For example, the communication bus software is written in the C++ language.
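As an illustration of such a customizable bus, the housings could be modeled as named slots into which only the elements chosen for the particular scene are plugged; the interface below is an assumption for the sketch, not the patent's actual API.

```cpp
#include <functional>
#include <map>
#include <string>
#include <utility>

// "Housings" modeled as named slots: the user plugs in only the elements
// needed for the particular scene, and the CPU addresses them by name.
class CommunicationBus {
public:
    using Handler = std::function<void(const std::string& payload)>;

    // Insert an element (sensor, radio, ...) into a named housing.
    void plug(const std::string& housing, Handler onMessage) {
        housings_[housing] = std::move(onMessage);
    }

    // Deliver data to the element in `housing`; silently ignored if empty.
    void send(const std::string& housing, const std::string& payload) {
        const auto it = housings_.find(housing);
        if (it != housings_.end()) it->second(payload);
    }

private:
    std::map<std::string, Handler> housings_;
};
```

With such an interface, fitting a helmet for a given duty could reduce to a few plug() calls, e.g. bus.plug("tetra", ...) and bus.plug("thermal_camera", ...), leaving the other housings empty.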
Advantageously, the device 1 of the invention allows tracking operations. In particular, it is possible to identify the observer's position with respect to the scene. In other words, such tracking process provides the position of the observer's point of view, i.e. the user who is wearing the device, in real time with respect to a global reference system conventionally associated with the real environment where the observer is found.
Furthermore, a tracking can also be performed substantially allowing the localization of the objects in the scene with respect to the observer's position.
For example, the following are possible according to the equipment of the device 1 (see the sketch after this list):
an inertial tracking: by means of one or more gyroscopic and accelerometric sensors;
a magnetic tracking: by means of RFID system preferably in high mode;
an optical tracking: by means of a 360° camera which shoots the previously mapped surrounding environment.
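A possible selection among these modes, based on the equipment actually fitted, is sketched below; the preference order and all names are illustrative assumptions.

```cpp
#include <stdexcept>

enum class TrackingMode { Inertial, Magnetic, Optical };

struct Equipment {
    bool imu;         // gyroscopic and accelerometric sensors
    bool rfidReader;  // RFID system
    bool camera360;   // 360 degree camera with a previously mapped environment
};

// Pick a tracking mode from what is actually fitted; the preference order
// shown here (optical, then magnetic, then inertial) is an assumption.
TrackingMode chooseTracking(const Equipment& e) {
    if (e.camera360)  return TrackingMode::Optical;
    if (e.rfidReader) return TrackingMode::Magnetic;
    if (e.imu)        return TrackingMode::Inertial;
    throw std::runtime_error("no tracking equipment available");
}
```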
The invention also provides a method for associating virtual contents with objects arranged in an environment by means of such vision device according to the invention, comprising the following steps:
- providing a database in which virtual contents related to the objects in the environment are stored;
- shooting first images of the environment by means of the first camera 5 and transmitting them to the electronic control unit 2;
- shooting second images of at least one object in the environment by means of the tracking cameras 6a, 6b and transmitting them to the electronic control unit 2;
- extracting virtual contents related to the at least one object from the database by means of the electronic control unit 2;
- synchronizing the virtual contents of the at least one object with the first images, by means of the synchronization means, so as to transmit the first images to the scene display 3 and said virtual contents to the virtual display 4 so that the virtual contents are always placed at said at least one object contained in said first images.
It is worth noting that the device 1 does not necessarily need to be helmet-shaped. For example, the device may be substantially shaped as glasses. In this case, the control unit and the other elements other than displays may be fixed to other parts of the user's body.

Claims

1. A vision device (1) for augmented reality, configured to associate virtual contents with objects arranged in an environment surrounding the vision device, wherein said virtual contents are stored in a database,
the vision device (1) comprising an electronic control unit (2) adapted to interface with said database and connected to:
- a first display (3);
- a second display (4), different from the first display (3);
- a first camera (5), adapted to shoot first images of said environment and to transmit said first images to the first display (3) by means of the electronic control unit (2);
- at least one second camera (6a, 6b), different from the at least one first camera (5), adapted to shoot second images of at least one object in said environment and to transmit them to the electronic control unit (2), which is adapted to extract virtual contents related to said at least one object from the database and to transmit them to the second display (4);
wherein the second display (4) is at least partially transparent;
wherein the first display (3) and the second display (4) are arranged in stacked order;
wherein synchronization means are provided in said electronic control unit (2) to synchronize the virtual contents of the at least one object with the first images so that they are always placed at said at least one object contained in said first images.
2. A vision device (1) according to claim 1, wherein said electronic control unit (2) is configured to recognize said at least one object by comparing the second images with third images stored in the database.
3. A vision device (1) according to claim 1 or 2, comprising a gyroscope and an accelerometer adapted to determine an orientation and a displacement of the vision device (1) in the surrounding environment, respectively.
4. A vision device (1) according to any one of the preceding claims, comprising a GPS location device connected to the electronic control unit (2) and adapted to localize the vision device (1) in the surrounding environment.
5. A vision device (1) according to any one of the preceding claims, comprising a mass storage device connected to the electronic control unit (2) and wherein said database is stored in said mass storage device.
6. A vision device (1) according to any one of the preceding claims, wherein the first display (3) is configured to allow a binocular stereoscopic vision up to 127°.
7. A vision device (1) according to any one of the preceding claims, wherein the second display (4) comprises two distinct sectors (4a, 4b) to allow a binocular stereoscopic vision, each distinct sector (4a, 4b) being configured to allow a vision up to 60°.
8. A vision device (1) according to any one of the preceding claims, wherein there are provided two second cameras (6a, 6b).
9. A vision device (1) according to any one of the preceding claims, wherein the first display (3) and the second display (4) are of the transparent OLED type.
10. A vision device (1) according to any one of the preceding claims, wherein there is provided a third camera configured to allow a remote reading of measuring devices arranged in the environment.
11. A helmet (8) comprising a vision device (1) according to any one of the preceding claims, wherein the first display (3), the second display (4), the first camera (5) and the at least one second camera (6a, 6b) are fixed to the helmet (8).
12. A method for associating virtual contents with objects arranged in an environment by means of a vision device (1) according to any one of the preceding claims, the method comprising the following steps:
- providing a database in which virtual contents related to said objects are stored;
- shooting first images of said environment by means of the first camera (5) and transmitting them to the electronic control unit (2);
- shooting second images of at least one object in said environment by means of the at least one second camera (6a, 6b) and transmitting them to the electronic control unit (2);
- extracting virtual contents related to the at least one object from the database, by means of the electronic control unit (2);
- synchronizing the virtual contents of the at least one object with the first images, by means of the synchronization means, so as to transmit the first images to the first display (3) and said virtual contents to the second display (4) so that the virtual contents are always placed at said at least one object contained in said first images.
PCT/IB2017/051284 2016-03-04 2017-03-06 Vision device for augmented reality WO2017149520A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IT102016000022788 2016-03-04
ITUA2016A001350A ITUA20161350A1 (en) 2016-03-04 2016-03-04 VISION DEVICE FOR INCREASED REALITY

Publications (1)

Publication Number Publication Date
WO2017149520A1 (en)

Family

ID=56203488

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2017/051284 WO2017149520A1 (en) 2016-03-04 2017-03-06 Vision device for augmented reality

Country Status (2)

Country Link
IT (1) ITUA20161350A1 (en)
WO (1) WO2017149520A1 (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1168033A1 (en) * 2000-06-19 2002-01-02 Aerospatiale Matra Missiles Helmet mounted viewing device
US20110164047A1 (en) * 2010-01-06 2011-07-07 Apple Inc. Transparent electronic device
WO2016014234A1 (en) * 2014-07-22 2016-01-28 Sony Computer Entertainment Inc. Virtual reality headset with see-through mode

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10318811B1 (en) 2018-04-22 2019-06-11 Bubbler International Llc Methods and systems for detecting objects by non-visible radio frequencies and displaying associated augmented reality effects
WO2022240319A1 (en) * 2021-05-11 2022-11-17 Хальдун Саид Аль-Зубейди Panoramic view receiving device

Also Published As

Publication number Publication date
ITUA20161350A1 (en) 2017-09-04

Similar Documents

Publication Publication Date Title
US10614581B2 (en) Deep image localization
JP6160154B2 (en) Information display system using head-mounted display device, information display method using head-mounted display device, and head-mounted display device
US8780178B2 (en) Device and method for displaying three-dimensional images using head tracking
US8912980B2 (en) Image processing device, image processing method, and image processing system
US7817104B2 (en) Augmented reality apparatus and method
CN108156441A (en) Visual is stablized
US20210257084A1 (en) Ar/xr headset for military medical telemedicine and target acquisition
CN108605166A (en) Enhancing is presented in personalized using augmented reality
EP3629309A2 (en) Drone real-time interactive communications system
JP2017513434A (en) Automatic definition of system behavior or user experience by recording, sharing, and processing information related to wide-angle images
US11119567B2 (en) Method and apparatus for providing immersive reality content
CN106537233A (en) Thermal imaging accessory for a head-mounted smart device
KR20200106547A (en) Positioning system for head-worn displays including sensor integrated circuits
CN109964481B (en) Experience sharing system
TW201341848A (en) Telescopic observation for virtual reality system and method thereof using intelligent electronic device
WO2017149520A1 (en) Vision device for augmented reality
CN105528065A (en) Displaying custom positioned overlays to a viewer
KR101906560B1 (en) Server, method, wearable device for supporting maintenance of military apparatus based on correlation data between object in augmented reality
KR101790755B1 (en) Navigation service providing method though interworking with user device and smart glass thereof
JP2012108793A (en) Information display system, device, method and program
JP2017046233A (en) Display device, information processor, and control method of the same
CN110488489B (en) Eye registration for a head-mounted housing
CN104239877B (en) The method and image capture device of image procossing
CN105866966A (en) Panoramic three-dimensional stereoscopic glasses automatically switching visual angle
US20140285484A1 (en) System of providing stereoscopic image to multiple users and method thereof

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17719318

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17719318

Country of ref document: EP

Kind code of ref document: A1