CN104603865A - A system worn by a moving user for fully augmenting reality by anchoring virtual objects - Google Patents

Info

Publication number
CN104603865A
Authority
CN
China
Prior art keywords
virtual objects, real world, virtual, world, real
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201280074650.XA
Other languages
Chinese (zh)
Inventor
丹尼尔·格瑞贝格
加比·萨鲁斯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Imagine Mobile Augmented Reality Ltd
Original Assignee
Imagine Mobile Augmented Reality Ltd
Application filed by Imagine Mobile Augmented Reality Ltd
Publication of CN104603865A
Legal status: Pending

Classifications

    • G09G3/001 — Control arrangements or circuits for visual indicators other than cathode ray tubes, using specific devices not provided for in groups G09G3/02–G09G3/36, e.g. using an intermediate record carrier such as a film slide; projection systems; display of non-alphanumerical information
    • G09G3/003 — Such arrangements used to produce spatial visual effects
    • G06F1/163 — Wearable computers, e.g. on a belt
    • G06F3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 — Head tracking input arrangements
    • G06T19/006 — Mixed reality
    • G09G2370/022 — Centralised management of display operation, e.g. in a server instead of locally
    • G09G2370/12 — Use of DVI or HDMI protocol in interfaces along the display data pipeline
    • G09G2370/16 — Use of wireless transmission of display information
    • H04N23/6812 — Motion detection based on additional sensors, e.g. acceleration sensors
    • H04N5/2723 — Insertion of virtual advertisement; replacing advertisements physically present in the scene by virtual advertisement

Abstract

A system to anchor virtual objects to real-world objects visually, functionally and behaviorally, to create an integrated, comprehensive, rational augmented reality environment, the environment comprising at least the relative location, perspective and viewing angle of the virtual objects in the real world, and the interaction of the virtual objects with the real world and with other virtual objects. The system includes an input device having a built-in interface, which receives data from a High-Definition Multimedia Interface (HDMI) adapter, or any other communication device, and returns images to a microprocessor; an HDMI compact audio/video adapter for transferring encrypted, uncompressed digital audio/video data from an HDMI-compliant device; and a head-mounted display worn by a user, housing at least one micro-camera and an inertial measurement unit (IMU). The system also includes a microprocessor/software unit, which processes the data input from the at least one micro-camera and the IMU, and a power source.

Description

A system worn by a moving user for fully augmenting reality by anchoring virtual objects
Technical field
The present invention relates generally to augmented reality systems, and in particular to a system that anchors virtual objects to real-world objects visually, functionally and behaviorally, both at fixed positions and while the user/observer moves about without loss of the environment, so as to create an integrated, comprehensive, rational augmented reality environment. The environment comprises the relative location, three-dimensional perspective and viewing angles of the virtual objects in the real world, and the interaction between the virtual objects and the real world and among multiple virtual objects. The virtual objects of multiple users can also interact, the system providing each of the users with an exchange with the others.
Background of the invention
Augmented reality (AR) is a live, direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics or GPS data. AR is related to the more general concept of mediated reality (MR), in which a view of reality is modified, rather than merely augmented, by a computer. The technology works by enhancing one's current perception of reality. By contrast, virtual reality replaces the real world with a simulated one.
Augmentation is conventionally performed in real time and in semantic context with environmental elements, such as sports scores shown on TV during a match. With the help of advanced AR technology, such as computer vision and object recognition, the information about the real world surrounding the user becomes interactive and digitally manipulable. Artificial information about the environment and its objects can be overlaid on the real world.
Research explores the application of computer-generated imagery to live video streams to give the sensation of an augmented world. AR technology includes head-mounted displays and virtual retinal displays for visualization purposes, and the construction of controlled environments enabled by sensors and actuators.
See-through glasses are prior art. They comprise electro-optical devices and a pair of transparent lenses onto which a display screen is projected within the user's field of view, appearing as a real screen at effectively infinite focal distance in the real world, so that the displayed image can be seen even though the lenses sit very close to the eyes. Because each eye has its own see-through screen, the displayed image can be genuinely stereoscopic. Since black reflects no light, black appears transparent in see-through glasses, and objects on a black screen stand out and are seen as if they were actually present.
Total Immersion is an augmented reality company whose D'Fusion technology uses a black-frame technique to merge interactive real-time 3D graphics into a live video feed.
It would therefore be advantageous to provide a wearable solution that overcomes the limited practicality of augmented reality systems and preserves realism while the user/observer is in motion, thereby making the integration of virtual and real elements in the user's environment richer and more true to life.
Summary of the invention
Accordingly, it is a principal object of the present invention to make the integration of virtual and real elements in the user/observer's environment richer and more true to life.
It is another principal object of the present invention to anchor selected virtual objects to the real world visually, functionally and behaviorally, both at fixed positions and in motion, so as to create a complete, comprehensive, rational augmented reality environment.
It is another principal object of the present invention to provide a system (hardware) and method (algorithms/software) worn by a moving user/observer, which superimposes computer-generated imagery onto the real world through see-through glasses: the data input from the system is integrated and refined by the method, and the resulting output is displayed to the user/observer through the see-through glasses.
It is another principal object of the present invention to provide IMU stabilization using a compensation formula, moving the CGI within a black frame so that it holds a particular orientation in a coordinate system (soft anchoring).
It is another principal object of the present invention to provide dynamic 3D computer vision (CV) integration of computer-generated imagery (CGI) into the real world (hard anchoring), using real-world objects as markers, so that virtual objects are realistically merged into the dynamic 3D scene.
It is a further principal object of the present invention to process images by perspective-aware computer vision.
It is yet another object of the present invention to enable software applications by establishing a dynamic database of virtual objects that interact realistically with the rest of the world.
One set of logic rules processes the computer-generated image according to defined interactive characteristics, including location, perspective, function and behavior.
Another set of logic rules creates, for each individual sharing the virtual world, a sequence of behaviors and corresponding image processing according to that individual's own perspective and viewing angle.
A software development kit (SDK) comprises applications allowing any developer to create any kind of integration between the real world and a virtual world.
The source image is the computer-generated image (CGI), as distinct from the black frame in which it is displayed. The video image received by the camera mounted on the see-through device is the real-world reference: from it, using computer vision applications and corresponding algorithms, the software identifies real objects as markers in order to "hard anchor" the CGI (i.e., tightly couple a virtual object to said real object) to the real world. This source image (CGI) is distinct from the reference image used for computer vision anchoring.
Soft anchoring anchors to a specified point in space, independent of any distortion in the real world and environment. By contrast, hard anchoring anchors to an object in the real world, pin-pointed by a marker; it is the marker, rather than the real-world object itself, that accounts for spatial distortions and transformations, perspective, occlusions and so on.
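The soft/hard anchoring distinction above can be captured as two small data structures. This is a minimal illustrative sketch, not taken from the patent; all names and fields are assumptions.

```python
from dataclasses import dataclass

@dataclass
class SoftAnchor:
    """Anchors CGI to a fixed direction in a world coordinate frame.

    Independent of any real-world object; the IMU alone keeps the
    image stable against head motion."""
    azimuth_deg: float
    elevation_deg: float

@dataclass
class HardAnchor:
    """Pins CGI to a real-world object identified by a visual marker.

    The marker's pose is re-estimated by computer vision every frame,
    so the anchor follows the object through translation, rotation
    and perspective change."""
    marker_id: str
    offset_m: tuple  # (x, y, z) offset from the marker, in metres

def anchor_kind(anchor) -> str:
    # Hard anchors track an object; soft anchors track a direction.
    return "hard" if isinstance(anchor, HardAnchor) else "soft"
```

In a renderer, soft anchors would be resolved from IMU orientation alone, while hard anchors would additionally require a marker detection each frame.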
Because the system must enable both soft and hard anchoring while in motion (body and head movements), this motion is measured and compensated for by the IMU device mounted on the viewing apparatus.
Another principal object of the present invention is the description, within the real world, of the relative position, perspective and viewing angle of each virtual object, and of the interaction between virtual objects and the real world and among virtual objects.
A further principal object of the present invention is the anchoring of virtual objects into the real world using see-through glasses.
Some examples:
3D image view and anchoring: if an observer walks around a sculpture while watching it, he will see the sculpture from different angles according to his position relative to it.
Changing perspective in motion: if an observer sees a virtual road sign while driving his vehicle, the size of the sign grows as a function of the distance between the observer and the sign as he approaches it.
Physical characteristics of virtual objects: if a tennis ball forcefully strikes a virtual glass of water, the glass shatters from the impact and the water spills out. But if a light spitball strikes the same virtual glass of water, the glass remains upright despite the hit. That is, the virtual glass and the two kinds of ball access online, "wiki-like" reference information (the outcome of the current event) about relative weight, size, angle of impact and other relevant physical data.
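The tennis-ball/spitball example amounts to comparing impact energy against a breakage threshold. A minimal sketch under assumed masses, speeds and threshold (none of these values appear in the patent):

```python
def impact_shatters(mass_kg: float, speed_ms: float,
                    shatter_energy_j: float = 1.0) -> bool:
    """Decide whether a projectile breaks a virtual glass.

    Compares the projectile's kinetic energy E = 1/2 * m * v^2 with
    an assumed illustrative shatter threshold for the glass."""
    kinetic_energy = 0.5 * mass_kg * speed_ms ** 2
    return kinetic_energy >= shatter_energy_j

# A tennis ball (~58 g) hit hard vs. a light spitball tossed gently.
tennis_ball_breaks = impact_shatters(0.058, 20.0)  # ~11.6 J
spitball_breaks = impact_shatters(0.001, 3.0)      # ~0.0045 J
```

A real implementation would draw the masses and thresholds from the shared physical-property database the patent describes rather than hard-coding them.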
A further principal object of the present invention is to provide a head-mounted display comprising see-through glasses, a virtual retinal display, or any other device or technology that allows computer-generated imagery (CGI) to be superimposed on a view of the real world.
A head-mounted display (HMD) is worn on the head or as part of a helmet, and has a small display optic in front of one eye (monocular HMD) or each eye (binocular HMD).
A typical HMD has one or two small displays with lenses and semi-transparent mirrors embedded in a helmet, eyeglasses (also known as data glasses) or a visor. The display units are miniaturized and may include a cathode ray tube (CRT), liquid crystal display (LCD), liquid crystal on silicon (LCoS) or organic light-emitting diode (OLED). In a preferred embodiment, multiple micro-displays are implemented to increase total resolution and field of view.
This equipment allows computer-generated imagery (CGI) to be superimposed on a real-world view. Combining the real-world view with CGI can be done by projecting the CGI through a partially reflective mirror while viewing the real world directly, a method often called optical see-through. It can also be done electronically, by receiving video from a camera and mixing it electronically with CGI, a method often called video see-through.
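The video see-through mixing described above, combined with the document's convention that black is the transparent key, can be sketched per pixel. This is an illustrative simplification with assumed RGB-tuple pixels, not the patent's implementation:

```python
def composite_video_see_through(camera_px, cgi_px):
    """Per-pixel video see-through mix.

    CGI pixels replace the camera feed wherever the CGI is not pure
    black; black acts as the transparent 'keep the real world' key,
    as in the black-frame scheme."""
    out = []
    for cam, cgi in zip(camera_px, cgi_px):
        out.append(cgi if cgi != (0, 0, 0) else cam)
    return out

camera = [(10, 10, 10), (20, 20, 20), (30, 30, 30)]
cgi = [(0, 0, 0), (255, 0, 0), (0, 0, 0)]  # one red virtual pixel
mixed = composite_video_see_through(camera, cgi)
```

In optical see-through, the same black-is-transparent rule is enforced by physics: black emits no light through the combiner, so the real world shows through.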
A virtual retinal display (VRD), also known as a retinal scan display (RSD) or retinal projector (RP), is a display technology that draws a raster display (like a television's) directly onto the retina of the user's eye. The user sees what appears to be a conventional display floating in space in front of them.
The present invention provides computer-generated imagery integrated into the real-world view seen through the see-through display glasses worn by the observer. The virtual objects are seen around the observer, like real objects in the real world presented on the glasses being worn, visible to his eyes alone.
A further principal object of the present invention is to let third parties develop applications that integrate the virtual world with the real world anywhere. The method involves defining, for each virtual object relevant to a particular application, the nature of its interaction with the rest of the world.
A further principal object of the present invention is to provide a software solution that handles the data input provided by the hardware devices integrated on the glasses. The software uses this data input, with a variety of methods, to anchor the virtual objects seen through the glasses into the real world.
A further principal object of preferred embodiments of the present invention is to anchor a virtual object to a specific point in space using a combination of twin cameras mounted on the display unit and an inertial measurement unit (IMU) integrated into the display unit.
An inertial measurement unit, or IMU, is an electronic device that measures and reports a craft's velocity, orientation and gravitational forces using a combination of accelerometers and gyroscopes. IMUs are typically used to maneuver aircraft, including unmanned aerial vehicles (UAVs) among many others, and spacecraft such as space shuttles, satellites and landers. Recent developments allow the production of IMU-enabled GPS devices: the IMU allows a GPS receiver to keep working when GPS signals are unavailable, such as in tunnels or inside buildings, or when electronic interference is present. A wireless IMU is known as a WIMU.
The IMU is the main component of inertial navigation systems used in aircraft, spacecraft, watercraft and guided missiles. The data collected from the IMU's sensors allow a computer to track the craft's position by dead reckoning: the IMU detects the current rate of acceleration using accelerometers, and detects changes in rotational attributes such as pitch, roll and yaw using one or more gyroscopes.
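Dead reckoning from IMU data, as described above, is double integration of acceleration. A one-dimensional sketch (illustrative only; a real pipeline fuses gyroscope and GPS data to bound drift):

```python
def dead_reckon(accel_samples, dt):
    """1-D dead reckoning from accelerometer samples.

    Integrates acceleration to velocity, then velocity to position.
    No drift correction is applied, so errors accumulate over time."""
    v = 0.0
    x = 0.0
    for a in accel_samples:
        v += a * dt   # velocity update
        x += v * dt   # position update
    return x, v

# Constant 1 m/s^2 acceleration for 1 s, sampled in 10 steps.
pos, vel = dead_reckon([1.0] * 10, 0.1)
```

The discrete sum slightly overestimates the continuous answer (0.55 m vs 0.5 m here), which is why real inertial navigation uses finer timesteps and periodic corrections.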
To stabilize the displayed image at a certain point in space, the source image is reduced in size and projected into a full-size black frame within which it floats. This floating characteristic of the black frame can be defined as a "black hydraulic frame" property. The black frame containing the source image is sent to the glasses projector for display. Since black reflects no light, it appears transparent in the glasses: the object on the black screen stands out and is seen by itself, i.e., the observer sees only the source image and not the black frame.
The source image is positioned within the black frame according to the IMU data input using a compensation formula, e.g., moving it forward to compensate a backward motion. In this way, the source image seen through the glasses is stabilized at a certain point in space within the user's field of view. Virtual objects are anchored to the real world visually and functionally using computer vision applications and the cameras integrated on the glasses.
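The compensation formula above amounts to shifting the shrunken source image inside the black frame opposite to the measured head rotation. A minimal sketch with an assumed display calibration constant (`px_per_deg` is not from the patent):

```python
def stabilized_offset(yaw_deg, pitch_deg, px_per_deg=10.0):
    """Counter-shift of the source image inside the black frame.

    The CGI moves opposite to the measured head rotation, so the
    virtual object appears fixed at a point in space (soft anchoring).
    px_per_deg maps rotation to display pixels and is an assumed
    calibration constant."""
    dx = -yaw_deg * px_per_deg    # head turns right -> image shifts left
    dy = -pitch_deg * px_per_deg  # head tilts up -> image shifts down
    return dx, dy
```

The shrunken source image plus full-size frame exists precisely to leave room for these shifts without the image leaving the display.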
The combination of cameras and IMU makes hard anchoring of virtual objects to real objects feasible even while the observer is in motion, because the system can separate object motion from observer motion.
Computer vision (CV) is a field that covers methods for acquiring, processing and understanding images. In general, CV acquires multi-dimensional data from the real world in order to produce numerical or symbolic information on which decisions can be based. Recent CV developments have duplicated the abilities of human vision by electronically perceiving and understanding images. Such image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics and learning theory.
As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner. Sub-domains of computer vision include scene reconstruction, event detection, video tracking, object recognition, learning, indexing, motion estimation and image restoration. In most practical CV applications, computers are pre-programmed to solve a particular task, but methods based on learning are now becoming increasingly common.
According to preferred embodiments of the present invention, CV is used to identify structures and objects in the real world and, using the characteristics associated with each virtual object and the set of rules governing its visual and functional behavior in the real world, including its various viewing angles and perspectives, to create markers in the real world and anchor the virtual objects to those markers.
For example:
A virtual teacup cannot float in mid-air; it should rest on a solid surface.
If a virtual teacup sits on a real table and someone rotates the real table, the teacup should rotate correspondingly with the table, and the 3D teacup should be seen rotating from every viewing angle.
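The rotating-table example reduces to composing the cup's fixed offset with the table's tracked rotation. A 2-D sketch of that transform (illustrative; real marker tracking would supply a full 3-D pose):

```python
import math

def rotate_about(point, pivot, angle_deg):
    """Rotate a 2-D point about a pivot.

    The virtual teacup keeps its offset from the table centre as the
    real table (the marker) turns."""
    ang = math.radians(angle_deg)
    dx, dy = point[0] - pivot[0], point[1] - pivot[1]
    return (pivot[0] + dx * math.cos(ang) - dy * math.sin(ang),
            pivot[1] + dx * math.sin(ang) + dy * math.cos(ang))

table_centre = (0.0, 0.0)
cup = (0.3, 0.0)  # cup 30 cm from the table centre (assumed layout)
cup_after = rotate_about(cup, table_centre, 90.0)  # table turned 90 deg
```

Because the cup's pose is expressed relative to the marker, any motion of the table is applied to the cup automatically, which is the essence of hard anchoring.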
Two kinds of behavior need to be distinguished:
1. The user is observing a particular gaze area. His head moves naturally and smoothly. Virtual objects seen within that gaze area remain fixed, as if they were real objects. This is because they (the source CGI images) are displayed floating in the black frame.
2. The user looks around and sees a table. On the table is a virtual cup of tea. This is because the algorithm has determined that this particular cup should be located on this particular table, with a fixed geographic position. When he looks around, the algorithm recognizes this particular table as a marker by computer vision and displays this particular cup of tea, i.e., the CGI.
The present invention relies on 2D information, 3D static information and dynamic computer vision (CV). Binocular computer vision, with two cameras integrated on the glasses, anchors virtual objects to the real world with respect to 3D motion, viewing angle, vision and function. In some respects, CV is the inverse of computer graphics: where computer graphics produces image data from 3D models, CV often produces 3D models from image data.
In binocular CV, two cameras are used together. Using two cameras has several benefits over one. First, it gives a wider field of view; for example, humans have a maximum horizontal field of view of about 200 degrees with two eyes, of which about 120 degrees make up the binocular field seen by both eyes, flanked on either side by a monocular field (seen by one eye only) of about 40 degrees. Second, it gives binocular summation, improving the ability to detect faint objects. Third, it enables stereopsis, in which the parallax provided by the two cameras' different positions gives precise depth perception.
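The depth perception mentioned above follows the classic pinhole stereo relation Z = f·B/d. A sketch with an assumed camera rig (700 px focal length, 6.5 cm baseline; these numbers are illustrative, not from the patent):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo depth: Z = f * B / d.

    The farther the object, the smaller the disparity between the
    two cameras' images; disparity must be positive."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Assumed rig: 700 px focal length, 6.5 cm (eye-like) baseline.
z = depth_from_disparity(700.0, 0.065, 35.0)  # a 35 px disparity
```

This per-point depth is what lets the system place a hard anchor at the correct distance, not just the correct direction.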
Implementing an optical character recognition (OCR) application
Optical character recognition (OCR) is the mechanical or electronic conversion of scanned images of handwritten, typewritten or printed text into machine-encoded text. It is widely used as a form of data entry from some sort of original paper data source, whether documents, sales receipts, mail, or any number of other printed records. It is crucial to the computerization of printed texts so that they can be electronically searched, stored more compactly, displayed on-line, and used in machine processes such as machine translation, text-to-speech and text mining. OCR is a field of research in pattern recognition, artificial intelligence and computer vision.
Implementing a gesture control application
Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state, but commonly originate from the face, the hands or the voice. Current focuses in the field include emotion recognition from the face and hand gesture recognition. Enhanced results have been achieved using cameras and CV algorithms to interpret gestures. The identification and recognition of posture, gait, proxemics and human behaviors are also subjects of gesture recognition techniques.
Gesture recognition can be seen as a way for computers to begin to understand human body language, thus building a richer bridge between machines and humans than primitive text user interfaces or even graphical user interfaces (GUIs), which still limit the majority of input to keyboard and mouse.
Gesture recognition enables humans to communicate with the machine (HMI) and interact naturally without any mechanical devices. Using the concept of gesture recognition, it is possible to point a finger at the computer screen so that the cursor moves accordingly. This could potentially make conventional input devices such as mouse, keyboard and touch screen redundant. Gesture recognition can be achieved with techniques from CV and image processing.
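The finger-to-cursor idea above is, at its simplest, a coordinate mapping from the camera frame to the screen. A sketch with assumed frame and screen sizes (the detection step itself is out of scope and would come from a CV pipeline):

```python
def fingertip_to_cursor(finger_xy, cam_size, screen_size):
    """Map a fingertip position detected in the camera frame to
    screen cursor coordinates by proportional scaling.

    The x axis is mirrored so the cursor follows the finger the way
    a mirror image would."""
    fx, fy = finger_xy
    cw, ch = cam_size
    sw, sh = screen_size
    x = (1.0 - fx / cw) * sw  # mirrored horizontally
    y = (fy / ch) * sh
    return (x, y)

# Fingertip found at (160, 120) in a 640x480 frame, 1920x1080 screen.
cursor = fingertip_to_cursor((160, 120), (640, 480), (1920, 1080))
```

A production system would add smoothing and hysteresis so small detection jitter does not make the cursor tremble.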
The present invention can be implemented using any smartphone as the computer, any other computer system, or cloud computing.
Cloud computing is the delivery of computing as a service rather than a product, whereby shared resources, software and information are provided to computers and other devices as a utility, like the electricity grid, over a network, typically the Internet. Cloud computing typically relies on centralized services with the data, software and computation available over application programming interfaces (APIs) on an open overlay network. It has considerable overlap with software as a service (SaaS).
End users access cloud-based applications through a web browser or a lightweight desktop or mobile app, while the business software and data are stored on servers at a remote location. Cloud application providers strive to deliver the same or better service and performance than if the software programs were installed locally on the end-user computers. At the foundation of cloud computing is the broader concept of converged infrastructure (CI) and shared services. This type of data-center environment allows enterprises to get their applications up and running faster, with easier manageability and less maintenance, and enables information technology (IT) resources, such as servers, storage and networking, to be adjusted more rapidly to meet fluctuating and unpredictable business demand.
The present invention provides for the execution of single-user applications and multi-user applications, and the use of wireless and self-powering technologies.
The present invention provides a software development kit (SDK). The SDK includes a database containing, for each and every virtual object, the specific definitions of the features relevant to its visual, functional and behavioral characteristics, including its interaction features with the rest of the virtual and real world. The SDK includes a set of logic rules that associate characteristics of any kind with application development integrating the virtual world and the real world.
Information sharing between individuals is defined to produce a common virtual world integrated with the real world. For example:
Imagine watching a tennis match from the side. The observer looks to his left and sees player A hitting the ball. He then looks to his right and sees player B hitting it back. Player B's reaction is caused by player A's action, and so on. The interplay between the two players, each in turn causing and being affected by the speed and direction the ball received from the opponent's previous stroke, makes the tennis match a series of interactions between the two players.
If the two players are operated by two different computers, the interaction between them becomes the result of the interaction between two applications executed by two different computers. The tennis match then becomes a series of interactions between two separate applications.
In the specific example of the tennis match, the ball should be presented in both applications in a sequence reflecting actions and reactions with respect to speed and direction, calculated in each application as the result of the previous actions, with an interaction logic created independently of the two applications. In virtual intelligence terms, this interaction logic represents the "intelligence of the ball".
According to preferred embodiments of the present invention, this is a basic requirement for implementing mobile augmented reality. The "intelligence of the ball" logic enables mobile augmented reality both as an individual solution for each user and as a common solution for all users, allowing many users to share a common experience, each from his own unique viewpoint and with his own unique sequence of interactions with the overall experience and with every other participant.
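The "intelligence of the ball" described above can be sketched as one authoritative shared state that both players' applications update in turn, while each user renders it from their own viewpoint. This is an illustrative toy model; all class and field names are assumptions.

```python
class SharedBall:
    """Minimal 'intelligence of the ball': one authoritative state
    that both players' applications read and update in turn."""

    def __init__(self):
        self.speed = 0.0
        self.direction = 0   # +1 toward player B, -1 toward player A
        self.rally = []      # ordered log of (player, speed) strokes

    def hit(self, player, speed):
        # A stroke sends the ball away from whoever hit it; the
        # opponent's next stroke reacts to this shared state.
        self.direction = +1 if player == "A" else -1
        self.speed = speed
        self.rally.append((player, speed))

ball = SharedBall()
ball.hit("A", 30.0)  # A serves toward B
ball.hit("B", 25.0)  # B returns toward A
```

Keeping this logic outside either player's application is what makes the rally a consistent common experience rather than two divergent simulations.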
The present invention uses a dedicated microprocessor to drive and execute applications, including IMU anchoring, computer vision anchoring, external computer system interfacing, application interface connections, OCR applications, gesture control applications and other applications related to the present invention.
These and other features and advantages of the present invention will be further understood from the following illustrative and non-limiting description of preferred embodiments.
Brief Description of the Drawings
In order to understand the present invention and to see how it may be carried out in practice, preferred embodiments will now be described, by way of non-limiting example only, with reference to the accompanying drawings, in which:
Fig. 1 is an exemplary illustration of a system for anchoring virtual objects to the real world, constructed according to the principles of the present invention;
Fig. 2 is a functional structure schematic diagram of a system for anchoring virtual objects to the real world, constructed according to the principles of the present invention;
Fig. 3 is an overall block diagram of a system for anchoring virtual objects to the real world, constructed according to the principles of the present invention; and
Fig. 4 is a wireless-connection block diagram of a system for anchoring virtual objects to the real world, constructed according to the principles of the present invention.
Detailed Description of Embodiments
The principles and operation of the present invention may be better understood with reference to the accompanying drawings and description, it being understood that these drawings are given for illustrative purposes only and are not intended to be limiting.
Fig. 1 is an exemplary illustration of a system for anchoring virtual objects to the real world, constructed according to the principles of the present invention. An exemplary embodiment comprises a user's smartphone 110, printed-circuit-board hardware 120 and smart glasses 130 worn by the user. Smartphone 110 provides image input 111 to printed-circuit-board hardware 120, which comprises processing software 122 and receives orientation data from the inertial measurement unit (IMU) of smart glasses 130 together with computer-vision video input 123 from one or two (2D and 3D) micro video cameras integrated on smart glasses 130.
Mobile augmented-reality output 124 is returned from printed-circuit-board hardware 120 to smart glasses 130. Smart glasses 130 comprise a 3D orientation sensor 131 (providing the IMU input data), insert-molded prisms and lenses 132, electronics 133, a battery 134 and a display 135.
Fig. 2 is a functional structure diagram 200 of a system for anchoring virtual objects to the real world, constructed according to the principles of the present invention. High-Definition Multimedia Interface (HDMI) 212 is a compact audio/video interface for transferring encoded, uncompressed digital audio/video data from an HDMI-compliant device (a "source" or "input" such as a smartphone digital audio device, computer monitor or video projector) to box 220. Smartphone source 210 comprises an embedded interface 211, which receives data from HDMI adapter 212 and returns images to microprocessor 221. In the exemplary embodiment, a pair of smart glasses 230 worn by the user is packaged with micro video cameras 231 and an IMU 232, both of which provide data input to microprocessor/software unit 221 in box 220, which also packages battery 222. Smart glasses 230 further comprise a left screen 233 and a right screen 233, which receive from microprocessor 221 the display-image output watched by the user.
Fig. 3 is an overall block diagram 300 of a system for anchoring virtual objects to the real world, constructed according to the principles of the present invention. The connections between smartphone 310 (or, in other embodiments, another computing source) and the glasses are illustrated. The smartphone 310 modules comprise a video injection application 311, a display interface application 312, third-party applications 313 with application programming interfaces (APIs), a right-side video-stream pulling application 314, a left-side video-stream pulling application 315 and an inertial measurement unit (IMU) communication interface 316.
The interfaces between smartphone 310 and the glasses comprise a video-clip display command interface 321, a right-side video-stream interface 322 and a left-side video-stream interface 323. The glasses modules comprise a right display 324, a left display 325, a right camera 326, a left camera 327 and an IMU 328. The glasses also have a bias regulator 329.
Video and command interface 321 receives video from video injection application 311 through video display channel 317; right-side video-stream interface 322 transmits right-camera video stream 318 to right-side video-stream pulling application 314; and left-side video-stream interface 323 transmits left-camera video stream 319 to left-side video-stream pulling application 315.
Fig. 4 is a wireless-connection block diagram 400 of a system for anchoring virtual objects to the real world, constructed according to the principles of the present invention. The wireless connections between smartphone 410 (or another computing source) and the glasses are illustrated. The smartphone 410 modules comprise a video input application 411, a display interface application 412, third-party applications 413 with application programming interfaces (APIs), a right-side stream input application 414, a left-side stream input application 415 and an inertial measurement unit (IMU) communication interface 416.
Video clips and display commands 421 are received from video input application 411 over WiFi display transmission 441. The glasses modules comprise a right display 424, a left display 425, a right WiFi IP camera 426, a left WiFi IP camera 427 and an IMU 428 with a WiFi buffer 440, which also transmits information to right display 424 and left display 425. The glasses also have a bias regulator 429 powered by battery 450.
Right WiFi IP camera 426 transmits right-side WiFi stream 442 to right-side stream input application 414, and left WiFi IP camera 427 transmits left-side WiFi stream 443 to left-side stream input application 415.
While certain specific embodiments of the present invention have been described, it will be understood that this description is not intended to be limiting, since further modifications will readily occur to those skilled in the art, and it is intended that such modifications fall within the scope of the appended claims.

Claims (21)

1. A system for anchoring the positions of observed virtual objects so that they move as virtual images on real-world objects of the real world seen by a user, the system comprising:
a see-through projection device worn on the user's head, comprising an inertial measurement unit (IMU), the inertial measurement unit comprising:
a 3-axis gyroscope; and
a 3-axis accelerometer;
a computer, worn by the user, that generates the virtual images; and
at least one embedded video camera;
wherein the virtual objects are computer-generated (source) images (CGI) configured to be superimposed on the real-world objects, and wherein the virtual objects are anchored to the real-world objects consistently with the relative movement of the objects and the relative movement of the user, and wherein during such relative movement the virtual objects change at least in size, orientation and shading, just as if they were real objects.
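The size behaviour recited above, a virtual object shrinking and growing with relative distance like a real object would, follows from simple pinhole projection. The sketch below illustrates the idea under an assumed focal length; it is not the patent's implementation, and all names and values are illustrative.

```python
# Pinhole-projection sketch: apparent size falls off as 1/distance.
# FOCAL_PX is an assumed focal length in pixels, not a value from the patent.
FOCAL_PX = 800.0

def apparent_size_px(real_size_m: float, distance_m: float) -> float:
    """Apparent on-screen size of an object of real_size_m at distance_m."""
    return FOCAL_PX * real_size_m / distance_m

near = apparent_size_px(0.5, 2.0)   # 0.5 m object seen from 2 m
far = apparent_size_px(0.5, 4.0)    # same object seen from 4 m: half the size
```

Rendering an anchored virtual object with this rule, as its distance to the user changes, is what makes it read as part of the real scene rather than a screen-fixed overlay.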
2. An anchoring method of the system according to claim 1, the method comprising providing soft anchoring of the positions of virtual objects, seen as virtual images on real-world objects, wherein the device is fixed and static, and wherein the virtual objects are seen as fixtures.
3. A method of the system according to claim 1, the method providing positioning of virtual objects seen as virtual images on real-world objects, the method comprising:
the user moving his head, and thereby moving the system;
reading the amount, direction and speed of relative movement from the IMU gyroscope and accelerometer according to a software algorithm stored on the computer;
recording the source CGI of the visual zone by the video camera; and
transforming the source CGI in one of the following ways:
stabilizing the movement or vibration of the image;
changing the direction of motion of the virtual objects; and
removing the virtual objects once they leave the visual zone;
thereby anchoring the positions of the virtual objects.
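A minimal sketch of the anchoring idea in this claim, shifting the source CGI opposite to the IMU-measured head rotation and dropping it once it leaves the visual zone, might look as follows. The field-of-view and resolution figures, and all function names, are assumptions for illustration, not the patent's algorithm.

```python
import numpy as np

# Assumed display parameters (illustrative only).
FOV_DEG = np.array([40.0, 30.0])       # horizontal / vertical field of view
RESOLUTION = np.array([1280, 720])     # display pixels (width, height)
PIX_PER_DEG = RESOLUTION / FOV_DEG     # pixels swept per degree of head turn

def compensate(screen_pos: np.ndarray, imu_delta_deg: np.ndarray) -> np.ndarray:
    """Shift the CGI exactly opposite to the IMU yaw/pitch change (degrees),
    so the virtual object appears fixed in the world as the head moves."""
    return screen_pos - imu_delta_deg * PIX_PER_DEG

def in_visual_zone(pos: np.ndarray) -> bool:
    """The object is removed once it leaves the display ('visual zone')."""
    return bool(np.all(pos >= 0) and np.all(pos < RESOLUTION))

pos = np.array([640.0, 360.0])                 # object starts at screen centre
pos = compensate(pos, np.array([5.0, 0.0]))    # head yaws 5 degrees right
```

Each IMU sample drives one such compensation step, which is the per-frame counterpart of "transforming the source CGI" in the claim.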
4. The method as claimed in claim 3, wherein stabilizing fixes the source CGI at a defined point in space, as seen through the see-through device, and wherein the stabilizing step further comprises:
reducing the size of the source CGI; and
projecting the source CGI within a full-size black frame, wherein the CGI floats.
5. The method as claimed in claim 4, further comprising constant movement of the source CGI as an output of processing and analysis by software algorithms.
6. The method as claimed in claim 5, wherein the constant movement is relative to the inertial movement received from the IMU, and wherein a default compensation formula is derived from the angular changes measured by the IMU and/or by the video camera, such that the movement of the source CGI is exactly opposite to the inertial movement data received from the IMU.
7. The method as claimed in claim 6, further comprising adjusting the formula so as to match the source CGI position to different environments and scenarios.
8. The method as claimed in claim 7, further comprising transmitting a black frame, which comprises transferring the source image to the see-through projection device for display.
9. The method as claimed in claim 8, wherein black is seen as transparent, so that the source image is isolated and seen as a unique image.
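The black-frame technique of claims 8 and 9 relies on additive see-through optics, where black pixels emit no light and therefore appear transparent, so the CGI rendered inside an all-black frame is isolated and the real world shows through everywhere else. A toy sketch of such compositing (all names illustrative, not from the patent) could be:

```python
import numpy as np

def compose_for_see_through(frame_shape: tuple, cgi: np.ndarray,
                            top_left: tuple) -> np.ndarray:
    """Place the CGI on an all-black frame; on an additive see-through
    display, the black regions emit nothing and read as transparent."""
    frame = np.zeros(frame_shape, dtype=np.uint8)   # black = transparent
    y, x = top_left
    h, w = cgi.shape[:2]
    frame[y:y + h, x:x + w] = cgi
    return frame

cgi = np.full((2, 2, 3), 255, dtype=np.uint8)        # tiny white square
frame = compose_for_see_through((4, 4, 3), cgi, (1, 1))
```

Only the white square reaches the user's eye; the surrounding black frame leaves the real scene untouched, which is the isolation effect claim 9 describes.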
10. the method for claim 1, wherein said perspective projection equipment is perspective glasses.
11. the method for claim 1, wherein said perspective projection equipment is perspective head mounted display.
12. the method for claim 1, the computing machine being wherein worn on head by described user is smart mobile phone.
13. The method as claimed in claim 3, further comprising limiting and adjusting the movement or vibration.
14. The method as claimed in claim 3, further comprising:
analyzing, by computer vision, the real-world images captured by the video camera and input to the processing software; and
generating, by an algorithm, anchoring markers on the real objects so as to anchor the virtual objects in three dimensions.
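The marker-based anchoring of claim 14 might be sketched as follows. A real system would detect the marker corners with a computer-vision library (e.g. a fiducial detector), which is assumed already done here; the function names and pixel offset are purely illustrative.

```python
import numpy as np

def marker_anchor(corners) -> np.ndarray:
    """Centre of a detected quadrilateral marker, used as the anchor point
    for the virtual object in image coordinates."""
    return np.mean(np.asarray(corners, dtype=float), axis=0)

def place_virtual_object(corners, offset=(0.0, -50.0)) -> np.ndarray:
    """Anchor the CGI at a fixed pixel offset above the marker centre."""
    return marker_anchor(corners) + np.asarray(offset)

# Corners of a marker detected on a real object (assumed detector output).
corners = [(100, 100), (200, 100), (200, 200), (100, 200)]
anchor = marker_anchor(corners)
obj_pos = place_virtual_object(corners)
```

Re-running the detection every frame keeps the anchor glued to the real object even as the camera view changes, complementing the IMU-based compensation of claim 3.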
15. The method as claimed in claim 3, wherein the display device is equipped with an IMU, a video camera and a microphone, the method further comprising:
controlling the virtual objects by gestures performed by the user with at least one of:
hands;
head; and
voice.
16. The method as claimed in claim 3, wherein the software integrates the virtual objects functionally and behaviorally into the real world, and wherein the software comprises at least:
a dynamic database, which provides definitions of the essence of the interactions, functions and behavior of each virtual object with the remainder of the virtual and real world; and
a catalog of real-world objects.
17. The method as claimed in claim 3, wherein, according to a set of logical rules and according to the definitions of essence and interaction stored in the database, the processing software relates the displayed virtual objects to the real-world objects observed by the cameras mounted on the glasses.
18. The method as claimed in claim 3, further comprising sharing a common virtual world between individuals, allowing individuals to share information about virtual objects that are visually, functionally and behaviorally anchored to and integrated into the real world by software applications, together with information about each individual's viewing angle and perspective relative to the virtual and real world, respectively.
19. The method as claimed in claim 3, further comprising sharing information about the interactions of multiple users relating to certain applications or situations in the virtual world, and generating a series of events interacting with the common virtual world and the real world, wherein each event affects the sequence.
20. The system as claimed in claim 1, further comprising a software development kit (SDK) that allows refinement and improvement of the database and logical rules included in the solution software, so that applications of any kind can be created and utilized that anchor and integrate the virtual world into the real world using see-through glasses or other head-mounted displays (HMDs).
21. The system as claimed in claim 10, wherein the computer is integrated with the glasses.
CN201280074650.XA 2012-05-16 2012-05-16 A system worn by a moving user for fully augmenting reality by anchoring virtual objects Pending CN104603865A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IL2012/050173 WO2013171731A1 (en) 2012-05-16 2012-05-16 A system worn by a moving user for fully augmenting reality by anchoring virtual objects

Publications (1)

Publication Number Publication Date
CN104603865A true CN104603865A (en) 2015-05-06

Family

ID=49583229

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201280074650.XA Pending CN104603865A (en) 2012-05-16 2012-05-16 A system worn by a moving user for fully augmenting reality by anchoring virtual objects

Country Status (4)

Country Link
EP (1) EP2850609A4 (en)
CN (1) CN104603865A (en)
HK (1) HK1207918A1 (en)
WO (1) WO2013171731A1 (en)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106371571A (en) * 2015-11-30 2017-02-01 北京智谷睿拓技术服务有限公司 Information processing method, information processing apparatus and user equipment
CN106997295A (en) * 2016-01-25 2017-08-01 罗伯特·博世有限公司 Method and apparatus for making software visualization
WO2017152600A1 (en) * 2016-03-11 2017-09-14 Effire Universal Limited Smartphone with a vr content capturing assembly
CN107168619A (en) * 2017-03-29 2017-09-15 腾讯科技(深圳)有限公司 User-generated content treating method and apparatus
CN107615214A (en) * 2015-05-21 2018-01-19 日本电气株式会社 Interface control system, interface control device, interface control method and program
CN107657574A (en) * 2017-10-06 2018-02-02 杭州昂润科技有限公司 It is a kind of based on the underground utilities asset management system of AR technologies and method
WO2018077176A1 (en) * 2016-10-26 2018-05-03 北京小鸟看看科技有限公司 Wearable device and method for determining user displacement in wearable device
CN108427194A (en) * 2017-02-14 2018-08-21 深圳梦境视觉智能科技有限公司 A kind of display methods and equipment based on augmented reality
WO2018149266A1 (en) * 2017-02-14 2018-08-23 深圳梦境视觉智能科技有限公司 Information processing method and device based on augmented reality
WO2018209515A1 (en) * 2017-05-15 2018-11-22 上海联影医疗科技有限公司 Display system and method
CN109343815A (en) * 2018-09-18 2019-02-15 上海临奇智能科技有限公司 A kind of implementation method of virtual screen device and virtual screen
CN110249379A (en) * 2017-01-24 2019-09-17 隆萨有限公司 The method and system of industrial maintenance is carried out using the display of virtual or augmented reality
CN110546836A (en) * 2017-04-21 2019-12-06 利塔尔两合公司 Method and system for the automated support of a connection process of components, in particular components arranged in a switchgear cabinet or on an assembly system
CN110850977A (en) * 2019-11-06 2020-02-28 成都威爱新经济技术研究院有限公司 Stereoscopic image interaction method based on 6DOF head-mounted display
CN110869980A (en) * 2017-05-18 2020-03-06 Pcms控股公司 System and method for distribution and presentation of content as a spherical video and 3D portfolio
CN110908503A (en) * 2018-09-14 2020-03-24 苹果公司 Tracking and drift correction
CN111164544A (en) * 2017-10-02 2020-05-15 Arm有限公司 Motion sensing
CN111352503A (en) * 2018-12-21 2020-06-30 秀铺菲公司 E-commerce platform with augmented reality application for display of virtual objects
CN111527523A (en) * 2018-02-02 2020-08-11 三星电子株式会社 Apparatus and method for sharing virtual reality environment
CN112684883A (en) * 2020-12-18 2021-04-20 上海影创信息科技有限公司 Method and system for multi-user object distinguishing processing
CN113302578A (en) * 2019-01-22 2021-08-24 惠普发展公司,有限责任合伙企业 Mixed reality presentations
US11287292B2 (en) 2017-02-13 2022-03-29 Lockheed Martin Corporation Sensor system
CN111527523B (en) * 2018-02-02 2024-03-15 三星电子株式会社 Apparatus and method for sharing virtual reality environment

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103900473A (en) * 2014-03-31 2014-07-02 浙江大学 Intelligent mobile device six-degree-of-freedom fused pose estimation method based on camera and gravity inductor
US9600743B2 (en) 2014-06-27 2017-03-21 International Business Machines Corporation Directing field of vision based on personal interests
US9471837B2 (en) 2014-08-19 2016-10-18 International Business Machines Corporation Real-time analytics to identify visual objects of interest
US9697383B2 (en) 2015-04-14 2017-07-04 International Business Machines Corporation Numeric keypad encryption for augmented reality devices
US10799792B2 (en) 2015-07-23 2020-10-13 At&T Intellectual Property I, L.P. Coordinating multiple virtual environments
US10908419B2 (en) 2018-06-28 2021-02-02 Lucyd Ltd. Smartglasses and methods and systems for using artificial intelligence to control mobile devices used for displaying and presenting tasks and applications and enhancing presentation and display of augmented reality information
USD900920S1 (en) 2019-03-22 2020-11-03 Lucyd Ltd. Smart glasses
USD900206S1 (en) 2019-03-22 2020-10-27 Lucyd Ltd. Smart glasses
USD899500S1 (en) 2019-03-22 2020-10-20 Lucyd Ltd. Smart glasses
USD900204S1 (en) 2019-03-22 2020-10-27 Lucyd Ltd. Smart glasses
USD900203S1 (en) 2019-03-22 2020-10-27 Lucyd Ltd. Smart glasses
USD899493S1 (en) 2019-03-22 2020-10-20 Lucyd Ltd. Smart glasses
USD899495S1 (en) 2019-03-22 2020-10-20 Lucyd Ltd. Smart glasses
USD899499S1 (en) 2019-03-22 2020-10-20 Lucyd Ltd. Smart glasses
USD899497S1 (en) 2019-03-22 2020-10-20 Lucyd Ltd. Smart glasses
USD899496S1 (en) 2019-03-22 2020-10-20 Lucyd Ltd. Smart glasses
USD900205S1 (en) 2019-03-22 2020-10-27 Lucyd Ltd. Smart glasses
USD899498S1 (en) 2019-03-22 2020-10-20 Lucyd Ltd. Smart glasses
USD899494S1 (en) 2019-03-22 2020-10-20 Lucyd Ltd. Smart glasses
USD954136S1 (en) 2019-12-12 2022-06-07 Lucyd Ltd. Smartglasses having pivot connector hinges
USD954135S1 (en) 2019-12-12 2022-06-07 Lucyd Ltd. Round smartglasses having flat connector hinges
USD958234S1 (en) 2019-12-12 2022-07-19 Lucyd Ltd. Round smartglasses having pivot connector hinges
USD955467S1 (en) 2019-12-12 2022-06-21 Lucyd Ltd. Sport smartglasses having flat connector hinges
USD954137S1 (en) 2019-12-19 2022-06-07 Lucyd Ltd. Flat connector hinges for smartglasses temples
USD974456S1 (en) 2019-12-19 2023-01-03 Lucyd Ltd. Pivot hinges and smartglasses temples
US11282523B2 (en) 2020-03-25 2022-03-22 Lucyd Ltd Voice assistant management

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040080467A1 (en) * 2002-10-28 2004-04-29 University Of Washington Virtual image registration in augmented display field
US20070035511A1 (en) * 2005-01-25 2007-02-15 The Board Of Trustees Of The University Of Illinois. Compact haptic and augmented virtual reality system
CN101059717A (en) * 2006-04-21 2007-10-24 佳能株式会社 Information-processing method and device for presenting haptics received from a virtual object
US20110213664A1 (en) * 2010-02-28 2011-09-01 Osterhout Group, Inc. Local advertising content on an interactive head-mounted eyepiece
US20110216002A1 (en) * 2010-03-05 2011-09-08 Sony Computer Entertainment America Llc Calibration of Portable Devices in a Shared Virtual Space
CN102419631A (en) * 2010-10-15 2012-04-18 微软公司 Fusing virtual content into real content
CN102906623A (en) * 2010-02-28 2013-01-30 奥斯特豪特集团有限公司 Local advertising content on an interactive head-mounted eyepiece

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5742264A (en) * 1995-01-24 1998-04-21 Matsushita Electric Industrial Co., Ltd. Head-mounted display
US7002551B2 (en) * 2002-09-25 2006-02-21 Hrl Laboratories, Llc Optical see-through augmented reality modified-scale display
IL172797A (en) * 2005-12-25 2012-09-24 Elbit Systems Ltd Real-time image scanning and processing

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040080467A1 (en) * 2002-10-28 2004-04-29 University Of Washington Virtual image registration in augmented display field
US20070035511A1 (en) * 2005-01-25 2007-02-15 The Board Of Trustees Of The University Of Illinois. Compact haptic and augmented virtual reality system
CN101059717A (en) * 2006-04-21 2007-10-24 佳能株式会社 Information-processing method and device for presenting haptics received from a virtual object
US20110213664A1 (en) * 2010-02-28 2011-09-01 Osterhout Group, Inc. Local advertising content on an interactive head-mounted eyepiece
CN102906623A (en) * 2010-02-28 2013-01-30 奥斯特豪特集团有限公司 Local advertising content on an interactive head-mounted eyepiece
US20110216002A1 (en) * 2010-03-05 2011-09-08 Sony Computer Entertainment America Llc Calibration of Portable Devices in a Shared Virtual Space
CN102419631A (en) * 2010-10-15 2012-04-18 微软公司 Fusing virtual content into real content

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10698535B2 (en) 2015-05-21 2020-06-30 Nec Corporation Interface control system, interface control apparatus, interface control method, and program
CN107615214A (en) * 2015-05-21 2018-01-19 日本电气株式会社 Interface control system, interface control device, interface control method and program
CN106371571B (en) * 2015-11-30 2019-12-13 北京智谷睿拓技术服务有限公司 Information processing method, information processing device and user equipment
CN106371571A (en) * 2015-11-30 2017-02-01 北京智谷睿拓技术服务有限公司 Information processing method, information processing apparatus and user equipment
US10338674B2 (en) 2015-11-30 2019-07-02 Beijing Zhigu Rui Tuo Tech Co., Ltd. Information processing method, information processing apparatus, and user equipment
CN106997295A (en) * 2016-01-25 2017-08-01 罗伯特·博世有限公司 Method and apparatus for making software visualization
WO2017152600A1 (en) * 2016-03-11 2017-09-14 Effire Universal Limited Smartphone with a vr content capturing assembly
WO2018077176A1 (en) * 2016-10-26 2018-05-03 北京小鸟看看科技有限公司 Wearable device and method for determining user displacement in wearable device
CN110249379B (en) * 2017-01-24 2024-01-23 隆萨有限公司 Method and system for industrial maintenance using virtual or augmented reality displays
CN110249379A (en) * 2017-01-24 2019-09-17 隆萨有限公司 The method and system of industrial maintenance is carried out using the display of virtual or augmented reality
US11287292B2 (en) 2017-02-13 2022-03-29 Lockheed Martin Corporation Sensor system
CN108427194A (en) * 2017-02-14 2018-08-21 深圳梦境视觉智能科技有限公司 A kind of display methods and equipment based on augmented reality
WO2018149266A1 (en) * 2017-02-14 2018-08-23 深圳梦境视觉智能科技有限公司 Information processing method and device based on augmented reality
CN107168619A (en) * 2017-03-29 2017-09-15 腾讯科技(深圳)有限公司 User-generated content treating method and apparatus
CN107168619B (en) * 2017-03-29 2023-09-19 腾讯科技(深圳)有限公司 User generated content processing method and device
CN110546836B (en) * 2017-04-21 2021-08-24 利塔尔两合公司 Method and system for automated support of a connection process
CN110546836A (en) * 2017-04-21 2019-12-06 利塔尔两合公司 Method and system for the automated support of a connection process of components, in particular components arranged in a switchgear cabinet or on an assembly system
WO2018209515A1 (en) * 2017-05-15 2018-11-22 上海联影医疗科技有限公司 Display system and method
CN110869980A (en) * 2017-05-18 2020-03-06 Pcms控股公司 System and method for distribution and presentation of content as a spherical video and 3D portfolio
CN110869980B (en) * 2017-05-18 2024-01-09 交互数字Vc控股公司 Distributing and rendering content as a spherical video and 3D portfolio
CN111164544A (en) * 2017-10-02 2020-05-15 Arm有限公司 Motion sensing
CN107657574A (en) * 2017-10-06 2018-02-02 杭州昂润科技有限公司 It is a kind of based on the underground utilities asset management system of AR technologies and method
CN111527523B (en) * 2018-02-02 2024-03-15 三星电子株式会社 Apparatus and method for sharing virtual reality environment
CN111527523A (en) * 2018-02-02 2020-08-11 三星电子株式会社 Apparatus and method for sharing virtual reality environment
CN110908503A (en) * 2018-09-14 2020-03-24 苹果公司 Tracking and drift correction
CN110908503B (en) * 2018-09-14 2022-04-01 苹果公司 Method of tracking the position of a device
CN109343815A (en) * 2018-09-18 2019-02-15 上海临奇智能科技有限公司 A kind of implementation method of virtual screen device and virtual screen
CN111352503A (en) * 2018-12-21 2020-06-30 秀铺菲公司 E-commerce platform with augmented reality application for display of virtual objects
US11842385B2 (en) 2018-12-21 2023-12-12 Shopify Inc. Methods, systems, and manufacture for an e-commerce platform with augmented reality application for display of virtual objects
CN113302578A (en) * 2019-01-22 2021-08-24 惠普发展公司,有限责任合伙企业 Mixed reality presentations
CN110850977B (en) * 2019-11-06 2023-10-31 成都威爱新经济技术研究院有限公司 Stereoscopic image interaction method based on 6DOF head-mounted display
CN110850977A (en) * 2019-11-06 2020-02-28 成都威爱新经济技术研究院有限公司 Stereoscopic image interaction method based on 6DOF head-mounted display
CN112684883A (en) * 2020-12-18 2021-04-20 上海影创信息科技有限公司 Method and system for multi-user object distinguishing processing

Also Published As

Publication number Publication date
EP2850609A4 (en) 2017-01-11
WO2013171731A1 (en) 2013-11-21
HK1207918A1 (en) 2016-02-12
EP2850609A1 (en) 2015-03-25

Similar Documents

Publication Publication Date Title
CN104603865A (en) A system worn by a moving user for fully augmenting reality by anchoring virtual objects
US9210413B2 (en) System worn by a moving user for fully augmenting reality by anchoring virtual objects
CN113168007B (en) System and method for augmented reality
CN109313495B (en) Six-degree-of-freedom mixed reality input integrating inertia handheld controller and manual tracking
CN103180893B (en) For providing the method and system of three-dimensional user interface
Scarfe et al. Using high-fidelity virtual reality to study perception in freely moving observers
CN105264548B (en) For generating the label inconspicuous of augmented reality experience
CN102981616B (en) The recognition methods of object and system and computer in augmented reality
CN105607255A (en) Head-mounted display device, method of controlling head-mounted display device, and computer program
CN105212418A (en) Augmented reality intelligent helmet based on infrared night viewing function is developed
US20090322671A1 (en) Touch screen augmented reality system and method
US20130169682A1 (en) Touch and social cues as inputs into a computer
WO2016073986A1 (en) Visual stabilization system for head-mounted displays
CN103472909A (en) Realistic occlusion for a head mounted augmented reality display
CN104808340B (en) Head mounted display and control method thereof
US20240037880A1 (en) Artificial Reality System with Varifocal Display of Artificial Reality Content
CN106484115A (en) For strengthening the system and method with virtual reality
CN104995583A (en) Direct interaction system for mixed reality environments
WO2013028813A1 (en) Implicit sharing and privacy control through physical behaviors using sensor-rich devices
CN102959616A (en) Interactive reality augmentation for natural interaction
US10701346B2 (en) Replacing 2D images with 3D images
TWI453462B (en) Telescopic observation for virtual reality system and method thereof using intelligent electronic device
CN203746012U (en) Three-dimensional virtual scene human-computer interaction stereo display system
CN104464414A (en) Augmented reality teaching system
US10701347B2 (en) Identifying replacement 3D images for 2D images via ranking criteria

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1207918

Country of ref document: HK

WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20150506

WD01 Invention patent application deemed withdrawn after publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: WD

Ref document number: 1207918

Country of ref document: HK