CN104392045A - Real-time enhanced virtual reality system and method based on intelligent mobile terminal - Google Patents
- Publication number: CN104392045A (application CN201410688094.6A)
- Authority: CN (China)
- Prior art keywords: information, virtual reality, RGB, mobile terminal, depth information
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention relates to a real-time augmented virtual reality system and method based on an intelligent mobile terminal. The system comprises a processor, a depth-information capture card, an RGB-information capture card, a storage unit and a display screen. The method includes: using the depth-information capture card to obtain the three-dimensional depth information of the objective environment; using the RGB-information capture card to obtain the basic information of the objective environment; inputting both into the processor for processing to create a three-dimensional model of the objective world; storing the resulting data (the three-dimensional model data of the objective world) in the storage unit; and driving the augmented-reality display through an interface circuit to realize human-machine interaction. The system is reasonable in design, simple in structure and highly real-time; it needs little equipment, loses little scene information, is easy to operate, widely applicable and low in cost, letting the user truly experience the appeal of augmented virtual reality; the error between the constructed three-dimensional model and the actual objective environment is small.
Description
Technical field
The present invention relates to virtual reality generation technology, and specifically to a real-time augmented virtual reality system and method based on an intelligent mobile terminal.
Background technology
Augmented Reality (AR) applies virtual information to the real world through computer technology, so that the real environment and virtual objects are superimposed in the same picture or space in real time. AR generates a lifelike virtual environment of visual, auditory, haptic and kinetic sensations beyond what humans can perceive directly: it presents information from the real world and displays virtual information at the same time, the two kinds of information complementing and superimposing each other. Computer graphics and visualization techniques generate virtual objects that do not exist in the real environment; sensing technology merges these virtual objects organically into the real environment, and a display device presents the combined scene, allowing the user to interact naturally and directly with the environment. AR is a brand-new human-computer interaction technology whose essential characteristics are interactivity and immersion, capable of rendering on-the-spot views that are virtually indistinguishable from reality. Through such a system the user can not only experience the verisimilitude of being personally present in the objective physical world, but also break through the restrictions of space, time and other objective constraints to have experiences impossible in the real world. AR therefore has three fundamental elements: the combination of the real and the virtual, real-time interaction, and three-dimensional registration.
To achieve the combination of real and virtual, the user needs to view the scene through some device. Currently popular technologies fall into two classes: transparent devices and opaque devices. The former, such as 3D glasses and 3D projectors, project the virtual image onto a transparent device so that the user experiences the augmented scene; however, they require several dedicated devices used together, their place and environment of use are restricted, and they cannot meet people's growing need for augmented reality anywhere and at any time. The latter combine the processed virtual imagery on a processor-equipped device such as a smartphone or tablet computer. Because intelligent mobile terminals possess complex computing ability, video capture, image display, GPS, networking, touch control, tilt detection and other functions, and their prices keep falling, more and more AR research takes the intelligent mobile terminal as its platform.
The current mainstream approach to augmented virtual reality on opaque devices collects objective-environment information with monocular, binocular or multi-lens cameras viewing the same scene, extracts features from the images collected by the different cameras, matches the scenes with feature-matching algorithms to obtain the three-dimensional information of the objective environment, performs three-dimensional reconstruction to obtain a model of the environment, and finally realizes the augmentation. The shortcomings of such methods are a huge amount of computation: large point-cloud data sets must be processed, acquiring the three-dimensional information places very high demands on the processor, real-time three-dimensional modelling is almost impossible, and much useful information about the objective environment is lost during scene feature extraction and three-dimensional reconstruction.
To build the three-dimensional model in real time, the present invention adopts an RGB+D model for three-dimensional model construction: a depth-information capture card directly obtains the three-dimensional depth information (Depth Information) of the objective environment, while a CCD video capture card obtains its basic information (colour (RGB: red, green and blue), texture, grey-level and intensity). The computation of the depth information in the objective-environment information is thus freed from the processor, greatly improving the real-time performance of the augmented reality.
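The RGB+D model described above simply pairs each colour frame with a registered depth map so that later stages can process colour and depth together. A minimal sketch in Python (NumPy only; the frame sizes are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def make_rgbd(rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Stack an HxWx3 RGB frame and an HxW depth map into one HxWx4
    RGB+D frame, so filtering and tracking can treat them as one signal."""
    if rgb.shape[:2] != depth.shape[:2]:
        raise ValueError("RGB and depth frames must be registered to the same size")
    return np.dstack([rgb.astype(np.float32), depth.astype(np.float32)])

rgb = np.zeros((480, 640, 3), dtype=np.uint8)    # stand-in CCD frame
depth = np.ones((480, 640), dtype=np.float32)    # stand-in depth map (metres)
frame = make_rgbd(rgb, depth)
print(frame.shape)   # (480, 640, 4)
```

This assumes the two capture cards deliver frames already registered to the same resolution; in practice the depth map would first be warped into the colour camera's frame.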
In an augmented-reality environment the user sees the enhancement information generated by the computer while seeing the surrounding real environment. Because AR bridges the gulf between virtual reality and the real world, its application potential is considerable: it can be widely used in intelligent-robot navigation and obstacle avoidance, military simulation and driving, three-dimensional navigation, medicine, manufacturing and maintenance, games and entertainment.
Summary of the invention
In the prior art, the acquisition and computation of three-dimensional information place very high demands on the processor, real-time three-dimensional modelling is almost impossible, and much useful information about the objective environment is lost during scene feature extraction and three-dimensional reconstruction. The technical problem to be solved by the present invention is therefore to provide a real-time augmented virtual reality system and method, based on an intelligent mobile terminal, that frees the computation of the depth information in the objective-environment information from the processor and greatly improves the real-time performance of the augmented reality.
To solve the above technical problem, the present invention adopts the following technical solution:
A real-time augmented virtual reality system based on an intelligent mobile terminal according to the present invention comprises:
a processor, a depth-information capture card, an RGB-information capture card, a storage unit and a display screen, wherein the processor receives and processes the three-dimensional depth information and basic information of the objective environment collected by the depth-information capture card and the RGB-information capture card, builds the three-dimensional model, and stores the resulting data in the storage unit; the processor is connected to the display screen through an interface circuit.
The depth-information capture card is a camera with direct depth-computation capability, able to read out the three-dimensional depth information of the objective environment directly.
A real-time augmented virtual reality method based on an intelligent mobile terminal according to the present invention comprises the following steps:
obtaining the three-dimensional depth information of the objective environment with the depth-information capture card;
obtaining the basic information of the objective environment with the RGB-information capture card;
inputting the depth information and the basic information into the processor for processing, creating the three-dimensional model of the objective world;
storing the processed result data, the three-dimensional model data of the objective world, in the storage unit;
using the interface circuit to drive the augmented-reality display and realize human-machine interaction.
Creating the three-dimensional model of the objective world comprises the following steps:
information filtering: filtering the three-dimensional depth information and the basic environment information, i.e. the RGB+D image, simultaneously to remove noise;
information tracking: estimating the next image from the full information of the denoised RGB+D image;
three-dimensional reconstruction based on RGB+D information: performing interpolation and splicing on the estimated parameters of the three-dimensional world information with a typical triangle-splicing method to obtain the final three-dimensional model of the objective world.
Simultaneous filtering of the RGB+D image is realized by the following formula:

g(x, y) = Σ_{(m, n) ∈ S} w(m, n) · f(x + m, y + n)   (1)

where g is the RGB+D image after filtering and noise reduction; S is the neighbourhood of image pixel (x, y); w(m, n) are the weight coefficients of the filter; f is the noisy RGB+D image; and m, n are the coordinates of each point in the neighbourhood S.

A representative Gaussian filter is adopted, namely

w(m, n) = (1 / (2πσ²)) · exp(−(m² + n²) / (2σ²))   (2)

where (x, y) ranges over the pixel coordinates of the image, S is the neighbourhood set the filter adopts at pixel (x, y), and σ is the standard deviation of the Gaussian function.
The estimation of the next image from the full information of the denoised three-dimensional world information is as follows.

Maximum-likelihood estimation is adopted, namely

ξ* = argmax_ξ p(I_k | ξ)   (3)

In the formula,

r_i(ξ) = I_k(τ(ξ, x_i)) − I_{k−1}(x_i)   (4)

p(I_k | ξ) = Π_i exp(−‖r_i(ξ)‖_δ / σ_r²)   (5)

where I_{k−1} is the current RGB+D image; I_k is the estimated new RGB+D image; ξ is the Lie-group/Lie-algebra operator relating the current image and the estimated image; x_i is a pictorial element, x_i ∈ Ω, with Ω the information set of the RGB+D image; τ is the pose-estimation model; d is the depth information obtained by the depth-information capture card; σ_r is the standard deviation of r_i; and ‖·‖_δ is a robust norm, defined as

‖s‖_δ = s² / (2δ) if |s| ≤ δ, and |s| − δ/2 otherwise   (6)

where δ is a parameter and s is the variable, with δ > 0.
The three-dimensional reconstruction based on RGB+D information is carried out by the following formula:

V(p) = argmin_V E(V; θ, G),  p ∈ Ω   (7)

where V(p) is the minimum unit of the three-dimensional reconstruction; E is the energy function, and V(p) is chosen so that E is minimum; G is the ground-plane estimate of the intelligent-mobile-terminal reconstruction, obtained here with a simple low-pass filter over the RGB+D information; θ is the estimated parameter of the energy function E; ∇E is the gradient of E; and Ω is the RGB+D information set.
The inventive method further comprises the following steps:
if the data-processing capability of the mobile terminal's processor is insufficient, uploading the whole processing procedure over the mobile network or WiFi to a web server or personal PC for processing, then downloading the processed data back to the mobile intelligent terminal over the mobile network or WiFi;
if multiple users use mobile intelligent terminals, sharing the AR data over the network, so that multiple users can enjoy the augmented-reality application at any place and any time.
The present invention has the following beneficial effects and advantages:
1. The system is reasonable in design, simple in structure and highly real-time; it needs little equipment, is easy to operate, widely applicable and low in cost, letting users truly experience the appeal that augmented virtual reality brings.
2. The invention adopts an RGB+D model for three-dimensional model construction: the depth-information capture card directly obtains the three-dimensional depth information (Depth Information) of the objective environment, and the CCD video capture card obtains its basic information, freeing the computation of depth information from the processor. Real-time performance is high, little scene information is lost, and the error between the constructed three-dimensional model and the actual objective environment is small.
3. The invention can be widely used in intelligent-robot navigation and obstacle avoidance, military simulation and driving, three-dimensional navigation, medicine, manufacturing and maintenance, games and entertainment.
Accompanying drawing explanation
Fig. 1 is present system structural representation;
Fig. 2 is the real time enhancing virtual reality system functional block diagram based on intelligent mobile terminal;
Fig. 3 is the inventive method main flow chart;
Fig. 4 is three-dimensionalreconstruction algorithm flow chart in the inventive method.
Reference numerals: 101, mobile terminal; 102, processor; 103, storage unit; 104, interface circuit; 105, RGB video capture card; 106, depth-information capture card; 107, light source; 108, display screen.
Embodiment
The present invention is further elaborated below with reference to the accompanying drawings.
As shown in Figs. 1 and 2, the real-time augmented virtual reality system based on an intelligent mobile terminal of the present invention comprises a processor, a depth-information capture card, an RGB-information capture card, a storage unit and a display screen. The processor receives and processes the three-dimensional depth information and basic information of the objective environment collected by the depth-information capture card and the RGB-information capture card, builds the three-dimensional model, and stores the resulting data in the storage unit; the processor is connected to the display screen through an interface circuit.
In the present embodiment, the mobile terminal 101 can be a smartphone, tablet computer, palmtop computer, etc. The processor 102, storage unit 103 and interface circuit 104 use the internal components of the intelligent terminal; since the invention does not place high demands on data processing and computation, the computing power of most current intelligent-terminal processors meets the requirement. The RGB video capture card 105 can be an ordinary CCD camera. The depth-information capture card 106 uses a Kinect camera with direct depth-computation capability, which can read out the three-dimensional depth information of the objective environment directly. The light source 107 compensates for the effect of insufficient ambient light on the objective-environment information.
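The Kinect named above delivers raw 11-bit disparity values rather than metric depth. A hedged sketch of the conversion, using a widely circulated community approximation (the coefficients are not part of the patent text):

```python
import numpy as np

def kinect_raw_to_metres(raw: np.ndarray) -> np.ndarray:
    """Convert raw 11-bit Kinect disparity values to metres using a
    commonly cited first-generation Kinect approximation; the sensor
    reports 2047 when it has no valid reading."""
    raw = raw.astype(np.float64)
    depth = 1.0 / (raw * -0.0030711016 + 3.3309495161)
    depth[raw >= 2047] = 0.0   # mask out "no reading" pixels
    return depth

raw = np.array([[500, 900, 2047]])
print(np.round(kinect_raw_to_metres(raw), 3))
```

The exact coefficients depend on the individual sensor's calibration; any production pipeline would calibrate per device rather than use these defaults.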
The invention uses the processor 102 to perform the data computation, construct the environment's three-dimensional model in real time and realize the augmented-reality application; the processed result data are stored in the storage unit 103, and the interface circuit 104 drives the augmented-reality display on the display screen 108 to realize human-machine interaction.
As shown in Fig. 3, a real-time augmented virtual reality method based on an intelligent mobile terminal according to the present invention comprises the following steps:
obtaining the three-dimensional depth information of the objective environment with the depth-information capture card;
obtaining the basic information of the objective environment with the RGB-information capture card;
inputting the depth information and the basic information into the processor for processing, creating the three-dimensional model of the objective world;
storing the processed result data, the three-dimensional model data of the objective world, in the storage unit;
using the interface circuit to drive the augmented-reality display and realize human-machine interaction.
To build the three-dimensional model in real time, the present invention adopts an RGB+D model for three-dimensional model construction: the depth-information capture card directly obtains the three-dimensional depth information (Depth Information) of the objective environment, while the CCD video capture card obtains its basic information (colour (RGB: red, green and blue), texture, grey-level and intensity), freeing the computation of the depth information from the processor and improving the real-time performance of the augmented reality.
The main steps of the inventive method are:
Step 1: use the depth-information capture card and the RGB-information capture card in Fig. 2 to obtain the three-dimensional world information, hereinafter denoted RGB+D;
Step 2: input the RGB+D information into the intelligent mobile terminal's processor, process the objective-world RGB+D information collected by the depth-information capture card and the RGB-information capture card, and finally obtain the three-dimensional model of the objective world, as shown in Fig. 4:
(1) RGB+D information filtering: filtering and noise reduction are applied to the intensity image and the depth image simultaneously:

g(x, y) = Σ_{(m, n) ∈ S} w(m, n) · f(x + m, y + n)   (1)

where g is the RGB+D image after filtering and noise reduction; S is the neighbourhood of image pixel (x, y), i.e. the coordinate set of the neighbourhood; w(m, n) are the weight coefficients of the filter; f is the noisy RGB+D image; and m, n are the coordinates of each point in S. A representative Gaussian filter is adopted, namely

w(m, n) = (1 / (2πσ²)) · exp(−(m² + n²) / (2σ²))   (2)

where (x, y) ranges over the pixel coordinates of the image, S is the neighbourhood set the filter adopts at pixel (x, y), and σ is the standard deviation of the Gaussian function.
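Equations (1) and (2) amount to convolving all four RGB+D channels with a normalised Gaussian neighbourhood. A minimal NumPy sketch (the kernel radius and σ are illustrative choices, not values from the patent):

```python
import numpy as np

def gaussian_kernel(radius: int, sigma: float) -> np.ndarray:
    """Weights w(m, n) of equation (2): a normalised 2-D Gaussian over the
    (2r+1) x (2r+1) neighbourhood S."""
    m, n = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w = np.exp(-(m ** 2 + n ** 2) / (2.0 * sigma ** 2))
    return w / w.sum()

def filter_rgbd(noisy: np.ndarray, radius: int = 1, sigma: float = 1.0) -> np.ndarray:
    """Equation (1): each output pixel is the weighted sum over its
    neighbourhood, applied to all four RGB+D channels at once."""
    w = gaussian_kernel(radius, sigma)
    padded = np.pad(noisy, ((radius, radius), (radius, radius), (0, 0)), mode="edge")
    out = np.zeros_like(noisy, dtype=np.float64)
    h, wd = noisy.shape[:2]
    for m in range(-radius, radius + 1):
        for n in range(-radius, radius + 1):
            shifted = padded[radius + m: radius + m + h, radius + n: radius + n + wd]
            out += w[m + radius, n + radius] * shifted
    return out

img = np.random.default_rng(0).normal(size=(32, 32, 4))
smoothed = filter_rgbd(img)
print(img.std() > smoothed.std())   # filtering reduces the noise variance
```

Because the kernel is normalised, the weighted average leaves flat regions unchanged while suppressing independent pixel noise in both colour and depth channels.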
(2) Tracking algorithm based on RGB+D: the full RGB+D information of the present image collected by the depth-information capture card and the RGB-information capture card is used to estimate the next image. Maximum-likelihood estimation is adopted, namely

ξ* = argmax_ξ p(I_k | ξ)   (3)

In the formula,

r_i(ξ) = I_k(τ(ξ, x_i)) − I_{k−1}(x_i)   (4)

p(I_k | ξ) = Π_i exp(−‖r_i(ξ)‖_δ / σ_r²)   (5)

where I_{k−1} is the current RGB+D image; I_k is the estimated new RGB+D image; ξ is the Lie-group/Lie-algebra operator relating the current image and the estimated image; x_i is a pictorial element, x_i ∈ Ω, with Ω the information set of the RGB+D image; τ is the pose-estimation model; d is the depth information obtained by the depth-information capture card; σ_r is the standard deviation of r_i; and ‖·‖_δ is a robust norm, defined as

‖s‖_δ = s² / (2δ) if |s| ≤ δ, and |s| − δ/2 otherwise   (6)

where δ is a parameter and s is the variable, with δ > 0.
With the above algorithm it can be estimated in real time whether an objective-world video frame obtained by the depth-information capture card and the RGB-information capture card is a keyframe. If it is a keyframe, its RGB+D image updates the three-dimensional reconstruction model; if not, tracking continues until a keyframe is captured. Finally, the invention reconstructs the three-dimensional model from the captured keyframes, which contain the whole objective-world information, i.e. the RGB+D information, thereby ensuring the completeness of the reconstructed model. This algorithm estimates the three-dimensional information of the objective world from its full information, without feature extraction or feature matching, so all information of the objective world is retained and both computing efficiency and real-time performance are high. The currently popular algorithms are instead based on image features and require feature extraction, feature matching and similar processing; their shortcomings are a large amount of computation, low efficiency and poor real-time performance, and much objective-world information is lost during feature extraction.
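The robust norm of definition (6) and the keyframe test described above can be sketched as follows, assuming the standard Huber norm; the threshold and δ values are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def huber(s: np.ndarray, delta: float) -> np.ndarray:
    """Robust norm of definition (6): quadratic for small residuals,
    linear for large ones, so outlier pixels do not dominate the estimate."""
    s = np.abs(s)
    return np.where(s <= delta, s ** 2 / (2.0 * delta), s - delta / 2.0)

def is_keyframe(current: np.ndarray, estimate: np.ndarray,
                delta: float = 0.1, threshold: float = 0.05) -> bool:
    """Toy keyframe decision: flag a frame when the mean robust residual
    against the predicted image is large, i.e. the scene has changed
    enough to be worth adding to the reconstruction."""
    residual = huber(current - estimate, delta).mean()
    return bool(residual > threshold)

a = np.zeros((8, 8))
print(is_keyframe(a, a))         # identical frames: not a keyframe
print(is_keyframe(a, a + 0.5))   # large change: keyframe
```

A real tracker would compute the residual through the warp τ(ξ, x_i) of equation (4); the mean residual here stands in for that photometric error.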
(3) Three-dimensional reconstruction algorithm based on RGB+D information:

V(p) = argmin_V E(V; θ, G),  p ∈ Ω   (7)

where V(p) is the minimum unit of the three-dimensional reconstruction; E is the energy function, and V(p) is chosen so that E is minimum; G is the ground-plane estimate of the intelligent-mobile-terminal reconstruction, obtained here with a simple low-pass filter over the RGB+D information; θ is the estimated parameter of the energy function E; ∇E is the gradient of E; and Ω is the RGB+D information set. Finally a typical triangle-splicing method performs interpolation and splicing on V(p) to obtain the final three-dimensional model of the objective world.
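The "typical triangle splicing" step can be illustrated by triangulating a regular depth grid into a surface mesh; the pinhole intrinsics used for back-projection are an illustrative assumption, not from the patent:

```python
import numpy as np

def depth_grid_to_triangles(depth: np.ndarray):
    """Split each cell of the regular depth grid into two triangles,
    giving a surface mesh that can later be textured with the RGB channels."""
    h, w = depth.shape
    fx = fy = 525.0                            # assumed pinhole focal lengths
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    v, u = np.mgrid[0:h, 0:w]
    # back-project every pixel to a 3-D vertex (x, y, z)
    verts = np.dstack([(u - cx) * depth / fx,
                       (v - cy) * depth / fy,
                       depth]).reshape(-1, 3)
    tris = []
    for i in range(h - 1):
        for j in range(w - 1):
            a, b = i * w + j, i * w + j + 1
            c, d = (i + 1) * w + j, (i + 1) * w + j + 1
            tris.append((a, b, c))             # upper-left triangle of the cell
            tris.append((b, d, c))             # lower-right triangle of the cell
    return verts, np.array(tris)

verts, tris = depth_grid_to_triangles(np.ones((4, 4)))
print(len(verts), len(tris))   # 16 vertices, 18 triangles
```

A production system would also drop triangles that straddle depth discontinuities, so that foreground and background surfaces are not stitched together.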
Step 3: the data obtained from the processing of steps 1 and 2 can be stored in the storage unit.
Step 4: the real-time augmented virtual reality application based on the intelligent mobile terminal is shown on the terminal's display screen.
If the data-processing capability of the mobile terminal's processor is insufficient, the processing of steps 1 to 3 can be uploaded over the mobile network or WiFi to a web server or personal PC, and the processed data downloaded back to the mobile intelligent terminal over the mobile network or WiFi. This improves the data-processing capability of the system, as shown by the extended storage unit and extended processor (a web server or PC) in Fig. 2.
If the storage capacity of the mobile intelligent terminal is limited, the processed results are uploaded over the mobile network or WiFi to the extended storage unit, improving the information storage capacity.
If there are multiple users with mobile intelligent terminals, they can share the AR data over the network, so that multiple users can enjoy the augmented-reality application at any place and any time.
Claims (10)
1. A real-time augmented virtual reality system based on an intelligent mobile terminal, characterized by comprising: a processor, a depth-information capture card, an RGB-information capture card, a storage unit and a display screen, wherein the processor receives and processes the three-dimensional depth information and basic information of the objective environment collected by the depth-information capture card and the RGB-information capture card, builds the three-dimensional model, and stores the resulting data in the storage unit; the processor is connected to the display screen through an interface circuit.
2. The real-time augmented virtual reality system based on an intelligent mobile terminal according to claim 1, characterized in that: the depth-information capture card is a camera with direct depth-computation capability, able to read out the three-dimensional depth information of the objective environment directly.
3. A real-time augmented virtual reality method based on an intelligent mobile terminal, characterized by comprising the following steps:
obtaining the three-dimensional depth information of the objective environment with a depth-information capture card;
obtaining the basic information of the objective environment with an RGB-information capture card;
inputting the depth information and the basic information into a processor for processing, creating the three-dimensional model of the objective world;
storing the processed result data, the three-dimensional model data of the objective world, in a storage unit;
using an interface circuit to drive the augmented-reality display and realize human-machine interaction.
4. The real-time augmented virtual reality method based on an intelligent mobile terminal according to claim 3, characterized in that creating the three-dimensional model of the objective world comprises the following steps:
information filtering: filtering the three-dimensional depth information and the basic environment information, i.e. the RGB+D image, simultaneously to remove noise;
information tracking: estimating the next image from the full information of the denoised RGB+D image;
three-dimensional reconstruction based on RGB+D information: performing interpolation and splicing on the estimated parameters of the three-dimensional world information with a typical triangle-splicing method to obtain the final three-dimensional model of the objective world.
5. The real-time augmented virtual reality method based on an intelligent mobile terminal according to claim 4, characterized in that simultaneous filtering of the RGB+D image is realized by the following formula:

g(x, y) = Σ_{(m, n) ∈ S} w(m, n) · f(x + m, y + n)   (1)

where g is the RGB+D image after filtering and noise reduction; S is the neighbourhood of image pixel (x, y); w(m, n) are the weight coefficients of the filter; f is the noisy RGB+D image; and m, n are the coordinates of each point in the neighbourhood S.
6. The real-time augmented virtual reality method based on an intelligent mobile terminal according to claim 5, characterized in that a representative Gaussian filter is adopted, namely

w(m, n) = (1 / (2πσ²)) · exp(−(m² + n²) / (2σ²))   (2)

where (x, y) ranges over the pixel coordinates of the image, S is the neighbourhood set the filter adopts at pixel (x, y), and σ is the standard deviation of the Gaussian function.
7. The real-time augmented virtual reality method based on an intelligent mobile terminal according to claim 4, characterized in that the next image is estimated from the full information of the denoised three-dimensional world information by maximum-likelihood estimation, namely

ξ* = argmax_ξ p(I_k | ξ)   (3)

In the formula,

r_i(ξ) = I_k(τ(ξ, x_i)) − I_{k−1}(x_i)   (4)

p(I_k | ξ) = Π_i exp(−‖r_i(ξ)‖_δ / σ_r²)   (5)

where I_{k−1} is the current RGB+D image; I_k is the estimated new RGB+D image; ξ is the Lie-group/Lie-algebra operator relating the current image and the estimated image; x_i is a pictorial element, x_i ∈ Ω; τ is the pose-estimation model; d is the depth information obtained by the depth-information capture card; σ_r is the standard deviation of r_i; and ‖·‖_δ is a robust norm, defined as

‖s‖_δ = s² / (2δ) if |s| ≤ δ, and |s| − δ/2 otherwise   (6)

where δ is a parameter and s is the variable, with δ > 0.
8. The real-time augmented virtual reality method based on an intelligent mobile terminal according to claim 4, characterized in that the three-dimensional reconstruction based on RGB+D information is carried out by the following formula:

V(p) = argmin_V E(V; θ, G),  p ∈ Ω   (7)

where V(p) is the minimum unit of the three-dimensional reconstruction; E is the energy function, and V(p) is chosen so that E is minimum; G is the ground-plane estimate of the intelligent-mobile-terminal reconstruction, obtained with a simple low-pass filter over the RGB+D information; θ is the estimated parameter of the energy function E; ∇E is the gradient of E; and Ω is the RGB+D information set.
9. The real-time augmented virtual reality method based on an intelligent mobile terminal according to claim 4, characterized by further comprising the following step: if the data-processing capability of the mobile terminal's processor is insufficient, uploading the whole processing procedure over the mobile network or WiFi to a web server or personal PC for processing, then downloading the processed data back to the mobile intelligent terminal over the mobile network or WiFi.
10. The real-time augmented virtual reality method based on an intelligent mobile terminal according to claim 4, characterized by further comprising the following step: if multiple users use mobile intelligent terminals, sharing the AR data over the network, so that multiple users can enjoy the augmented-reality application at any place and any time.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410688094.6A CN104392045B (en) | 2014-11-25 | 2014-11-25 | A kind of real time enhancing virtual reality system and method based on intelligent mobile terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104392045A true CN104392045A (en) | 2015-03-04 |
CN104392045B CN104392045B (en) | 2018-01-09 |
Family
ID=52609948
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410688094.6A Active CN104392045B (en) | 2014-11-25 | 2014-11-25 | A kind of real time enhancing virtual reality system and method based on intelligent mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104392045B (en) |
- 2014-11-25 CN CN201410688094.6A patent/CN104392045B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130300637A1 (en) * | 2010-10-04 | 2013-11-14 | G Dirk Smits | System and method for 3-d projection and enhancements for interactivity |
CN103035135A (en) * | 2012-11-27 | 2013-04-10 | 北京航空航天大学 | Children's cognitive system and cognitive method based on augmented reality technology |
CN103106604A (en) * | 2013-01-23 | 2013-05-15 | 东华大学 | Three dimensional (3D) virtual fitting method based on somatosensory technology |
Non-Patent Citations (1)
Title |
---|
Chen, Xiaoming: "Research on Real-time 3D Reconstruction and Filtering Algorithms Based on Kinect Depth Information", China Master's Theses Full-text Database, Information Science and Technology Series * |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105093522A (en) * | 2015-07-08 | 2015-11-25 | 清华大学 | Phase-based synthetic method for converting binocular virtual image to multi-ocular virtual image |
CN108140259B (en) * | 2015-08-18 | 2022-06-14 | 奇跃公司 | Virtual and augmented reality systems and methods |
CN108140259A (en) * | 2015-08-18 | 2018-06-08 | 奇跃公司 | Virtual and augmented reality systems and methods |
CN106095114A (en) * | 2016-06-29 | 2016-11-09 | 宁波市电力设计院有限公司 | VR-based auxiliary system and working method for power-industry expansion engineering |
CN106355647A (en) * | 2016-08-25 | 2017-01-25 | 北京暴风魔镜科技有限公司 | Augmented reality system and method |
CN106371609A (en) * | 2016-09-21 | 2017-02-01 | 平越 | VR (virtual reality) entertainment system with time-length markers and method thereof |
CN106485782A (en) * | 2016-09-30 | 2017-03-08 | 珠海市魅族科技有限公司 | Method and device for displaying a real scene in a virtual scene |
CN108109207A (en) * | 2016-11-24 | 2018-06-01 | 中安消物联传感(深圳)有限公司 | Visual solid modeling method and system |
CN110073313A (en) * | 2016-12-12 | 2019-07-30 | 微软技术许可有限责任公司 | Interacting with an environment using a parent device and at least one companion device |
CN110073313B (en) * | 2016-12-12 | 2022-05-24 | 微软技术许可有限责任公司 | Interacting with an environment using a parent device and at least one companion device |
WO2018188088A1 (en) * | 2017-04-14 | 2018-10-18 | 广州千藤玩具有限公司 | Clay toy system based on augmented reality and digital image processing and method therefor |
CN110741327B (en) * | 2017-04-14 | 2023-06-23 | 广州千藤文化传播有限公司 | Clay toy system based on augmented reality and digital image processing and method therefor |
CN110741327A (en) * | 2017-04-14 | 2020-01-31 | 广州千藤玩具有限公司 | Clay toy system based on augmented reality and digital image processing and method therefor |
CN107343192A (en) * | 2017-07-20 | 2017-11-10 | 武汉市陆刻科技有限公司 | 3D solid interpolation model and VR mobile terminal interaction method and system |
CN107441706A (en) * | 2017-08-17 | 2017-12-08 | 安徽迪万科技有限公司 | Realistic game scene construction system combining virtual reality with oblique photography |
CN109922331A (en) * | 2019-01-15 | 2019-06-21 | 浙江舜宇光学有限公司 | Image processing method and device |
CN109922331B (en) * | 2019-01-15 | 2021-12-07 | 浙江舜宇光学有限公司 | Image processing method and device |
CN110266939B (en) * | 2019-05-27 | 2022-04-22 | 联想(上海)信息技术有限公司 | Display method, electronic device, and storage medium |
CN110266939A (en) * | 2019-05-27 | 2019-09-20 | 联想(上海)信息技术有限公司 | Display method, electronic device, and storage medium |
CN110267029A (en) * | 2019-07-22 | 2019-09-20 | 广州铭维软件有限公司 | Remote holographic character display technology based on AR glasses |
Also Published As
Publication number | Publication date |
---|---|
CN104392045B (en) | 2018-01-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104392045B (en) | Real-time enhanced virtual reality system and method based on an intelligent mobile terminal | |
CN107820593B (en) | Virtual reality interaction method, device and system | |
US20220075453A1 (en) | Ar scenario-based gesture interaction method, storage medium, and communication terminal | |
CN110517319B (en) | Method for determining camera attitude information and related device | |
CN107622524A | Display method and display device for a mobile terminal | |
Chen et al. | Real-time 3D unstructured environment reconstruction utilizing VR and Kinect-based immersive teleoperation for agricultural field robots | |
CN107222468A (en) | Augmented reality processing method, terminal, cloud server and edge server | |
US20180276882A1 (en) | Systems and methods for augmented reality art creation | |
CN111161422A (en) | Model display method for enhancing virtual scene implementation | |
CN108304075A | Method and apparatus for human-computer interaction in an augmented reality device | |
TW201835723A (en) | Graphic processing method and device, virtual reality system, computer storage medium | |
CN113706699B (en) | Data processing method and device, electronic equipment and computer readable storage medium | |
CN104781849A (en) | Fast initialization for monocular visual simultaneous localization and mapping (SLAM) | |
CN108668108B (en) | Video monitoring method and device and electronic equipment | |
CN109688343A | Implementation method and device for an augmented reality studio | |
CN110138831A | Method and apparatus for remote assistance | |
CN108230384A | Picture depth computation method, device, storage medium, and electronic device | |
Hoberman et al. | Immersive training games for smartphone-based head mounted displays | |
CN106569591A (en) | Tracking method and system based on computer vision tracking and sensor tracking | |
CN116057577A (en) | Map for augmented reality | |
WO2011123710A1 (en) | Synthesizing panoramic three-dimensional images | |
CN108205822B (en) | Picture pasting method and device | |
CN115690382A (en) | Training method of deep learning model, and method and device for generating panorama | |
CN116917949A (en) | Modeling objects from monocular camera output | |
CN110349269A | Target wearable try-on method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||