CN105447906A - Method for calculating lighting parameters and carrying out relighting rendering based on image and model - Google Patents

Method for calculating lighting parameters and carrying out relighting rendering based on image and model

Info

Publication number
CN105447906A
Authority
CN
China
Prior art keywords
image
model
illumination
light source
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510771082.4A
Other languages
Chinese (zh)
Other versions
CN105447906B (en)
Inventor
耿卫东
黄倩妮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201510771082.4A priority Critical patent/CN105447906B/en
Publication of CN105447906A publication Critical patent/CN105447906A/en
Application granted granted Critical
Publication of CN105447906B publication Critical patent/CN105447906B/en
Expired - Fee Related
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/506 Illumination models

Abstract

The invention discloses a method for calculating lighting parameters and carrying out relighting rendering based on an image and a model, comprising the following steps: (1) using the 3D point cloud model of a scene, calculate the normal vector of each 3D point; (2) calculate the light source position and direction vector of the image from the coordinates and normal vector of each 3D point and the pixel values of the image; (3) assuming the lighting model of the image is a Phong model, calculate the parameters of an energy function in the model based on the light source information obtained in the previous step; (4) calculate the shadow and highlight areas of the image; and (5) given a target RGB image to be rendered, fuse and render the scene and lighting information of the original image into the target image and output the final rendering result. The method simplifies the relighting step of special-effects work in film and television post-production, outputs relighting rendering results quickly, and lets a user judge early whether an input image is suitable for later virtual-real compositing. It solves the problem in existing film and television production workflows that inconsistencies between a shot and post-production requirements are not discovered until later, causing rework.

Description

Method for calculating illumination parameters and carrying out relighting rendering based on an image and a model
Technical field
The present invention relates to relighting rendering methods, and in particular to a method for calculating the illumination parameters of an image from a single RGB image and a 3D point cloud model of the image scene and carrying out relighting rendering.
Background technology
Image-based relighting (IBRL) has been studied extensively in graphics and image processing. However, because of complex illumination effects such as shadow formation and mutual interference between multiple light sources, most existing work is limited to relighting under environment lighting with a known 3D model or after reconstructing a 3D model. In terms of underlying principle, existing image-based relighting methods generally fall into three categories: methods based on bidirectional reflectance functions, methods based on basis functions, and methods based on the plenoptic function.
Compositing in games and film places special requirements on relighting. In early film shooting an environment ball was usually placed on set to record the illumination of the surrounding environment, which is one way of performing relighting based on reflectance functions. Relighting based on reflectance functions has many limitations: the quality of the result depends directly on how the reflection model is built, and not all illumination conditions can be reproduced. Nevertheless, it requires relatively little processing time and is therefore adopted by most researchers who pursue efficient relighting. The bidirectional reflectance distribution function (BRDF) encodes how illumination is reflected in a scene. Building on this, Oskar et al. [Oskar2007] first proposed using a light source cut technique within precomputed radiance transfer: light source cuts and the bidirectional reflectance function are precomputed according to visibility, so that the bulk of the computation is moved into a precomputation step and static scenes can be relit and rendered at interactive speed. Early relighting work assumed the target light source was unknown; when the target light source can be controlled, a different class of approaches applies. Researchers have proposed various image acquisition rigs that control the light source position and orientation so that the light source parameters are known, which simplifies the relighting algorithm. For example, Xuehong et al. [Xuehong2014] fixed an outdoor scene target and captured a series of images of the same scene with the same camera under different illumination conditions, then built a BRDF model to extract the lighting information and perform relighting. Tze et al. [Tze2009] computed the RBF of image sequences acquired in the same way and performed relighting, drawing on the work of Mahajan et al. [Mahajan2008], who analysed BRDFs and spherical harmonics under the Phong illumination model for relighting.
Relighting based on basis functions is mainly aimed at static scenes. Because basis images capture the appearance of an object or scene under multiple illumination conditions, a linear combination of a series of captured basis images can produce the rendering of the object or scene under a target illumination condition. Since a large number of basis images is required, basis-function relighting is mostly applied to improving face recognition accuracy, to post-processing of 3D reconstructions, and to image and video processing tasks with high demands on illumination, where large numbers of images already exist. In recent work, Amr et al. [Amr2014] used a face image training set as basis images; after extracting the spherical harmonic basis functions and illumination parameters of a target face image and comparing them with the training set, they selected the combination of images and weights closest to the target image, avoiding both the need to reconstruct a 3D face model and strict illumination conditions, and reducing the recognition errors caused by unfavourable illumination in face recognition.
The plenoptic function records light at any position, in any direction, at any wavelength and at any time, and therefore contains data of very high dimensionality. It can be used to simulate complex scene effects under multiple light sources or arbitrary lighting conditions. However, because evaluating the plenoptic function is so complex, relatively little relighting research makes use of it. For a multi-view, multi-illumination image set, Guangwei and Yebin [Guangwei2009] combined multi-view stereo (MVS) with image-based relighting (IBL): the plenoptic images are used to reconstruct a 3D model, the object is textured with the illumination maps obtained by the cameras, the new lighting environment map (the paper uses Debevec's light probe image gallery) is decomposed into 31 basis illuminations, and the reflected light intensity under the new illumination is finally obtained by synthesis, completing the relighting.
Summary of the invention
The object of the present invention is to address the deficiencies of the prior art by providing a method for calculating illumination parameters and carrying out relighting rendering based on an image and a model.
This object is achieved through the following technical solution: a method for calculating illumination parameters and carrying out relighting rendering based on an image and a model, comprising the following steps:
(1) read in the RGB image to be processed and the corresponding 3D point cloud model of the image scene; from the relationship between the scene points, their normal vectors, the light source direction and the image RGB values, set up a system of linear equations and estimate the direction vector of the light source by least squares;
(2) assume the objects in the scene are Lambertian, choose an illumination model, and build the energy function of the Phong illumination model from the light source direction and the normal vectors;
(3) minimize the energy function with an optimization method; by setting the partial derivatives of the energy function to zero, obtain the ambient light value A_i and the diffuse reflection value D_i of each object in the image scene;
(4) compute the shadow and highlight regions of the image from the ambient light A_i and diffuse reflection value D_i calculated in step 3, and save them as images as intermediate results;
(5) compute the illumination direction of the target RGB image, add the 3D model of the original image into the target image according to the assumed illumination model, carry out fused rendering, and output the final rendered image.
Further, in step 1, when the normal vectors are unknown they are computed as follows: because the 3D point cloud of the whole model contains a very large number of points, the point cloud is divided into multiple parts to prevent memory overflow during computation; each part takes the k nearest points (k is set to 3000 here), and the covariance matrix of the k points in each part is recorded; for each point whose normal vector is required, PCA decomposition is applied to the covariance matrix to obtain the normal vector of that point.
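By way of illustration, the following is a minimal sketch of the per-point normal estimation described above (nearest neighbours, covariance matrix, PCA decomposition), assuming an unorganised point cloud stored in a NumPy array; the function name, the use of a k-d tree for neighbour search, and the sign handling are illustrative assumptions rather than details taken from the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=3000):
    """Estimate a normal for each 3D point by PCA over its k nearest neighbours."""
    tree = cKDTree(points)                      # neighbour-search structure (an implementation choice)
    normals = np.empty_like(points, dtype=float)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=min(k, len(points)))          # indices of the k nearest points
        nbrs = points[idx]
        cov = np.cov(nbrs - nbrs.mean(axis=0), rowvar=False)   # 3x3 covariance matrix of the patch
        eigvals, eigvecs = np.linalg.eigh(cov)                 # PCA decomposition (ascending eigenvalues)
        normals[i] = eigvecs[:, 0]              # eigenvector of the smallest eigenvalue = surface normal
    return normals                              # note: the sign/orientation of each normal is not resolved here
```

Processing the cloud part by part, as the paragraph above describes, amounts to calling such a routine on each chunk so that the covariance data for the full cloud never has to be held in memory at once.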
Further, in step 1 the light source direction l is computed as follows: the input model is obtained by an ordinary purely visual 3D reconstruction method; it records the 3D point cloud of the image scene and, for each 3D point, the coordinates of the corresponding pixel in the image, i.e. each 3D point in the model can be mapped to a pixel in the image. Because the image scene may contain more than one object and the illumination coefficient of each object is different, the image is first segmented before solving for the light source direction, and each distinct object region is labelled i. Assuming there is only a single light source in the scene, a simple illumination model I = ρ l^T n(P) is chosen, where I is the RGB value of an image pixel, ρ is the constant illumination coefficient associated with the object, l is the light source direction, P is the spatial coordinate of a 3D point, n(P) is the normal vector at P, and T denotes the vector transpose. For each pixel in each region of the image a linear equation is set up from this illumination model; solving the resulting system of linear equations yields the light source direction l.
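A corresponding least-squares sketch of this light-direction estimation is given below, assuming the image has already been segmented into regions and that a single scalar intensity per pixel stands in for the three RGB channels; the names are illustrative, not from the patent.

```python
import numpy as np

def estimate_light_direction(normals, intensities):
    """Fit I = rho * <l, n> over one region and return the unit light direction l."""
    # normals:     (N, 3) normal vectors of the 3D points seen by the region's pixels
    # intensities: (N,)   pixel values of the same pixels
    scaled_l, *_ = np.linalg.lstsq(normals, intensities, rcond=None)  # least-squares solution rho * l
    return scaled_l / np.linalg.norm(scaled_l)                        # normalise away the unknown rho
```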
Further, the illumination model chosen in step 2 is:
I = I_a k_a + I_p S_p (k_d <l, n> + k_s H_p)
where I is the RGB value of an image pixel, I_a is the ambient light colour, I_p is the brightness of the light source, k_a, k_d and k_s are the ambient, diffuse and specular coefficients respectively, S_p ∈ [0, 1] is the shadow coefficient, H_p is the highlight coefficient, l is the light source direction and n is the normal vector. The energy function E(l) is defined as:
E(l) = Σ_r ( I(r) − A_i − D_i <l, n(r)> )^2
where r is an image pixel, I(r) is the RGB value at pixel r, and n(r) is the normal vector of the model 3D point corresponding to pixel r. Writing ρ for the constant illumination coefficient associated with an object, the ambient light value of each region i in the image is A_i = I_a ρ and the diffuse reflection value is D_i = I_p ρ.
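With the light direction fixed, E(l) is quadratic in A_i and D_i, so setting the partial derivatives to zero reduces to a two-parameter linear fit per region. A minimal sketch under that reading follows (illustrative names, grey-scale intensities assumed).

```python
import numpy as np

def fit_ambient_diffuse(intensities, normals, light_dir):
    """Return (A_i, D_i) minimising sum_r (I(r) - A_i - D_i * <l, n(r)>)^2 over one region."""
    shading = normals @ light_dir                                  # <l, n(r)> for every pixel of the region
    design = np.column_stack([np.ones_like(shading), shading])     # columns: [1, <l, n>]
    (A_i, D_i), *_ = np.linalg.lstsq(design, intensities, rcond=None)
    return A_i, D_i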
Further, the shadow and highlight coefficients of the scene in step 4 are computed according to the following scattering model:
Shadow:
Highlight:
where the parameters t_s and t_n are positive thresholds, set manually, that adjust the image smoothness, A_i and D_i are the ambient light and diffuse reflection values of region i in the image, and I(r) is the RGB value of pixel r in the image.
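The patent's exact shadow and highlight formulas are not reproduced in the text above; the sketch below is only one plausible thresholding interpretation, comparing each pixel against the Lambertian prediction A_i + D_i <l, n> using the manually set thresholds t_s and t_n. Treat it as an assumption, not the claimed scattering model.

```python
import numpy as np

def shadow_highlight_masks(intensities, normals, light_dir, A_i, D_i, t_s=0.1, t_n=0.1):
    """Hypothetical masks: pixels far below/above the Lambertian prediction."""
    predicted = A_i + D_i * (normals @ light_dir)   # A_i + D_i * <l, n(r)> per pixel
    shadow = intensities < predicted - t_s          # much darker than predicted -> shadow candidate
    highlight = intensities > predicted + t_n       # much brighter than predicted -> highlight candidate
    return shadow, highlight
```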
Further, in step 5 the final rendering still uses the illumination model originally used to solve for the light source direction. Once the light source direction of a scene and light source information such as shadows and highlights are available, the objects of that scene can be added into another target scene according to the illumination model; it is only necessary to obtain the light source information of the target scene and substitute it into the illumination model to obtain the final rendering result.
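The fusion step can then be sketched roughly as re-shading the inserted object with the target scene's illumination parameters, assuming single-channel images and that the target lighting (its direction plus A_t and D_t) has been estimated with the same procedure. This is an illustrative reading that ignores the shadow and highlight terms, not the patent's full rendering pipeline.

```python
import numpy as np

def relight_and_composite(target_img, mask, normals, light_dir, A_t, D_t):
    """Re-shade the pixels under `mask` with the target lighting A_t + D_t * <l, n> and paste them in."""
    # target_img: (H, W) grey-scale target image with values in [0, 1]
    # mask:       (H, W) boolean mask of the pixels covered by the inserted object
    # normals:    (M, 3) normals of the object's 3D points, one row per True pixel in `mask`
    out = target_img.astype(float).copy()
    shading = np.clip(normals @ light_dir, 0.0, None)   # <l_target, n>, clamped at zero
    out[mask] = A_t + D_t * shading                     # the ambient + diffuse terms of the model above
    return np.clip(out, 0.0, 1.0)
```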
The beneficial effects of the invention are as follows: the present invention extracts the model and lighting information from a single RGB image and fuses and renders them onto a target RGB image, realising the basic relighting fusion process quickly and easily, so that deficiencies in a shot can be discovered in time during film production and corrected or reshot on the spot. Traditional relighting renders existing images or video in post-processing; problems that make the material unsuitable for special-effects production cannot be found in real time, so shots have to be retaken later, lengthening the whole film and television production process and increasing cost. The present invention uses the 3D model of the image scene to compute the illumination parameters of the image, keeping the whole relighting rendering process fast while showing the effect of a simple relighting fusion as accurately as possible, providing a reference for shooting and improving the efficiency of the whole shooting process.
Brief description of the drawings
Fig. 1 is the overall flow chart of the method of the invention;
Fig. 2 is the input original RGB image;
Fig. 3 is the 3D point cloud model of the scene corresponding to the input image;
Fig. 4 is the shadow region of the image obtained by the solution;
Fig. 5 is the highlight region of the image obtained by the solution.
Embodiment
The core of the method of the invention is to use the input RGB image and the 3D point cloud model extracted for its scene to set up an energy function and minimize it, thereby obtaining the illumination parameters, and finally to render the scene model of the original image into the target image.
The specific workflow is described below through an embodiment; the steps are as follows (see Fig. 1), and a combined code sketch of the five steps follows the list:
(1) read in a single original RGB image (see Fig. 2) and the 3D scene point cloud model corresponding to the image (see Fig. 3); from the relationship between the scene points, their normal vectors, the light source direction and the image RGB values, set up a system of linear equations and estimate the direction vector of the light source by least squares;
(2) assume the objects in the scene are Lambertian, choose an illumination model, and build the energy function of the Phong illumination model from the light source direction and the normal vectors;
(3) minimize the energy function with an optimization method; by setting the partial derivatives of the energy function to zero, obtain the ambient light value A_i and the diffuse reflection value D_i of each object in the image scene;
(4) compute the shadow region (see Fig. 4) and the highlight region (see Fig. 5) of the image from the ambient light A_i and diffuse reflection value D_i calculated in step 3, and save them as images as intermediate results;
(5) compute the illumination direction of the target RGB image, add the 3D model of the original image into the target image according to the assumed illumination model, carry out fused rendering, and output the final rendered image.
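For orientation only, a hypothetical driver tying the earlier sketches together along these five steps might look as follows; it reuses the illustrative helper functions defined above and assumes the rows of src_normals correspond to the pixels selected by tgt_mask.

```python
def relighting_pipeline(src_intensities, src_normals, tgt_intensities, tgt_normals, tgt_img, tgt_mask):
    """Illustrative end-to-end run of steps (1)-(5), built from the sketches defined earlier."""
    l_src = estimate_light_direction(src_normals, src_intensities)        # step (1): source light direction
    A_s, D_s = fit_ambient_diffuse(src_intensities, src_normals, l_src)   # steps (2)-(3): ambient/diffuse terms
    shadow, highlight = shadow_highlight_masks(src_intensities,           # step (4): intermediate masks
                                               src_normals, l_src, A_s, D_s)
    l_tgt = estimate_light_direction(tgt_normals, tgt_intensities)        # step (5): target lighting ...
    A_t, D_t = fit_ambient_diffuse(tgt_intensities, tgt_normals, l_tgt)
    result = relight_and_composite(tgt_img, tgt_mask, src_normals, l_tgt, A_t, D_t)
    return result, shadow, highlight                                      # ... fused rendering plus intermediates
```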

Claims (6)

1. A method for calculating illumination parameters and carrying out relighting rendering based on an image and a model, characterized in that the method comprises the following steps:
(1) read in the RGB image to be processed and the corresponding 3D point cloud model of the image scene; from the relationship between the scene points, their normal vectors, the light source direction and the image RGB values, set up a system of linear equations and estimate the direction vector of the light source by least squares;
(2) assume the objects in the scene are Lambertian, choose an illumination model, and build the energy function of the Phong illumination model from the light source direction and the normal vectors;
(3) minimize the energy function with an optimization method; by setting the partial derivatives of the energy function to zero, obtain the ambient light value A_i and the diffuse reflection value D_i of each object in the image scene;
(4) compute the shadow and highlight regions of the image from the ambient light A_i and diffuse reflection value D_i calculated in step 3, and save them as images as intermediate results;
(5) compute the illumination direction of the target RGB image, add the 3D model of the original image into the target image according to the assumed illumination model, carry out fused rendering, and output the final rendered image.
2. The method for calculating illumination parameters and carrying out relighting rendering based on an image and a model according to claim 1, characterized in that, in step 1, when the normal vectors are unknown they are computed as follows: because the 3D point cloud of the whole model contains a very large number of points, the point cloud is divided into multiple parts to prevent memory overflow during computation; each part takes the k nearest points (k is set to 3000 here), and the covariance matrix of the k points in each part is recorded; for each point whose normal vector is required, PCA decomposition is applied to the covariance matrix to obtain the normal vector of that point.
3. The method for calculating illumination parameters and carrying out relighting rendering based on an image and a model according to claim 1, characterized in that, in step 1, the light source direction l is computed as follows: the input model is obtained by an ordinary purely visual 3D reconstruction method; it records the 3D point cloud of the image scene and, for each 3D point, the coordinates of the corresponding pixel in the image, i.e. each 3D point in the model can be mapped to a pixel in the image. Because the image scene may contain more than one object and the illumination coefficient of each object is different, the image is first segmented before solving for the light source direction, and each distinct object region is labelled i. Assuming there is only a single light source in the scene, a simple illumination model I = ρ l^T n(P) is chosen, where I is the RGB value of an image pixel, ρ is the constant illumination coefficient associated with the object, l is the light source direction, P is the spatial coordinate of a 3D point, n(P) is the normal vector at P, and T denotes the vector transpose. For each pixel in each region of the image a linear equation is set up from this illumination model; solving the resulting system of linear equations yields the light source direction l.
4. The method for calculating illumination parameters and carrying out relighting rendering based on an image and a model according to claim 1, characterized in that the illumination model chosen in step 2 is:
I = I_a k_a + I_p S_p (k_d <l, n> + k_s H_p)
where I is the RGB value of an image pixel, I_a is the ambient light colour, I_p is the brightness of the light source, k_a, k_d and k_s are the ambient, diffuse and specular coefficients respectively, S_p ∈ [0, 1] is the shadow coefficient, H_p is the highlight coefficient, l is the light source direction and n is the normal vector; the energy function E(l) is defined as:
E(l) = Σ_r ( I(r) − A_i − D_i <l, n(r)> )^2
where r is an image pixel, I(r) is the RGB value at pixel r, and n(r) is the normal vector of the model 3D point corresponding to pixel r; writing ρ for the constant illumination coefficient associated with an object, the ambient light value of each region i in the image is A_i = I_a ρ and the diffuse reflection value is D_i = I_p ρ.
5. The method for calculating illumination parameters and carrying out relighting rendering based on an image and a model according to claim 1, characterized in that the shadow and highlight coefficients of the scene in step 4 are computed according to the following scattering model:
Shadow:
Highlight:
where the parameters t_s and t_n are positive thresholds, set manually, that adjust the image smoothness, A_i and D_i are the ambient light and diffuse reflection values of region i in the image, and I(r) is the RGB value of pixel r in the image.
6. The method for calculating illumination parameters and carrying out relighting rendering based on an image and a model according to claim 1, characterized in that, in step 5, the final rendering still uses the illumination model originally used to solve for the light source direction; once the light source direction of a scene and light source information such as shadows and highlights are available, the objects of that scene can be added into another target scene according to the illumination model; it is only necessary to obtain the light source information of the target scene and substitute it into the illumination model to obtain the final rendering result.
CN201510771082.4A 2015-11-12 2015-11-12 Method for calculating illumination parameters based on image and model and carrying out relighting rendering Expired - Fee Related CN105447906B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510771082.4A CN105447906B (en) 2015-11-12 2015-11-12 Method for calculating illumination parameters based on image and model and carrying out relighting rendering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510771082.4A CN105447906B (en) 2015-11-12 2015-11-12 Method for calculating illumination parameters based on image and model and carrying out relighting rendering

Publications (2)

Publication Number Publication Date
CN105447906A true CN105447906A (en) 2016-03-30
CN105447906B CN105447906B (en) 2018-03-13

Family

ID=55558037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510771082.4A Expired - Fee Related CN105447906B (en) 2015-11-12 2015-11-12 Method for calculating illumination parameters based on image and model and carrying out relighting rendering

Country Status (1)

Country Link
CN (1) CN105447906B (en)

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106023296A (en) * 2016-05-27 2016-10-12 华东师范大学 Fluid scene illumination parameter calculating method
CN106204714A (en) * 2016-08-01 2016-12-07 华东师范大学 Video fluid illumination calculation method based on Phong model
CN106570928A (en) * 2016-11-14 2017-04-19 河海大学 Image-based re-lighting method
CN107424206A (en) * 2017-04-14 2017-12-01 苏州蜗牛数字科技股份有限公司 A kind of interactive approach that the performance of virtual scene shadow is influenceed using actual environment
CN107506714A (en) * 2017-08-16 2017-12-22 成都品果科技有限公司 A kind of method of face image relighting
CN107909640A (en) * 2017-11-06 2018-04-13 清华大学 Face weight illumination method and device based on deep learning
CN107944420A (en) * 2017-12-07 2018-04-20 北京旷视科技有限公司 The photo-irradiation treatment method and apparatus of facial image
CN108364292A (en) * 2018-03-26 2018-08-03 吉林大学 A kind of illumination estimation method based on several multi-view images
CN108460841A (en) * 2018-01-23 2018-08-28 电子科技大学 A kind of indoor scene light environment method of estimation based on single image
CN108509887A (en) * 2018-03-26 2018-09-07 深圳超多维科技有限公司 A kind of acquisition ambient lighting information approach, device and electronic equipment
CN108682041A (en) * 2018-04-11 2018-10-19 浙江传媒学院 A method of multiple light courcess rendering is carried out based on the sampling of matrix ranks and deep learning
CN108765537A (en) * 2018-06-04 2018-11-06 北京旷视科技有限公司 A kind of processing method of image, device, electronic equipment and computer-readable medium
CN109224448A (en) * 2018-09-25 2019-01-18 北京天马时空网络技术有限公司 A kind of method and apparatus of streamer rendering
CN109389113A (en) * 2018-10-29 2019-02-26 大连恒锐科技股份有限公司 A kind of multi-function footprint acquisition equipment
CN109448098A (en) * 2018-09-29 2019-03-08 北京航空航天大学 A method of virtual scene light source is rebuild based on individual night scene image of building
CN109618472A (en) * 2018-07-16 2019-04-12 马惠岷 Lamp light control method and system
CN109785423A (en) * 2018-12-28 2019-05-21 广州华多网络科技有限公司 Image light compensation method, device and computer equipment
CN110009723A (en) * 2019-03-25 2019-07-12 阿里巴巴集团控股有限公司 The method for reconstructing and device of environment light source
CN111063034A (en) * 2019-12-13 2020-04-24 四川中绳矩阵技术发展有限公司 Time domain interaction method
CN111147745A (en) * 2019-12-30 2020-05-12 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and storage medium
CN111798384A (en) * 2020-06-10 2020-10-20 武汉大学 Reverse rendering human face image illumination information editing method
CN111815750A (en) * 2020-06-30 2020-10-23 深圳市商汤科技有限公司 Method and device for polishing image, electronic equipment and storage medium
CN111968216A (en) * 2020-07-29 2020-11-20 完美世界(北京)软件科技发展有限公司 Volume cloud shadow rendering method and device, electronic equipment and storage medium
CN112258622A (en) * 2020-10-26 2021-01-22 北京字跳网络技术有限公司 Image processing method, image processing device, readable medium and electronic equipment
CN112819941A (en) * 2021-03-05 2021-05-18 网易(杭州)网络有限公司 Method, device, equipment and computer-readable storage medium for rendering water surface
CN112819940A (en) * 2021-01-29 2021-05-18 网易(杭州)网络有限公司 Rendering method and device and electronic equipment
CN112927342A (en) * 2021-02-22 2021-06-08 中铁二院工程集团有限责任公司 Illumination calculation method and fixed pipeline rendering and programmable pipeline rendering methods
WO2021226862A1 (en) * 2020-05-13 2021-11-18 Shanghaitech University Neural opacity point cloud
CN113920036A (en) * 2021-12-14 2022-01-11 武汉大学 Interactive relighting editing method based on RGB-D image
WO2022042470A1 (en) * 2020-08-31 2022-03-03 浙江商汤科技开发有限公司 Image decomposition method and related apparatus and device
WO2022140887A1 (en) * 2020-12-28 2022-07-07 华为技术有限公司 Image processing method and apparatus
CN116385614A (en) * 2023-03-29 2023-07-04 深圳海拓时代科技有限公司 3D vision module rendering control system based on visualization


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050080602A1 (en) * 2003-10-10 2005-04-14 Microsoft Corporation Systems and methods for all-frequency relighting using spherical harmonics and point light distributions
CN101246600A (en) * 2008-03-03 2008-08-20 北京航空航天大学 Method for real-time generating reinforced reality surroundings by spherical surface panoramic camera
WO2009143163A2 (en) * 2008-05-21 2009-11-26 University Of Florida Research Foundation, Inc. Face relighting from a single image
CN103035025A (en) * 2012-12-28 2013-04-10 浙江大学 Material high realistic rendering algorithm based on bidirectional reflectance distribution function (BRDF) measured data

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
GD FINLAYSON ET AL: "Removing shadows from images", 《EUROPEAN CONFERENCE ON COMPUTER VISION》 *
DING XIAODONG: "Image-based Relighting Technology" (基于图像的重光照技术), 《China Master's Theses Full-text Database, Information Science and Technology (中国优秀硕士学位论文全文数据库 信息科技辑)》 *
WANG CHENHAO ET AL: "Research on Relighting Methods for Optical Remote Sensing Images" (光学遥感图像重光照方法研究), 《Bulletin of Surveying and Mapping (测绘通报)》 *

Cited By (52)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106023296A (en) * 2016-05-27 2016-10-12 华东师范大学 Fluid scene illumination parameter calculating method
CN106023296B (en) * 2016-05-27 2018-09-28 华东师范大学 Fluid scene illumination parameter computational methods
CN106204714A (en) * 2016-08-01 2016-12-07 华东师范大学 Video fluid illumination calculation method based on Phong model
CN106204714B (en) * 2016-08-01 2019-02-01 华东师范大学 Video fluid illumination calculation method based on Phong model
CN106570928A (en) * 2016-11-14 2017-04-19 河海大学 Image-based re-lighting method
CN106570928B (en) * 2016-11-14 2019-06-21 河海大学 A kind of heavy illumination method based on image
CN107424206A (en) * 2017-04-14 2017-12-01 苏州蜗牛数字科技股份有限公司 A kind of interactive approach that the performance of virtual scene shadow is influenceed using actual environment
CN107424206B (en) * 2017-04-14 2020-09-22 苏州蜗牛数字科技股份有限公司 Interaction method for influencing shadow expression of virtual scene by using real environment
CN107506714A (en) * 2017-08-16 2017-12-22 成都品果科技有限公司 A kind of method of face image relighting
CN107909640A (en) * 2017-11-06 2018-04-13 清华大学 Face weight illumination method and device based on deep learning
CN107909640B (en) * 2017-11-06 2020-07-28 清华大学 Face relighting method and device based on deep learning
CN107944420B (en) * 2017-12-07 2020-10-27 北京旷视科技有限公司 Illumination processing method and device for face image
CN107944420A (en) * 2017-12-07 2018-04-20 北京旷视科技有限公司 The photo-irradiation treatment method and apparatus of facial image
CN108460841A (en) * 2018-01-23 2018-08-28 电子科技大学 A kind of indoor scene light environment method of estimation based on single image
CN108509887A (en) * 2018-03-26 2018-09-07 深圳超多维科技有限公司 A kind of acquisition ambient lighting information approach, device and electronic equipment
CN108364292B (en) * 2018-03-26 2021-05-25 吉林大学 Illumination estimation method based on multiple visual angle images
CN108364292A (en) * 2018-03-26 2018-08-03 吉林大学 A kind of illumination estimation method based on several multi-view images
CN108682041A (en) * 2018-04-11 2018-10-19 浙江传媒学院 A method of multiple light courcess rendering is carried out based on the sampling of matrix ranks and deep learning
CN108682041B (en) * 2018-04-11 2021-12-21 浙江传媒学院 Method for performing multi-light-source rendering based on matrix row and column sampling and deep learning
CN108765537A (en) * 2018-06-04 2018-11-06 北京旷视科技有限公司 A kind of processing method of image, device, electronic equipment and computer-readable medium
CN109618472A (en) * 2018-07-16 2019-04-12 马惠岷 Lamp light control method and system
CN109224448A (en) * 2018-09-25 2019-01-18 北京天马时空网络技术有限公司 A kind of method and apparatus of streamer rendering
CN109448098B (en) * 2018-09-29 2023-01-24 北京航空航天大学 Method for reconstructing virtual scene light source based on single night scene image of building
CN109448098A (en) * 2018-09-29 2019-03-08 北京航空航天大学 A method of virtual scene light source is rebuild based on individual night scene image of building
CN109389113A (en) * 2018-10-29 2019-02-26 大连恒锐科技股份有限公司 A kind of multi-function footprint acquisition equipment
CN109389113B (en) * 2018-10-29 2020-12-15 大连恒锐科技股份有限公司 Multifunctional footprint acquisition equipment
CN109785423A (en) * 2018-12-28 2019-05-21 广州华多网络科技有限公司 Image light compensation method, device and computer equipment
CN109785423B (en) * 2018-12-28 2023-10-03 广州方硅信息技术有限公司 Image light supplementing method and device and computer equipment
CN110009723A (en) * 2019-03-25 2019-07-12 阿里巴巴集团控股有限公司 The method for reconstructing and device of environment light source
CN110009723B (en) * 2019-03-25 2023-01-31 创新先进技术有限公司 Reconstruction method and device of ambient light source
CN111063034B (en) * 2019-12-13 2023-08-04 四川中绳矩阵技术发展有限公司 Time domain interaction method
CN111063034A (en) * 2019-12-13 2020-04-24 四川中绳矩阵技术发展有限公司 Time domain interaction method
CN111147745B (en) * 2019-12-30 2021-11-30 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and storage medium
CN111147745A (en) * 2019-12-30 2020-05-12 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and storage medium
US11727628B2 (en) 2020-05-13 2023-08-15 Shanghaitech University Neural opacity point cloud
WO2021226862A1 (en) * 2020-05-13 2021-11-18 Shanghaitech University Neural opacity point cloud
CN111798384A (en) * 2020-06-10 2020-10-20 武汉大学 Reverse rendering human face image illumination information editing method
CN111815750A (en) * 2020-06-30 2020-10-23 深圳市商汤科技有限公司 Method and device for polishing image, electronic equipment and storage medium
CN111968216B (en) * 2020-07-29 2024-03-22 完美世界(北京)软件科技发展有限公司 Volume cloud shadow rendering method and device, electronic equipment and storage medium
CN111968216A (en) * 2020-07-29 2020-11-20 完美世界(北京)软件科技发展有限公司 Volume cloud shadow rendering method and device, electronic equipment and storage medium
WO2022042470A1 (en) * 2020-08-31 2022-03-03 浙江商汤科技开发有限公司 Image decomposition method and related apparatus and device
CN112258622A (en) * 2020-10-26 2021-01-22 北京字跳网络技术有限公司 Image processing method, image processing device, readable medium and electronic equipment
WO2022140887A1 (en) * 2020-12-28 2022-07-07 华为技术有限公司 Image processing method and apparatus
CN112819940A (en) * 2021-01-29 2021-05-18 网易(杭州)网络有限公司 Rendering method and device and electronic equipment
CN112819940B (en) * 2021-01-29 2024-02-23 网易(杭州)网络有限公司 Rendering method and device and electronic equipment
CN112927342B (en) * 2021-02-22 2022-12-20 中铁二院工程集团有限责任公司 Illumination calculation method and fixed pipeline rendering and programmable pipeline rendering methods
CN112927342A (en) * 2021-02-22 2021-06-08 中铁二院工程集团有限责任公司 Illumination calculation method and fixed pipeline rendering and programmable pipeline rendering methods
CN112819941A (en) * 2021-03-05 2021-05-18 网易(杭州)网络有限公司 Method, device, equipment and computer-readable storage medium for rendering water surface
CN112819941B (en) * 2021-03-05 2023-09-12 网易(杭州)网络有限公司 Method, apparatus, device and computer readable storage medium for rendering water surface
CN113920036A (en) * 2021-12-14 2022-01-11 武汉大学 Interactive relighting editing method based on RGB-D image
CN116385614A (en) * 2023-03-29 2023-07-04 深圳海拓时代科技有限公司 3D vision module rendering control system based on visualization
CN116385614B (en) * 2023-03-29 2024-03-01 深圳海拓时代科技有限公司 3D vision module rendering control system based on visualization

Also Published As

Publication number Publication date
CN105447906B (en) 2018-03-13

Similar Documents

Publication Publication Date Title
CN105447906A (en) Method for calculating lighting parameters and carrying out relighting rendering based on image and model
US10692277B1 (en) Dynamically estimating lighting parameters for positions within augmented-reality scenes using a neural network
CN107341853B (en) Virtual-real fusion method and system for super-large virtual scene and dynamic screen shooting
Hall et al. A testbed for realistic image synthesis
CN102096941B (en) Consistent lighting method under falsehood-reality fused environment
CN111968215B (en) Volume light rendering method and device, electronic equipment and storage medium
US8928662B2 (en) Apparatus, method, and system for demonstrating a lighting solution by image rendering
CN108460841A (en) A kind of indoor scene light environment method of estimation based on single image
Li et al. Physically-based editing of indoor scene lighting from a single image
CN104077802A (en) Method for improving displaying effect of real-time simulation image in virtual scene
CN103995700A (en) Method for achieving global illumination of 3D game engine
Sheng et al. A spatially augmented reality sketching interface for architectural daylighting design
Grosch et al. Consistent interactive augmentation of live camera images with correct near-field illumination
Ignatenko et al. A Real-Time 3D Rendering System with BRDF Materials and Natural Lighting
CN116894922A (en) Night vision image generation method based on real-time graphic engine
Thompson et al. Real-time mixed reality rendering for underwater 360 videos
Happa et al. Studying illumination and cultural heritage
CN114139249A (en) Automatic light distribution method and device based on illusion engine and electronic equipment
CN114139250A (en) Automatic light distribution method, device, equipment and storage medium based on illusion engine
Sheng et al. Virtual heliodon: Spatially augmented reality for architectural daylighting design
CN112258621A (en) Method for observing three-dimensional rendering two-dimensional animation in real time
Navvab et al. Evaluation of historical museum interior lighting system using fully immersive virtual luminous environment
Nasman et al. Physical avatars in a projector-camera tangible user interface enhance quantitative simulation analysis and engagement
Debelov et al. Light mesh: soft shadows as interpolation of visibility
Li et al. Translucent material transfer based on single images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180313

Termination date: 20181112

CF01 Termination of patent right due to non-payment of annual fee