WO2015070536A1 - User information acquisition method and user information acquisition apparatus - Google Patents

Publication number
WO2015070536A1
Authority
WO
WIPO (PCT)
Prior art keywords: user, image, related information, information, fundus
Application number: PCT/CN2014/071141
Other languages: English (en), French (fr)
Inventor: 杜琳
Original Assignee: 北京智谷睿拓技术服务有限公司
Application filed by 北京智谷睿拓技术服务有限公司
Priority to US14/888,204 (US9838588B2)
Publication of WO2015070536A1

Classifications

    • H04N23/66 Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/80 Camera processing pipelines; components thereof
    • G02B27/017 Head-up displays, head mounted
    • G02B27/0172 Head mounted characterised by optical features
    • G02B2027/0178 Eyeglass type
    • G02B2027/0187 Display position adjusting means slaved to motion of at least a part of the body of the user, e.g. head, eye
    • G06F1/163 Wearable computers, e.g. on a belt
    • G06F21/16 Program or content traceability, e.g. by watermarking
    • G06F21/31 User authentication
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 Eye tracking input arrangements
    • G06F3/0304 Detection arrangements using opto-electronic means
    • G06F3/04883 Gesture-based input on a touch-screen or digitiser, e.g. for inputting data by handwriting, gesture or text

Definitions

  • The present application relates to information acquisition technology, and in particular to a user information acquisition method and apparatus.

Background
  • Electronic devices usually lock their screens to save energy and to prevent misoperation. To actually operate the device, the user often needs to unlock the screen first and then launch the application that performs the desired function.
  • The technical problem to be solved by the present application is to provide a user information acquisition technique that obtains user-related information, thereby helping the user launch related applications quickly, conveniently, and securely.
  • To this end, the present application provides a method for acquiring user information, including: acquiring an image that includes at least one digital watermark; acquiring at least one piece of user-related information, corresponding to the current user, contained in the digital watermark, where the user-related information includes application startup information for starting a corresponding application; and projecting the user-related information to a fundus of the user.
  • The present application also provides a user information acquisition apparatus, including:
  • an image acquisition module configured to acquire an image including at least one digital watermark;
  • an information acquisition module configured to acquire at least one piece of user-related information, corresponding to the current user, contained in the digital watermark in the image, where the user-related information includes application startup information used to start a corresponding application; and
  • a projection module configured to project the user-related information to a fundus of the user.
  • The present application further provides a wearable device comprising the above user information acquisition apparatus.
  • With at least one of the foregoing technical solutions, the embodiments of the present application can obtain, from an image that includes a digital watermark, the user-related information corresponding to the current user, so that the user can quickly obtain the application startup information and thereby launch the corresponding application quickly, safely, and conveniently.
  • FIG. 1 is a flowchart of steps of a method for acquiring user information according to an embodiment of the present application
  • FIG. 2 and FIG. 3 are schematic diagrams of application of a method for acquiring user information according to an embodiment of the present disclosure
  • FIG. 4a and FIG. 4b show, respectively, a light spot pattern used in a method for acquiring user information according to an embodiment of the present application, and the image of the fundus collected when that light spot pattern is projected;
  • FIG. 5 is a schematic block diagram showing a structure of a user information acquiring apparatus according to an embodiment of the present application.
  • FIG. 6a and 6b are schematic block diagrams showing the structure of two other user information acquiring apparatuses according to an embodiment of the present application
  • FIG. 7a is a schematic block diagram showing a structure of a position detecting module used by the user information acquiring apparatus according to an embodiment of the present application
  • FIG. 7b is a schematic block diagram showing a structure of a location detecting module used by another user information acquiring apparatus according to an embodiment of the present application.
  • FIG. 7c and FIG. 7d are optical path diagrams for position detection by a position detecting module used by the user information acquiring apparatus according to an embodiment of the present application;
  • FIG. 8 is a schematic diagram of a user information acquiring apparatus applied to glasses according to an embodiment of the present application
  • FIG. 9 is a schematic diagram of another user information acquiring apparatus applied to glasses according to an embodiment of the present application;
  • FIG. 10 is a schematic diagram of yet another user information acquiring apparatus applied to glasses according to an embodiment of the present application; FIG. 11 is a schematic structural diagram of a user information acquiring device according to an embodiment of the present application;
  • FIG. 12 is a schematic block diagram showing the structure of a wearable device according to an embodiment of the present application.
  • FIG. 13 is a flowchart of steps of a user information interaction method according to an embodiment of the present application.
  • FIG. 14 is a schematic block diagram showing a structure of a user information interaction apparatus according to an embodiment of the present application.
  • FIG. 15 is a schematic block diagram showing the structure of an electronic terminal according to an embodiment of the present application.
  • Conventionally, the user needs to enter the corresponding user environment first in order to start an application in that environment, which is inconvenient for applications the user needs frequently.
  • Digital watermarking technology embeds identification information into digital carriers for copyright protection, anti-counterfeiting, authentication, information hiding, and so on. Reading and verifying a watermark requires certain devices and specific algorithms, and sometimes third-party authorities must participate in the certification process; these complex processes limit the technology's application to some extent.
  • With wearable devices, especially smart glasses, the digital watermark information that the user sees can be presented visually in the smart glasses, and the password, pattern, motion, or the like for unlocking the user's screen can be embedded as a digital watermark.
  • Accordingly, the embodiments of the present application provide the following technical solutions to help users quickly and securely launch the applications they need.
  • The "user environment" is a usage environment related to the user; for example, the user enters the environment of an electronic terminal system after logging in through the user login interface of an electronic terminal such as a mobile phone or a computer. The usage environment of an electronic terminal system generally includes multiple applications: after the user enters the usage environment of a mobile phone system through the phone's lock screen interface, applications corresponding to the functional modules of the system, such as phone, email, messaging, or camera applications, may be started.
  • The user environment may also be the usage environment of an application after the user logs in through that application's login interface, and this environment may in turn include multiple lower-level applications. For example, after startup, the phone application in the mobile phone system above may further include lower-level applications such as call, contacts, and call records.
  • The embodiment of the present application provides a method for acquiring user information, including:
  • S110: acquiring an image that includes at least one digital watermark;
  • S120: acquiring at least one piece of user-related information, corresponding to the current user, contained in the digital watermark in the image, where the user-related information includes application startup information used to start a corresponding application;
  • S130: projecting the user-related information to the fundus of the user.
  • The method of the embodiments of the present application obtains, from an image that includes a digital watermark, the user-related information corresponding to the current user, so that the user can quickly obtain the application startup information and start the corresponding application quickly, safely, and conveniently.
  • S110 acquires an image including at least one digital watermark.
  • An object seen by the user may be photographed by a smart glasses device; for example, when the user looks at the image, the smart glasses capture it.
  • the image may also be acquired by another device, and the image may be acquired through interaction between devices; or obtained by interaction with a device displaying the image.
  • S120 Acquire at least one user-related information corresponding to the current user that is included in the digital watermark in the image.
  • The digital watermark in the image may be analyzed locally, for example with a personal private key and a public or private watermark extraction method, to extract the user-related information. Alternatively, the image may be transmitted to the outside, for example to a cloud server or a third-party authority, which extracts the digital watermark and returns the user-related information contained in the image.
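As an illustration of the embed/extract round trip in the step above, the sketch below hides and recovers a short piece of user-related information in an image's pixel values via least-significant-bit (LSB) embedding. This is only a classroom stand-in: the patent does not specify its watermarking scheme, the function names and payload format are hypothetical, and a production scheme would be robust to compression and could involve a private key or third-party authority as described.

```python
# Toy least-significant-bit (LSB) watermark: one payload bit per pixel LSB.

def embed_watermark(pixels, message):
    """Hide each bit of `message` in the LSB of one grey-level pixel."""
    bits = []
    for byte in message.encode("utf-8"):
        bits.extend((byte >> i) & 1 for i in range(8))
    assert len(bits) <= len(pixels), "image too small for payload"
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # overwrite the LSB
    return marked

def extract_watermark(pixels, n_chars):
    """Recover `n_chars` bytes of hidden payload from the pixel LSBs."""
    data = bytearray()
    for c in range(n_chars):
        byte = 0
        for i in range(8):
            byte |= (pixels[c * 8 + i] & 1) << i
        data.append(byte)
    return data.decode("utf-8")

# Embed application startup information (a gesture tag) and read it back.
payload = "app:browser;gesture:e"
marked = embed_watermark(list(range(200)), payload)
assert extract_watermark(marked, len(payload)) == payload
```

Robust schemes (spread-spectrum or transform-domain embedding) replace the LSB trick, but the embed/extract round trip has the same shape.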
  • In one embodiment, the image is a login interface 110 of a user environment displayed by a device, and the application startup information is used to directly start, on the login interface, at least one corresponding application in the user environment corresponding to the current user.
  • Quick-start interfaces for some applications on the lock screen of an electronic device, which require no user authentication, are convenient but not very secure.
  • With the method of the embodiments of the present application, the user can obtain, directly from the login interface of the user environment, the application startup information for starting the corresponding application in that environment, so that applications can be started quickly, conveniently, and securely, improving the user experience.
  • The user-related information may further include user authentication information used by the current user to log in to the user environment, for example a user name, password, or gesture.
  • After obtaining the user authentication information, the user can enter the corresponding user environment; for example, a user-set password or a specific finger movement track is input on the lock screen interface of a mobile phone to release the lock screen state and enter the user environment of the mobile phone system.
  • In this case, the user can enter the user environment by inputting the user authentication information 130 shown in FIG. 3 on the screen (for example, by entering the trajectory of the "Una" graphic shown in FIG. 3), without launching an application directly.
  • In one embodiment, before acquiring the at least one piece of user-related information corresponding to the current user contained in the digital watermark in the image, the method further includes authenticating the current user; once the user is authenticated, step S120 is enabled to obtain the user-related information corresponding to that user.
  • For example, a user uses authenticated smart glasses to implement the functions of the steps of the embodiments of the present application.
  • Alternatively, the authentication may be omitted, and a user may obtain, through the corresponding device, the information that the device itself is entitled to obtain. For example, when the functions of the steps of the embodiments of the present application are implemented through smart glasses, a specific pair of glasses may be used only by a specific user; in that case, only the user-related information corresponding to those smart glasses needs to be obtained, and the identity of the user need not be specifically confirmed.
  • S130 projects the user-related information to the fundus of the user.
  • In order to enable the user to obtain the user-related information confidentially, the information is projected to the user's fundus.
  • The projection may directly project the user-related information to the user's fundus through a projection module.
  • Alternatively, the user-related information may be displayed at a location that only the user can see (for example, a display surface of smart glasses), from which it reaches the user's fundus.
  • The first method offers higher privacy because the user-related information reaches the user's eyes directly, without an intermediate display.
  • the step S130 includes:
  • The definition standard described here can be set according to definition measurement parameters commonly used by those skilled in the art, such as the effective resolution of the image.
  • The parameter adjustment step includes: adjusting at least one imaging parameter of at least one optical device on the optical path between the projection position and the user's eye, and/or the position of that optical device in the optical path.
  • The imaging parameters described here include the focal length of the optical device, the optical axis direction, and the like.
  • Through this adjustment, the user-related information can be appropriately projected on the fundus of the user, for example by adjusting the focal length of the optical device so that the user-related information is imaged clearly on the fundus.
  • "Clearly" here means satisfying the at least one set first definition standard.
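For illustration, one simple definition (sharpness) measure that could serve as such a standard is gradient energy: the sum of squared differences between neighbouring pixels. This particular metric is an assumption for the sketch; the patent only requires some set definition standard.

```python
# Gradient energy as an assumed "definition standard": sharper images have
# stronger local gradients, so a higher score means a sharper projection.

def sharpness(img):
    """img: 2-D list of grey values; returns a sharpness score."""
    score = 0
    rows, cols = len(img), len(img[0])
    for r in range(rows):                 # horizontal neighbours
        for c in range(cols - 1):
            score += (img[r][c + 1] - img[r][c]) ** 2
    for r in range(rows - 1):             # vertical neighbours
        for c in range(cols):
            score += (img[r + 1][c] - img[r][c]) ** 2
    return score

sharp = [[0, 255, 0], [255, 0, 255], [0, 255, 0]]
blurred = [[100, 140, 100], [140, 120, 140], [100, 140, 100]]
# The imaging parameters would be adjusted until this score peaks.
assert sharpness(sharp) > sharpness(blurred)
```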
  • In addition to generating the user-related information itself, left-eye and right-eye images with parallax may be generated directly, or the same user-related information may be projected to the two eyes separately with a certain deviation, thereby achieving a stereoscopic display effect of the user-related information; the optical axis parameters of the optical device can be adjusted to achieve this effect.
  • The step S130 further includes: projecting the user-related information to the fundus corresponding to the positions of the pupil for different optical axis directions of the eye.
  • The function of the foregoing step may be implemented by a curved optical device such as a curved beam splitter, but content displayed through a curved optical device is generally deformed. Therefore, the step S130 further includes: pre-processing the projected user-related information so that it carries an inverse deformation opposite to that deformation; after passing through the curved optical device, the inverse deformation cancels the deformation introduced by the device, so that the user-related information received by the user's eyes appears as intended.
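The cancellation can be shown in one dimension: if the curved optical device applies a known mapping, pre-warping the projected content with the inverse mapping leaves the net result undistorted. The linear `deform` below is a made-up stand-in for a calibrated 2-D warp.

```python
# One-dimensional sketch of anti-deformation pre-processing.

def deform(x, gain=1.25, offset=4.0):
    """Deformation introduced by the (hypothetical) curved reflector."""
    return gain * x + offset

def predeform(x, gain=1.25, offset=4.0):
    """Inverse deformation applied to the content before projection."""
    return (x - offset) / gain

# deform(predeform(x)) == x: the eye receives the undistorted content.
for sample in (0.0, 10.0, 37.5):
    assert abs(deform(predeform(sample)) - sample) < 1e-9
```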
  • In some cases, the user-related information projected into the user's eyes does not need to be aligned with the image; for example, when the user is required to enter, in a certain order, a set of application startup information or user authentication information such as "1234" into an input box displayed in the image, the information only needs to be projected into the user's eyes so that the user can see it.
  • the step S130 includes:
  • The projected user-related information is aligned, at the fundus of the user, with the image seen by the user.
  • For example, the user acquires six pieces of user-related information in step S120, including five pieces of application startup information 120 and one piece of user authentication information 130.
  • The application startup information 120 that the user sees through step S130 includes identifier information 121 for identifying the application (which may be the graphic shown in FIG. 3, or other text, symbols, and so on; in some embodiments, the identifier information 121 may also be displayed directly on the image) and input information 122 for starting the application (which may be the graphic trajectory shown in FIG. 3, or numbers, symbols, and so on).
  • In some embodiments, the application startup information may include only the input information. Taking the application startup information 120 in the upper left corner of the login interface 110 shown in FIG. 3 as an example, it includes the identifier information 121 of a browser application on the left side and an "e"-shaped graphic track on the right side; the user can directly launch the browser application by drawing an "e"-shaped trajectory on the screen.
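Conceptually, each piece of application startup information pairs identifier information with input information (a gesture trajectory), and the drawn trajectory selects the application to launch. A minimal sketch with invented names and gestures:

```python
# Hypothetical mapping from gesture trajectory to application; the table
# entries are illustrative, not part of the patent.

startup_info = {
    "e": "browser",   # "e"-shaped track starts the browser, as in FIG. 3
    "m": "mail",
    "c": "camera",
}

def launch_for_gesture(gesture):
    app = startup_info.get(gesture)
    return f"launching {app}" if app else "no matching application"

assert launch_for_gesture("e") == "launching browser"
assert launch_for_gesture("z") == "no matching application"
```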
  • In other embodiments, a specific graphic trajectory drawn at a specific position, for example an "e"-shaped track drawn at the screen position where it is shown in FIG. 3, can start the corresponding application.
  • In such embodiments, the projected user-related information needs to be aligned with the image so that the user sees the user-related information at the desired location.
  • Aligning the projected user-related information with the image seen by the user includes: according to the position of the user's gaze point relative to the user, aligning the projected user-related information, at the fundus of the user, with the image seen by the user.
  • the position corresponding to the user's gaze point is the location where the image is located.
  • The position of the user's gaze point may be detected in several ways, for example with a depth sensor such as an infrared rangefinder.
  • The step of detecting the user's current gaze point position by method iii) includes: a fundus image collection step of collecting an image of the user's fundus; and an image processing step of analyzing the collected fundus images to obtain, for the image satisfying at least one set second definition standard, the imaging parameters of the optical path between the fundus image collection position and the eye together with at least one optical parameter of the eye, and calculating the position of the user's current gaze point.
  • The definition standard here is a definition standard commonly used by those skilled in the art, as described above; it may be the same as or different from the first definition standard described above.
  • The image "presented at the fundus" here is mainly the image presented on the retina, which may be an image of the fundus itself or an image of another object projected onto the fundus, such as the spot pattern mentioned below.
  • An image satisfying the definition standard can be obtained, at a certain position or state of an optical device, by adjusting the focal length of that optical device on the optical path between the eye and the collection position and/or its position in the optical path; the adjustment may be continuous and in real time.
  • The optical device may be a focus-adjustable lens that adjusts its focal length by changing its own refractive index and/or shape.
  • For example, the focus-adjustable lens is filled with a specific liquid crystal medium, and the arrangement of the liquid crystal medium is changed by adjusting the voltage on its corresponding electrode, thereby changing the refractive index of the lens.
  • Alternatively, the optical device may be a lens group that adjusts its own focal length by adjusting the relative positions of the lenses in the group; some lenses in the group may themselves be the focus-adjustable lenses described above.
  • In addition to changing characteristics of the optical device itself, the optical path parameters of the system can also be changed by adjusting the position of the optical device on the optical path.
  • The image processing step further includes: analyzing the images collected in the fundus image collection step to find the image satisfying at least one set second definition standard, and calculating the optical parameters of the eye based on that image and on the imaging parameters that were known when it was obtained.
  • The adjustment in the adjustable imaging step makes it possible to collect an image satisfying the standard, but the image processing step is needed to find it; the optical parameters of the eye can then be calculated from that image and the known optical path parameters.
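Once the imaging parameters for the sharpest fundus image are known, Gaussian (thin-lens) optics relates them to the distance of the point the eye is focused on. The single-lens model below is a simplification with assumed numbers; the real system chains the eye's own optics with the device's adjustable optics.

```python
# Thin-lens relation: 1/f = 1/d_o + 1/d_i, solved for the object distance.

def focus_distance(focal_length_mm, image_distance_mm):
    """Solve 1/f = 1/d_o + 1/d_i for the object distance d_o (in mm)."""
    return 1.0 / (1.0 / focal_length_mm - 1.0 / image_distance_mm)

# Example: a 50 mm equivalent focal length with the image sharp 52 mm
# behind the lens puts the gaze point about 1.3 m away.
assert abs(focus_distance(50.0, 52.0) - 1300.0) < 1e-6
```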
  • the image processing step may further include:
  • The projected spot may be used merely to illuminate the fundus, with no specific pattern.
  • The projected spot may also include a feature-rich pattern; rich features ease detection and improve detection accuracy.
  • An example of a spot pattern P, which may be formed by a spot pattern generator such as frosted glass, is shown in FIG. 4a; FIG. 4b shows the image of the fundus collected when the spot pattern P is projected.
  • Preferably, the spot is an infrared spot invisible to the eye, and light in the projected spot other than this eye-invisible light can be filtered out.
  • The method of the present application may further include a step of controlling the brightness of the projected spot according to analysis results of the collected images, such as the contrast of image features, texture features, and the like.
  • A special case of controlling the brightness of the projected spot is starting or stopping the projection.
  • For example, the projection can be stopped periodically when the user keeps gazing at the same point; and when the fundus is bright enough, the projection can be stopped and the fundus's own information used to detect the distance from the current focus point of the eye's line of sight to the eye.
  • In addition, the brightness of the projected spot can be controlled according to the ambient light.
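The brightness control described above can be summarized as a small control rule: stop projecting when the fundus image is already bright enough, otherwise raise the spot drive level as the fundus image dims or ambient light falls. The thresholds and gains here are illustrative assumptions.

```python
# Toy brightness controller for the projected infrared spot.

def spot_brightness(fundus_mean, ambient, bright_enough=180, k=0.5):
    """Return a drive level in [0, 1]; 0.0 means stop the projection."""
    if fundus_mean >= bright_enough:
        return 0.0                       # fundus bright enough: stop
    deficit = (bright_enough - fundus_mean) / bright_enough
    level = k * deficit + 0.2 * (1.0 - ambient)  # dimmer scene -> brighter spot
    return max(0.0, min(1.0, level))

assert spot_brightness(200, ambient=0.5) == 0.0              # projection stopped
assert spot_brightness(40, 0.1) > spot_brightness(120, 0.1)  # dim fundus -> more light
```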
  • the image processing step further includes:
  • Calibration is performed with fundus images beforehand to obtain at least one reference image corresponding to the image presented at the fundus.
  • The collected images are compared with the reference image, and the image satisfying at least one set second definition standard may be taken as the collected image with the smallest difference from the reference image.
  • The difference between the currently obtained image and the reference image can be calculated with an existing image processing algorithm, for example a classical phase-difference autofocus algorithm.
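The comparison step can be sketched as: among images captured at different focus settings, choose the one with the smallest difference from the calibrated reference. Sum of absolute differences stands in here for the classical phase-difference autofocus algorithm mentioned above.

```python
# Pick the collected image closest to the calibrated reference image.

def difference(img_a, img_b):
    return sum(abs(a - b) for a, b in zip(img_a, img_b))

def best_match(candidates, reference):
    return min(candidates, key=lambda img: difference(img, reference))

reference = [10, 200, 10, 200]                  # calibrated fundus reference
candidates = [[12, 190, 15, 205],               # slightly defocused
              [80, 120, 90, 130],               # badly defocused
              [11, 199, 10, 201]]               # nearly in focus
assert best_match(candidates, reference) == [11, 199, 10, 201]
```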
  • the optical parameters of the eye can include an optical axis direction of the eye derived from characteristics of the eye when the image is captured to the at least one set second clarity criterion.
  • the features of the eye here may be obtained from the image that satisfies at least one of the set second definition criteria, or may be otherwise obtained.
  • the gaze direction of the user's eye line of sight can be obtained according to the optical axis direction of the eye.
  • the optical axis direction of the eye can be obtained from the features of the fundus at the time the image satisfying at least one set second sharpness standard was obtained; determining the optical axis direction through the features of the fundus is more accurate than determining it through features of the pupil or the eye surface.
  • the size of the spot pattern may be larger than the fundus viewable area or smaller than the fundus viewable area, where:
  • a classical feature point matching algorithm, for example the Scale Invariant Feature Transform (SIFT) algorithm, may be used to detect the spot pattern on the collected image.
  • the direction of the observer's line of sight can be determined by deriving the optical axis direction of the eye from the position of the spot pattern on the resulting image relative to the original spot pattern (obtained by image calibration).
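A toy version of the offset estimation just described: given feature correspondences between the calibrated spot pattern and the pattern observed on the fundus image (which a real system would obtain with SIFT or a similar matcher), average the displacement and convert it to an angular deviation. The pixels-per-degree constant and all coordinates are assumptions for illustration.

```python
def optical_axis_offset(matched_pairs, pixels_per_degree=50.0):
    """Estimate the eye's optical-axis deviation (in degrees) from
    the mean displacement of matched spot-pattern feature points.

    matched_pairs: [(x_ref, y_ref, x_obs, y_obs), ...] -- the kind
    of matches a real system would get from e.g. SIFT; the values
    below are made up.
    pixels_per_degree: assumed calibration constant.
    """
    n = len(matched_pairs)
    dx = sum(xo - xr for xr, _, xo, _ in matched_pairs) / n
    dy = sum(yo - yr for _, yr, _, yo in matched_pairs) / n
    return dx / pixels_per_degree, dy / pixels_per_degree

pairs = [(100, 100, 110, 100), (200, 150, 210, 150)]
print(optical_axis_offset(pairs))  # -> (0.2, 0.0)
```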
  • the optical axis direction of the eye may also be obtained according to the feature of the eye pupil when the image satisfying the at least one set second definition standard is obtained.
  • the characteristics of the pupil of the eye may be obtained from the image satisfying at least one set of second definition criteria, or may be acquired separately. Obtaining the optical axis direction of the eye through the pupillary feature of the eye is a prior art and will not be described here.
  • a calibration step of the optical axis direction of the eye may be further included to more accurately determine the direction of the optical axis of the eye.
  • the known imaging parameters include fixed imaging parameters and real-time imaging parameters, where the real-time imaging parameters are parameter information of the optical devices at the time the image satisfying at least one set second sharpness criterion is acquired; this parameter information may be recorded in real time when that image is acquired.
  • the calculated distance from the eye's focus point to the eye can be combined with the optical axis direction of the eye (the specific process will be described in detail in the device section) to obtain the position of the eye's gaze point.
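Combining the two quantities is straightforward vector arithmetic; a minimal sketch, with the coordinate frame and all numeric values assumed for illustration:

```python
import math

def gaze_point(eye_pos, axis_dir, distance):
    """Combine the eye's optical-axis direction with the focus
    distance to get the 3-D gaze point relative to the user.
    axis_dir need not be normalized; it is normalized here."""
    norm = math.sqrt(sum(c * c for c in axis_dir))
    return tuple(p + distance * c / norm
                 for p, c in zip(eye_pos, axis_dir))

# eye at origin, looking straight ahead along +z, focused 0.5 m away
print(gaze_point((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.5))  # -> (0.0, 0.0, 0.5)
```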
  • the user-related information may be stereoscopically directed to the user.
  • the user's fundus projection may be stereoscopically directed to the user.
  • the stereoscopic display may project the same information to both eyes, with the projection positions adjusted in step S130 so that the two eyes see the information with parallax, forming a stereoscopic display effect.
  • the user-related information includes stereoscopic information corresponding to the two eyes of the user, and in step S130, corresponding user-related information is respectively projected to the two eyes of the user. That is, the user-related information includes left-eye information corresponding to the left eye of the user and right-eye information corresponding to the right eye of the user, and the left-eye information is projected to the left eye of the user when the projection is performed, and the right-eye information is projected to the right of the user. Eyes, so that the user-related information that the user sees has a suitable stereoscopic display effect, resulting in a better user experience.
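A rough sketch of the parallax variant (the same information projected to both eyes with shifted positions). The similar-triangles disparity model, the interpupillary distance, and the virtual projection distance are all assumptions chosen for illustration, not values from the patent.

```python
def stereo_positions(base_xy, depth_m, ipd_m=0.065, screen_m=0.5):
    """Horizontal parallax for projecting the same information so it
    appears at depth_m; ipd_m (interpupillary distance) and screen_m
    (virtual projection-plane distance) are assumed constants."""
    # similar-triangles disparity: shift grows as target depth recedes
    shift = ipd_m / 2.0 * (1.0 - screen_m / depth_m)
    x, y = base_xy
    return (x - shift, y), (x + shift, y)  # (left eye, right eye)

left, right = stereo_positions((0.0, 0.0), depth_m=1.0)
print(left, right)
```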
  • the stereoscopic projection described above allows the user to view the three-dimensional spatial information.
  • the above method of the embodiment of the present application enables the user to see stereoscopic user-related information and learn the specific location and the specific gesture, so that the user can perform, at that specific location, the gesture that the user-related information prompts; even if another person sees the gesture action performed by the user, that person does not learn the spatial information, so the confidentiality of the user-related information is better.
  • the embodiment of the present application further provides a user information acquiring apparatus 500, including: an image acquiring module 510, configured to acquire an image including at least one digital watermark;
  • An information obtaining module 520, configured to acquire at least one piece of user-related information corresponding to the current user contained in the digital watermark of the image, where the user-related information includes application startup information;
  • a projection module 530 is configured to project the user-related information to the fundus of the user.
  • the device of the embodiment of the present application can obtain the user-related information corresponding to the current user from an image containing a digital watermark, so that the user can quickly obtain the application startup information and the corresponding application can be started quickly, safely, and conveniently.
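As a toy illustration of carrying user-related information in a digital watermark, the sketch below embeds and recovers bits in pixel least-significant bits. Real watermarking schemes (and the public/private extraction methods the text mentions) are far more robust; this only makes the data flow concrete, and all pixel values and bits are invented.

```python
def embed_lsb(pixels, message_bits):
    """Embed watermark bits in the least-significant bit of each
    pixel (toy digital watermark; real schemes are more robust)."""
    marked = [(p & ~1) | b for p, b in zip(pixels, message_bits)]
    return marked + pixels[len(message_bits):]

def extract_lsb(pixels, n_bits):
    """Recover the first n_bits watermark bits."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [200, 201, 90, 91, 52, 53]
bits = [1, 0, 1, 1]                 # e.g. encoded startup information
marked = embed_lsb(cover, bits)
print(extract_lsb(marked, 4))       # -> [1, 0, 1, 1]
```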
  • the device in the embodiment of the present application is a wearable device used near the user's eyes, such as smart glasses.
  • the image is automatically acquired by the image acquisition module 510, and the information is projected to the user's fundus after the user-related information is obtained.
  • the image acquisition module 510 may take a plurality of forms; for example, as shown in FIG. 6a, the image acquisition module 510 includes a shooting sub-module 511 for capturing the image.
  • the shooting sub-module 511 can be, for example, a camera of smart glasses for capturing images seen by a user.
  • the image obtaining module 510 includes:
  • a first communication sub-module 512 is configured to receive the image.
  • the image may be acquired by another device and then sent to the device of the embodiment of the present application; or the image may be acquired through interaction with a device that displays the image (i.e., the displaying device transmits the displayed image information to the apparatus of the embodiment of the present application).
  • the information acquiring module 520 may take various forms; for example, as shown in FIG. 6a, the information acquiring module 520 includes: an information extracting sub-module 521, configured to extract the user-related information from the image.
  • the information extraction sub-module 521 may, for example, use a personal private key and a public or private watermark extraction method to analyze the digital watermark in the image and extract the user-related information.
  • the information acquiring module 520 includes: a second communication submodule 522, configured to:
  • the image is transmitted to the outside; the user-related information in the image is received from the outside.
  • the image may be sent to the outside, for example to a cloud server and/or a third-party authority, and the digital watermark in the image is extracted by the cloud server or the third-party authority.
  • the user-related information is sent back to the second communication sub-module 522 of the embodiment of the present application.
  • the functions of the first communication submodule 512 and the second communication submodule 522 may be implemented by the same communication module.
  • the device 500 further includes: a user authentication module 550, configured to authenticate the current user.
  • the user authentication module may be an existing user authentication module, for example an authentication module that authenticates through biometrics such as the user's pupil or fingerprint, or a module that authenticates through instructions input by the user; these authentication modules are existing technology and are not described here.
  • when a user uses smart glasses that implement the functions of the device of the embodiment of the present application, the smart glasses first authenticate the user so that they know the user's identity; then, when the information acquiring module 520 extracts user-related information, only the user-related information corresponding to that user is obtained. That is, after authenticating once on his or her own smart glasses, the user can obtain the user-related information through them.
  • the projection module 530 includes: An information projection sub-module 531, configured to project the user-related information;
  • a parameter adjustment sub-module 532, configured to adjust at least one projection imaging parameter of the optical path between the projection position and the user's eye until the image formed at the user's fundus by the user-related information satisfies at least one set first definition standard.
  • the parameter adjustment submodule 532 includes:
  • At least one tunable lens device has an adjustable focal length and/or an adjustable position on the optical path between the projected position and the user's eye.
  • the projection module 530 includes:
  • a curved surface beam splitting device 533, configured to transmit the user-related information to the user's fundus, corresponding to the positions of the pupil when the optical axis directions of the eye differ.
  • the projection module 530 includes:
  • An inverse deformation processing sub-module 534 is configured to perform an inverse deformation process corresponding to the position of the pupil when the optical axis direction of the eye is different from the user-related information, so that the fundus receives the user-related information that needs to be presented.
  • the projection module 530 includes:
  • An alignment adjustment sub-module 535 is configured to align the projected user-related information with the image seen by the user in the fundus of the user.
  • the device further includes: a location detecting module 540, configured to detect the position of the user's gaze point relative to the user; the alignment adjustment sub-module 535 aligns the projected user-related information with the image seen by the user at the user's fundus according to that position.
  • the location detecting module 540 may have multiple implementations, for example the manners i)-iii) described in the method embodiments.
  • the embodiment of the present application further illustrates the position detecting module corresponding to manner iii) through the embodiments corresponding to FIGS. 7a-7d, 8 and 9.
  • the location detection module 700 includes:
  • a fundus image collection sub-module 710 configured to collect an image of the user's fundus
  • an adjustable imaging sub-module 720, configured to adjust at least one imaging parameter of the optical path between the fundus image collection position and the user's eye until an image satisfying at least one set second sharpness criterion is captured;
  • An image processing sub-module 730, configured to analyze the collected fundus image to obtain the imaging parameters of the optical path between the fundus image collection position and the eye and at least one optical parameter of the eye corresponding to the image that satisfies at least one set second definition standard, and to calculate the position of the user's current gaze point relative to the user.
  • the position detecting module 700 analyzes and processes the fundus image to obtain the optical parameters of the eye at the moment the fundus image collection sub-module obtains an image satisfying at least one set second sharpness standard, from which the current position of the eye's gaze point can be calculated.
  • the image presented at the fundus here is mainly an image presented on the retina, which may be an image of the fundus itself or an image of another object projected onto the fundus.
  • the eyes here can be the human eye or the eyes of other animals.
  • the fundus image collection sub-module 710 is a micro camera.
  • the fundus image collection sub-module 710 can also directly use a photosensitive imaging device such as a CCD or CMOS device.
  • the adjustable imaging sub-module 720 includes: an adjustable lens device 721 located on the optical path between the eye and the fundus image collection sub-module 710, whose focal length is adjustable and/or whose position in the optical path is adjustable. Through the adjustable lens device 721, the system equivalent focal length between the eye and the fundus image collection sub-module 710 becomes adjustable, so that at a certain position or state of the adjustable lens device 721 the fundus image collection sub-module 710 obtains an image satisfying at least one set second sharpness criterion. In the present embodiment, the adjustable lens device 721 is adjusted continuously in real time during the detection process.
  • the adjustable lens device 721 is a focus-adjustable lens that adjusts its own focal length by adjusting its own refractive index and/or shape. Specifically: 1) the focal length is adjusted by adjusting the curvature of at least one side of the focus-adjustable lens, for example by increasing or decreasing the liquid medium in a cavity formed by two transparent layers; 2) the focal length is adjusted by changing the refractive index of the focus-adjustable lens, for example by filling the lens with a specific liquid crystal medium and adjusting the voltage of the corresponding electrodes of the liquid crystal medium to change the arrangement of the liquid crystal medium, thereby changing the refractive index of the focus-adjustable lens.
  • the adjustable lens device 721 includes: a lens group composed of a plurality of lenses, the relative positions between which are adjusted to complete adjustment of the focal length of the lens group itself.
  • the lens group may also include a lens whose imaging parameters such as its own focal length are adjustable.
  • besides changing the optical path parameters of the system by adjusting the characteristics of the adjustable lens device 721 itself, the optical path parameters can also be changed by adjusting the position of the adjustable lens device 721 on the optical path.
  • the adjustable imaging sub-module 720 further includes: a beam splitting unit 722, configured to form optical transmission paths between the eye and the observation object and between the eye and the fundus image collection sub-module 710. This allows the optical path to be folded, reducing the size of the system while minimizing the impact on the user's other visual experience.
  • the beam splitting unit includes: a first beam splitting unit, located between the eye and the observation object, for transmitting the light of the observation object to the eye and transmitting the light of the eye to the fundus image collection sub-module.
  • the first beam splitting unit may be a beam splitter, a split optical waveguide (including an optical fiber), or another suitable beam splitting implementation of the embodiments of the present application.
  • in a possible implementation, the image processing sub-module 730 includes an optical path calibration unit for calibrating the optical path of the system, for example aligning the optical axis of the optical path, to ensure measurement accuracy.
  • the image processing sub-module 730 includes:
  • the image analyzing unit 731 is configured to analyze an image obtained by the fundus image collection sub-module, and find an image that satisfies at least one set second definition standard;
  • a parameter calculating unit 732, configured to calculate the optical parameters of the eye according to the image that satisfies at least one set second sharpness standard and the imaging parameters known to the system at the moment that image was obtained.
  • the adjustable imaging sub-module 720 enables the fundus image collection sub-module 710 to obtain an image satisfying at least one set second definition standard, but that image needs to be found by the image analysis unit 731; the optical parameters of the eye can then be calculated from that image and the optical path parameters known to the system.
  • the optical parameters of the eye may include the optical axis direction of the eye.
  • the system further includes: a projection sub-module 740, configured to project a light spot to the fundus.
  • the function of the projection sub-module can be implemented by a pico projector.
  • the spot projected here can be used to illuminate the fundus without a specific pattern.
  • the projected spot includes a feature-rich pattern.
  • the rich features of the pattern facilitate detection and improve detection accuracy.
  • the spot is an infrared spot that is invisible to the eye.
  • the exit surface of the projection sub-module may be provided with an invisible light transmission filter for the eye.
  • the incident surface of the fundus image collection sub-module is provided with an eye invisible light transmission filter.
  • the image processing sub-module 730 further includes:
  • the projection control unit 734 is configured to control the brightness of the projection spot of the projection sub-module 740 according to the result obtained by the image analysis unit 731.
  • the projection control unit 734 can adaptively adjust the brightness according to the characteristics of the image obtained by the fundus image collection sub-module 710.
  • the characteristics of the image include the contrast of the image features as well as the texture features.
  • a special case of controlling the brightness of the projection spot of the projection sub-module 740 is to open or close the projection sub-module 740.
  • for example, the projection sub-module 740 can be closed periodically when the user keeps gazing at a point; and when the user's fundus is bright enough, the illumination source can be turned off and the distance from the eye's current gaze point to the eye detected using fundus information alone.
  • the projection control unit 734 can also control the brightness of the projection spot of the projection sub-module 740 according to the ambient light.
  • the image processing sub-module 730 further includes: an image calibration unit 733, configured to perform calibration of the fundus image to obtain at least one reference image corresponding to the image presented by the fundus.
  • the image analyzing unit 731 compares the image obtained by the fundus image collection sub-module 710 with the reference image to obtain the image satisfying at least one set second definition standard.
  • the image satisfying at least one set second definition standard may be an image obtained with the smallest difference from the reference image.
  • the difference between the currently obtained image and the reference image is calculated by an existing image processing algorithm, for example, using a classical phase difference autofocus algorithm.
  • the parameter calculation unit 732 includes: an eye optical axis direction determining sub-unit 7321, configured to obtain the optical axis direction of the eye according to the features of the eye at the time the image satisfying the at least one set second definition standard was obtained.
  • the features of the eye here may be obtained from the image that satisfies at least one of the set second definition criteria, or may be otherwise acquired.
  • the direction in which the user's eye is gazing can be obtained according to the direction of the optical axis of the eye.
  • the eye optical axis direction determining sub-unit 7321 includes: a first determining sub-unit, configured to obtain the optical axis direction of the eye according to the features of the fundus at the time the image satisfying at least one set second definition standard was obtained.
  • compared with obtaining the optical axis direction of the eye through the features of the pupil and the eye surface, determining it through the features of the fundus is more accurate.
  • the size of the spot pattern may be larger than the fundus viewable area or smaller than the fundus viewable area, where:
  • a classical feature point matching algorithm, for example the Scale Invariant Feature Transform (SIFT) algorithm, may be used to detect the spot pattern on the collected image.
  • the direction of the user's line of sight can be determined by determining the direction of the optical axis of the eye by the position of the spot pattern on the obtained image relative to the original spot pattern (obtained by the image calibration unit).
  • the eye optical axis direction determining sub-unit 7321 includes: a second determining sub-unit, configured to obtain the optical axis direction of the eye according to the features of the eye pupil at the time the image satisfying at least one set second definition standard was obtained.
  • the features of the eye pupil may be obtained from the image satisfying at least one set second definition standard, or may be acquired separately; obtaining the optical axis direction of the eye through eye pupil features is prior art and is not described here.
  • in a possible implementation of the embodiment of the present application, the image processing sub-module 730 further includes: an eye optical axis direction calibration unit 735, used to calibrate the optical axis direction of the eye in order to determine the above optical axis direction more accurately.
  • the imaging parameters known to the system include fixed imaging parameters and real-time imaging parameters, where the real-time imaging parameters are parameter information of the adjustable lens device at the time the image satisfying at least one set second definition standard is acquired; this parameter information may be recorded in real time when that image is acquired.
  • Figure 7c shows a schematic diagram of eye imaging.
  • from Figure 7c, equation (1) can be obtained:

    1/d_o + 1/d_e = 1/f_e    (1)

  • where d_o and d_e are the distances from the eye's current observation object 7010 and from the real image 7020 on the retina to the eye equivalent lens 7030, respectively;
  • f_e is the equivalent focal length of the eye equivalent lens 7030;
  • X is the gaze direction of the eye (which may be obtained from the optical axis direction of the eye).
  • Figure 7d is a schematic diagram showing the distance from the eye gaze point to the eye according to the known optical parameters of the system and the optical parameters of the eye.
  • the spot 7040 becomes a virtual image through the adjustable lens device 721 (not shown in Fig. 7d).
  • assuming the distance of the virtual image from the lens is x (not shown in Figure 7d), and combining equation (1), the following system of equations (2) is obtained:

    1/d_p − 1/x = 1/f_p
    1/(d_i + x) + 1/d_e = 1/f_e    (2)

  • where d_p is the optical effective distance from the spot 7040 to the adjustable lens device 721, d_i is the optical effective distance from the adjustable lens device 721 to the eye equivalent lens 7030, and f_p is the focal length value of the adjustable lens device 721.
  • from (1) and (2), the distance d_o from the current observation object 7010 (the eye gaze point) to the eye equivalent lens 7030 is as shown in equation (3):

    d_o = d_i + d_p·f_p / (f_p − d_p)    (3)
  • with the distance from the observation object 7010 to the eye obtained, and since the optical axis direction of the eye can be obtained as described previously, the position of the eye's gaze point is easily obtained, providing a basis for subsequent further eye-related interaction.
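A small numeric sketch of the gaze-distance computation: combining the thin-lens relations for the tunable lens and the eye gives d_o = d_i + d_p·f_p/(f_p − d_p), with d_p the spot-to-lens distance, d_i the lens-to-eye distance, and f_p the tunable-lens focal length. This closed form is a reconstruction from the thin-lens equations, and the numeric values below are invented, so treat it as illustrative only.

```python
def gaze_distance(d_i, d_p, f_p):
    """d_o = d_i + d_p*f_p / (f_p - d_p): distance from the gaze
    point to the eye equivalent lens. Units are arbitrary but must
    be consistent; f_p must differ from d_p."""
    return d_i + d_p * f_p / (f_p - d_p)

# spot 2 cm from the tunable lens, lens 3 cm from the eye,
# tunable-lens focal length currently 2.5 cm
print(gaze_distance(d_i=3.0, d_p=2.0, f_p=2.5))  # -> 13.0
```

As the tunable-lens focal length f_p approaches the spot distance d_p, the virtual image recedes and the computed gaze distance grows, which matches the intuition that a more relaxed focus corresponds to a farther gaze point.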
  • an embodiment of the position detecting module 800 applied to glasses G is shown in FIG. 8; in the present embodiment, the module 800 is integrated on the right side of the glasses G (not limited thereto), and includes:
  • the micro camera 810 has the same function as the fundus image collection sub-module described in the embodiment of FIG. 7b, and is disposed on the right outer side of the glasses G so as not to affect the line of sight of the user's normal viewing object;
  • the first beam splitter 820 has the same function as the first beam splitting unit described in the embodiment of FIG. 7b; it is disposed at a certain inclination angle at the intersection of the gaze direction of eye A and the incident direction of the camera 810, transmitting the light of the observation object entering eye A and reflecting the light of the eye to the camera 810;
  • the focal length adjustable lens 830 has the same function as the focus-adjustable lens described in the embodiment of FIG. 7b; it is located between the first beam splitter 820 and the camera 810, and its focal length value is adjusted in real time so that, at a certain focal length value, the camera 810 can capture a fundus image satisfying at least one set second definition standard.
  • the image processing sub-module is not shown in Fig. 8, and its function is the same as that of the image processing sub-module shown in Fig. 7b.
  • the fundus is illuminated by one illumination source 840.
  • the illumination source 840 may be an eye-invisible light source, preferably a near-infrared light source that has little influence on eye A and to which the camera 810 is relatively sensitive.
  • in the present embodiment, the illumination source 840 is located on the outer right side of the spectacle frame, so the light it emits must be transferred to the fundus via a second beam splitter 850 together with the first beam splitter 820.
  • the second beam splitter 850 is located in front of the incident surface of the camera 810, so it also needs to transmit the light from the fundus to the camera 810.
  • the first beam splitter 820 may have a high infrared reflectance and a high visible light transmittance.
  • an infrared reflecting film may be provided on the side of the first beam splitter 820 facing the eye A to achieve the above characteristics.
  • as can be seen from Fig. 8, since in the present embodiment the position detecting module 800 is located on the side of the lens of the glasses G away from eye A, the lens can be regarded as part of eye A when calculating the optical parameters of the eye, and there is no need to know the optical properties of the lens. In other embodiments of the present application, the position detecting module 800 may be located on the side of the lens of the glasses G close to eye A; in that case, the optical characteristic parameters of the lens need to be obtained in advance, and the influencing factors of the lens must be considered when calculating the gaze point distance.
  • the light emitted by the illumination source 840 is reflected by the second beam splitter 850, projected by the focal length adjustable lens 830, reflected by the first beam splitter 820, passes through the lens of the glasses G into the user's eye, and finally reaches the retina of the fundus; the camera 810 captures the fundus image through the pupil of eye A via the optical path formed by the first beam splitter 820, the focal length adjustable lens 830, and the second beam splitter 850.
  • in a possible implementation, the position detecting module and the projection module may both include: a device with a projection function (such as the information projection sub-module of the projection module described above, and the projection sub-module of the position detecting module); and an imaging device with adjustable imaging parameters (such as the parameter adjustment sub-module of the projection module described above, and the adjustable imaging sub-module of the position detecting module), so that the functions of the position detecting module and the projection module are implemented by the same device.
  • the illumination source 840 can serve, in addition to providing illumination for the position detecting module, as the light source of the information projection sub-module of the projection module, assisting in projecting the user-related information.
  • the illumination source 840 can simultaneously project an invisible light for illumination of the position detecting module, and a visible light for assisting in projecting the user related information;
  • the illumination source 840 can also switch between the invisible light and the visible light in a time-sharing manner.
  • alternatively, the position detecting module can use the projected user-related information to complete the function of illuminating the fundus.
  • in a possible implementation, the first beam splitter 820, the second beam splitter 850, and the focal length adjustable lens 830 can serve both as the parameter adjustment sub-module of the projection module and as the adjustable imaging sub-module of the position detecting module.
  • the focal length adjustable lens 830 may be adjusted in a sub-region, and different regions respectively correspond to the position detecting module and the projection module, and the focal length may also be different.
  • in a possible implementation, the focal length of the focal length adjustable lens 830 is adjusted as a whole, but the front end of the photosensitive unit (such as a CCD) of the micro camera 810 of the position detecting module is further provided with other optical devices to assist in realizing the adjustment of the imaging parameters of the position detecting module.
  • in addition, the optical path from the light-emitting surface of the illumination source 840 (i.e., the projection position of the user-related information) to the eye may be configured to be the same as the optical path from the eye to the micro camera 810; in that case, when the focal length adjustable lens 830 is adjusted until the micro camera 810 receives the clearest fundus image, the user-related information projected by the illumination source 840 is exactly clearly imaged at the fundus.
  • FIG. 9 is a schematic structural diagram of a position detecting module 900 according to another embodiment of the present application.
  • the present embodiment is similar to the embodiment shown in FIG. 8, including a micro camera 910, a second beam splitter 920, and a focal length adjustable lens 930, except that: the projection sub-module 940 in the present embodiment projects a spot pattern, and the first beam splitter of the FIG. 8 embodiment is replaced by a curved beam splitter 950 as the curved surface beam splitting device.
  • the curved beam splitter 950 corresponds to the positions of the pupil when the optical axis directions of the eye differ, transmitting the image presented at the fundus to the fundus image collection sub-module.
  • the camera can thus capture images of the fundus superimposed from various angles of the eyeball, but since only the fundus portion seen through the pupil can be clearly imaged on the camera, the other parts are out of focus and cannot be clearly imaged, and therefore do not seriously interfere with imaging of the fundus portion; the features of the fundus portion can still be detected.
  • therefore, compared with the embodiment shown in FIG. 8, the present embodiment can obtain good fundus images when the eye gazes in different directions, so the position detecting module of the present embodiment has a wider application range and higher detection precision.
  • in a possible implementation of the embodiment of the present application, the position detecting module and the projection module can likewise be multiplexed. Similar to the embodiment shown in FIG. 8, the projection sub-module 940 can project the spot pattern and the user-related information simultaneously or in a time-sharing manner, or the position detecting module can use the projected user-related information as the spot pattern for detection.
  • similar to the embodiment shown in FIG. 8, in a possible implementation, the second beam splitter 920, the curved beam splitter 950, and the focal length adjustable lens 930 can serve both as the parameter adjustment sub-module of the projection module and as the adjustable imaging sub-module of the position detecting module.
  • the second beam splitter 950 is further configured to respectively correspond to the position of the pupil when the optical axis direction of the eye is different.
  • the light path between the projection module and the fundus Since the user-related information projected by the projection sub-module 940 is deformed after passing through the second beam splitter 950 of the curved surface, in the embodiment, the projection module includes:
  • the anti-deformation processing module (not shown in FIG. 9) is configured to perform inverse deformation processing corresponding to the curved surface spectroscopic device on the user-related information, so that the fundus receives the user-related information that needs to be presented.
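The inverse deformation step can be illustrated with a small sketch. The code below is a hypothetical simplification, not the patent's implementation: it models the curved splitter's deformation as a known pixel permutation of the projected frame and pre-warps the frame with the inverse permutation, so that applying the deformation to the pre-warped frame reproduces the original content.

```python
import numpy as np

def deform(frame, fwd):
    """Apply a known deformation: output pixel i takes input pixel fwd[i]."""
    return frame.ravel()[fwd].reshape(frame.shape)

def anti_deform(frame, fwd):
    """Pre-warp with the inverse permutation so deform(anti_deform(f)) == f."""
    inv = np.argsort(fwd)  # inverse of the permutation fwd
    return frame.ravel()[inv].reshape(frame.shape)
```

Applying `deform` to the pre-warped frame returns the original frame exactly, which is the effect the anti-deformation processing module aims to achieve at the fundus; a real curved-mirror deformation would be a continuous warp rather than a pixel permutation.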
  • the projection module is configured to project the user-related information stereoscopically to the fundus of the user.
  • the user-related information includes stereoscopic information corresponding to the two eyes of the user, and the projection module respectively projects corresponding user-related information to the two eyes of the user.
• As shown in FIG. 10, when stereoscopic display is required, the user information acquiring apparatus 1000 needs two sets of projection modules respectively corresponding to the two eyes of the user, including:
  • a first projection module corresponding to a left eye of the user
  • a second projection module corresponding to the user's right eye.
• The structure of the second projection module is similar to the structure of the composite position detecting module described in the embodiment of FIG. 10, i.e., a structure that can simultaneously implement the function of the position detecting module and the function of the projection module. It includes a miniature camera 1021, a second beam splitter 1022, a second focus-adjustable lens 1023, and a first beam splitter 1024 with the same functions as shown in FIG. 10 (the image processing sub-module of the position detecting module is not shown in FIG. 10). The difference is that the projection sub-module in the present embodiment is a second projection sub-module 1025 that projects the user-related information corresponding to the right eye. It can simultaneously detect the position of the gaze point of the user's eyes and clearly project the user-related information corresponding to the right eye to the fundus of the right eye.
• The structure of the first projection module is similar to that of the second projection module 1020, except that it does not have a miniature camera and does not combine the function of a position detecting module. As shown in FIG. 10, the first projection module includes:
• a first projection sub-module 1011, configured to project the user-related information corresponding to the left eye to the fundus of the left eye;
• a first focus-adjustable lens 1013, configured to adjust an imaging parameter between the first projection sub-module 1011 and the fundus, so that the corresponding user-related information can be clearly presented on the fundus of the left eye and the user can see the user-related information presented on the image;
  • a third beam splitter 1012 configured to perform optical path transfer between the first projection sub-module 1011 and the first focus adjustable lens 1013;
• a fourth beam splitter 1014, configured to perform optical path transfer between the first focus-adjustable lens 1013 and the fundus of the left eye.
• With this embodiment, the user-related information seen by the user has a suitable stereoscopic display effect, resulting in a better user experience. In addition, when the user-related information to be input contains three-dimensional spatial information, the stereoscopic projection described above allows the user to view that three-dimensional spatial information. For example, when the user is required to perform a specific gesture at a specific position in three-dimensional space to correctly input the user-related information, the above method of the embodiment of the present application enables the user to see stereoscopic user-related information and learn the specific position and the specific gesture; even if others see the gesture the user performs, they cannot obtain the spatial information, so the user-related information is kept more confidential.
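One simple way to realize the parallax described above is to project the same content to the two eyes with a horizontal offset. The sketch below is an illustrative assumption, not the patent's projection optics: it derives a left/right image pair from one frame by shifting columns in opposite directions by a chosen disparity.

```python
import numpy as np

def stereo_pair(frame, disparity):
    """Split one frame into a left/right pair with a total horizontal
    disparity of `disparity` pixels (columns wrap around for simplicity)."""
    left = np.roll(frame, disparity // 2, axis=1)
    right = np.roll(frame, -(disparity - disparity // 2), axis=1)
    return left, right
```

In a depth-aware variant, each region would be shifted by a disparity proportional to its intended depth rather than by one global offset.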
  • FIG. 11 is a schematic structural diagram of another user information acquiring apparatus 1100 according to an embodiment of the present disclosure.
  • the specific implementation of the user information acquiring apparatus 1100 is not limited in the specific embodiment of the present application.
  • the user information obtaining apparatus 1100 may include:
• a processor 1110, a communication interface 1120, a memory 1130, and a communication bus 1140, wherein:
  • the processor 1110, the communication interface 1120, and the memory 1130 perform communication with each other via the communication bus 1140.
  • the communication interface 1120 is configured to communicate with a network element such as a client.
  • the processor 1110 is configured to execute the program 1132. Specifically, the related steps in the foregoing method embodiments may be performed.
  • program 1132 can include program code, the program code including computer operating instructions.
  • the processor 1110 may be a central processing unit CPU, or an application specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
  • the memory 1130 is configured to store the program 1132.
  • Memory 1130 may include high speed RAM memory and may also include non-volatile memory, such as at least one disk memory.
• the program 1132 may be specifically configured to cause the user information acquiring apparatus 1100 to perform the following steps: acquiring an image containing at least one digital watermark; acquiring at least one piece of user-related information corresponding to the current user contained in the digital watermark in the image, the user-related information including application startup information for starting a corresponding application; and projecting the user-related information to the fundus of the user.
• the present application also provides a computer readable medium, including machine readable instructions that, when executed, perform the following operations: performing steps S110, S120, and S130 in the method embodiment shown in FIG. 1.
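The three steps S110–S130 can be summarized as a minimal pipeline. Everything below is an illustrative stub, not the patent's implementation: the image is modeled as a dict whose watermark maps users to their information, while real acquisition, watermark extraction, and fundus projection are hardware- and algorithm-specific.

```python
def acquire_image(source):
    # S110: obtain an image containing at least one digital watermark
    return source

def extract_user_info(image, current_user):
    # S120: get the user-related information for the current user
    return image["watermark"].get(current_user)

def project_to_fundus(info):
    # S130: deliver the information to the user's fundus (stub: return it)
    return info

def run(source, current_user):
    image = acquire_image(source)
    info = extract_user_info(image, current_user)
    return project_to_fundus(info) if info is not None else None
```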
  • the embodiment of the present application further provides a wearable device 1200, which includes the user information acquiring device 1210 described in the foregoing embodiment.
  • the wearable device is a pair of glasses.
• the spectacles may, for example, have the structure shown in FIGS. 8-10.
  • the embodiment of the present application provides a user information interaction method, including: S1310 embedding at least one digital watermark in an image, where the digital watermark includes at least one user related information corresponding to at least one user;
  • the user related information includes an application startup information for starting a corresponding application.
  • the digital watermark can be divided into symmetric and asymmetric watermarks according to symmetry.
  • the traditional symmetric watermark embeds and detects the same key, so that once the detection method and key are disclosed, the watermark can be easily removed from the digital carrier.
  • the asymmetric watermarking technique uses a private key to embed the watermark and uses the public key to extract and verify the watermark, making it difficult for an attacker to destroy or remove the watermark embedded with the private key through the public key. Therefore, in the embodiment of the present application, the asymmetric digital watermark is used.
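The private-key-embed / public-key-verify flow can be sketched as follows. This is a toy illustration, not the patent's scheme: the RSA key is tiny, the hash is a placeholder, and the watermark is hidden in the least significant bits of a grayscale image array; a production system would use full-size keys, a cryptographic hash, and a robust embedding transform.

```python
import numpy as np

# Toy RSA keypair (illustrative only; real systems use 2048-bit keys)
P, Q = 61, 53
N, E = P * Q, 17                   # public key (N, E)
D = pow(E, -1, (P - 1) * (Q - 1))  # private exponent

SIG_BITS = 12  # signatures are < N = 3233 < 2**12

def toy_hash(bits):
    # Placeholder hash, NOT cryptographic
    return sum(b << (i % 5) for i, b in enumerate(bits)) % N

def embed(img, payload):
    """Embed payload bits plus a private-key signature in the LSB plane."""
    sig = pow(toy_hash(payload), D, N)            # sign with private key
    bits = payload + [(sig >> i) & 1 for i in range(SIG_BITS)]
    out = img.ravel().copy()
    out[:len(bits)] = (out[:len(bits)] & 0xFE) | np.array(bits, dtype=out.dtype)
    return out.reshape(img.shape)

def verify(img, n_payload):
    """Extract the payload and check its signature with the public key only."""
    bits = (img.ravel()[:n_payload + SIG_BITS] & 1).tolist()
    payload, sig_bits = bits[:n_payload], bits[n_payload:]
    sig = sum(b << i for i, b in enumerate(sig_bits))
    return payload, pow(sig, E, N) == toy_hash(payload)
```

Because only the private key can produce a signature that the public key validates, an attacker who knows the public extraction method still cannot forge or re-embed the watermark, which is the property this embodiment relies on.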
  • the image includes a login interface of a user environment displayed by a device
  • the application startup information is used to directly start at least one application in the user environment corresponding to the user on the login interface.
  • the user-related information that needs to be embedded in the watermark may be preset by the user according to his or her own personalized needs, or may be configured by the system for the user.
  • the user related information further includes: a user authentication information used by the user to log in to the user environment.
• in one implementation, the method further includes: receiving the input application startup information; and starting the corresponding application according to the received application startup information.
• Taking the user's need to use the browser function as an example, the user sees the application startup information in the image shown in FIG. 3 and draws an "e"-shaped trajectory on the device that displays the image. After receiving the "e"-shaped trajectory, the method of the embodiment of the present application directly starts the browser application.
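Recognizing the drawn "e"-shaped trajectory can be done with a simple template matcher. The sketch below is an assumed approach in the spirit of resample-and-compare gesture recognizers, not the patent's method: both the stored template and the input stroke are resampled to a fixed number of points, normalized for position and scale, and compared by mean point-to-point distance.

```python
import numpy as np

def resample(path, n=32):
    """Resample a polyline to n points, evenly spaced by arc length."""
    path = np.asarray(path, dtype=float)
    seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
    dist = np.concatenate([[0.0], np.cumsum(seg)])
    t = np.linspace(0.0, dist[-1], n)
    return np.column_stack([np.interp(t, dist, path[:, k]) for k in (0, 1)])

def normalize(path):
    """Remove translation and scale so only shape is compared."""
    path = path - path.mean(axis=0)
    scale = np.abs(path).max()
    return path / scale if scale > 0 else path

def gesture_distance(stroke, template, n=32):
    a = normalize(resample(stroke, n))
    b = normalize(resample(template, n))
    return float(np.mean(np.linalg.norm(a - b, axis=1)))
```

A stroke would then be accepted as the "e" gesture when its distance to the stored "e" template falls below a tuned threshold.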
  • the user can conveniently and quickly start the required application, thereby improving the user experience.
  • the present application further provides a user information interaction apparatus 1400, including: a watermark embedding module 1410, configured to embed at least one digital watermark in an image, where the digital watermark includes a corresponding to at least one user. At least one user related information; the user related information includes an application startup information for starting a corresponding application.
  • the device 1400 further includes: a display module 1420, configured to display a login interface of a user environment, where the image includes the login interface;
  • the application startup information is used to directly start at least one application in the corresponding user environment on the login interface.
  • the user related information further includes: a user authentication information used by the user to log in to the user environment;
• the device also includes a user authentication information input module 1430 for inputting the user authentication information.
  • the device further includes:
  • the startup information input module 1440 is configured to receive the input application startup information
  • the application startup module 1450 is configured to start a corresponding application according to the received application startup information.
  • an embodiment of the present application further provides an electronic terminal 1500, including the user information interaction device 1510 described above.
  • the electronic terminal 1500 is an electronic device such as a mobile phone, a tablet computer, a computer, an electronic access control, and the like.
• the functions may be stored in a computer readable storage medium if implemented in the form of a software functional unit and sold or used as a standalone product. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present application.
  • the foregoing storage medium includes: a USB flash drive, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, and the like.

Abstract

本申请公开了一种用户信息获取方法及用户信息获取装置,所述方法包括:获取一包含至少一数字水印的图像;获取所述图像中所述数字水印包含的与当前用户对应的至少一用户相关信息,所述用户相关信息包括一用于启动一对应应用的应用启动信息;将所述用户相关信息向所述用户的眼底投射。本申请使得用户可以快速、安全、方便的启动对应应用。

Description

用户信息获取方法及用户信息获取装置 本申请要求于 2013 年 11 月 15 日提交中国专利局、 申请号为 201310572154.3、 发明名称为 "用户信息获取方法及用户信息获取装置" 的中 国专利申请的优先权, 其全部内容通过引用结合在本申请中。 技术领域
本申请涉及信息获取技术, 尤其涉及一种用户信息获取方法及装置。 背景技术
电子设备基于节能和防止误操作的原因, 通常都会设置屏幕锁定, 而用 户实际操作时, 往往需要先解锁屏幕后, 再启动相应的应用程序来完成用户 需要的功能。
公开号为 US8136053的专利公开了一种通过特殊的屏幕解锁手势来自动 启动特定的操作, 例如启动不同的应用程序等。 该方法虽然可以通过不同手 势启动不同应用, 但记忆手势比较困难, 而且无法区分操作的用户, 安全性 不高。
此外, 当电子设备需要给不同用户使用时, 可以设置不同的用户环境, 通常方法是通过不同的用户名和密码登录不同的用户环境, 这样做也存在不 够方便和不够安全的问题。
因此, 需要找到一种更加方便、 安全的方法, 帮助用户快速启动用户环 境中应用。
发明内容
本申请要解决的技术问题是: 提供一种用户信息获取技术, 以获得用户 相关信息, 进而可以帮助用户快速、 方便、 安全的启动相关的应用。
为实现上述目的, 第一方面, 本申请提供了一种用户信息获取方法, 包 括:
获取一包含至少一数字水印的图像;
获取所述图像中所述数字水印包含的与当前用户对应的至少一用户相关 信息, 所述用户相关信息包括一用于启动一对应应用的应用启动信息; 将所述用户相关信息向所述用户的眼底投射。
第二方面, 本申请提供了一种用户信息获取装置, 包括:
一图像获取模块, 用于获取一包含至少一数字水印的图像;
一信息获取模块, 用于获取所述图像中所述数字水印包含的与当前用户 对应的至少一用户相关信息, 所述用户相关信息包括一用于启动一对应应用 的应用启动信息;
一投射模块, 用于将所述用户相关信息向所述用户的眼底投射。
第三方面, 本申请提供了一种可穿戴设备, 包含上述的用户信息获取装 置。
本申请实施例的上述至少一个技术方案通过提取包含数字水印的图像中 的与当前用户对应的用户相关信息, 使得用户可以保密地获得快速启动一应 用的应用启动信息, 进而可以快速、 安全、 方便的启动对应应用。
附图说明
图 1为本申请实施例的一种用户信息获取方法的步骤流程图;
图 2和图 3为本申请实施例的一种用户信息获取方法的应用示意图; 图 4a和图 4b为本申请实施例一种用户信息获取方法使用的光斑图案以 及在眼底获得的包括所述光斑图案的图像的示意图;
图 5为本申请实施例一种用户信息获取装置的结构示意框图;
图 6a和 6b为本申请实施例另外两种用户信息获取装置的结构示意框图; 图 7a为本申请实施例一种用户信息获取装置使用的位置检测模块的结构 示意框图;
图 7b为本申请实施例另一种用户信息获取装置使用的位置检测模块的结 构示意框图;
图 7c和图 7d为本申请实施例一种用户信息获取装置使用的位置检测模 块进行位置检测时对应的光路图;
图 8为本申请实施例一种用户信息获取装置应用在眼镜上的示意图; 图 9为本申请实施例另一种用户信息获取装置应用在眼镜上的示意图; 图 10为本申请实施例又一种用户信息获取装置应用在眼镜上的示意图; 图 11为本申请实施例再一种用户信息获取装置的结构示意图;
图 12为本申请实施例一种可穿戴设备的结构示意框图;
图 13为本申请实施例一种用户信息交互方法的步骤流程图;
图 14为本申请实施例一种用户信息交互装置的结构示意框图;
图 15为本申请实施例一种电子终端的结构示意框图。
具体实施方式
本申请的方法及装置结合附图及实施例详细说明如下。
一般, 用户需要首先进入对应的用户环境, 才能启动该用户环境中的应 用, 对一些用户经常需要使用的应用来说, 不是很方便。
数字水印技术是将一些标识信息嵌入到数字载体中, 来进行版权保护、 防伪、 鉴权、 信息隐藏等, 通常都需要一定的设备通过特定的算法来读取和 验证, 有时还需要第三方权威机构参与认证过程, 这些复杂过程一定程度上 限制了它们的应用。 随着可穿戴式设备特别是智能眼镜的出现, 可以在智能 眼镜中以视觉呈现的方式来提醒用户看到的数字水印信息, 而用户屏幕解锁 的密码、 图案、 动作等就可以作为数字水印嵌入在锁屏背景图中, 特定的用 户可以通过佩戴认证过的智能眼镜来看到隐藏的水印内容, 间接完成其它设 备的用户认证过程。 因此本申请实施例提供了下面的技术方案来帮助用户快 速并安全的启动自己需要的应用。
在本申请实施例的下述描述中, 所述 "用户环境" 为与用户相关的使用 环境, 例如用户通过手机、 电脑等电子终端的用户登录界面登录后, 进入到 的电子终端系统的使用环境, 该电子终端系统的使用环境中一般包括多个应 用, 例如用户通过手机的锁屏界面进入手机系统的使用环境之后, 就可以启 动该系统中的各功能模块对应的应用, 例如电话、 邮件、信息、相机等应用; 或者, 所述用户环境例如还可以为用户通过某个应用的登录界面登陆后, 进 入到的该应用的使用环境, 该应用的使用环境中有可能还包括多个下一级的 应用, 例如, 上述手机系统中的电话应用, 启动之后, 所述电话应用中有可 能还包括打电话、 联系人、 通话记录等下一级的应用。
如图 1所示, 本申请实施例提供了一种用户信息获取方法, 包括:
S110获取一包含至少一数字水印的图像;
S120获取所述图像中所述数字水印包含的与当前用户对应的至少一用户 相关信息, 所述用户相关信息包括一用于启动一对应应用的应用启动信息;
S130将所述用户相关信息向所述用户的眼底投射。
本申请实施例的方法通过提取包含数字水印的图像中的与当前用户对应 的用户相关信息, 使得用户可以保密地获得快速启动一应用的应用启动信息, 进而可以快速、 安全、 方便的启动对应应用。
下面本申请实施例通过以下的实施方式对各步骤进行进一步的描述:
S110获取一包含至少一数字水印的图像。
本申请实施例获取所述图像的方式有多种, 例如:
1 )通过拍摄的方式获取所述图像。
在本申请实施例中, 可以通过一智能眼镜设备, 拍摄用户看到的物体, 例如当用户看到所述图像时, 所述智能眼镜设备拍摄到所述图像。
2 )通过接收的方式获取所述图像。
在本申请实施例的一种可能的实施方式中, 还可以通过其它设备获取所 述图像, 再通过设备间的交互获取所述图像; 或者通过与一显示该图像的设 备之间的交互来获取所述图像。
S120获取所述图像中所述数字水印包含的与当前用户对应的至少一用户 相关信息。
在本申请实施例中, 获取所述用户相关信息的方法有多种, 例如为以下几种:
1 )从所述图像中提取所述用户相关信息。
在本实施方式中, 例如可以通过个人私钥及公开或私有的水印提取方法来分析所述图像中的数字水印, 提取所述用户相关信息。
2 )将所述图像向外部发送; 并从外部接收所述图像中的所述用户相关信息。
在本实施方式中, 可以将所述图像向外部发送, 例如发送至云端服务器 或一第三方权威机构, 通过所述云端服务器或所述第三方权威机构来提取所 述图像的数字水印中的所述用户相关信息。
如图 2所示, 在本申请实施例的一种可能的实施方式中, 所述图像为一 设备显示的一用户环境的登录界面 110。
此时, 所述应用启动信息, 用于在所述登录界面直接启动与所述当前用 户对应的所述用户环境中的至少一个对应应用。
在现有技术中, 例如一些不需要进行用户鉴权的电子设备的锁屏界面上 也有一些应用的快速启动接口, 虽然很方便, 但是很不安全。 而用户通过本 申请实施例可以在一用户环境的登录界面保密地获得用于直接从所述登录界 面启动所述用户环境中对应应用的应用启动信息,使得用户可以快速、方便、 安全的启动需要的应用, 改善用户体验。
在一种可能的实施方式中, 除了上述的应用启动信息外, 所述用户相关 信息还包括: 所述当前用户用于登录所述用户环境的一用户鉴权信息。
这里所述的用户鉴权信息例如为用户登录所述用户环境的用户名、 密码、 手势等信息,通过输入该用户鉴权信息,用户可以进入到对应的用户环境中。 例如在一手机的锁屏界面上输入用户设定的密码或特定的手指移动轨迹等来 解除锁屏状态, 并进入手机系统的用户环境中。 如图 3所示, 用户可以通过 在屏幕上输入图 3所示的用户鉴权信息 130来进入所述用户环境中 (例如通 过输入图 3 所示的 "Una" 图形的轨迹进入与该用户对应的用户环境), 而不 直接启动应用。
在本申请实施例的一些可能的实施方式中, 在所述获取所述图像中所述 数字水印包含的与当前用户对应的至少一用户相关信息之前还包括: 认证当 前用户。 通过对当前用户的身份进行确认, 进而使得所述步骤 S120能够获得 与所述用户对应的所述用户相关信息。 例如, 用户使用一经过身份认证的智 能眼镜来实现本申请实施例各步骤的功能。
当然, 对于一些保密要求不是特别高的场合, 也可以不进行所述认证, 用户只要通过对应的设备就可以得到该设备可以得到的对应信息。 还是以用 户通过一智能眼镜来实现本申请实施例各步骤的功能为例, 一般特定的眼镜 只有特定的用户会使用, 因此, 在本实施方式中, 只要通过一智能眼镜, 就 可以获得与该智能眼镜对应的用户相关信息, 而不需要特地对用户的身份进 行确认。
S130将所述用户相关信息向所述用户的眼底投射。
在本申请实施例中, 为了使得用户可以在保密的场合下得到所述用户相 关信息, 通过将所述用户相关信息向用户眼底投射的方式来使得用户获得对 应的所述用户相关信息。
在一种可能的实施方式中, 该投射可以是将所述用户相关信息通过投射 模块直接投射至用户的眼底。
在另一种可能的实施方式中, 该投射还可以是将所述用户相关信息显示 在只有用户可以看到的位置(例如一智能眼镜的显示面上), 通过该显示面将 所述用户相关信息投射至用户眼底。
其中, 第一种方式由于不需要将用户相关信息通过中间显示, 而是直接 到达用户眼底, 因此其隐私性更高。 下面进一步说明本实施方式, 所述步骤 S130包括:
投射所述用户相关信息;
调整所述投射位置与所述用户的眼睛之间光路的至少一投射成像参数, 直至所述用户相关信息在所述用户的眼底所成的像满足至少一设定的第一清 晰度标准。 这里所述的清晰度标准可以根据本领域技术人员常用的清晰度衡 量参数来设定, 例如图像的有效分辨率等参数。
在本申请实施例的一种可能的实施方式中, 所述参数调整步骤包括: 调节所述投射位置与所述用户的眼睛之间光路的至少一光学器件的至少 一成像参数和 /或在光路中的位置。 这里所述成像参数包括光学器件的焦距、 光轴方向等等。 通过该调节, 可以使得所述用户相关信息被合适地投射在用户的眼底, 例如通过调节所述 光学器件的焦距使得所述用户相关信息清晰地在用户的眼底成像。这里的 "清 晰" 指的是满足所述至少一设定的第一清晰度标准。 或者在下面提到的实施 方式中, 需要进行立体显示时, 除了在生成所述用户相关信息时直接生成带 视差的左、 右眼图像外, 通过将相同的用户相关信息具有一定偏差地分别投 射至两只眼睛也可以实现所述用户相关信息的立体显示效果, 此时, 例如可 以调节所述光学器件的光轴参数来达到该效果。
由于用户观看所述用户相关信息时眼睛的视线方向可能会变化, 需要在 用户眼睛的视线方向不同时都能将所述用户相关信息较好的投射到用户的眼 底,因此,在本申请实施例的一种可能的实施方式中,所述步骤 S130还包括: 分别对应所述眼睛光轴方向不同时瞳孔的位置, 将所述用户相关信息向 所述用户的眼底传递。
在本申请实施例的一种可能的实施方式中, 可能会需要通过曲面分光镜 等曲面光学器件来实现所述上述步骤的功能, 但是通过曲面光学器件后待显 示的内容一般会发生变形, 因此,在本申请实施例的一种可能的实施方式中, 所述步骤 S130还包括:
对所述用户相关信息进行与所述眼睛光轴方向不同时瞳孔的位置对应的 反变形处理, 使得眼底接收到需要呈现的所述用户相关信息。
例如: 对投射的用户相关信息进行预处理, 使得投射出的用户相关信息 具有与所述变形相反的反变形, 该反变形效果再经过上述的曲面光学器件后, 与曲面光学器件的变形效果相抵消, 从而用户眼底接收到的用户相关信息是 需要呈现的效果。
在一种可能的实施方式中, 投射到用户眼睛中的用户相关信息不需要与所述图像进行对齐, 例如, 当需要用户按一定顺序在所述图像中显示的输入框中输入一组应用启动信息或用户鉴权信息, 如"1234"时, 只需要将该组信息投射在用户的眼底被用户看到即可。 但是, 在一些情况下, 例如, 当所述用户相关信息为一在特定位置完成特定动作, 例如需要在显示所述图像的屏幕上的特定位置画出特定轨迹时, 需要将所述用户相关信息与所述图像进行对齐显示。 因此, 在本申请实施例的一种可能的实施方式中, 所述步骤 S130包括:
将所述投射的用户相关信息与所述用户看到的图像在所述用户的眼底对齐。
如图 3所示, 本实施方式中, 用户通过步骤 S120获取了六个用户相关信 息, 其中包括五个应用启动信息 120和一个用户鉴权信息 130。
由图 3可以看出, 本实施方式中, 用户通过所述步骤 S130看到的所述应 用启动信息 120包括用于标识应用的标识信息 121 (可以为图 3所示的图形, 也可以为其它文字、 符号等; 当然在一些实施方式中, 所述标识信息 121 也 可以是直接显示在所述图像上的)以及启动所述应用的输入信息 122 (可以为 图 3中所示的图形轨迹, 也可以为数字、 符号等)。 当然, 在本申请实施例的 其它实施方式中, 所述应用启动信息可能只包括所述输入信息。 以图 3 所示 的登录界面 110上左上角的应用启动信息 120为例, 该应用启动信息 120包 括左侧的浏览器应用的标识信息 121 , 以及右侧的 "e" 形图形轨迹。 例如, 用户可以通过在所述屏幕上画一个 "e" 形轨迹, 就可以直接启动浏览器的应 用。
在一些实施例中, 为了防止用户误操作, 并且提高输入的保密性, 需要 在特定位置, 例如在图 3所示的 "e" 形轨迹所在的屏幕的位置, 输入特定的 图形轨迹, 例如图 3所示的 "e" 形轨迹, 才可以启动对应的应用。 在这种情 况下, 需要将所述投射的用户相关信息与所述图像进行对齐, 使得用户在需 要的位置看到所述用户相关信息。
为了实现上述的对齐功能, 在一种可能的实施方式中, 所述方法还包括: 检测所述用户注视点相对于所述用户的位置。 所述将所述投射的用户相关信息与所述用户看到的图像在所述用户的眼底对齐包括: 根据所述用户注视点相对于所述用户的所述位置将所述投射的用户相关信息与所述用户看到的图像在所述用户的眼底对齐。
这里, 由于用户此时正在看所述图像,例如用户的手机锁屏界面, 因此, 用户的注视点对应的位置即为所述图像所在的位置。
本实施方式中, 检测所述用户注视点位置的方式有多种, 例如包括以下几种: i )采用一个瞳孔方向检测器检测一个眼睛的光轴方向、 再通过一个深度传感器 (如红外测距)得到眼睛注视场景的深度, 得到眼睛视线的注视点位置, 该技术为已有技术, 本实施方式中不再赘述。
ii )分别检测两眼的光轴方向, 再根据所述两眼光轴方向得到用户两眼视 线方向, 通过所述两眼视线方向的交点得到眼睛视线的注视点位置, 该技术 也为已有技术, 此处不再赘述。
iii )根据釆集到眼睛的成像面呈现的满足至少一设定的第二清晰度标准的 图像时图像釆集位置与眼睛之间光路的光学参数以及眼睛的光学参数, 得到 所述眼睛视线的注视点位置, 本申请实施例会在下面给出该方法的详细过程, 此处不再赘述。
当然, 本领域的技术人员可以知道, 除了上述几种形式的注视点检测方 法外, 其它可以用于检测用户眼睛注视点的方法也可以用于本申请实施例的 方法中。
其中, 通过第 iii )种方法检测用户当前的注视点位置的步骤包括: 眼底图像釆集步骤, 釆集一所述用户眼底的图像;
可调成像步骤, 进行所述眼底图像釆集位置与所述用户眼睛之间光路的 至少一成像参数的调节直至釆集到至少一满足至少一设定的第二清晰度标准 的图像;
图像处理步骤, 对采集到的所述眼底的图像进行分析, 得到与所述满足至少一设定的第二清晰度标准的图像对应的所述眼底图像采集位置与所述眼睛之间光路的所述成像参数以及所述眼睛的至少一光学参数, 并计算所述用户当前的注视点相对于所述用户的位置。 这里所述的第二清晰度标准中, 所述清晰度标准为上面所述本领域技术人员常用的清晰度标准, 其可以与上面所述的第一清晰度标准相同, 也可以不同。
通过对眼睛眼底的图像进行分析处理, 得到釆集到满足至少一设定的第 二清晰度标准的图像时眼睛的光学参数, 从而计算得到视线当前的对焦点位 置, 为进一步基于该精确的对焦点的位置对观察者的观察行为进行检测提供 基础。
这里的 "眼底" 呈现的图像主要为在视网膜上呈现的图像, 其可以为眼 底自身的图像, 或者可以为投射到眼底的其它物体的图像, 例如下面提到的 光斑图案。
在可调成像步骤中, 可通过对眼睛与采集位置之间的光路上的光学器件的焦距和/或在光路中的位置进行调节, 可在该光学器件在某一个位置或状态时获得眼底满足至少一设定的第二清晰度标准的图像。 该调节可为连续实时的。 在本申请实施例方法的一种可能的实施方式中, 该光学器件可为焦距可调透镜, 用于通过调整该光学器件自身的折射率和/或形状完成其焦距的调整。 具体为: 1 )通过调节焦距可调透镜的至少一面的曲率来调节焦距, 例如在双层透明层构成的空腔中增加或减少液体介质来调节焦距可调透镜的曲率; 2 )通过改变焦距可调透镜的折射率来调节焦距, 例如焦距可调透镜中填充有特定液晶介质, 通过调节液晶介质对应电极的电压来调整液晶介质的排列方式, 从而改变焦距可调透镜的折射率。
在本申请实施例的方法的另一种可能的实施方式中, 该光学器件可为: 透镜组, 用于通过调节透镜组中透镜之间的相对位置完成透镜组自身焦距的 调整。 或者, 所述透镜组中的一片或多片透镜为上面所述的焦距可调透镜。
除了上述两种通过光学器件自身的特性来改变系统的光路参数以外, 还 可以通过调节光学器件在光路上的位置来改变系统的光路参数。
此外, 在本申请实施例的方法中, 所述图像处理步骤进一步包括: 对在眼底图像釆集步骤中釆集到的图像进行分析, 找到满足至少一设定 的第二清晰度标准的图像;
根据所述满足至少一设定的第二清晰度标准的图像、 以及得到所述满足 至少一设定的第二清晰度标准的图像时已知的成像参数计算眼睛的光学参数。
所述可调成像步骤中的调整使得能够釆集到满足至少一设定的第二清晰 度标准的图像, 但是需要通过所述图像处理步骤来找到该满足至少一设定的 第二清晰度标准的图像, 根据所述满足至少一设定的第二清晰度标准的图像 以及已知的光路参数就可以通过计算得到眼睛的光学参数。
在本申请实施例的方法中, 所述图像处理步骤中还可包括:
向眼底投射光斑。 所投射的光斑可以没有特定图案仅用于照亮眼底。 所 投射的光斑还可包括特征丰富的图案。 图案的特征丰富可以便于检测, 提高 检测精度。 如图 4a所示为一个光斑图案 P的示例图, 该图案可以由光斑图案 生成器形成, 例如毛玻璃; 图 4b所示为在有光斑图案 P投射时釆集到的眼底 的图像。
为了不影响眼睛的正常观看,所述光斑为眼睛不可见的红外光斑。此时, 为了减小其它光谱的干扰: 可滤除投射的光斑中眼睛不可见光之外的光。
相应地, 本申请实施的方法还可包括步骤:
根据上述步骤分析得到的结果, 控制投射光斑亮度。 该分析结果例如包 括所述釆集到的图像的特性, 包括图像特征的反差以及紋理特征等。
需要说明的是, 控制投射光斑亮度的一种特殊的情况为开始或停止投射, 例如观察者持续注视一点时可以周期性停止投射; 观察者眼底足够明亮时可 以停止投射, 利用眼底信息来检测眼睛当前视线对焦点到眼睛的距离。
此外, 还可以根据环境光来控制投射光斑亮度。
在本申请实施例的方法中, 所述图像处理步骤还包括:
进行眼底图像的校准, 获得至少一个与眼底呈现的图像对应的基准图像。 具言之, 将釆集到的图像与所述基准图像进行对比计算, 获得所述满足至少 一设定的第二清晰度标准的图像。 这里, 所述满足至少一设定的第二清晰度 标准的图像可以为获得的与所述基准图像差异最小的图像。 在本实施方式的 方法中, 可以通过现有的图像处理算法计算当前获得的图像与基准图像的差 异, 例如使用经典的相位差值自动对焦算法。
所述眼睛的光学参数可包括根据釆集到所述满足至少一设定的第二清晰 度标准的图像时眼睛的特征得到的眼睛光轴方向。 这里眼睛的特征可以是从 所述满足至少一设定的第二清晰度标准的图像上获取的, 或者也可以是另外 获取的。 根据所述眼睛的光轴方向可以得到用户眼睛视线的注视方向。 具言 之, 可根据得到所述满足至少一设定的第二清晰度标准的图像时眼底的特征 得到眼睛光轴方向, 并且通过眼底的特征来确定眼睛光轴方向精确度更高。
在向眼底投射光斑图案时, 光斑图案的大小有可能大于眼底可视区域或 小于眼底可视区域, 其中:
当光斑图案的面积小于等于眼底可视区域时, 可以利用经典特征点匹配 算法(例如尺度不变特征转换( Scale Invariant Feature Transform, SIFT )算法) 通过检测图像上的光斑图案相对于眼底位置来确定眼睛光轴方向。
当光斑图案的面积大于等于眼底可视区域时, 可以通过得到的图像上的 光斑图案相对于原光斑图案 (通过图像校准获得) 的位置来确定眼睛光轴方 向确定观察者视线方向。
在本申请实施例的方法的另一种可能的实施方式中, 还可根据得到所述 满足至少一设定的第二清晰度标准的图像时眼睛瞳孔的特征得到眼睛光轴方 向。 这里眼睛瞳孔的特征可以是从所述满足至少一设定的第二清晰度标准的 图像上获取的, 也可以是另外获取的。 通过眼睛瞳孔特征得到眼睛光轴方向 为已有技术, 此处不再赘述。
此外,在本申请实施例的方法中,还可包括对眼睛光轴方向的校准步骤, 以便更精确的进行上述眼睛光轴方向的确定。
在本申请实施例的方法中, 所述已知的成像参数包括固定的成像参数和 实时成像参数, 其中实时成像参数为获取满足至少一设定的第二清晰度标准 的图像时所述光学器件的参数信息, 该参数信息可以在获取所述满足至少一 设定的第二清晰度标准的图像时实时记录得到。
在得到眼睛当前的光学参数之后, 就可以结合计算得到的眼睛对焦点到 眼睛的距离 (具体过程将结合装置部分详述), 得到眼睛注视点的位置。
为了让用户看到的用户相关信息具有立体显示效果、 更加真实, 在本申 请实施例的一种可能的实施方式中, 可以在所述步骤 130 中, 将所述用户相 关信息立体地向所述用户的眼底投射。
如上面所述的, 在一种可能的实施方式中, 该立体的显示可以是将相同 的信息, 通过步骤 S130投射位置的调整, 使得用户两只眼睛看到的具有视差 的信息, 形成立体显示效果。
在另一种可能的实施方式中, 所述用户相关信息包括分别与所述用户的 两眼对应的立体信息, 所述步骤 S130中, 分别向所述用户的两眼投射对应的 用户相关信息。 即: 所述用户相关信息包括与用户左眼对应的左眼信息以及 与用户右眼对应的右眼信息, 投射时将所述左眼信息投射至用户左眼, 将右 眼信息投射至用户右眼, 使得用户看到的用户相关信息具有适合的立体显示 效果, 带来更好的用户体验。 此外, 在对用户输入的用户相关信息中含有三 维空间信息时,通过上述的立体投射,使得用户可以看到所述三维空间信息。 例如: 当需要用户在三维空间中特定的位置做特定的手势才能正确输入所述 用户相关信息时, 通过本申请实施例的上述方法使得用户看到立体的用户相 关信息, 获知所述特定的位置和特定的手势, 进而使得用户可以在所述特定 位置做所述用户相关信息提示的手势, 此时其它人即使看到用户进行的手势 动作, 但是由于无法获知所述空间信息, 使得所述用户相关信息的保密效果 更好。 如图 5所示, 本申请实施例还提供了一种用户信息获取装置 500, 包括: 一图像获取模块 510, 用于获取一包含至少一数字水印的图像;
一信息获取模块 520,用于获取所述图像中所述数字水印包含的与当前用 户对应的至少一用户相关信息, 所述用户相关信息包括一用于启动一对应应 用的应用启动信息;
一投射模块 530, 用于将所述用户相关信息向所述用户的眼底投射。
本申请实施例的方法通过提取包含数字水印的图像中的与当前用户对应 的用户相关信息, 使得用户可以保密地获得快速启动一应用的应用启动信息, 进而可以快速、 安全、 方便的启动对应应用。
为了让用户更加自然、 方便地获取所述用户相关信息, 本申请实施例的 装置为一用于用户眼睛附近的可穿戴设备, 例如一智能眼镜。 在用户视线的 注视点落在所述图像上时, 通过所述图像获取模块 510 自动获取所述图像, 并在获得所述用户相关信息后将所述信息投射至用户眼底。
下面本申请实施例通过以下的实施方式对上述装置的各模块进行进一步 的描述:
在本申请实施例的实施方式中, 所述图像获取模块 510 的形式可以有多种, 例如: 如图 6a所示, 所述图像获取模块 510包括一拍摄子模块 511, 用于拍摄所述图像。
其中所述拍摄子模块 511 , 例如可以为一智能眼镜的摄像头, 用于对用户 看到的图像进行拍摄。
如图 6b所示, 在本申请实施例的另一个实施方式中, 所述图像获取模块 510包括:
一第一通信子模块 512, 用于接收所述图像。
在本实施方式中, 可以通过其它设备获取所述图像, 再发送本申请实施 例的装置; 或者通过与一显示该图像的设备之间的交互来获取所述图像(即 所述设备将显示的图像信息传送给本申请实施例的装置)。
在本申请实施例中,所述信息获取模块 520的形式也可以有多种,例如: 如图 6a所示, 所述信息获取模块 520包括: 一信息提取子模块 521 , 用 于从所述图像中提取所述用户相关信息。
在本实施方式中, 所述信息提取子模块 521 例如可以通过个人私钥及公开或私有的水印提取方法来分析所述图像中的数字水印, 提取所述用户相关信息。
如图 6b所示, 在本申请实施例的另一个实施方式中, 所述信息获取模块 520包括: 一第二通信子模块 522, 用于:
将所述图像向外部发送; 从外部接收所述图像中的所述用户相关信息。 在本实施方式中, 可以将所述图像向外部发送, 例如发送至云端服务器 和 /或一第三方权威机构, 通过所述云端服务器或所述第三方权威机构来提取 所述图像的数字水印中的所述用户相关信息后, 再回发给本申请实施例的所 述第二通信子模块 522。
这里, 所述第一通信子模块 512与所述第二通信子模块 522的功能有可 能由同一通信模块实现。
如图 6a所示, 在本申请实施例的一种可能的实施方式中, 在需要对装置 的用户进行认证的场合, 所述装置 500还包括: 用户认证模块 550, 用于认证 当前用户。
所述用户认证模块可以为现有的用户认证模块,例如:通过用户的瞳孔、 指紋等生物特征进行认证的认证模块; 或者, 通过用户输入的指令进行认证 的模块等等, 这些认证模块都为已有的技术, 这里不再赘述。
还是以智能眼镜为例, 当用户使用能实现本申请实施例装置的功能的智 能眼镜时, 首先智能眼镜对用户进行认证, 使得智能眼镜知道用户的身份, 在之后通过所述信息提取模块 520提取所述用户相关信息时, 只获取与用户 对应的用户相关信息。 即, 用户只需要对自己的智能眼镜进行一次用户认证 后, 就可以通过所述智能眼镜对自己的或公用的各设备进行用户相关信息的 获取。
当然, 本领域的技术人员可以知道, 在不需要对用户进行认证时, 所述 装置可以不包括所述用户认证模块, 如图 6b所示。 如图 6a所示, 在本实施方式中, 所述投射模块 530包括: 一信息投射子模块 531 , 用于投射所述用户相关信息;
一参数调整子模块 532,用于调整所述投射位置与所述用户的眼睛之间光 路的至少一投射成像参数, 直至所述用户相关信息在所述用户的眼底所成的 像满足至少一设定的第一清晰度标准。
在一种实施方式中, 所述参数调整子模块 532包括:
至少一可调透镜器件, 其自身焦距可调和 /或在所述投射位置与所述用户 的眼睛之间光路上的位置可调。
如图 6b所示, 在一种实施方式中, 所述投射模块 530包括:
一曲面分光器件 533 ,用于分别对应所述眼睛光轴方向不同时瞳孔的位置, 将所述用户相关信息向所述用户的眼底传递。
在一种实施方式中, 所述投射模块 530包括:
一反变形处理子模块 534,用于对所述用户相关信息进行与所述眼睛光轴 方向不同时瞳孔的位置对应的反变形处理, 使得眼底接收到需要呈现的所述 用户相关信息。
在一种实施方式中, 所述投射模块 530包括:
一对齐调整子模块 535 ,用于将所述投射的用户相关信息与所述用户看到 的图像在所述用户的眼底对齐。
在一种实施方式中, 所述装置还包括: 一位置检测模块 540, 用于检测所述用户注视点相对于所述用户的位置; 所述对齐调整子模块 535, 用于根据所述用户注视点相对于所述用户的所述位置将所述投射的用户相关信息与所述用户看到的图像在所述用户的眼底对齐。
上述投射模块 530 的各子模块的功能参见上面方法实施例中对应步骤的 描述, 并且在下面图 7a-7d, 图 8和图 9中所示的实施例中也会给出实例。
本申请实施例中, 所述位置检测模块 540可以有多种实现方式, 例如方 法实施例的 i)-iii)种所述方法对应的装置。本申请实施例通过图 7a-图 7d、 图 8 以及图 9对应的实施方式来进一步说明第 iii )种方法对应的位置检测模块: 如图 7a所示, 在本申请实施例的一种可能的实施方式中, 所述位置检测 模块 700包括:
一眼底图像釆集子模块 710, 用于釆集一所述用户眼底的图像; 一可调成像子模块 720,用于进行所述眼底图像釆集位置与所述用户眼睛 之间光路的至少一成像参数的调节直至釆集到一满足至少一设定的第二清晰 度标准的图像;
一图像处理子模块 730, 用于对釆集到的所述眼底的图像进行分析, 得到 与所述满足至少一设定的第二清晰度标准的图像对应的所述眼底图像釆集位 置与所述眼睛之间光路的所述成像参数以及所述眼睛的至少一光学参数, 并 计算所述用户当前的注视点相对于所述用户的位置。
本位置检测模块 700通过对眼睛眼底的图像进行分析处理, 得到所述眼 底图像釆集子模块获得满足至少一设定的第二清晰度标准的图像时眼睛的光 学参数, 就可以计算得到眼睛当前的注视点位置。
这里的 "眼底" 呈现的图像主要为在视网膜上呈现的图像, 其可以为眼 底自身的图像, 或者可以为投射到眼底的其它物体的图像。 这里的眼睛可以 为人眼, 也可以为其它动物的眼睛。
如图 7b所示, 本申请实施例的一种可能的实施方式中, 所述眼底图像釆 集子模块 710为微型摄像头, 在本申请实施例的另一种可能的实施方式中, 所述眼底图像釆集子模块 710 还可以直接使用感光成像器件, 如 CCD 或 CMOS等器件。
在本申请实施例的一种可能的实施方式中, 所述可调成像子模块 720包 括: 可调透镜器件 721 , 位于眼睛与所述眼底图像釆集子模块 710之间的光路 上, 自身焦距可调和 /或在光路中的位置可调。 通过该可调透镜器件 721 , 使 得从眼睛到所述眼底图像釆集子模块 710之间的系统等效焦距可调, 通过可 调透镜器件 721 的调节, 使得所述眼底图像釆集子模块 710在可调透镜器件 721 的某一个位置或状态时获得眼底满足至少一设定的第二清晰度标准的图 像。在本实施方式中,所述可调透镜器件 721在检测过程中连续实时的调节。 在本申请实施例的一种可能的实施方式中, 所述可调透镜器件 721 为: 焦距可调透镜, 用于通过调节自身的折射率和 /或形状完成自身焦距的调整。 具体为: 1 )通过调节焦距可调透镜的至少一面的曲率来调节焦距, 例如在双 层透明层构成的空腔中增加或减少液体介质来调节焦距可调透镜的曲率; 2 ) 通过改变焦距可调透镜的折射率来调节焦距, 例如焦距可调透镜中填充有特 定液晶介质, 通过调节液晶介质对应电极的电压来调整液晶介质的排列方式, 从而改变焦距可调透镜的折射率。
在本申请实施例的另一种可能的实施方式中, 所述可调透镜器件 721包 括: 多片透镜构成的透镜组, 用于调节透镜组中透镜之间的相对位置完成透 镜组自身焦距的调整。 所述透镜组中也可以包括自身焦距等成像参数可调的 透镜。
除了上述两种通过调节可调透镜器件 721 自身的特性来改变系统的光路 参数以外, 还可以通过调节所述可调透镜器件 721 在光路上的位置来改变系 统的光路参数。
在本申请实施例的一种可能的实施方式中, 为了不影响用户对观察对象 的观看体验, 并且为了使得系统可以便携应用在穿戴式设备上, 所述可调成 像子模块 720还包括: 分光单元 722, 用于形成眼睛和观察对象之间、 以及眼 睛和眼底图像釆集子模块 710之间的光传递路径。这样可以对光路进行折叠, 减小系统的体积, 同时尽可能不影响用户的其它视觉体验。
在本实施方式中, 所述分光单元包括: 第一分光单元, 位于眼睛和观察对象之间, 用于透射观察对象到眼睛的光, 传递眼睛到眼底图像采集子模块的光; 所述第一分光单元可以为分光镜、 分光光波导 (包括光纤)或其它适合的分光器件。 在本申请实施例的一种可能的实施方式中, 所述系统的图像处理子模块
730包括光路校准单元, 用于对系统的光路进行校准, 例如进行光路光轴的对 齐校准等, 以保证测量的精度。 在本申请实施例的一种可能的实施方式中, 所述图像处理子模块 730包 括:
图像分析单元 731 ,用于对所述眼底图像釆集子模块得到的图像进行分析, 找到满足至少一设定的第二清晰度标准的图像;
参数计算单元 732,用于根据所述满足至少一设定的第二清晰度标准的图 像、 以及得到所述满足至少一设定的第二清晰度标准的图像时系统已知的成 像参数计算眼睛的光学参数。
在本实施方式中, 通过可调成像子模块 720使得所述眼底图像釆集子模 块 710可以得到满足至少一设定的第二清晰度标准的图像, 但是需要通过所 述图像分析单元 731 来找到该满足至少一设定的第二清晰度标准的图像, 此 时根据所述满足至少一设定的第二清晰度标准的图像以及系统已知的光路参 数就可以通过计算得到眼睛的光学参数。 这里眼睛的光学参数可以包括眼睛 的光轴方向。
在本申请实施例的一种可能的实施方式中, 所述系统还包括: 投射子模块 740, 用于向眼底投射光斑。 在一个可能的实施方式中, 可以通过微型投影仪来实现该投射子模块的功能。
这里投射的光斑可以没有特定图案仅用于照亮眼底。
在本申请实施例的一种实施方式中, 所述投射的光斑包括特征丰富的图 案。 图案的特征丰富可以便于检测, 提高检测精度。 如图 4a所示为一个光斑 图案 P的示例图, 该图案可以由光斑图案生成器形成, 例如毛玻璃; 图 4b所 示为在有光斑图案 P投射时拍摄到的眼底的图像。
为了不影响眼睛的正常观看, 所述光斑为眼睛不可见的红外光斑。
此时, 为了减小其它光谱的干扰:
所述投射子模块的出射面可以设置有眼睛不可见光透射滤镜。
所述眼底图像釆集子模块的入射面设置有眼睛不可见光透射滤镜。
在本申请实施例的一种可能的实施方式中, 所述图像处理子模块 730还 包括: 投射控制单元 734, 用于根据图像分析单元 731得到的结果, 控制所述投 射子模块 740的投射光斑亮度。
例如所述投射控制单元 734可以根据眼底图像釆集子模块 710得到的图 像的特性自适应调整亮度。 这里图像的特性包括图像特征的反差以及紋理特 征等。
这里, 控制所述投射子模块 740 的投射光斑亮度的一种特殊的情况为打 开或关闭投射子模块 740,例如用户持续注视一点时可以周期性关闭所述投射 子模块 740;用户眼底足够明亮时可以关闭发光源只利用眼底信息来检测眼睛 当前视线注视点到眼睛的距离。
此外, 所述投射控制单元 734还可以根据环境光来控制投射子模块 740 的投射光斑亮度。
在本申请实施例的一种可能的实施方式中, 所述图像处理子模块 730还 包括: 图像校准单元 733 , 用于进行眼底图像的校准, 获得至少一个与眼底呈 现的图像对应的基准图像。
所述图像分析单元 731将眼底图像釆集子模块 730得到的图像与所述基 准图像进行对比计算, 获得所述满足至少一设定的第二清晰度标准的图像。 这里, 所述满足至少一设定的第二清晰度标准的图像可以为获得的与所述基 准图像差异最小的图像。 在本实施方式中, 通过现有的图像处理算法计算当 前获得的图像与基准图像的差异, 例如使用经典的相位差值自动对焦算法。
在本申请实施例的一种可能的实施方式中,所述参数计算单元 732包括: 眼睛光轴方向确定子单元 7321 , 用于根据得到所述满足至少一设定的第 二清晰度标准的图像时眼睛的特征得到眼睛光轴方向。
这里眼睛的特征可以是从所述满足至少一设定的第二清晰度标准的图像 上获取的, 或者也可以是另外获取的。 根据眼睛光轴方向可以得到用户眼睛 视线注视的方向。
在本申请实施例的一种可能的实施方式中, 所述眼睛光轴方向确定子单 元 7321包括: 第一确定子单元, 用于根据得到所述满足至少一设定的第二清 晰度标准的图像时眼底的特征得到眼睛光轴方向。 与通过瞳孔和眼球表面的 特征得到眼睛光轴方向相比, 通过眼底的特征来确定眼睛光轴方向精确度更 高。
在向眼底投射光斑图案时, 光斑图案的大小有可能大于眼底可视区域或 小于眼底可视区域, 其中:
当光斑图案的面积小于等于眼底可视区域时, 可以利用经典特征点匹配 算法(例如尺度不变特征转换( Scale Invariant Feature Transform, SIFT )算法) 通过检测图像上的光斑图案相对于眼底位置来确定眼睛光轴方向;
当光斑图案的面积大于等于眼底可视区域时, 可以通过得到的图像上的 光斑图案相对于原光斑图案 (通过图像校准单元获得) 的位置来确定眼睛光 轴方向确定用户视线方向。
在本申请实施例的另一种可能的实施方式中, 所述眼睛光轴方向确定子单元 7321包括: 第二确定子单元, 用于根据得到所述满足至少一设定的第二清晰度标准的图像时眼睛瞳孔的特征得到眼睛光轴方向。 这里眼睛瞳孔的特征可以是从所述满足至少一设定的第二清晰度标准的图像上获取的, 也可以是另外获取的。 通过眼睛瞳孔特征得到眼睛光轴方向为已有技术, 此处不再赘述。 在本申请实施例的一种可能的实施方式中, 所述图像处理子模块 730还包括: 眼睛光轴方向校准单元 735, 用于进行眼睛光轴方向的校准, 以便更精确的进行上述眼睛光轴方向的确定。
在本实施方式中, 所述系统已知的成像参数包括固定的成像参数和实时 成像参数, 其中实时成像参数为获取满足至少一设定的第二清晰度标准的图 像时所述可调透镜器件的参数信息, 该参数信息可以在获取所述满足至少一 设定的第二清晰度标准的图像时实时记录得到。
下面再计算得到眼睛注视点到眼睛的距离, 具体为:
图 7c所示为眼睛成像示意图, 结合经典光学理论中的透镜成像公式, 由图 7c可以得到公式(1):

1/d_o + 1/d_e = 1/f_e    (1)

其中 d_o 和 d_e 分别为眼睛当前观察对象 7010 和视网膜上的实像 7020 到眼睛等效透镜 7030 的距离, f_e 为眼睛等效透镜 7030 的等效焦距, X 为眼睛的视线方向 (可以由所述眼睛的光轴方向得到)。

图 7d所示为根据系统已知光学参数和眼睛的光学参数得到眼睛注视点到眼睛的距离的示意图, 图 7d中光斑 7040通过可调透镜器件 721会成一个虚像 (图 7d中未示出)。 假设该虚像距离可调透镜器件 721 的距离为 x (图 7d中未示出), 结合公式(1)可以得到如下方程组:

1/d_p − 1/x = 1/f_p
1/(d_i + x) + 1/d_e = 1/f_e    (2)

其中 d_p 为光斑 7040 到可调透镜器件 721 的光学等效距离, d_i 为可调透镜器件 721 到眼睛等效透镜 7030 的光学等效距离, f_p 为可调透镜器件 721 的焦距值。

由(1)和(2)可以得出当前观察对象 7010 (眼睛注视点) 到眼睛等效透镜 7030 的距离 d_o 如公式(3)所示:

d_o = d_i + d_p · f_p / (f_p − d_p)    (3)

根据上述计算得到的观察对象 7010 到眼睛的距离, 又由于之前的记载可以得到眼睛光轴方向, 则可以轻易得到眼睛的注视点位置, 为后续与眼睛相关的进一步交互提供了基础。
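Formula (3) reduces the gaze-distance computation to one line. The sketch below is an illustrative helper (the function and parameter names are assumptions, not from the source): given the optical distances d_i and d_p and the adjustable-lens focal length f_p recorded when the fundus image is sharpest, it returns the distance d_o from the gaze point to the eye's equivalent lens.

```python
def gaze_distance(d_i, d_p, f_p):
    """d_o = d_i + d_p * f_p / (f_p - d_p), per formula (3).

    d_i: optical distance, adjustable lens 721 -> eye equivalent lens 7030
    d_p: optical distance, light spot 7040 -> adjustable lens 721
    f_p: focal length of adjustable lens 721 at the sharpest fundus image
    """
    if f_p == d_p:
        raise ValueError("spot lies in the focal plane: virtual image at infinity")
    return d_i + d_p * f_p / (f_p - d_p)
```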
如图 8所示为本申请实施例的一种可能的实施方式的位置检测模块 800 应用在眼镜 G上的实施例,其包括图 7b所示实施方式的记载的内容,具体为: 由图 8可以看出, 在本实施方式中, 在眼镜 G右侧 (不局限于此)集成了本 实施方式的模块 800, 其包括:
微型摄像头 810, 其作用与图 7b实施方式中记载的眼底图像釆集子模块 相同, 为了不影响用户正常观看对象的视线, 其被设置于眼镜 G右外侧; 第一分光镜 820, 其作用与图 7b实施方式中记载的第一分光单元相同, 以一定倾角设置于眼睛 A注视方向和摄像头 810入射方向的交点处, 透射观 察对象进入眼睛 A的光以及反射眼睛到摄像头 810的光;
焦距可调透镜 830,其作用与图 7b实施方式中记载的焦距可调透镜相同, 位于所述第一分光镜 820和摄像头 810之间, 实时进行焦距值的调整, 使得 在某个焦距值时, 所述摄像头 810 能够拍到眼底满足至少一设定的第二清晰 度标准的图像。
在本实施方式中, 所述图像处理子模块在图 8 中未表示出, 其功能与图 7b所示的图像处理子模块相同。
由于一般情况下, 眼底的亮度不够, 因此, 最好对眼底进行照明, 在本 实施方式中, 通过一个发光源 840来对眼底进行照明。 为了不影响用户的体 验, 这里所述发光源 840可以为眼睛不可见光发光源, 优选对眼睛 A影响不 大并且摄像头 810又比较敏感的近红外光发光源。
在本实施方式中, 所述发光源 840位于右侧的眼镜架外侧, 因此需要通 过一个第二分光镜 850与所述第一分光镜 820—起完成所述发光源 840发出 的光到眼底的传递。 本实施方式中, 所述第二分光镜 850 又位于摄像头 810 的入射面之前, 因此其还需要透射眼底到第二分光镜 850的光。
可以看出, 在本实施方式中, 为了提高用户体验和提高摄像头 810 的釆 集清晰度, 所述第一分光镜 820 可以具有对红外反射率高、 对可见光透射率 高的特性。 例如可以在第一分光镜 820朝向眼睛 A的一侧设置红外反射膜实 现上述特性。
由图 8可以看出, 由于在本实施方式中, 所述位置检测模块 800位于眼 镜 G的镜片远离眼睛 A的一侧, 因此进行眼睛光学参数进行计算时, 可以将 镜片也看成是眼睛 A的一部分, 此时不需要知道镜片的光学特性。 在本申请实施例的其它实施方式中, 所述位置检测模块 800可能位于眼 镜 G的镜片靠近眼睛 A的一侧, 此时, 需要预先得到镜片的光学特性参数, 并在计算注视点距离时, 考虑镜片的影响因素。
本实施例中发光源 840发出的光通过第二分光镜 850的反射、 焦距可调 透镜 830的投射、 以及第一分光镜 820的反射后再透过眼镜 G的镜片进入用 户眼睛, 并最终到达眼底的视网膜上; 摄像头 810经过所述第一分光镜 820、 焦距可调透镜 830以及第二分光镜 850构成的光路透过眼睛 A的瞳孔拍摄到 眼底的图像。
在一种可能的实施方式中, 本申请实施例的装置的其它部分也实现在所 述眼镜 G上,并且,由于所述位置检测模块和所述投射模块有可能同时包括: 具有投射功能的设备(如上面所述的投射模块的信息投射子模块, 以及所述 位置检测模块的投射子模块); 以及成像参数可调的成像设备(如上面所述的 投射模块的参数调整子模块, 以及所述位置检测模块的可调成像子模块)等, 因此在本申请实施例的一种可能的实施方式中, 所述位置检测模块和所述投 射模块的功能由同一设备实现。
如图 8所示, 在本申请实施例的一种可能的实施方式中, 所述发光源 840 除了可以用于所述位置检测模块的照明外, 还可以作为所述投射模块的信息 投射子模块的光源辅助投射所述用户相关信息。 在一种可能的实施方式中, 所述发光源 840可以同时分别投射一个不可见的光用于所述位置检测模块的 照明; 以及一个可见光, 用于辅助投射所述用户相关信息; 在另一种可能的 实施方式中, 所述发光源 840还可以分时地切换投射所述不可见光与所述可 见光; 在又一种可能的实施方式中, 所述位置检测模块可以使用所述用户相 关信息来完成照亮眼底的功能。
在本申请实施例的一种可能的实施方式中, 所述第一分光镜 820、 第二分 光镜 850以及所述焦距可调透镜 830除了可以作为所述的投射模块的参数调 整子模块外, 还可以作为所述位置检测模块的可调成像子模块。 这里, 所述 的焦距可调透镜 830在一种可能的实施方式中, 其焦距可以分区域的调节, 不同的区域分别对应于所述位置检测模块和所述投射模块, 焦距也可能会不 同。 或者, 所述焦距可调透镜 830 的焦距是整体调节的, 但是所述位置检测 模块的微型摄像头 810的感光单元(如 CCD等 )的前端还设置有其它光学器 件, 用于实现所述位置检测模块的成像参数辅助调节。 此外, 在另一种可能 的实施方式中, 可以配置使得从所述发光源 840 的发光面 (即用户相关信息 投出位置) 到眼睛的光程与所述眼睛到所述微型摄像头 810 的光程相同, 则 所述焦距可调透镜 830调节至所述微型摄像头 810接收到最清晰的眼底图像 时, 所述发光源 840投射的用户相关信息正好在眼底清晰地成像。
由上述可以看出, 本申请实施例用户信息获取装置的位置检测模块与投 射模块的功能可以由一套设备实现, 使得整个系统结构简单、 体积小、 更加 便于携带。 如图 9所示为本申请实施例的另一种实施方式位置检测模块 900的结构 示意图。 由图 9可以看出, 本实施方式与图 8所示的实施方式相似, 包括微 型摄像头 910、 第二分光镜 920、 焦距可调透镜 930, 不同之处在于, 在本实 施方式中的投射子模块 940为投射光斑图案的投射子模块 940,并且通过一个 曲面分光镜 950作为曲面分光器件取代了图 8实施方式中的第一分光镜。
这里釆用了曲面分光镜 950分别对应眼睛光轴方向不同时瞳孔的位置, 将眼底呈现的图像传递到眼底图像釆集子模块。 这样摄像头可以拍摄到眼球 各个角度混合叠加的成像, 但由于只有通过瞳孔的眼底部分能够在摄像头上 清晰成像, 其它部分会失焦而无法清晰成像, 因而不会对眼底部分的成像构 成严重干扰, 眼底部分的特征仍然可以检测出来。 因此, 与图 8所示的实施 方式相比, 本实施方式可以在眼睛注视不同方向时都能很好的得到眼底的图 像, 使得本实施方式的位置检测模块适用范围更广, 检测精度更高。
在本申请实施例的一种可能的实施方式中, 本申请实施例的用户信息获取装置的其它部分也实现在所述眼镜 G上。 在本实施方式中, 所述位置检测模块和所述投射模块也可以复用。 与图 8所示的实施例类似地, 此时所述投射子模块 940可以同时或者分时切换地投射光斑图案以及所述用户相关信息; 或者所述位置检测模块将投射的用户相关信息作为所述光斑图案进行检测。 与图 8所示的实施例类似地, 在本申请实施例的一种可能的实施方式中, 所述第一分光镜 920、 第二分光镜 950以及所述焦距可调透镜 930除了可以作为所述的投射模块的参数调整子模块外, 还可以作为所述位置检测模块的可调成像子模块。 此时, 所述第二分光镜 950还用于分别对应眼睛光轴方向不同时瞳孔的位置, 进行所述投射模块与眼底之间的光路传递。 由于所述投射子模块 940投射的用户相关信息经过所述曲面的第二分光镜 950之后会发生变形, 因此在本实施方式中, 所述投射模块包括:
反变形处理模块(图 9中未示出), 用于对所述用户相关信息进行与所述 曲面分光器件对应的反变形处理, 使得眼底接收到需要呈现的用户相关信息。 在一种实施方式中, 所述投射模块用于将所述用户相关信息立体地向所 述用户的眼底投射。
所述用户相关信息包括分别与所述用户的两眼对应的立体信息, 所述投 射模块, 分别向所述用户的两眼投射对应的用户相关信息。
如图 10所示,在需要进行立体显示的情况下,所述用户信息获取装置 1000 需要分别与用户的两只眼睛对应的设置两套投射模块, 包括:
与用户的左眼对应的第一投射模块; 以及
与用户的右眼对应的第二投射模块。
其中, 所述第二投射模块的结构与图 10的实施例中记载的复合有位置检 测模块功能的结构类似, 也为可以同时实现位置检测模块功能以及投射模块 功能的结构,包括与所述图 10所示实施例功能相同的微型摄像头 1021、 第二 分光镜 1022、 第二焦距可调透镜 1023 , 第一分光镜 1024 (所述位置检测模块 的图像处理子模块在图 10 中未示出), 不同之处在于, 在本实施方式中的投 射子模块为可以投射右眼对应的用户相关信息的第二投射子模块 1025。 其同 时可以用于检测用户眼睛的注视点位置, 并且把与右眼对应的用户相关信息 清晰投射至右眼眼底。
The structure of the first projection module is similar to that of the second projection module 1020, except that it has no miniature camera and does not combine the function of the position detection module. As shown in Fig. 10, the first projection module comprises:

a first projection submodule 1011, configured to project the user-related information corresponding to the left eye onto the fundus of the left eye;

a first adjustable-focus lens 1013, configured to adjust the imaging parameters between the first projection submodule 1011 and the fundus, so that the corresponding user-related information can be presented clearly on the fundus of the left eye and the user can see the user-related information presented on the image;

a third beam splitter 1012, configured to perform optical path transfer between the first projection submodule 1011 and the first adjustable-focus lens 1013; and

a fourth beam splitter 1014, configured to perform optical path transfer between the first adjustable-focus lens 1013 and the fundus of the left eye.

Through this embodiment, the user-related information seen by the user has a suitable stereoscopic display effect, bringing a better user experience. In addition, when the user-related information to be conveyed to the user contains three-dimensional spatial information, the stereoscopic projection described above enables the user to see that three-dimensional spatial information. For example, when the user is required to make a specific gesture at a specific position in three-dimensional space in order to input the user-related information correctly, the above method of the embodiments of the present application lets the user see the stereoscopic user-related information and thereby learn the specific position and the specific gesture, so that the user can make the prompted gesture at that position. Even if other people then see the gesture the user makes, they cannot learn the spatial information, so the user-related information is kept more confidential.

Fig. 11 is a schematic structural diagram of still another user information acquisition apparatus 1100 provided by an embodiment of the present application; the specific embodiments of the present application do not limit the specific implementation of the user information acquisition apparatus 1100. As shown in Fig. 11, the user information acquisition apparatus 1100 may comprise:
a processor 1110, a communications interface 1120, a memory 1130, and a communication bus 1140, wherein:

the processor 1110, the communications interface 1120 and the memory 1130 communicate with one another through the communication bus 1140;

the communications interface 1120 is configured to communicate with network elements such as clients;

the processor 1110 is configured to execute a program 1132, and may specifically perform the relevant steps in the above method embodiments.

Specifically, the program 1132 may comprise program code, the program code comprising computer operation instructions.

The processor 1110 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.

The memory 1130 is configured to store the program 1132, and may comprise a high-speed RAM memory and possibly also a non-volatile memory, for example at least one magnetic disk memory. The program 1132 may specifically be configured to cause the user information acquisition apparatus 1100 to perform the following steps:
acquiring an image containing at least one digital watermark;

acquiring at least one piece of user-related information, corresponding to the current user, contained in the digital watermark in the image, the user-related information comprising application launch information for launching a corresponding application;

projecting the user-related information to the fundus of the user.
For the specific implementation of the steps in the program 1132, reference may be made to the corresponding descriptions of the corresponding steps and units in the above embodiments, which are not repeated here. Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working process of the devices and modules described above, reference may be made to the description of the corresponding process in the foregoing method embodiments, which is not repeated here.

In addition, the present application further provides a computer readable medium, comprising computer readable instructions that, when executed, perform the following operations: performing steps S110, S120 and S130 of the method embodiment shown in Fig. 1.
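The three steps the program performs can be sketched as follows. This is an illustrative sketch only: the function names, the simplified watermark layout, and the in-memory "projection" are assumptions for the example, not part of the embodiment.

```python
# Illustrative sketch of steps S110-S130 (names and data layout are assumed):
# S110: acquire an image that carries at least one digital watermark,
# S120: extract the user-related information for the current user,
# S130: "project" the information (here: just return the rendered text).

def acquire_image():
    # Stand-in for photographing or receiving an image; the watermark is
    # modeled as a per-user payload attached to the image.
    return {
        "pixels": [[0] * 4] * 4,
        "watermark": {"alice": {"app_launch": "draw 'e' to open the browser"}},
    }

def extract_user_info(image, user):
    # S120: only the information addressed to the current user is returned.
    return image["watermark"].get(user)

def project_to_fundus(info):
    # S130: in the real apparatus this drives the projection module;
    # here we simply produce the text the user would see.
    return "projected: " + info["app_launch"]

image = acquire_image()
info = extract_user_info(image, "alice")
print(project_to_fundus(info))  # projected: draw 'e' to open the browser
```

Note how the per-user keying of the watermark payload means a different current user ("bob") simply receives nothing, which mirrors the claim that the extracted information corresponds to the current user.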
As shown in Fig. 12, an embodiment of the present application further provides a wearable device 1200, comprising the user information acquisition apparatus 1210 described in the above embodiments.

The wearable device is a pair of glasses. In some implementations, the glasses may, for example, have the structure shown in Figs. 8 to 10.

As shown in Fig. 13, an embodiment of the present application provides a user information interaction method, comprising:

S1310: embedding at least one digital watermark in an image, the digital watermark containing at least one piece of user-related information corresponding to at least one user, the user-related information comprising application launch information for launching a corresponding application.
The digital watermark may be classified by symmetry into symmetric and asymmetric watermarks. A traditional symmetric watermark is embedded and detected with the same key, so once the detection method and the key are disclosed, the watermark can easily be removed from the digital carrier. Asymmetric watermarking, in contrast, embeds the watermark using a private key and extracts and verifies it using a public key, so that it is difficult for an attacker to damage or remove the privately embedded watermark by means of the public key. Therefore, the asymmetric digital watermark is used in the embodiments of the present application.
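The asymmetric principle can be illustrated with a deliberately tiny sketch: the payload is signed with a private key and hidden in pixel least-significant bits, and anyone holding the public key can extract and verify it but cannot forge a replacement. The textbook RSA numbers (p = 61, q = 53) and the LSB carrier are illustrative assumptions only; a real asymmetric watermarking scheme is considerably more involved than this.

```python
import hashlib

# Toy RSA key pair (textbook numbers; far too small for real use).
N, E, D = 3233, 17, 2753  # public modulus, public exponent, private exponent

def sign(payload: bytes) -> int:
    # "Embedding key": only the holder of the private exponent D can produce this.
    digest = int(hashlib.sha256(payload).hexdigest(), 16) % N
    return pow(digest, D, N)

def verify(payload: bytes, sig: int) -> bool:
    # "Detection key": anyone with (N, E) can check the watermark, but not forge it.
    digest = int(hashlib.sha256(payload).hexdigest(), 16) % N
    return pow(sig, E, N) == digest

def embed_lsb(pixels, bits):
    # Hide one bit in the least significant bit of each pixel value.
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb(pixels, n):
    return [p & 1 for p in pixels[:n]]

payload = b"app_launch:browser"
sig = sign(payload)
sig_bits = [(sig >> i) & 1 for i in range(12)]  # N = 3233 < 2**12
pixels = embed_lsb(list(range(50, 62)), sig_bits)

recovered = sum(b << i for i, b in enumerate(extract_lsb(pixels, 12)))
print(verify(payload, recovered))  # True
```

The asymmetry is the point: publishing `(N, E)` lets any reader detect and validate the watermark, while producing a signature that verifies still requires `D`, matching the embed-with-private-key, verify-with-public-key description above.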
In a possible implementation of the embodiments of the present application, the image comprises a login interface, displayed by a device, of a user environment; and

the application launch information is used to directly launch, from the login interface, at least one application in the user environment corresponding to the user.

In the embodiments of the present application, the user-related information to be embedded in the watermark may be preset by the user according to the user's own personalized needs, or may be configured for the user by the system on its own initiative.

In a possible implementation of the embodiments of the present application, the user-related information further comprises: user authentication information used by the user to log in to the user environment.

For the implementation of the above steps, reference may be made to the corresponding descriptions in the method embodiments shown in Figs. 1 to 3, which are not repeated here.
In a possible implementation of the embodiments of the present application, the method further comprises:

receiving the input application launch information; and

launching the corresponding application according to the received application launch information.
In the embodiments of the present application, after acquiring the application launch information by means of the methods or apparatuses of the embodiments described above with reference to Figs. 1 to 12, the user inputs the corresponding application launch information as needed, so that the method of the embodiments of the present application receives the input application launch information and then launches the corresponding application according to it.

For example, in the embodiment shown in Fig. 3, the user sees the application launch information in the image shown in Fig. 3. Taking the case where the user needs the browser function as an example, the user draws an 'e'-shaped trajectory on a device displaying the image, and upon receiving this 'e'-shaped trajectory, the method of the embodiments of the present application directly launches the browser application.
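One minimal way to realize the mapping from a recognized trajectory to the application it launches is a lookup table. The gesture labels and application names below are illustrative assumptions; recognizing the trajectory itself is outside the scope of this sketch.

```python
# Map a recognized gesture trajectory to the application it launches.
# The table entries below are illustrative only.
GESTURE_TO_APP = {
    "e": "browser",
    "m": "mail",
    "c": "camera",
}

def launch_for_gesture(gesture: str) -> str:
    # In the embodiment, the device would start the application here;
    # the sketch just reports which one would be started.
    app = GESTURE_TO_APP.get(gesture)
    if app is None:
        return "no application bound to gesture " + repr(gesture)
    return "launching " + app

print(launch_for_gesture("e"))  # launching browser
```

Because only a user who has seen the projected user-related information knows which trajectory is bound to which application, the table itself can remain on the device without weakening the scheme.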
The method of the embodiments of the present application enables the user to launch a needed application directly and quickly, improving the user experience.

It should be understood that, in the various embodiments of the present application, the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.

As shown in Fig. 14, the present application further provides a user information interaction apparatus 1400, comprising:

a watermark embedding module 1410, configured to embed at least one digital watermark in an image, the digital watermark containing at least one piece of user-related information corresponding to at least one user, the user-related information comprising application launch information for launching a corresponding application.
In a possible implementation of the embodiments of the present application, the apparatus 1400 further comprises:

a display module 1420, configured to display a login interface of a user environment, the image comprising the login interface;

the application launch information being used to directly launch, from the login interface, at least one application in the corresponding user environment.

In a possible implementation of the embodiments of the present application, the user-related information further comprises: user authentication information used by the user to log in to the user environment;
and the apparatus further comprises a user authentication information input module 1430, configured to input the user authentication information.
In a possible implementation of the embodiments of the present application, the apparatus further comprises:

a launch information input module 1440, configured to receive the input application launch information; and

an application launch module 1450, configured to launch the corresponding application according to the received application launch information.

For the implementation of the functions of the modules of this embodiment, reference may be made to the corresponding descriptions in the embodiments shown in Figs. 1 to 13, which are not repeated here.

As shown in Fig. 15, an embodiment of the present application further provides an electronic terminal 1500, comprising the above user information interaction apparatus 1510.
In a possible implementation of the embodiments of the present application, the electronic terminal 1500 is an electronic device such as a mobile phone, a tablet computer, a computer, or an electronic access control device.
Those of ordinary skill in the art may realize that the units and method steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the particular application and design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementation should not be considered beyond the scope of the present application.

If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer readable storage medium. Based on such an understanding, the technical solution of the present application essentially, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The above implementations are only used to illustrate the present application and are not intended to limit it. Those of ordinary skill in the relevant technical field may make various changes and variations without departing from the spirit and scope of the present application, so all equivalent technical solutions also fall within the scope of the present application, and the patent protection scope of the present application shall be defined by the claims.

Claims

1. A user information acquisition method, characterized by comprising:
acquiring an image containing at least one digital watermark;
acquiring at least one piece of user-related information, corresponding to a current user, contained in the digital watermark in the image, the user-related information comprising application launch information for launching a corresponding application;
projecting the user-related information to a fundus of the user.
2. The method of claim 1, characterized in that the image comprises a login interface, displayed by a device, of a user environment; and
the application launch information is used to directly launch, from the login interface, at least one corresponding application in the user environment corresponding to the current user.
3. The method of claim 2, characterized in that the user-related information further comprises: user authentication information used by the current user to log in to the user environment.
4. The method of claim 1, characterized in that, before the acquiring at least one piece of user-related information corresponding to the current user contained in the digital watermark in the image, the method further comprises: authenticating the current user.
5. The method of claim 1, characterized in that the acquiring an image containing at least one digital watermark comprises:
acquiring the image by photographing.
6. The method of claim 1, characterized in that the acquiring an image containing at least one digital watermark comprises:
acquiring the image by receiving.
7. The method of claim 1, characterized in that the acquiring at least one piece of user-related information corresponding to the current user contained in the digital watermark in the image comprises:
extracting the user-related information from the image.
8. The method of claim 1, characterized in that the acquiring at least one piece of user-related information corresponding to the current user contained in the digital watermark in the image comprises:
sending the image to the outside; and receiving the user-related information in the image from the outside.
9. The method of claim 1, characterized in that the projecting the user-related information to the fundus of the user comprises:
projecting the user-related information; and
adjusting at least one projection imaging parameter of an optical path between a projection position and an eye of the user until an image formed by the user-related information on the fundus of the user satisfies at least one set first definition criterion.
10. The method of claim 1, characterized in that the projecting the user-related information to the fundus of the user further comprises:
aligning the projected user-related information with the image seen by the user on the fundus of the user.
11. The method of claim 10, characterized in that the method further comprises: detecting a position of a gaze point of the user relative to the user; and
the aligning the projected user-related information with the image seen by the user on the fundus of the user comprises:
aligning the projected user-related information with the image seen by the user on the fundus of the user according to the position of the gaze point of the user relative to the user.
12. The method of claim 1, characterized in that the projecting the user-related information to the fundus of the user comprises:
projecting the user-related information stereoscopically to the fundus of the user.
13. The method of claim 12, characterized in that
the user-related information comprises stereoscopic information corresponding respectively to the two eyes of the user; and the projecting the user-related information to the fundus of the user comprises:
projecting the corresponding user-related information to the two eyes of the user respectively.
14. A user information acquisition apparatus, characterized by comprising:
an image acquisition module, configured to acquire an image containing at least one digital watermark;
an information acquisition module, configured to acquire at least one piece of user-related information, corresponding to a current user, contained in the digital watermark in the image, the user-related information comprising application launch information for launching a corresponding application;
a projection module, configured to project the user-related information to a fundus of the user.
15. The apparatus of claim 14, characterized in that the apparatus further comprises: a user authentication module, configured to authenticate the current user.
16. The apparatus of claim 14, characterized in that the image acquisition module comprises: a photographing submodule, configured to photograph the image.
17. The apparatus of claim 14, characterized in that the image acquisition module comprises: a first communication submodule, configured to receive the image.
18. The apparatus of claim 14, characterized in that the information acquisition module comprises: an information extraction submodule, configured to extract the user-related information from the image.
19. The apparatus of claim 14, characterized in that the information acquisition module comprises: a second communication submodule, configured to:
send the image to the outside; and
receive the user-related information in the image from the outside.
20. The apparatus of claim 14, characterized in that the projection module comprises: an information projection submodule, configured to project the user-related information; and
a parameter adjustment submodule, configured to adjust at least one projection imaging parameter of an optical path between the projection position and an eye of the user until an image formed by the user-related information on the fundus of the user satisfies at least one set first definition criterion.
21. The apparatus of claim 14, characterized in that the projection module further comprises: an alignment adjustment submodule, configured to align the projected user-related information with the image seen by the user on the fundus of the user.
22. The apparatus of claim 21, characterized in that the apparatus further comprises: a position detection module, configured to detect a position of a gaze point of the user relative to the user; and
the alignment adjustment submodule is configured to align the projected user-related information with the image seen by the user on the fundus of the user according to the position of the gaze point of the user relative to the user.
23. The apparatus of claim 14, characterized in that the projection module is configured to:
project the user-related information stereoscopically to the fundus of the user.
24. The apparatus of claim 23, characterized in that
the user-related information comprises stereoscopic information corresponding respectively to the two eyes of the user; and the projection module is configured to project the corresponding user-related information to the two eyes of the user respectively.
25. A wearable device, characterized by comprising the user information acquisition apparatus of claim 14.
26. The wearable device of claim 25, characterized in that the wearable device is a pair of glasses.
27. A computer readable storage medium, characterized in that the computer readable storage medium contains executable instructions which, when executed by a central processing unit of a wearable device, cause the wearable device to perform the following method:
acquiring an image containing at least one digital watermark;
acquiring at least one piece of user-related information, corresponding to a current user, contained in the digital watermark in the image, the user-related information comprising application launch information for launching a corresponding application;
projecting the user-related information to the fundus of the user.
28. A user information acquisition apparatus, characterized by comprising a central processing unit and a memory, the memory storing computer-executable instructions and the central processing unit being connected to the memory through a communication bus, wherein, when the user information acquisition apparatus operates, the central processing unit executes the computer-executable instructions stored in the memory, causing the user information acquisition apparatus to perform the following method:
acquiring an image containing at least one digital watermark;
acquiring at least one piece of user-related information, corresponding to a current user, contained in the digital watermark in the image, the user-related information comprising application launch information for launching a corresponding application;
projecting the user-related information to the fundus of the user.
PCT/CN2014/071141 2013-11-15 2014-01-22 User information acquisition method and user information acquisition apparatus WO2015070536A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/888,204 US9838588B2 (en) 2013-11-15 2014-01-22 User information acquisition method and user information acquisition apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310572154.3A CN103616998B (zh) 2013-11-15 2013-11-15 User information acquisition method and user information acquisition apparatus
CN201310572154.3 2013-11-15

Publications (1)

Publication Number Publication Date
WO2015070536A1 true WO2015070536A1 (zh) 2015-05-21

Family

ID=50167701

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/071141 WO2015070536A1 (zh) 2013-11-15 2014-01-22 User information acquisition method and user information acquisition apparatus

Country Status (3)

Country Link
US (1) US9838588B2 (zh)
CN (1) CN103616998B (zh)
WO (1) WO2015070536A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103631503B (zh) * 2013-11-15 2017-12-22 北京智谷睿拓技术服务有限公司 Information interaction method and information interaction apparatus
KR102396685B1 (ko) * 2015-10-08 2022-05-11 삼성전자 주식회사 Method and apparatus for monitoring an electronic device
CN106484481B (zh) * 2016-10-10 2020-01-14 Oppo广东移动通信有限公司 Method, device and terminal for configuring multi-instance applications
US10674143B2 (en) * 2017-05-12 2020-06-02 Qualcomm Incorporated System for eye tracking
CN111026986B (zh) * 2018-10-10 2023-07-04 阿里巴巴集团控股有限公司 Webpage watermark rendering method and apparatus

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102004870A (zh) * 2009-08-31 2011-04-06 株式会社理光 Image forming apparatus, image forming system, information processing apparatus, and information processing method
WO2013012603A2 (en) * 2011-07-20 2013-01-24 Google Inc. Manipulating and displaying an image on a wearable computing system
CN103310142A (zh) * 2013-05-22 2013-09-18 复旦大学 Human-machine fusion security authentication method based on wearable devices
US20130300652A1 (en) * 2011-11-30 2013-11-14 Google, Inc. Unlocking a Screen Using Eye Tracking Information

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8332478B2 (en) * 1998-10-01 2012-12-11 Digimarc Corporation Context sensitive connected content
US6580815B1 (en) * 1999-07-19 2003-06-17 Mandylion Research Labs, Llc Page back intrusion detection device
US8091025B2 (en) * 2000-03-24 2012-01-03 Digimarc Corporation Systems and methods for processing content objects
US6948068B2 (en) * 2000-08-15 2005-09-20 Spectra Systems Corporation Method and apparatus for reading digital watermarks with a hand-held reader device
US8965460B1 (en) * 2004-01-30 2015-02-24 Ip Holdings, Inc. Image and augmented reality based networks using mobile devices and intelligent electronic glasses
CN101004777B (zh) * 2006-01-21 2010-11-10 鸿富锦精密工业(深圳)有限公司 System and method for automatically loading digital watermarks
CN101207485B (zh) * 2007-08-15 2010-12-01 深圳市同洲电子股份有限公司 System and method for unified identity security authentication of users
US20100114344A1 (en) * 2008-10-31 2010-05-06 France Telecom Communication system incorporating ambient sound pattern detection and method of operation thereof
US20100226526A1 (en) * 2008-12-31 2010-09-09 Modro Sierra K Mobile media, devices, and signaling
WO2011121927A1 (ja) * 2010-03-31 2011-10-06 日本電気株式会社 Digital content management system, apparatus, program and method
US20110283241A1 (en) 2010-05-14 2011-11-17 Google Inc. Touch Gesture Actions From A Device's Lock Screen
CN103018914A (zh) * 2012-11-22 2013-04-03 唐葵源 一种眼镜式3d显示头戴电脑
CN103150013A (zh) * 2012-12-20 2013-06-12 天津三星光电子有限公司 Mobile terminal
CN102970307B (zh) * 2012-12-21 2016-01-13 网秦无限(北京)科技有限公司 Password security system and password security method
CN103116717B (zh) * 2013-01-25 2015-11-18 东莞宇龙通信科技有限公司 User login method and system
US9372531B2 (en) * 2013-03-12 2016-06-21 Gracenote, Inc. Detecting an event within interactive media including spatialized multi-channel audio content
KR102039427B1 (ko) * 2013-07-01 2019-11-27 엘지전자 주식회사 Smart glasses
US9727753B2 (en) * 2013-08-26 2017-08-08 Nbcuniversal Media, Llc Watermark access/control system and method
US9331856B1 (en) * 2014-02-10 2016-05-03 Symantec Corporation Systems and methods for validating digital signatures

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102004870A (zh) * 2009-08-31 2011-04-06 株式会社理光 Image forming apparatus, image forming system, information processing apparatus, and information processing method
WO2013012603A2 (en) * 2011-07-20 2013-01-24 Google Inc. Manipulating and displaying an image on a wearable computing system
US20130300652A1 (en) * 2011-11-30 2013-11-14 Google, Inc. Unlocking a Screen Using Eye Tracking Information
CN103310142A (zh) * 2013-05-22 2013-09-18 复旦大学 Human-machine fusion security authentication method based on wearable devices

Also Published As

Publication number Publication date
CN103616998A (zh) 2014-03-05
US9838588B2 (en) 2017-12-05
US20160073002A1 (en) 2016-03-10
CN103616998B (zh) 2018-04-06

Similar Documents

Publication Publication Date Title
CN104834901B (zh) 一种基于双目立体视觉的人脸检测方法、装置及系统
US10049272B2 (en) User authentication using multiple capture techniques
JP6873918B2 (ja) 傾斜シフト虹彩撮像
WO2018040307A1 (zh) 一种基于红外可见双目图像的活体检测方法及装置
WO2015027599A1 (zh) 内容投射系统及内容投射方法
US10380418B2 (en) Iris recognition based on three-dimensional signatures
WO2015070537A1 (zh) 用户信息提取方法及用户信息提取装置
US11238143B2 (en) Method and system for authenticating a user on a wearable heads-up display
WO2015070536A1 (zh) 用户信息获取方法及用户信息获取装置
KR101645084B1 (ko) 실외 및 실내에서 홍채인식이 가능한 손 부착형 웨어러블 장치
WO2015035823A1 (en) Image collection with increased accuracy
KR101231068B1 (ko) 생체 정보 수집 장치 및 방법
KR20180134280A (ko) 3차원 깊이정보 및 적외선정보에 기반하여 생체여부의 확인을 행하는 얼굴인식 장치 및 방법
WO2015070623A1 (en) Information interaction
KR101919090B1 (ko) 3차원 깊이정보 및 적외선정보에 기반하여 생체여부의 확인을 행하는 얼굴인식 장치 및 방법
US20160155000A1 (en) Anti-counterfeiting for determination of authenticity
JP2016105555A (ja) ヘッドマウントディスプレイ装置、撮影制御方法、及びプログラム
JP4354067B2 (ja) 虹彩画像入力装置
WO2015070624A1 (en) Information interaction
JP2013134570A (ja) 撮像装置及びその制御方法、並びにプログラム
KR20160116106A (ko) 홍채 촬영 전용 카메라를 구비한 이동통신 단말기
JP2007209646A (ja) 誘導装置、撮影装置および認証装置ならびに誘導方法
TW201133124A (en) Apparatus and method for capturing three-dimensional image
US11948402B2 (en) Spoof detection using intraocular reflection correspondences
WO2024021251A1 (zh) 身份校验方法、装置、电子设备以及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14862861

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 14888204

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14862861

Country of ref document: EP

Kind code of ref document: A1