US20150042893A1 - Image data processing method and apparatus


Info

Publication number
US20150042893A1
Authority
US
United States
Prior art keywords
image
operator
sub image
skeleton
Prior art date
Legal status
Granted
Application number
US14/230,250
Other versions
US8964128B1
Inventor
Rui Li
Current Assignee
Lenovo Beijing Ltd
Beijing Lenovo Software Ltd
Original Assignee
Lenovo Beijing Ltd
Beijing Lenovo Software Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd and Beijing Lenovo Software Ltd
Assigned to LENOVO (BEIJING) CO., LTD. and BEIJING LENOVO SOFTWARE LTD. Assignors: LI, RUI
Publication of US20150042893A1
Application granted
Publication of US8964128B1
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223 Cameras
    • G06K9/00355
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs, involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/4728 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content, for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region

Definitions

  • Alternatively, step 401 (recognizing the first image to obtain a position of the operator in the first image) may be realized by steps 601 to 603, as shown in FIG. 6.
  • Step 601 is scanning the first image, to obtain a human skeleton in the first image.
  • In step 601, the first image may be scanned using a two-dimensional image scanning method, or may be scanned and matched using a sample matching and recognition method, to obtain the human skeleton in the first image.
  • The human skeleton may be a complete human skeleton or a partial human skeleton including the skeleton of the operator, as shown in FIG. 7.
  • Step 602 is determining, in the human skeleton, a target skeleton which matches a skeleton described by a received user operation instruction.
  • The user operation instruction refers to an instruction used by a user to define or set the type of the operator.
  • The user operation instruction may be set by the user in advance, or may be set by the user while applying the embodiment of the disclosure. Skeleton information corresponding to the type of the operator defined by the user is set in the user operation instruction.
  • In step 602, the target skeleton in the human skeleton which matches the skeleton in the user operation instruction may be determined in a progressive matching manner.
  • Step 603 is determining a position of the target skeleton as the position of the operator in the first image.
  • As in step 401, the position of the target skeleton in step 603 may be the coordinate value of the position of the absolute center point of the operator, or the edge coordinate values of the maximum circumcircle with the absolute center point of the operator as a center.
  • Step 402 is determining an image of a region where the operator is located as a first sub image in the first image.
  • The first sub image in step 402 consists of the pixels corresponding to the position of the operator, including the edge contour of the operator, as sketched below.
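
For illustration, here is a minimal sketch of step 402, assuming step 401 returned the operator's position as an edge contour (an OpenCV point array); the function name and the use of a bounding-rectangle crop are our assumptions, not part of the patent.

```python
import cv2
import numpy as np

def determine_first_sub_image(first_image: np.ndarray, edge_contour: np.ndarray) -> np.ndarray:
    # Bound the operator's edge contour and crop that region out of the
    # first image; the crop is the "first sub image" of step 402.
    x, y, w, h = cv2.boundingRect(edge_contour.astype(np.int32))
    return first_image[y:y + h, x:x + w]
```
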
  • FIG. 8 is taken as an example.
  • FIG. 8a shows an obtained first image, which is a panoramic image including a palm image.
  • A first sub image, i.e., the palm image, is determined in the first image.
  • A second image including the palm image in the first sub image is then obtained.
  • The second image includes a second sub image corresponding to the palm, and the second ratio is greater than the first ratio. That is, compared with the first sub image in the first image, the second sub image in the second image is larger, relatively clearer, and easier to recognize.
  • FIG. 9 is a flow chart of step 104 in the data processing method according to a third embodiment of the disclosure.
  • Step 104 may be realized by steps 901 to 903 .
  • Step 901 is selecting, from the at least one second sub image, the second sub image whose ratio to the corresponding second image is the greatest, as a target sub image.
  • The second ratio between the second sub image and the second image being the greatest means that the second sub image in the second image is the clearest, so the operation command can later be determined with high accuracy.
  • Step 902 is recognizing contour data of an operator in the target sub image.
  • In step 902, the contour data of the operator in the target sub image may be recognized using, for example, a two-dimensional image scanning method.
  • Step 903 is determining a command corresponding to the contour data as an operation command based on a preset correspondence between a command and a contour.
  • the correspondence between a command and a contour may be set by the user in advance.
  • Each piece of contour data corresponds to an operation command for the operator gesture corresponding to that contour data.
  • an operation command corresponding to the contour data is determined, and the operation command is then executed, which realizes the object of the disclosure.
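
Steps 901 to 903 can be summarized in a short, hedged sketch. It assumes each second sub image arrives paired with its precomputed second ratio, `recognize_gesture` stands in for the contour recognition of step 902, and the command-table entries are invented for the example.

```python
# PRESET_COMMANDS models the preset command/contour correspondence;
# the entries are illustrative, not defined by the patent.
PRESET_COMMANDS = {"open_palm": "select", "fist": "open"}

def obtain_operation_command(second_sub_images_with_ratios, recognize_gesture):
    # Step 901: take the second sub image whose second ratio is the greatest.
    target_sub_image, _ = max(second_sub_images_with_ratios, key=lambda pair: pair[1])
    # Step 902: recognize the contour data of the operator in the target sub image.
    gesture = recognize_gesture(target_sub_image)
    # Step 903: map the recognized contour to its preset operation command.
    return PRESET_COMMANDS.get(gesture)
```
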
  • FIG. 10 is a schematic structural diagram of a data processing apparatus according to a fourth embodiment of the disclosure.
  • the data processing apparatus is applied to an electronic device.
  • the electronic device includes an acquiring unit, and the acquiring unit is configured to acquire a scene image.
  • the data processing apparatus may include a first image obtaining unit 1001 , a first sub image determining unit 1002 , a second image obtaining unit 1003 , a command obtaining unit 1004 and a command execution unit 1005 .
  • the first image obtaining unit 1001 is configured to obtain a first image acquired by the acquiring unit.
  • The first image refers to a panoramic image of a scene where an operator to be recognized is located, and the operator to be recognized may be, for example, a hand, a mouth or an eyeball of a human body.
  • the acquiring unit may be a device capable of acquiring image data, such as a camera.
  • the electronic device may be an apparatus including the acquiring unit, such as a TV, a computer or a pad.
  • the acquiring unit is triggered to acquire a panoramic image, i.e., a first image, of a region where the user is located, and thus the first image is obtained in the disclosure.
  • the first sub image determining unit 1002 is configured to determine a first sub image in the first image, where the first sub image corresponds to an operator.
  • the first sub image corresponding to the operator means that an image region in the first image corresponding to the operator is the first sub image, as shown in FIG. 2 .
  • a ratio between the first sub image, i.e., the image region of the operator, and the first image is defined as a first ratio.
  • the second image obtaining unit 1003 is configured to obtain at least one second image acquired by the acquiring unit with the operator in the first sub image as a target.
  • Each second image includes a second sub image, and each second sub image corresponds to the operator.
  • the second image acquired by the acquiring unit is an image including the image region of the operator.
  • the second sub image corresponding to the operator means that the image region in the second image corresponding to the operator is a second sub image in the second image, as shown in FIG. 3 .
  • a ratio between the second sub image, i.e., an image region of the operator in the second image, and the second image is defined as a second ratio.
  • each second ratio is greater than the first ratio.
  • It should be understood that the resolution and size of the first image acquired by the acquiring unit are the same as those of each second image acquired by the acquiring unit.
  • The first ratio between the operator and the first image is less than the second ratio between the operator and each second image.
  • The second image is an image in which a partial image of the first image, including the first sub image, is zoomed in by a varying factor with the resolution unchanged.
  • The second ratios between the second sub images and the corresponding second images are different from one another, and the second ratio between each second sub image and the corresponding second image increases with the acquiring sequence of the second images by the acquiring unit.
  • FIG. 11 is a schematic structural diagram of the second image obtaining unit 1003 according to the fourth embodiment of the disclosure.
  • The second image obtaining unit 1003 may include an acquisition triggering subunit 1031 and an image obtaining subunit 1032.
  • the acquisition triggering subunit 1031 is configured to trigger the acquiring unit to acquire at least one second image with the operator in the first sub image as focus.
  • The acquiring unit is triggered to acquire the second image, including the entire region of the first sub image, with at least one focal length and with the operator in the first sub image as focus.
  • The focal lengths are all different and decrease with the acquiring sequence of the second images, and each focal length corresponds to one second image and its second sub image.
  • Since the second image acquired by the acquiring unit is an image obtained by zooming in with the operator in the first sub image as a target, the second ratio between the second sub image and the second image acquired by the acquiring unit for the last time is the greatest.
  • the image obtaining subunit 1032 is configured to obtain each second image.
  • each second image is obtained by the image obtaining subunit 1032 .
  • In this case, the second sub image in the second image obtained by the second image obtaining unit 1003 is easier to recognize than the first sub image.
  • the command obtaining unit 1004 is configured to obtain an operation command corresponding to the operator in the at least one second sub image.
  • The intention of the user to whom the operator belongs is determined based on a contour or a skeleton shape of the operator in the second sub image, and the operation command corresponding to the user intention is then determined.
  • the command execution unit 1005 is configured to execute the operation command.
  • A corresponding action is performed by the command execution unit 1005 on the content displayed by the electronic device based on the operation command, such as selecting or opening a file by clicking (see the sketch below for how the units compose).
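
A hedged sketch of how the units of FIG. 10 might compose, with each unit modeled as a plain callable; the signatures are illustrative assumptions rather than an interface defined by the patent.

```python
from dataclasses import dataclass
from typing import Any, Callable, Sequence

@dataclass
class DataProcessingApparatus:
    # One collaborator per unit of FIG. 10; the call signatures are assumptions.
    obtain_first_image: Callable[[], Any]                       # unit 1001
    determine_first_sub_image: Callable[[Any], Any]             # unit 1002
    obtain_second_images: Callable[[Any, Any], Sequence[Any]]   # unit 1003
    obtain_command: Callable[[Sequence[Any]], str]              # unit 1004
    execute_command: Callable[[str], None]                      # unit 1005

    def run(self) -> None:
        first_image = self.obtain_first_image()
        first_sub_image = self.determine_first_sub_image(first_image)
        second_images = self.obtain_second_images(first_image, first_sub_image)
        command = self.obtain_command(second_images)
        self.execute_command(command)
```
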
  • the fourth embodiment of the disclosure provides a data processing apparatus.
  • the first image acquired by an acquiring unit in an electronic device is obtained, and a first sub image in the first image corresponding to an operator is determined; multiple second images acquired by the acquiring unit with the operator in the first sub image as a target are obtained, where each second image includes a second sub image corresponding to the operator, and a first ratio between the first sub image and the first image is less than a second ratio between each second sub image and a corresponding second image; then an operation command corresponding to the operator in at least one second sub image is obtained and executed.
  • an object of the disclosure is realized.
  • In the disclosure, an image of the region where the operator is located can be obtained by zooming in, so that a clearer and more precise image of the operator is obtained; a corresponding operation command is then obtained and executed, which improves the recognition accuracy of the operator and further improves operation accuracy.
  • FIG. 12 is a schematic structural diagram of the first sub image determining unit 1002 in the data processing apparatus according to a fifth embodiment of the disclosure.
  • the first sub image determining unit 1002 may include an image recognition subunit 1021 and an image determining subunit 1022 .
  • the image recognition subunit 1021 is configured to recognize the first image, to obtain a position of an operator in the first image.
  • The position of the operator may be a coordinate value of the position of the absolute center point of the operator, a coordinate value of the position of an edge contour of the operator, or the edge coordinate values of the maximum circumcircle with the absolute center point of the operator as a center.
  • FIG. 13 is a schematic structural diagram of the image recognition subunit 1021 according to the fifth embodiment of the disclosure.
  • the image recognition subunit 1021 may include a first scanning module 10211 , a first skeleton determining module 10212 and a first position determining module 10213 .
  • The first scanning module 10211 is configured to scan the first image, to obtain a skeleton of at least one operator to be selected in the first image.
  • The first scanning module 10211 may scan the first image using a two-dimensional image scanning method, or may scan and match the first image using a sample matching and recognition method, to obtain the skeleton of the at least one operator to be selected in the first image.
  • the operator to be selected may be any one of a mouth, an eyeball, a hand and a body or any combination thereof.
  • The first skeleton determining module 10212 is configured to determine, in the skeleton of the at least one operator to be selected, a target skeleton which matches a skeleton described by a received user operation instruction.
  • The user operation instruction refers to an instruction used by a user to define or set the type of the operator.
  • The user operation instruction may be set by the user in advance, or may be set by the user while applying the embodiment of the disclosure. Skeleton information corresponding to the type of the operator defined by the user is set in the user operation instruction.
  • the first position determining module 10213 is configured to determine a position of the operator to be selected corresponding to the target skeleton as a position of the operator in the first image.
  • As in the image recognition subunit 1021, the position of the target skeleton used in the first position determining module 10213 may be the coordinate value of the position of the absolute center point of the operator, or the edge coordinate values of the maximum circumcircle with the absolute center point of the operator as a center.
  • FIG. 14 is another schematic structural diagram of the image recognition subunit 1021 according to the fifth embodiment of the disclosure.
  • the image recognition subunit 1021 may include a second scanning module 10214 , a second skeleton determining module 10215 and a second position determining module 10216 .
  • the second scanning module 10214 is configured to scan the first image, to obtain a human skeleton in the first image.
  • The second scanning module 10214 may scan the first image using a two-dimensional image scanning method, or may scan and match the first image using a sample matching and recognition method, to obtain the human skeleton in the first image.
  • The human skeleton may be a complete human skeleton or a partial human skeleton including the skeleton of the operator, as shown in FIG. 7.
  • The second skeleton determining module 10215 is configured to determine, in the human skeleton, a target skeleton which matches a skeleton described by a received user operation instruction.
  • The user operation instruction refers to an instruction used by a user to define or set the type of the operator.
  • The user operation instruction may be set by the user in advance, or may be set by the user while applying the embodiment of the disclosure. Skeleton information corresponding to the type of the operator defined by the user is set in the user operation instruction.
  • The second skeleton determining module 10215 may determine the target skeleton in the human skeleton which matches the skeleton in the user operation instruction in a progressive matching manner.
  • the second position determining module 10216 is configured to determine a position of the target skeleton as the position of the operator in the first image.
  • As in the image recognition subunit 1021, the position of the target skeleton used in the second position determining module 10216 may be the coordinate value of the position of the absolute center point of the operator, or the edge coordinate values of the maximum circumcircle with the absolute center point of the operator as a center.
  • the image determining subunit 1022 is configured to determine an image of a region where the operator is located as a first sub image in the first image.
  • The first sub image used in the image determining subunit 1022 consists of the pixels corresponding to the position of the operator, including the edge contour of the operator.
  • FIG. 8 is taken as an example.
  • FIG. 8a shows an obtained first image, which is a panoramic image including a palm image.
  • A first sub image, i.e., the palm image, is determined in the first image.
  • A second image including the palm image in the first sub image is then obtained.
  • The second image includes a second sub image corresponding to the palm, and the second ratio is greater than the first ratio. That is, compared with the first sub image in the first image, the second sub image in the second image is larger, relatively clearer, and easier to recognize.
  • FIG. 15 is a schematic structural diagram of the command obtaining unit 1004 in a data processing apparatus according to a sixth embodiment provided by the disclosure.
  • the command obtaining unit 1004 may include an image selection subunit 1041 , a contour recognition subunit 1042 and a command determining subunit 1043 .
  • The image selection subunit 1041 is configured to select, from the at least one second sub image, the second sub image whose ratio to the corresponding second image is the greatest, as a target sub image.
  • The second ratio between the second sub image and the second image being the greatest means that the second sub image in the second image is the clearest, so the operation command can later be determined with high accuracy.
  • the contour recognition subunit 1042 is configured to recognize contour data of the operator in the target sub image.
  • The contour recognition subunit 1042 may recognize the contour data of the operator in the target sub image using, for example, a two-dimensional image scanning method, as in the sketch below.
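
As one possible realization of the contour recognition subunit 1042 (an assumption; the patent only names a two-dimensional image scanning method), the sketch below segments the operator, extracts its largest outline, and compares it to stored gesture templates with OpenCV's Hu-moment shape matching. The template store and threshold are illustrative.

```python
import cv2
import numpy as np

def recognize_contour(target_sub_image: np.ndarray, templates: dict, threshold: float = 0.2):
    # Segment the operator and take its largest outline as the contour data.
    gray = cv2.cvtColor(target_sub_image, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours or not templates:
        return None
    contour = max(contours, key=cv2.contourArea)
    # Compare against each stored gesture template; a lower score means a closer shape.
    label, score = min(
        ((name, cv2.matchShapes(contour, tpl, cv2.CONTOURS_MATCH_I1, 0.0))
         for name, tpl in templates.items()),
        key=lambda item: item[1],
    )
    return label if score <= threshold else None
```
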
  • the command determining subunit 1043 is configured to determine a command corresponding to the contour data as an operation command based on a preset correspondence between a command and a contour.
  • the correspondence between a command and a contour may be set by the user in advance.
  • Each piece of contour data corresponds to an operation command for the operator gesture corresponding to that contour data.
  • an operation command corresponding to the contour data is determined, and the operation command is then executed, which realizes the object of the disclosure.
  • Relational terms such as "first" and "second" herein are only used to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any actual relationship or sequence between these entities or operations.
  • The terms "include", "comprise", and any variations thereof are intended to cover a non-exclusive inclusion, so that a process, a method, an object or a device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to the process, the method, the object or the device.
  • An element defined by the phrase "including one . . . " does not exclude the presence of another identical element in the process, the method, the object or the device that includes the described element.
  • The data processing method and the data processing apparatus provided by the disclosure are described in detail above. The principle and the embodiments of the disclosure are explained herein with specific examples, and the above embodiments are described only to help understand the method of the disclosure and its core concept. Meanwhile, for those skilled in the art, the specific embodiments and the application scope may be altered according to the concept of the disclosure; therefore, the content of this specification should not be understood as limiting the disclosure.

Abstract

A data processing method and a data processing apparatus applied to an electronic device are provided. The electronic device includes an acquiring unit. The data processing method includes: obtaining a first image acquired by the acquiring unit; determining a first sub image in the first image, where the first sub image corresponds to an operator; obtaining at least one second image acquired by the acquiring unit with the operator in the first sub image as a target, where each second image comprises a second sub image, and each second sub image corresponds to the operator, and where a first ratio of the first sub image in the first image is less than a second ratio of each second sub image in the corresponding second image; obtaining an operation command corresponding to the operator in at least one second sub image; and executing the operation command.

Description

  • The present application claims priority to Chinese Patent Application No. 201310347509.9, entitled "DATA PROCESSING METHOD AND APPARATUS", filed on Aug. 9, 2013 with the Chinese State Intellectual Property Office, which is incorporated herein by reference in its entirety.
  • FIELD
  • The disclosure relates to the field of computer application technology, and particularly to a data processing method and a data processing apparatus.
  • BACKGROUND
  • Currently, in recognizing a gesture of an operator of a TV, the gesture is obtained and recognized by providing a camera on the TV. However, in a case where the operator of the TV is far away from the TV, the recognition accuracy of the gesture is low with the existing solution, since the camera for performing gesture recognition cannot meet the requirement of a high pixel count.
  • SUMMARY
  • A data processing method and a data processing apparatus are provided in the disclosure, to solve the technical problem in existing gesture recognition solutions that commands and operations are performed with low accuracy due to low recognition accuracy.
  • The disclosure provides a data processing method applied to an electronic device, the data processing method includes:
  • obtaining a first image acquired by an acquiring unit included in the electronic device;
  • determining a first sub image in the first image, where the first sub image corresponds to an operator;
  • obtaining at least one second image acquired by the acquiring unit with the operator in the first sub image as a target, where each second image includes a second sub image, and each second sub image corresponds to the operator,
  • where a first ratio between the first sub image and the first image is less than a second ratio between each second sub image and the corresponding second image;
  • obtaining an operation command corresponding to the operator in at least one second sub image; and
  • executing the operation command.
  • Preferably, the second ratios between the second sub images and the corresponding second images are different from one another, and the second ratio between each second sub image and the corresponding second image increases with the acquiring sequence of the second images by the acquiring unit.
  • Preferably, the obtaining at least one second image acquired by the acquiring unit with the operator in the first sub image as a target includes:
  • triggering the acquiring unit to acquire the at least one second image with the operator in the first sub image as focus; and
  • obtaining the at least one second image.
  • Preferably, the determining a first sub image in the first image includes:
  • recognizing the first image, to obtain a position of the operator in the first image; and
  • determining an image of a region where the operator is located as the first sub image in the first image.
  • Preferably, the recognizing the first image to obtain a position of the operator in the first image includes:
  • scanning the first image, to obtain a skeleton of at least one operator to be selected in the first image;
  • determining, in the skeleton of the at least one operator to be selected, a target skeleton which matches a skeleton described by a received user operation instruction; and
  • determining a position of the target skeleton as the position of the operator in the first image.
  • Preferably, the recognizing the first image to obtain a position of the operator in the first image includes:
  • scanning the first image, to obtain a human skeleton in the first image;
  • determining, in the human skeleton, a target skeleton which matches a skeleton described by a received user operation instruction; and
  • determining a position of the target skeleton as the position of the operator in the first image.
  • Preferably, the obtaining an operation command corresponding to the operator in the at least one second sub image includes:
  • selecting, from the at least one second sub image, a second sub image whose ratio to the corresponding second image is the greatest, as a target sub image;
  • recognizing contour data of the operator in the target sub image; and
  • determining a command corresponding to the contour data as an operation command based on a preset correspondence between a command and a contour.
  • The disclosure further provides a data processing apparatus applied to an electronic device, and the electronic device includes an acquiring unit, the data processing apparatus includes:
  • a first image obtaining unit configured to obtain a first image acquired by the acquiring unit;
  • a first sub image determining unit configured to determine a first sub image in the first image, where the first sub image corresponds to an operator;
  • a second image obtaining unit configured to obtain at least one second image acquired by the acquiring unit with the operator in the first sub image as a target, where each second image includes a second sub image, and each second sub image corresponds to the operator,
  • where a first ratio between the first sub image and the first image is less than a second ratio between each second sub image and the corresponding second image;
  • a command obtaining unit configured to obtain an operation command corresponding to the operator in at least one second sub image; and
  • a command execution unit configured to execute the operation command.
  • Preferably, the second ratios between the second sub images and the corresponding second images are different from one another, and the second ratio between each second sub image and the corresponding second image increases with the acquiring sequence of the second images by the acquiring unit.
  • Preferably, the second image obtaining unit includes:
  • an acquisition triggering subunit configured to trigger the acquiring unit to acquire the at least one second image with the operator in the first sub image as focus; and
  • an image obtaining subunit configured to obtain the at least one second image.
  • Preferably, the first sub image determining unit includes:
  • an image recognition subunit configured to recognize the first image, to obtain a position of the operator in the first image; and
  • an image determining subunit configured to determine an image of a region where the operator is located as the first sub image in the first image.
  • Preferably, the image recognition subunit includes:
  • a first scanning module configured to scan the first image, to obtain a skeleton of at least one operator to be selected in the first image;
  • a first skeleton determining module configured to determine, in the skeleton of the at least one operator to be selected, a target skeleton which matches a skeleton described by a received user operation instruction; and
  • a first position determining module configured to determine a position of the operator to be selected corresponding to the target skeleton as the position of the operator in the first image.
  • Preferably, the image recognition subunit includes:
  • a second scanning module configured to scan the first image, to obtain a human skeleton in the first image;
  • a second skeleton determining module configured to determine, in the human skeleton, a target skeleton which matches a skeleton described by a received user operation instruction; and
  • a second position determining module configured to determine a position of the target skeleton as the position of the operator in the first image.
  • Preferably, the command obtaining unit includes:
  • an image selection subunit configured to select, from the at least one second sub image, a second sub image whose ratio to the corresponding second image is the greatest, as a target sub image;
  • a contour recognition subunit configured to recognize contour data of the operator in the target sub image; and
  • a command determining subunit configured to determine a command corresponding to the contour data as an operation command based on a preset correspondence between a command and a contour.
  • It can be known from the above solution that the disclosure provides a data processing method and a data processing apparatus. The first image acquired by an acquiring unit in an electronic device is obtained, and the first sub image in the first image corresponding to the operator is determined; multiple second images acquired by the acquiring unit with the operator in the first sub image as a target are obtained, where each second image includes a second sub image corresponding to the operator, and a first ratio between the first sub image and the first image is less than a second ratio between each second sub image and the corresponding second image; then an operation command corresponding to the operator in the at least one second sub image is obtained and executed. In this way, an object of the disclosure is realized. In the disclosure, an image of the region where the operator is located can be obtained by zooming in, so that a clearer and more precise image of the operator is obtained; a corresponding operation command is then obtained and executed, which improves the recognition accuracy of the operator and further improves operation accuracy.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • In order to more clearly illustrate the technical solutions in the embodiments of the disclosure, the accompanying drawings required in the description of the embodiments will be briefly introduced below. Obviously, the accompanying drawings in the following description show just some embodiments of the disclosure; for those skilled in the art, other accompanying drawings can also be obtained from the provided drawings without any creative work.
  • FIG. 1 is a flow chart of a data processing method according to a first embodiment of the disclosure;
  • FIG. 2 is a diagram of an exemplary application according to the first embodiment of the disclosure;
  • FIG. 3 is a diagram of another exemplary application according to the first embodiment of the disclosure;
  • FIG. 4 is a partial flow chart of a data processing method according to a second embodiment of the disclosure;
  • FIG. 5 is a partial flow chart of the second embodiment of the disclosure;
  • FIG. 6 is another partial flow chart of the second embodiment of the disclosure;
  • FIG. 7 is a diagram of an exemplary application according to the second embodiment of the disclosure;
  • FIG. 8 is a diagram of another exemplary application according to the second embodiment of the disclosure;
  • FIG. 9 is a partial flow chart of a data processing method according to the third embodiment of the disclosure;
  • FIG. 10 is a schematic structural diagram of a data processing apparatus according to a fourth embodiment of the disclosure;
  • FIG. 11 is a partial schematic structural diagram of a fourth embodiment of the disclosure;
  • FIG. 12 is a partial schematic structural diagram of a data processing apparatus according to a fifth embodiment of the disclosure;
  • FIG. 13 is a partial schematic structural diagram of the fifth embodiment of the disclosure;
  • FIG. 14 is another partial schematic structural diagram of the fifth embodiment of the disclosure; and
  • FIG. 15 is a partial schematic structural diagram of a data processing apparatus according to a sixth embodiment of the disclosure.
  • DETAILED DESCRIPTION
  • In the following, the technical solutions in the embodiments of the disclosure will be described clearly and completely in conjunction with the accompanying drawings in the embodiments of the disclosure. Obviously, the described embodiments are just a part of the embodiments of the disclosure, rather than all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the disclosure without any creative work will fall within the scope of protection of the disclosure.
  • Reference is made to FIG. 1 which is a flow chart of a data processing method according to a first embodiment of the disclosure. The method may be applied to an electronic device. The electronic device includes an acquiring unit, and the acquiring unit may acquire a scene image. The method may include steps 101 to 105.
  • Step 101 is obtaining a first image acquired by the acquiring unit.
  • The first image refers to a panoramic image of a scene where an operator to be recognized is located, and the operator to be recognized may be, for example, a hand, a mouth or an eyeball of a human body.
  • The acquiring unit may be a device capable of acquiring image data, such as a camera. The electronic device may be an apparatus including the acquiring unit, such as a TV, a computer or a pad. In a case where it is required to recognize an operator of a user and then execute a command on the electronic device, for example, content displayed in a display unit of the electronic device is triggered and clicked by the user through a hand gesture, the acquiring unit is triggered to acquire a panoramic image, i.e., a first image, of a region where the user is located, and thus the first image is obtained in the disclosure.
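
As a minimal sketch of step 101, assuming the acquiring unit is an OpenCV-accessible camera; the function name and device index are illustrative, not part of the patent.

```python
import cv2

def obtain_first_image(device_index: int = 0):
    # The acquiring unit captures one full-scene ("panoramic") frame that
    # contains the user and the operator, e.g. a hand.
    cap = cv2.VideoCapture(device_index)
    try:
        ok, frame = cap.read()
        if not ok:
            raise RuntimeError("acquiring unit returned no image")
        return frame
    finally:
        cap.release()
```
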
  • Step 102 is determining a first sub image in the first image, where the first sub image corresponds to an operator.
  • The first sub image corresponding to the operator means that an image region in the first image corresponding to the operator is the first sub image, as shown in FIG. 2.
  • It should be illustrated that a ratio between the first sub image, i.e., the image region of the operator, and the first image is defined as a first ratio.
  • Step 103 is obtaining at least one second image acquired by the acquiring unit with the operator in the first sub image as a target.
  • Each second image includes a second sub image, and each second sub image corresponds to the operator.
  • It should be noted from step 103 that the second image acquired by the acquiring unit is an image including the image region of the operator. The second sub image corresponding to the operator means that the image region in the second image corresponding to the operator is a second sub image in the second image, as shown in FIG. 3.
  • A ratio between the second sub image, i.e., the image region of the operator in the second image, and the second image is defined as a second ratio. In the embodiment of the disclosure, each second ratio is greater than the first ratio. It should be understood that the resolution and size of the first image acquired by the acquiring unit are the same as those of each second image acquired by the acquiring unit. The first ratio between the operator and the first image is less than the second ratio between the operator and each second image. The second image is an image in which a partial image of the first image, including the first sub image, is zoomed in by a varying factor with the resolution unchanged.
  • The second ratios between the second sub images and the corresponding second images are different from one another, and the second ratio between each second sub image and the corresponding second image increases with the acquiring sequence of the second images by the acquiring unit.
  • Step 103 may be realized as follows:
  • the acquiring unit is triggered to acquire the second image, including the entire region of the first sub image, with at least one focal length and with the operator in the first sub image as focus. The focal lengths are all different and decrease with the acquiring sequence of the second images, and each focal length corresponds to one second image and its second sub image. Since the second image acquired by the acquiring unit is an image obtained by zooming in with the operator in the first sub image as a target, the second ratio between the second sub image and the second image acquired by the acquiring unit for the last time is the greatest. Each second image is then obtained in the disclosure.
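
The patent describes optical acquisition at several focal lengths. The hedged sketch below simulates the same effect digitally: it crops ever tighter around the operator and resizes each crop back to the original frame size, so the resolution and image size stay unchanged while the second ratio grows with the acquisition sequence. The zoom factors and the box format are assumptions.

```python
import cv2

def acquire_second_images(first_image, operator_box, zoom_factors=(1.5, 2.0, 3.0)):
    h, w = first_image.shape[:2]
    x, y, bw, bh = operator_box                # first sub image region from step 102
    cx, cy = x + bw / 2.0, y + bh / 2.0        # keep the operator as the focus
    second_images = []
    for z in zoom_factors:                     # later acquisitions zoom in further,
        cw, ch = int(w / z), int(h / z)        # so the second ratio keeps growing
        x0 = min(max(int(cx - cw / 2), 0), w - cw)
        y0 = min(max(int(cy - ch / 2), 0), h - ch)
        crop = first_image[y0:y0 + ch, x0:x0 + cw]
        # Resizing the crop back to the original size keeps the resolution and
        # image size unchanged while enlarging the operator within the frame.
        second_images.append(cv2.resize(crop, (w, h)))
    return second_images
```
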
  • In this case, the second sub image in the second image obtained in step 103 is easier to recognize than the first sub image.
  • Step 104 is obtaining an operation command corresponding to the operator in the at least one second sub image.
  • It can be understood from step 104 that the intention of the user to whom the operator belongs is determined based on a contour or a skeleton shape of the operator in the second sub image, and the operation command corresponding to the user intention is then determined.
  • Step 105 is executing the operation command.
  • After the operation command corresponding to the operator in the second sub image is determined in step 104, a corresponding action is performed on the content displayed by the electronic device based on the operation command, such as selecting or opening a file by clicking.
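  • To make the command execution of step 105 concrete, a minimal sketch follows; the gesture labels, the display object and its methods are hypothetical stand-ins for whatever interface the electronic device exposes:

```python
# Hypothetical mapping from a recognized operator gesture to an action on
# the displayed content; the labels and methods are illustrative only.
COMMANDS = {
    "click": lambda display: display.select_item(),
    "open":  lambda display: display.open_file(),
}

def execute_operation_command(gesture_label: str, display) -> None:
    """Step 105 sketch: look up and run the action for a recognized gesture."""
    action = COMMANDS.get(gesture_label)
    if action is not None:
        action(display)
```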
  • It can be seen from the above solution that the first embodiment of the disclosure provides a data processing method. The first image acquired by an acquiring unit in an electronic device is obtained, and a first sub image in the first image corresponding to an operator is determined; multiple second images acquired by the acquiring unit with the operator in the first sub image as a target are obtained, where each second image includes a second sub image corresponding to the operator, and a first ratio between the first sub image and the first image is less than a second ratio between each second sub image and the corresponding second image; then an operation command corresponding to the operator in at least one second sub image is obtained and executed. In this way, an object of the disclosure is realized: an image of the region where the operator is located can be obtained by zooming in, yielding a clearer and more precise image of the operator from which the corresponding operation command is obtained and executed. This improves the recognition accuracy of the operator and, in turn, the operation accuracy.
  • Reference is made to FIG. 4 which is a flow chart of step 102 in a data processing method provided according to a second embodiment of the disclosure. Step 102 includes steps 401 to 402.
  • Step 401 is recognizing the first image, to obtain a position of the operator in the first image.
  • The position of the operator may be a coordinate value of the position of the absolute center point of the operator, a coordinate value of the position of the edge contour of the operator, or an edge coordinate value of the maximum circumcircle centered on the absolute center point of the operator.
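  • The three position encodings above can be sketched with OpenCV, assuming the operator's edge contour has already been extracted; OpenCV's minimum enclosing circle stands in here for the circumcircle described above:

```python
import cv2
import numpy as np

def operator_positions(contour: np.ndarray):
    """Return the three interchangeable encodings of the operator position:
    the absolute center point, the edge contour itself, and an enclosing
    circle (a stand-in for the circumcircle described above).
    Assumes a non-degenerate contour (non-zero area)."""
    m = cv2.moments(contour)
    center = (m["m10"] / m["m00"], m["m01"] / m["m00"])   # absolute center point
    (cx, cy), radius = cv2.minEnclosingCircle(contour)    # enclosing circle
    return center, contour, ((cx, cy), radius)
```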
  • Reference is made to FIG. 5 which is a flow chart of step 401 according to the second embodiment of the disclosure. Step 401 may be realized by steps 501 to 503.
  • Step 501 is scanning the first image, to obtain a skeleton of at least one operator to be selected in the first image.
  • In step 501, the first image may be scanned using a two-dimensional image scanning method, or may be scanned and matched using a sample match and recognition method, to obtain the skeleton of at least one operator to be selected in the first image.
  • The operator to be selected may be any one of a mouth, an eyeball, a hand and a body or any combination thereof.
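  • One plausible reading of the sample match and recognition method is template matching; the sketch below (illustrative only; the sample set and threshold are assumptions) slides stored operator samples such as a hand or a mouth over the first image and keeps locations that match well:

```python
import cv2

def scan_for_candidate_operators(first_image, samples: dict,
                                 threshold: float = 0.8) -> dict:
    """Scan the first image against each stored operator sample and return
    the best-matching location per sample that clears the threshold."""
    gray = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)
    candidates = {}
    for name, template in samples.items():     # e.g. {"hand": ..., "mouth": ...}
        scores = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
        _, best_score, _, best_loc = cv2.minMaxLoc(scores)
        if best_score >= threshold:
            candidates[name] = best_loc        # top-left corner of the match
    return candidates
```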
  • Step 502 is determining a target skeleton in the skeleton of the at least one operator to be selected which matches with a skeleton described by a received user operation instruction.
  • The user operation instruction refers to an instruction used by a user to define or set the type of the operator. The user operation instruction may be set by the user in advance, or may be set by the user in a process of applying the embodiment of the disclosure. Skeleton information corresponding to the type of the operator defined by the user is set in the user operation instruction.
  • Step 503 is determining a position of the target skeleton as a position of the operator in the first image.
  • The position of the target skeleton in step 503 may be the coordinate value of the position of the absolute center point of the operator, or the edge coordinate value of the maximum circumcircle centered on the absolute center point of the operator, as described in step 401.
  • Reference is made to FIG. 6 which is a flow chart of step 401 according to the second embodiment of the disclosure. Step 401 may be realized by steps 601 to 603.
  • Step 601 is scanning the first image, to obtain a human skeleton in the first image.
  • In step 601, the first image may be scanned using a two-dimensional image scanning method, or may be scanned and matched using a sample match and recognition method, to obtain the human skeleton in the first image. The human skeleton may be the entire human skeleton or a partial human skeleton including the operator skeleton, as shown in FIG. 7.
  • Step 602 is determining a target skeleton in the human skeleton which matches with a skeleton described by a received user operation instruction.
  • The user operation instruction refers to an instruction used by a user to define or set the type of the operator. The user operation instruction may be set by the user in advance, or may be set by the user in a process of applying the embodiment of the disclosure. Skeleton information corresponding to the type of the operator defined by the user is set in the user operation instruction.
  • It should be noted that in step 602, the target skeleton in the human skeleton which matches with the skeleton in the user operation instruction may be determined in a gradual, part-by-part matching manner.
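  • One way to read this gradual matching is narrowing the full human skeleton down to the operator one part at a time; the sketch below assumes the skeleton is represented as nested part dictionaries, a made-up representation used purely for illustration:

```python
def match_gradually(human_skeleton: dict, part_path: list):
    """Walk the human skeleton part by part (e.g. torso -> right_arm -> hand)
    and return the sub-skeleton reached, or None if any stage fails to match."""
    region = human_skeleton
    for part in part_path:
        region = region.get(part) if isinstance(region, dict) else None
        if region is None:
            return None                        # no match at this stage
    return region

# target_skeleton = match_gradually(skeleton, ["torso", "right_arm", "hand"])
```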
  • Step 603 is determining a position of the target skeleton as the position of the operator in the first image.
  • The position of the target skeleton in step 603 may be the coordinate value of the position of the absolute center point of the operator, or the edge coordinate value of the maximum circumcircle centered on the absolute center point of the operator, as described in step 401.
  • Step 402 is determining an image of a region where the operator is located as a first sub image in the first image.
  • The first sub image in step 402 consists of the pixels corresponding to the position of the operator, including the edge contour of the operator.
  • FIG. 8 is taken as an example. FIG. 8a shows an obtained first image, which is a panoramic image including a palm image. After the first image is obtained, a first sub image, i.e., the palm image, in the first image is determined, as shown in FIGS. 8b to 8d. After the first sub image is determined, a second image including the palm image in the first sub image is obtained. As shown in FIG. 8e, the second image includes a second sub image corresponding to the palm, and the second ratio is greater than the first ratio. That is, compared with the first sub image in the first image, the second sub image in the second image is larger, relatively clearer, and easier to recognize.
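  • A minimal sketch of step 402, under the assumption (carried over from the recognition step) that the operator's edge contour is available: the bounding rectangle of the contour is taken as the first sub image region:

```python
import cv2
import numpy as np

def crop_first_sub_image(first_image: np.ndarray, contour: np.ndarray) -> np.ndarray:
    """Take the image of the region where the operator is located, including
    its edge contour, as the first sub image."""
    x, y, w, h = cv2.boundingRect(contour)
    return first_image[y:y + h, x:x + w]
```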
  • Reference is made to FIG. 9 which is a flow chart of step 104 in a data processing method according to a third embodiment of the disclosure. Step 104 may be realized by steps 901 to 903.
  • Step 901 is selecting, from the at least one second sub image, the second sub image whose ratio to its corresponding second image is the greatest, as a target sub image.
  • It should be noted that the second sub image whose second ratio is the greatest is the clearest one, so the subsequent determination of the operation command from it is the most accurate.
  • Step 902 is recognizing contour data of an operator in the target sub image.
  • In step 902, the contour data of the operator in the target sub image may be recognized using, for example, a two-dimensional image scanning method.
  • Step 903 is determining a command corresponding to the contour data as an operation command based on a preset correspondence between a command and a contour.
  • The correspondence between a command and a contour may be set by the user in advance. Each piece of contour data corresponds to the operation command of the operator gesture it represents. Thus, after the contour data of the operator is recognized, the operation command corresponding to the contour data is determined and then executed, which realizes the object of the disclosure.
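  • Steps 901 to 903 can be sketched together as follows; the preset contour-to-command table and the thresholding choices are assumptions, and cv2.matchShapes stands in for whatever contour comparison an implementation actually uses:

```python
import cv2

def obtain_operation_command(second_sub_images: list, ratios: list,
                             contour_commands: list) -> str:
    """Pick the second sub image with the greatest second ratio, recognize
    the operator's contour in it, and look up the preset command whose
    template contour matches best (a lower matchShapes score is closer)."""
    # Step 901: the target sub image is the one with the greatest ratio.
    target = max(zip(second_sub_images, ratios), key=lambda pair: pair[1])[0]

    # Step 902: recognize contour data of the operator (assumes the operator
    # is the largest foreground region after Otsu thresholding).
    gray = cv2.cvtColor(target, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    operator_contour = max(contours, key=cv2.contourArea)

    # Step 903: preset correspondence between a contour and a command,
    # given as (template_contour, command_name) pairs.
    template, command = min(
        contour_commands,
        key=lambda pair: cv2.matchShapes(operator_contour, pair[0],
                                         cv2.CONTOURS_MATCH_I1, 0.0),
    )
    return command
```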
  • Reference is made to FIG. 10 which is a schematic structural diagram of a data processing apparatus according to a fourth embodiment of the disclosure. The data processing apparatus is applied to an electronic device. The electronic device includes an acquiring unit, and the acquiring unit is configured to acquire a scene image. The data processing apparatus may include a first image obtaining unit 1001, a first sub image determining unit 1002, a second image obtaining unit 1003, a command obtaining unit 1004 and a command execution unit 1005.
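  • As an illustrative structural sketch only (the unit behaviors are supplied as callables here, an assumption made for brevity), the five units of FIG. 10 compose as a straight pipeline:

```python
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class DataProcessingApparatus:
    """Wiring of units 1001-1005 as described for FIG. 10."""
    obtain_first_image: Callable[[], Any]              # unit 1001
    determine_first_sub_image: Callable[[Any], Any]    # unit 1002
    obtain_second_images: Callable[[Any], List[Any]]   # unit 1003
    obtain_command: Callable[[List[Any]], Any]         # unit 1004
    execute_command: Callable[[Any], None]             # unit 1005

    def run(self) -> None:
        first_image = self.obtain_first_image()
        first_sub_image = self.determine_first_sub_image(first_image)
        second_images = self.obtain_second_images(first_sub_image)
        self.execute_command(self.obtain_command(second_images))
```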
  • The first image obtaining unit 1001 is configured to obtain a first image acquired by the acquiring unit.
  • The first image refers to a panoramic image of a scene where an operator to be recognized is located, and the operator to be recognized may be, for example, a hand, a mouth or an eyeball of a human body.
  • The acquiring unit may be a device capable of acquiring image data, such as a camera.
  • The electronic device may be an apparatus including the acquiring unit, such as a TV, a computer or a pad. In a case where an operator of a user needs to be recognized so that a command can be executed on the electronic device, for example, where content displayed on a display unit of the electronic device is to be triggered and clicked by the user through a hand gesture, the acquiring unit is triggered to acquire a panoramic image, i.e., a first image, of the region where the user is located, and the first image is thereby obtained in the disclosure.
  • The first sub image determining unit 1002 is configured to determine a first sub image in the first image, where the first sub image corresponds to an operator.
  • The first sub image corresponding to the operator means that an image region in the first image corresponding to the operator is the first sub image, as shown in FIG. 2.
  • It should be noted that a ratio between the first sub image, i.e., the image region of the operator, and the first image is defined as a first ratio.
  • The second image obtaining unit 1003 is configured to obtain at least one second image acquired by the acquiring unit with the operator in the first sub image as a target.
  • Each second image includes a second sub image, and each second sub image corresponds to the operator.
  • It should be noted from the second image obtaining unit 1003 that the second image acquired by the acquiring unit is an image including the image region of the operator. The second sub image corresponding to the operator means that the image region in the second image corresponding to the operator is a second sub image in the second image, as shown in FIG. 3.
  • A ratio between the second sub image, i.e., the image region of the operator in the second image, and the second image is defined as a second ratio. In the embodiment of the disclosure, each second ratio is greater than the first ratio. It should be understood that the resolution and size of the first image acquired by the acquiring unit are the same as those of each second image acquired by the acquiring unit. The first ratio between the operator and the first image is less than the second ratio between the operator and each second image. Each second image is an image in which a partial image of the first image including the first sub image is zoomed in by a variable factor with the resolution unchanged.
  • The second ratios between the second sub images and their corresponding second images differ from one another, and they increase with the acquiring sequence of the second images by the acquiring unit.
  • Reference is made to FIG. 11 which is a schematic structural diagram of the second image obtaining unit 1003 according to the fourth embodiment of the disclosure. The second image obtaining unit 1003 may include an acquisition triggering subunit 1031 and an image obtaining subunit 1032.
  • The acquisition triggering subunit 1031 is configured to trigger the acquiring unit to acquire at least one second image with the operator in the first sub image as focus.
  • It can be understood from the acquisition triggering subunit 1031 that the acquiring unit is triggered to acquire, with the operator in the first sub image as the focus, second images each including the entire region of the first sub image, at one or more focal lengths. The focal lengths differ from one another and increase with the acquiring sequence of the second images, and each focal length corresponds to one second image and the second sub image thereof. Each second image acquired by the acquiring unit is obtained by zooming in, i.e., lengthening the focal length, with the operator in the first sub image as the target, so the second ratio between the second sub image and the second image acquired by the acquiring unit for the last time is the greatest.
  • The image obtaining subunit 1032 is configured to obtain each second image.
  • After the acquiring unit is triggered by the acquisition triggering subunit 1031 to obtain the second image, each second image is obtained by the image obtaining subunit 1032.
  • In this case, the second sub image in the second image obtained by the second image obtaining unit 1003 is easier to recognize than the first sub image.
  • The command obtaining unit 1004 is configured to obtain an operation command corresponding to the operator in the at least one second sub image.
  • It can be understood from the command obtaining unit 1004 that the intention of the user to whom the operator belongs is determined based on a contour or a skeleton shape of the operator in the second sub image, and the operation command corresponding to the user intention is then determined.
  • The command execution unit 1005 is configured to execute the operation command.
  • After the operation command corresponding to the operator in the second sub image is determined by the command obtaining unit 1004, the command execution unit 1005 performs a corresponding action on the content displayed by the electronic device based on the operation command, such as selecting or opening a file by clicking.
  • It can be seen from the above solution that the fourth embodiment of the disclosure provides a data processing apparatus. The first image acquired by an acquiring unit in an electronic device is obtained, and a first sub image in the first image corresponding to an operator is determined; multiple second images acquired by the acquiring unit with the operator in the first sub image as a target are obtained, where each second image includes a second sub image corresponding to the operator, and a first ratio between the first sub image and the first image is less than a second ratio between each second sub image and the corresponding second image; then an operation command corresponding to the operator in at least one second sub image is obtained and executed. In this way, an object of the disclosure is realized: an image of the region where the operator is located can be obtained by zooming in, yielding a clearer and more precise image of the operator from which the corresponding operation command is obtained and executed. This improves the recognition accuracy of the operator and, in turn, the operation accuracy.
  • Reference is made to FIG. 12 which is a schematic structural diagram of the first sub image determining unit 1002 in the data processing apparatus according to a fifth embodiment of the disclosure. The first sub image determining unit 1002 may include an image recognition subunit 1021 and an image determining subunit 1022.
  • The image recognition subunit 1021 is configured to recognize the first image, to obtain a position of an operator in the first image.
  • The position of the operator may be a coordinate value of the position of the absolute center point of the operator, a coordinate value of the position of the edge contour of the operator, or an edge coordinate value of the maximum circumcircle centered on the absolute center point of the operator.
  • Reference is made to FIG. 13 which is a schematic structural diagram of the image recognition subunit 1021 according to the fifth embodiment of the disclosure. The image recognition subunit 1021 may include a first scanning module 10211, a first skeleton determining module 10212 and a first position determining module 10213.
  • The first scanning module 10211 is configured to scan the first image, to obtain a skeleton of at least one operator to be selected in the first image.
  • The first scanning module 10211 may scan the first image using a two-dimensional image scanning method, or may scan and match the first image using a sample match and recognition method, to obtain the skeleton of at least one operator to be selected in the first image.
  • The operator to be selected may be any one of a mouth, an eyeball, a hand and a body or any combination thereof.
  • The first skeleton determining module 10212 is configured to determine a target skeleton in the skeleton of the operator to be selected which matches with a skeleton described by a received user operation instruction.
  • The user operation instruction refers to an instruction used by a user to define or set the type of the operator. The user operation instruction may be set by the user in advance, or may be set by the user in a process of applying the embodiment of the disclosure. Skeleton information corresponding to the type of the operator defined by the user is set in the user operation instruction.
  • The first position determining module 10213 is configured to determine a position of the operator to be selected corresponding to the target skeleton as a position of the operator in the first image.
  • The position of the target skeleton used in the first position determining module 10213 may be the coordinate value of the position of the absolute center point of the operator, or the edge coordinate value of the maximum circumcircle centered on the absolute center point of the operator, as used in the image recognition subunit 1021.
  • Reference is made to FIG. 14 which is another schematic structural diagram of the image recognition subunit 1021 according to the fifth embodiment of the disclosure. The image recognition subunit 1021 may include a second scanning module 10214, a second skeleton determining module 10215 and a second position determining module 10216.
  • The second scanning module 10214 is configured to scan the first image, to obtain a human skeleton in the first image.
  • The second scanning module 10214 may scan the first image using a two-dimensional image scanning method, or may scan and match the first image using a sample match and recognition method, to obtain the human skeleton in the first image. The human skeleton may be the entire human skeleton or a partial human skeleton including the operator skeleton, as shown in FIG. 7.
  • The second skeleton determining module 10215 is configured to determine a target skeleton in the human skeleton which matches with a skeleton described by a received user operation instruction.
  • The user operation instruction refers to an instruction used by a user to define or set the type of the operator. The user operation instruction may be set by the user in advance, or may be set by the user in a process of applying the embodiment of the disclosure. Skeleton information corresponding to the type of the operator defined by the user is set in the user operation instruction.
  • It should be noted that the second skeleton determining module 10215 may determine the target skeleton in the human skeleton which matches with the skeleton in the user operation instruction in a gradual, part-by-part matching manner.
  • The second position determining module 10216 is configured to determine a position of the target skeleton as the position of the operator in the first image.
  • The position of the target skeleton used in the second position determining module 10216 may be the coordinate value of the position of the absolute center point of the operator, or the edge coordinate value of the maximum circumcircle centered on the absolute center point of the operator, as used in the image recognition subunit 1021.
  • The image determining subunit 1022 is configured to determine an image of a region where the operator is located as a first sub image in the first image.
  • The first sub image used in the image determining subunit 1022 consists of the pixels corresponding to the position of the operator, including the edge contour of the operator.
  • FIG. 8 is taken as an example. FIG. 8a shows an obtained first image, which is a panoramic image including a palm image. After the first image is obtained, a first sub image, i.e., the palm image, in the first image is determined, as shown in FIGS. 8b to 8d. After the first sub image is determined, a second image including the palm image in the first sub image is obtained. As shown in the small block of FIG. 8e, the second image includes a second sub image corresponding to the palm, and the second ratio is greater than the first ratio. That is, compared with the first sub image in the first image, the second sub image in the second image is larger, relatively clearer, and easier to recognize.
  • Reference is made to FIG. 15 which is a schematic structural diagram of the command obtaining unit 1004 in a data processing apparatus according to a sixth embodiment provided by the disclosure. The command obtaining unit 1004 may include an image selection subunit 1041, a contour recognition subunit 1042 and a command determining subunit 1043.
  • The image selection subunit 1041 is configured to select, from the at least one second sub image, the second sub image whose ratio to its corresponding second image is the greatest, as a target sub image.
  • It should be noted that the second sub image whose second ratio is the greatest is the clearest one, so the subsequent determination of the operation command from it is the most accurate.
  • The contour recognition subunit 1042 is configured to recognize contour data of the operator in the target sub image.
  • The contour recognition subunit 1042 may recognize the contour data of the operator in the target sub image by using, for example, a two-dimensional image scanning method.
  • The command determining subunit 1043 is configured to determine a command corresponding to the contour data as an operation command based on a preset correspondence between a command and a contour.
  • The correspondence between a command and a contour may be set by the user in advance. Each piece of contour data corresponds to the operation command of the operator gesture it represents. Thus, after the contour data of the operator is recognized, the operation command corresponding to the contour data is determined and then executed, which realizes the object of the disclosure.
  • The various embodiments in this specification are described in a progressive manner, each embodiment emphasizing its differences from the others; for the same or similar parts among the embodiments, reference may be made to the other embodiments.
  • Finally, it should also be noted that relational terms such as “the first” and “the second” herein are only used to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any actual relationship or sequence between these entities or operations. Furthermore, the terms “include”, “comprise” and any other variations thereof are intended to cover a non-exclusive inclusion, so that a process, a method, an object or a device including a series of elements includes not only those elements, but also other elements not explicitly listed, or elements inherent to the process, the method, the object or the device. Without further limitation, an element defined by the sentence “include one . . . ” does not exclude the presence of another identical element in the process, the method, the object or the device including the described element.
  • A data processing method and a data processing apparatus provided by the disclosure are introduced in detail above; the principle and the embodiments of the disclosure are summarized by applying specific examples, and the above embodiments are described to assist in understanding the method of the disclosure and its core concept. Meanwhile, for those skilled in the art, the specific embodiments and the application scope may be altered according to the concept of the disclosure; therefore, the content of this specification shall not be understood as limiting the disclosure.

Claims (18)

1. A data processing method, which is applied to an electronic device, the data processing method comprises:
obtaining a first image acquired by an acquiring unit comprised in the electronic device;
determining a first sub image in the first image, wherein the first sub image corresponds to an operator;
obtaining at least one second image acquired by the acquiring unit with the operator in the first sub image as a target, wherein each second image comprises a second sub image, and each second sub image corresponds to the operator,
wherein a first ratio between the first sub image and the first image is less than a second ratio between each second sub image and the corresponding second image;
obtaining an operation command corresponding to the operator in at least one second sub image; and
executing the operation command.
2. The method according to claim 1, wherein the second ratio between each second sub image and the corresponding second image is different, and the second ratio between each second sub image and the corresponding second image is increased with acquiring sequence of the second images by the acquiring unit.
3. The method according to claim 1, wherein the obtaining at least one second image acquired by the acquiring unit with the operator in the first sub image as a target comprises:
triggering the acquiring unit to acquire the at least one second image with the operator in the first sub image as focus; and
obtaining the at least one second image.
4. The method according to claim 1, wherein the determining a first sub image in the first image comprises:
recognizing the first image, to obtain a position of the operator in the first image; and
determining an image of a region where the operator is located as the first sub image in the first image.
5. The method according to claim 3, wherein the determining a first sub image in the first image comprises:
recognizing the first image, to obtain a position of the operator in the first image; and
determining an image of a region where the operator is located as the first sub image in the first image.
6. The method according to claim 4, wherein the recognizing the first image to obtain a position of the operator in the first image comprises:
scanning the first image, to obtain a skeleton of at least one operator to be selected in the first image;
determining a target skeleton in the skeleton of the at least one operator to be selected which matches with a skeleton described by a received user operation instruction; and
determining a position of the target skeleton as the position of the operator in the first image.
7. The method according to claim 4, wherein the recognizing the first image to obtain a position of the operator in the first image comprises:
scanning the first image, to obtain a human skeleton in the first image;
determining a target skeleton in the human skeleton which matches with a skeleton described by a received user operation instruction; and
determining a position of the target skeleton as the position of the operator in the first image.
8. The method according to claim 1, wherein the obtaining an operation command corresponding to the operator in the at least one second sub image comprises:
selecting a second sub image, wherein a ratio between the second sub image and the corresponding second image is the greatest, and the second sub image is selected in the at least one second sub image as a target sub image;
recognizing contour data of the operator in the target sub image; and
determining a command corresponding to the contour data as an operation command based on a preset correspondence between a command and a contour.
9. The method according to claim 3, wherein the obtaining an operation command corresponding to the operator in the at least one second sub image comprises:
selecting a second sub image, wherein a ratio between the second sub image and the corresponding second image is the greatest, and the second sub image is selected in the at least one second sub image as a target sub image;
recognizing contour data of the operator in the target sub image; and
determining a command corresponding to the contour data as an operation command based on a preset correspondence between a command and a contour.
10. A data processing apparatus, which is applied to an electronic device, wherein the electronic device comprises an acquiring unit, the data processing apparatus comprises:
a first image obtaining unit configured to obtain a first image acquired by the acquiring unit;
a first sub image determining unit configured to determine a first sub image in the first image, wherein the first sub image corresponds to an operator;
a second image obtaining unit configured to obtain at least one second image acquired by the acquiring unit with the operator in the first sub image as a target, wherein each second image comprises a second sub image, and each second sub image corresponds to the operator,
wherein a first ratio between the first sub image and the first image is less than a second ratio between each second sub image and the corresponding second image;
a command obtaining unit configured to obtain an operation command corresponding to the operator in at least one second sub image; and
a command execution unit configured to execute the operation command.
11. The apparatus according to claim 10, wherein the second ratio between each second sub image and the corresponding second image is different, and the second ratio between each second sub image and the corresponding second image is increased with acquiring sequence of the second images by the acquiring unit.
12. The apparatus according to claim 10, wherein the second image obtaining unit comprises:
an acquisition triggering subunit configured to trigger the acquiring unit to acquire the at least one second image with the operator in the first sub image as focus; and
an image obtaining subunit configured to obtain the at least one second image.
13. The apparatus according to claim 10, wherein the first sub image determining unit comprises:
an image recognition subunit configured to recognize the first image, to obtain a position of the operator in the first image; and
an image determining subunit configured to determine an image of a region where the operator is located as the first sub image in the first image.
14. The apparatus according to claim 12, wherein the first sub image determining unit comprises:
an image recognition subunit configured to recognize the first image, to obtain a position of the operator in the first image; and
an image determining subunit configured to determine an image of a region where the operator is located as the first sub image in the first image.
15. The apparatus according to claim 13, wherein the image recognition subunit comprises:
a first scanning module configured to scan the first image, to obtain a skeleton of at least one operator to be selected in the first image;
a first skeleton determining module configured to determine a target skeleton in the skeleton of the at least one operator to be selected which matches with a skeleton described by a received user operation instruction; and
a first position determining module configured to determine a position of the operator to be selected corresponding to the target skeleton as the position of the operator in the first image.
16. The apparatus according to claim 13, wherein the image recognition subunit comprises:
a second scanning module configured to scan the first image, to obtain a human skeleton in the first image;
a second skeleton determining module configured to determine a target skeleton in the human skeleton which matches with a skeleton described by a received user operation instruction; and
a second position determining module configured to determine a position of the target skeleton as the position of the operator in the first image.
17. The apparatus according to claim 10, wherein the command obtaining unit comprises:
an image selection subunit configured to select a second sub image, wherein a ratio between the second sub image and the corresponding second image is the greatest, and the second sub image is selected in the at least one second sub image as a target sub image;
a contour recognition subunit configured to recognize contour data of the operator in the target sub image; and
a command determining subunit configured to determine a command corresponding to the contour data as an operation command based on a preset correspondence between a command and a contour.
18. The apparatus according to claim 12, wherein the command obtaining unit comprises:
an image selection subunit configured to select a second sub image, wherein a ratio between the second sub image and the corresponding second image is the greatest, and the second sub image is selected in the at least one second sub image as a target sub image;
a contour recognition subunit configured to recognize contour data of the operator in the target sub image; and
a command determining subunit configured to determine a command corresponding to the contour data as an operation command based on a preset correspondence between a command and a contour.
US14/230,250 2013-08-09 2014-03-31 Image data processing method and apparatus Active US8964128B1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201310347509.9 2013-08-09
CN201310347509.9A CN104349197B (en) 2013-08-09 2013-08-09 A kind of data processing method and device
CN201310347509 2013-08-09

Publications (2)

Publication Number Publication Date
US20150042893A1 true US20150042893A1 (en) 2015-02-12
US8964128B1 US8964128B1 (en) 2015-02-24

Family

ID=52448363

Family Applications (1)

Application Number Title Priority Date Filing Date
US14/230,250 Active US8964128B1 (en) 2013-08-09 2014-03-31 Image data processing method and apparatus

Country Status (2)

Country Link
US (1) US8964128B1 (en)
CN (1) CN104349197B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105472238A (en) * 2015-11-16 2016-04-06 联想(北京)有限公司 Image processing method and electronic device
WO2020057667A1 (en) * 2018-09-21 2020-03-26 北京市商汤科技开发有限公司 Image processing method and apparatus, and computer storage medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105550655A (en) * 2015-12-16 2016-05-04 Tcl集团股份有限公司 Gesture image obtaining device and method
CN113743282A (en) * 2021-08-30 2021-12-03 深圳Tcl新技术有限公司 Content search method, content search device, electronic equipment and computer-readable storage medium

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100306713A1 (en) * 2009-05-29 2010-12-02 Microsoft Corporation Gesture Tool
US20110112996A1 (en) * 2006-07-14 2011-05-12 Ailive, Inc. Systems and methods for motion recognition using multiple sensing streams
US20110169866A1 (en) * 2010-01-08 2011-07-14 Nintendo Co., Ltd. Storage medium, information processing system, and information processing method
US20110237324A1 (en) * 2010-03-29 2011-09-29 Microsoft Corporation Parental control settings based on body dimensions
US20120226981A1 (en) * 2011-03-02 2012-09-06 Microsoft Corporation Controlling electronic devices in a multimedia system through a natural user interface
US20120268424A1 (en) * 2011-04-20 2012-10-25 Kim Taehyeong Method and apparatus for recognizing gesture of image display device
US20130010071A1 (en) * 2011-07-04 2013-01-10 3Divi Methods and systems for mapping pointing device on depth map
US8384665B1 (en) * 2006-07-14 2013-02-26 Ailive, Inc. Method and system for making a selection in 3D virtual environment
US20130257720A1 (en) * 2012-03-27 2013-10-03 Sony Corporation Information input apparatus, information input method, and computer program
US20130321271A1 (en) * 2011-02-09 2013-12-05 Primesense Ltd Pointing-based display interaction
US20130344961A1 (en) * 2012-06-20 2013-12-26 Microsoft Corporation Multiple frame distributed rendering of interactive content
US20140009384A1 (en) * 2012-07-04 2014-01-09 3Divi Methods and systems for determining location of handheld device within 3d environment
US20140173504A1 (en) * 2012-12-17 2014-06-19 Microsoft Corporation Scrollable user interface control

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN2667571Y (en) * 2003-10-27 2004-12-29 北京雷波泰克信息技术有限公司 Fast multi-target human figures identification and tracking safety protection apparatus
CN102685382B (en) * 2011-03-18 2016-01-20 安尼株式会社 Image processing apparatus and method and moving body collision prevention device
JP6106921B2 (en) * 2011-04-26 2017-04-05 株式会社リコー Imaging apparatus, imaging method, and imaging program
CN102307309A (en) * 2011-07-29 2012-01-04 杭州电子科技大学 Somatosensory interactive broadcasting guide system and method based on free viewpoints
CN103135755B (en) * 2011-12-02 2016-04-06 深圳泰山在线科技有限公司 Interactive system and method
CN102801924B (en) * 2012-07-20 2014-12-03 合肥工业大学 Television program host interaction system based on Kinect
CN103123689B (en) * 2013-01-21 2016-11-09 信帧电子技术(北京)有限公司 A kind of run detection method and device based on the detection of people's leg

Also Published As

Publication number Publication date
CN104349197B (en) 2019-07-26
US8964128B1 (en) 2015-02-24
CN104349197A (en) 2015-02-11

Legal Events

Date Code Title Description
AS Assignment

Owner name: LENOVO (BEIJING) CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LI, RUI;REEL/FRAME:032568/0834

Effective date: 20140328

Owner name: BEIJING LENOVO SOFTWARE LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LI, RUI;REEL/FRAME:032568/0834

Effective date: 20140328

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8