|Publication number||US7302117 B2|
|Application number||US 11/190,912|
|Publication date||27 Nov 2007|
|Filing date||28 Jul 2005|
|Priority date||29 Jan 1999|
|Also published as||CN1229996C, CN1333976A, EP1185106A1, EP1185106A4, US6611628, US7013051, US20030174906, US20050267879, US20090110296, WO2000045600A1|
|Inventors||Shunichi Sekiguchi, Yoshihisa Yamada, James Chow, Kohtaro Asai|
|Original Assignee||Mitsubishi Denki Kabushiki Kaisha|
This application is a continuation of application Ser. No. 10/413,513, filed on Apr. 15, 2003, now U.S. Pat. No. 7,013,051, which is a continuation of application Ser. No. 09/714,140, filed on Nov. 17, 2000 and issued as U.S. Pat. No. 6,611,628 on Aug. 26, 2003, for which priority is claimed under 35 U.S.C. § 120. Application Ser. No. 09/714,140 is a continuation of PCT International Application No. PCT/JP99/00403, filed on Jan. 29, 1999 under 35 U.S.C. § 371. The entire contents of each of the above-identified applications are hereby incorporated by reference.
1. Field of the Invention
The present invention generally relates to methods of image feature coding and image search and, more particularly, to a method of image feature coding and a method of image search in which features of analog or digital data of moving images or still images are extracted and coded so that the image data is searched using the coded feature.
2. Description of the Related Art
The conceptual keyword 203 is a keyword indicating color information and features of a segment. The scene keyword 204 is a keyword representing a feature of the segment using descriptive words relating to position, color, shape, size and orientation.
The pre-processing unit 91 of
The search tool 92 of
A description will now be given of the operation.
When the still image 201 is supplied to the pre-processing unit 91, the segmentation unit 93 segments the still image 201. The conceptual keyword extracting unit 94 extracts the conceptual keyword 203 from the color and feature of the segment. More specifically, the conceptual keyword extracting unit 94 starts with a conceptual keyword associated with the color information to arrive at the conceptual keyword 203.
The scene descriptive keyword providing unit 95 provides the scene keyword 204 to the image feature of the segment, by receiving the predicate description 202 from the user 96.
When searching for the still image 201, the user 97 inputs the keyword 205, selected from a prepared set of conceptual keywords 203 and scene keywords 204, to the search tool 92. The feature identifying unit 98 retrieves the still image 201 requested by the user 97, based on the keyword 205 provided by the user 97, the conceptual keyword 203 and the scene descriptive keyword 204 from the pre-processing unit 91.
Since the target of the image search system described above is the still image 201, there is a drawback in that it is difficult to search for moving images.
In addition, since not much consideration is given to how the keywords are provided and stored, a one-to-one correspondence between an image server and a client (search tool 92) is a prerequisite. Therefore, according to the related art, an image search system where a large number of users are capable of searching for images using a variety of search tools via a network cannot be built.
Accordingly, a general object of the present invention is to provide a method of image feature coding and a method of image search in which the aforementioned drawbacks are eliminated.
Another and more specific object is to provide a method of image feature coding and a method of image search in which a large number of users can search for images using a variety of search tools.
The aforementioned objects can be achieved by an image feature coding method comprising the steps of: extracting segments of image areas from an image frame; attaching a segment number to each of the extracted segments; assigning a representative color to each of the extracted segments; computing a relative area of each of the segments with respect to the image frame; coding the representative color and the relative area to produce a feature of the image; and generating a feature stream corresponding to the image having the feature encoded therein.
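The coding steps enumerated above can be illustrated with a minimal Python sketch. All names (`SegmentFeature`, `code_frame_features`) are illustrative only, and taking the mean color as the representative color is one plausible reading of the method, not a requirement of it:

```python
from dataclasses import dataclass
from typing import List, Tuple

Pixel = Tuple[int, int, int]  # (R, G, B)

@dataclass
class SegmentFeature:
    number: int                  # segment number attached to the segment
    representative_color: Pixel  # representative color assigned to it
    relative_area: int           # percentage of the image frame (1..100)

def code_frame_features(frame_area: int,
                        segments: List[List[Pixel]]) -> List[SegmentFeature]:
    """Number each extracted segment, assign a representative color
    (here: the per-channel mean), and compute its relative area with
    respect to the image frame."""
    features = []
    for number, pixels in enumerate(segments, start=1):
        n = len(pixels)
        mean = tuple(sum(p[c] for p in pixels) // n for c in range(3))
        rel_area = max(1, round(100 * n / frame_area))
        features.append(SegmentFeature(number, mean, rel_area))
    return features
```

Serializing the resulting list of `SegmentFeature` records would then correspond to generating the feature stream.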
The segments may be extracted from the image frame in accordance with color information, and the color information used in extracting the segments is assigned to the extracted segments as the representative color.
The segments from adjacent image frames may be checked for an identity match, and those segments determined to match each other are given the same segment number.
The segments may be tracked from image frame to image frame so as to determine movement information relating to the segments that match each other in identity; the movement information is coded to produce the feature of the segments, and the feature stream, having the feature thus produced encoded therein, is generated.
An appropriate key frame that provides a key for a search may be extracted from a group of image frames of a video signal, whereupon the segments are extracted from the extracted key frames.
A reduced image of the key frame may be generated by averaging pixels located in respective areas of the key frame, the reduced image is coded to produce the feature of the key frame, and the feature stream, having the feature thus produced encoded therein, is generated.
The aforementioned objects can also be achieved by an image searching method using a first storage unit for storing image frames and a second storage unit for storing a feature stream having features of the image frames encoded therein, comprising the steps of: decoding the features stored in the second storage unit, in accordance with a search instruction from a user; and checking the decoded features against search criteria provided by the user for an identity match.
The features stored in the second storage unit may include a representative color of a segment constituting an area in the image frame, and the search criteria from the user may include the representative color.
The features stored in the second storage unit may include a relative area of a segment, constituting an area in the image frame, with respect to the image frame, and the search criteria from the user may include the relative area.
The features stored in the second storage unit may include movement information relating to movement between adjacent image frames, and the search criteria from the user may include the movement information.
The features stored in the second storage unit may include a reduced image of the image frame, the decoded feature may be checked against the search criteria from the user, and the reduced image may be presented to the user.
The features stored in the second storage unit may include information indicating whether a designated object is captured in the image frame.
The features stored in the second storage unit may include information indicating whether a designated object is captured in subsequent image frames.
The features stored in the second storage unit may include information indicating whether a designated object is captured in previous image frames.
A priority given to the decoded feature when checking the decoded feature against the search criteria from the user may be presented to the user.
A plurality of decoded features may be checked against a plurality of search criteria from the user for a match from an overall perspective.
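The search-side steps described above (decoding the stored features, then checking them against the user's criteria) reduce, in outline, to the following sketch; `decode` stands in for whatever feature-decoding routine a given system uses, and all names are illustrative:

```python
def search(feature_stream, criteria, decode):
    """Decode each coded feature from the second storage unit and
    check it against the user's search criteria; return pointers to
    the image frames whose features match."""
    matches = []
    for pointer, coded in feature_stream.items():
        feature = decode(coded)
        # An exact per-criterion check; a real system may accept
        # near matches (see the matching discussion later).
        if all(feature.get(k) == v for k, v in criteria.items()):
            matches.append(pointer)
    return matches
```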
Other objects and further features of the present invention will be apparent from the following detailed description when read in conjunction with the accompanying drawings, in which:
A detailed description of the best mode of carrying out the present invention will now be given, with reference to the attached drawings.
According to an apparatus of a first embodiment, a rectangular area surrounding an object contained in a frame of a video sequence is identified as a segment. Segments are extracted from each frame constituting a video signal. A feature stream identifying the features of the segments is generated.
A description will now be given of a system using the apparatus of the first embodiment.
The client 1 comprises a user interface (I/F) 8 for outputting a search control signal 106 for controlling the client 1, in accordance with a user instruction/setting 105, and a search processing unit 9 for receiving the search control signal 106 and outputting a search instruction 107 and a search key 108.
A description will now be given of the operation according to the first embodiment.
The system shown in
The client 1 and the server 2 may be connected to each other via the network for operation. Alternatively, the client 1 and the server 2 may operate within the same unit.
(1) Significance of Feature Stream
As shown in
By describing the feature of the video content (VCn) 111 in the feature stream (FSn) 103, the user can retrieve the desired video content 111 from a large repository of the video contents 111, using an intuitive search key 108. The search method according to the first embodiment improves the efficiency of the process of retrieving the desired video content 111 from a video library database or a video tape that contains a large volume of video contents 111.
(2) Generation of a Feature Stream
Generation of the feature stream 103 means generating the feature stream (FSn) 103 corresponding to the video content (VCn) 111 and storing it in the feature stream storage unit 7. These steps are carried out by the decode processing unit 4, the feature coding unit 5 and the user interface 6. The decode processing unit 4 is necessary only when the video content (VCn) 111 is stored in the form of the digitally compressed bit stream 101. The decode processing unit 4 outputs the video signal 102. When the video content (VCn) 111 is image data directly displayable, the decode processing unit 4 is not necessary.
The feature coding unit 5 generates the feature stream (FSn) based on the video signal 102 and stores it in the feature stream storage unit 7. Generation of the feature stream 103 will be discussed in detail later.
(3) Search Process
The search process is initiated by the client 1. The client 1 is a processing unit which the user uses to search for the desired video content 111 from a repository of video contents 111 in the video contents storage unit 3. The user supplies via the user interface 8 the user instruction/setting 105 to generate the search control signal 106. The search control signal 106 is then supplied to the search processing unit 9 to request the target coded feature in the feature stream 103.
Assuming that the client 1 and the server 2 are connected to each other via the network, the search instruction 107 and the search key 108 are transmitted to the server 2 and the apparatus for feature identification (the feature decoding unit 10 and the feature identifying unit 11).
When the search instruction 107 is activated, the feature decoding unit 10 successively retrieves the feature stream (FSn) 103 from the feature stream storage unit 7 so as to decode the feature contained in the feature stream 103. The decoded feature 109 obtained as a result of decoding is checked against the search key 108 by the feature identifying unit 11. When the decoded feature 109 that matches the search key 108 is identified, the pointer 110 of the feature stream 103 containing the matching feature is referred to, so as to identify the video content (VCn) 111. In the example of
The apparatus for feature identification (the feature decoding unit 10 and the feature identifying unit 11) may be included in the client 1 or the server 2. Alternatively, the apparatus for feature identification may be included in another apparatus located in the network. When the client 1 and the server 2 are housed in the same unit, the apparatus for feature identification may be housed in that unit.
The video content 111 output as the search result is transmitted to the client 1 so that the user can browse the image contents via the user interface 8. When a plurality of video contents 111 are identified as a result of the search with respect to the feature “blue sky”, the plurality of video contents 111 may be browsed by displaying them via the user interface 8. With this system, the user need not browse the entirety of the video contents 111 but can narrow the field of browsing by displaying only those video contents 111 that include a desired segment. Accordingly, the efficiency of a search is improved.
(4) Interactive Function
In the system of
(5) Transmission and Distribution of Feature Stream
The feature stream (FSn) 103 does not have to be stored in the server 2 together with the video contents 111 but may be located anywhere as long as the feature stream 103 is provided with the pointer 110 that points to the corresponding video content (VCn) 111. For example, a CD-ROM may contain only the feature stream 103. By reading from the CD-ROM, the client 1 can identify the location of the video content 111 corresponding to the feature stream 103. In this case, a requirement for the feature stream 103 may be that it has a uniform resource locator (URL) of the video content.
The feature stream 103 generally has a smaller data volume than the video content 111. Accordingly, the feature stream 103 may be stored in a relatively small storage medium so as to be used on a portable terminal such as a notebook personal computer (PC) or a personal digital assistant (PDA).
The feature stream 103 may be attached to the video contents 111 and transmitted and distributed over the network. With the interactive function described in (4) above, a receiver of the feature stream 103 may process or edit the received feature stream 103 either for reuse or for retransmission. The video contents 111 may be freely distributed among different types of medium without losing the flexibility in searching.
A detailed description will now be given of generation of the feature stream 103.
As described above, generation of the feature stream 103 is carried out by the feature coding unit 5.
Referring again to
Referring again to
Referring again to
Referring again to
A description will now be given of an operation of the feature coding unit 5.
(A) Determination of Key Frame
The key frame determining unit 21 determines a key frame, which serves as a key in the video content 111 (step ST1). The key frame is defined as a frame containing a substantial change (scene change) in the video content 111 or a frame which the user would like to define as a reference point in a search for a specific feature.
The video signal 102 is supplied to the key frame determining unit 21 frame by frame. The frame counter 31 counts the frame number. The frame counter 31 is reset to zero at the start of the video signal 102.
The video signal 102 is also sent to the monitor 22. The user selects a key frame while monitoring the video signal 102 using the monitor 22. An instruction for selection is provided by activating the key frame setting signal 121. By activating the key frame setting signal 121, the switch 32 outputs the key frame number 122, and the switch 33 outputs the key frame image 123 of the selected key frame.
The video signal 102 is supplied to the frame counter 31, as in the key frame determining unit 21 shown in
When the scene change is detected, the key frame setting instruction 141 is activated so that the switch 42 outputs the current frame count as the key frame number 122. The scene change detecting unit 41 outputs the detected frame of scene change as the key frame image 123.
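The embodiment does not fix the scene-change algorithm; a common choice is a thresholded inter-frame difference, sketched below with frames represented as flat lists of luminance samples (the representation and all names are illustrative):

```python
def detect_key_frames(frames, threshold):
    """Mark a frame as a key frame when its mean absolute difference
    from the previous frame exceeds a threshold -- one common
    scene-change test. Returns the key frame numbers (frame counts)."""
    key_frame_numbers = []
    prev = None
    for count, frame in enumerate(frames):
        if prev is not None:
            diff = sum(abs(a - b) for a, b in zip(frame, prev)) / len(frame)
            if diff > threshold:
                key_frame_numbers.append(count)  # cf. key frame number 122
        prev = frame
    return key_frame_numbers
```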
An intra-frame coding frame (not shown) that occurs at a predetermined period may be used as a key frame. For example, MPEG-1, MPEG-2 and MPEG-4 (MPEG stands for Moving Picture Experts Group) are known to have an intra-frame coding mode in which the frame is coded without using interframe prediction. The intra-frame coding frame is periodically inserted in the video contents 111 while the video contents 111 are being coded so that the inserted frames may be used as reference points of random access. For this reason, the intra-frame coding frame meets the requirement of the key frame.
(B) Detection of Segment
When the key frame image 123 is selected, the segment extracting unit 23 of
By describing the representative color of the segment in the feature stream 103, the user can retrieve the key frame that matches the requirement substantially automatically, by matching the value in the feature stream 103 and target value for the search. For example, the user can search for the video content 111 containing a “red segment” or a “blue segment”.
Since the size information indicates the relative area of the segment with respect to the key frame, the size information also represents an important aspect of the segment in the key frame. For example, by specifying “a segment substantially filling the screen in size and having a color of human skin”, the key frame containing an image of a face filling the screen may be substantially automatically retrieved. The size information may also include position information relating to a position that serves as a reference of measurement, such as the top left corner of a rectangle, or relating to the center of gravity of the rectangle.
For example, a video object defined in the MPEG-4 video coding system (ISO/IEC, JTC1/SC29/WG11, N2202) may be considered as the object according to the definition given above. In this case, the segment corresponds to a video object plane (VOP) of the MPEG-4 video. Strictly speaking, the definition given to the video object plane in the MPEG-4 standard does not match that given to the segment according to the present invention. Conceptually, however, the horizontal and vertical sizes of the video object correspond to the horizontal and vertical sizes of the segment. MPEG-1 and MPEG-2, in contrast, lack the concept of an object, so the segment is determined only when extraction is performed on the key frame.
Segment extraction is a process whereby a segment is extracted from the key frame image 123 so that its feature is determined and captured. The segment extraction is performed by the segment extracting unit 23 of
The segment extraction processing unit 51 of the segment extracting unit 23 of
Extraction of the segment may be performed by clustering, whereby similar colors are collected in a color component space. The invention is not, however, concerned with specific implementations of the extraction process. It is simply assumed that the segment extraction processing unit 51 produces the segment in the form of an image area which contains a distinctive content and which is circumscribed by a rectangle.
The segment extraction processing unit 51 counts image areas (segments) thus extracted by assigning a number to each segment. The count is output as the segment number 126 (step ST3).
The segment extraction processing unit 51 outputs the intra-segment image sample value 151 to the representative color assigning unit 52 so that the representative color assigning unit 52 determines the representative color 125 (step ST4). For example, in the case where the intra-segment image sample value 151 is formatted as RGB representation with eight bits assigned to R, G and B, respectively, R, G, B averages of the R, G, B spaces of the segment are computed so that a set of R, G, B averages is assigned as the representative color. Alternatively, pixels that are included in a representative area in the segment are specified so that the average is computed in that area.
Assuming that the VOP of MPEG-4 corresponds to the segment according to the invention, an area representing the segment is determined on the basis of the alpha plane which depicts the configuration of the VOP.
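Under that assumption, the alpha-plane-masked averaging can be sketched as follows; representing the alpha plane as a per-pixel opacity list is an illustrative choice, not prescribed by the text:

```python
def representative_color(pixels, alpha):
    """Average R, G, B only over pixels that the alpha plane marks
    as belonging to the object (VOP), so that background samples
    inside the circumscribing rectangle do not skew the result."""
    inside = [p for p, a in zip(pixels, alpha) if a]
    n = len(inside)
    return tuple(sum(p[c] for p in inside) // n for c in range(3))
```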
An alternative method of determining the representative color is for the segment extraction processing unit 51 to extract the segment on the basis of the color information so that the color information assigned to the segment as a result of clustering is used to determine the representative color.
(C) Coding of Segment
The kth key frame, indicated as KF(k) hereinafter, has a header which includes the sequential position (key frame number 122) in the video contents 111 and the number of segments (M) found in the screen. A total of M sets of segment data are provided subsequent to the header. The key frame KF(k) also includes data for a reduced image described later for the browsing purpose. The mth segment, indicated as SG(m) hereinafter, consists of the representative color 125 and the size 127. The representative color 125 is given by coding an index value in the color map table 128.
An index color is made to correspond to a set of R, G, B values. By increasing the number (n) of index colors, the gradation becomes richer.
The size 127 indicates a relative area in percentage, given by a figure between 1 and 100, and requires seven bits at most.
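A minimal sketch of how a segment SG(m) might be coded under this format follows; reducing the representative color to the index of the nearest entry in the color map table 128 is an assumption (the text specifies only that an index value is coded):

```python
def code_segment(color, size_pct, color_map):
    """Code SG(m): the representative color as the index of the
    nearest color map table entry (squared-distance nearest
    neighbor), and the size as a 1..100 percentage, which fits in
    seven bits (2**7 = 128 > 100)."""
    idx = min(range(len(color_map)),
              key=lambda i: sum((a - b) ** 2
                                for a, b in zip(color, color_map[i])))
    assert 1 <= size_pct <= 100
    return idx, size_pct
```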
Referring back to
(D) Generation of Reduced Image
Referring back to
A reduced image is produced by determining an average value for each of the N×N pixels of the key frame image 123 (step ST8 of
Since the key frame image 123 is usually produced by decoding a bit stream subject to non-reversible compression, the compression by the reduced image coding unit 26 is preferably a simple coding scheme using low compression, such as differential pulse code modulation (DPCM). By determining the DC value for each set of N×N pixels, the number of samples is reduced to 1/N² of the original so that the feature stream 103 does not incur a load of heavy code volume.
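The N×N averaging and a simple DPCM pass can be sketched as follows (grayscale samples for brevity; the particular DPCM predictor, the previous sample, is an illustrative choice of the kind of low-compression coder the text suggests):

```python
def reduced_image(image, n):
    """Replace each N x N block of the key frame with its mean (DC)
    value, cutting the sample count to 1/N**2 of the original."""
    h, w = len(image), len(image[0])
    out = []
    for by in range(0, h, n):
        row = []
        for bx in range(0, w, n):
            block = [image[y][x] for y in range(by, by + n)
                                 for x in range(bx, bx + n)]
            row.append(sum(block) // len(block))
        out.append(row)
    return out

def dpcm_encode(samples):
    """Simple DPCM: code each sample as its difference from the
    previous one (predictor initialized to zero)."""
    prev = 0
    out = []
    for s in samples:
        out.append(s - prev)
        prev = s
    return out
```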
The coded reduced image data 133 is transmitted to the multiplexing unit 27 so as to produce the feature stream 103 having the format of
As has been described, with the construction of the feature encoding unit 5 according to the first embodiment, the user can generate the feature stream 103 in which the feature of the video content 111 is described. Moreover, the user can specify the key frame in the video content 111 automatically or manually. The feature is set in the image area (segment) found in each key frame, in the form of the representative color 125, the size 127 and the like. By using the feature as the search key, the video content search process can be automated to a certain extent. The candidates yielded as a result of the automatic search may be browsed using thumbnail images so that the efficiency in retrieving the video content is improved.
The definition of a segment according to the first embodiment is derived from considering the frame image as a still image. Therefore, the search process according to the first embodiment is applicable to a search for a desired image in a large library of still images. In the case of still images, the key frame is at the top of the hierarchy depicted in
In this example, segments from one key frame are checked against segments in other key frames. The segments are associated with objects in the video content. That is, the key frame is not considered as a closed domain as far as the segments therein are concerned. Segments are extracted as image areas in which the objects constituting the video content 111 are captured from moment to moment.
When the segment extraction processing unit 61 extracts the segment data 161 associated with a plurality of segments from the key frame image 123, the segment identification processing unit 62 checks each segment against segments from the existing key frame image 123 stored in the reference image memory 63 for a match so as to give identification to the segments. The segments given the identification are output with the segment number 126 attached, where the segment number 126 is the same as that of the existing matching segment. When a match is not found, the segment is considered a new segment and is output with a new segment number 126.
The segment identification processing unit 62 also outputs the intra-segment image sample value 151 and the horizontal and vertical segment sizes 152. The representative color assigning unit 52 and the size computing unit 53 compute the representative color 125 and the size 127, respectively, as in the construction of
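The identification step above can be sketched as the following loop; `is_match` stands in for whatever identity test the implementation uses, since the embodiment does not prescribe one, and all other names are illustrative:

```python
def assign_segment_numbers(new_segments, known, next_number, is_match):
    """Give each newly extracted segment the number of the matching
    segment from earlier key frames (stored in `known`, mapping
    segment number -> reference segment), or a fresh number if no
    known segment matches."""
    numbered = []
    for seg in new_segments:
        for num, ref in known.items():
            if is_match(seg, ref):
                numbered.append((num, seg))
                break
        else:  # no existing segment matched: register a new one
            known[next_number] = seg
            numbered.append((next_number, seg))
            next_number += 1
    return numbered, next_number
```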
The segment SG(m) is provided with Flag (1). Flag (1) indicates whether the segment SG(m) is found in the key frame KF(k). It is assumed that each key frame has a total of M coded segments at most. When SG(m) is not actually found in KF(k), Flag (1) is turned off so that the representative color 125 and the size 127 are not coded. Flag (1) is attached by the multiplexing unit 27 of
When SG(m) is found in KF(k) but not in KF(k−1), that is, when SG(m) makes an appearance for the first time in frame k, a unique flag indicating entrance to the scene may be used. When SG(m) is found in KF(k) but not in KF(k+1), that is, when SG(m) disappears in frame k, a unique flag indicating exit from the scene may be used.
The coded feature data thus produced is transmitted to the multiplexing unit 27 so that the feature stream 103 having the format of
As has been described, with the construction of the segment extracting unit 23 of
In this example, the segment is obtained as an image area found in the key frame as a result of tracking an object in the video content 111. Object tracking is performed by the segment tracking processing unit 71.
Various approaches for object tracking have been proposed. Selection of one of these approaches is not the subject matter of the present invention. By using an appropriate algorithm, an object can be tracked even when it disappears and then reappears in the screen.
The segment extracting unit 23 of the third embodiment is no different from the segment extracting unit 23 of
With this construction, a portion of the video content 111 in which an object appears for the first time includes information relating to the subsequent movement of the object. Thus, for example, the apparatus and method according to the third embodiment can respond quickly to a search key such as “moved from left to right”. Although not shown in
As has been described, by providing the movement information 171 according to the third embodiment, an object that changes its position from frame to frame can be retrieved properly.
A description will now be given of a video content 111 search according to a fourth embodiment using the client 1 of
The parameters prepared by the client 1 may include color information such as “blue” and “red”, brightness information, relative area of the segment, shape information (such as “round” or “rectangular”) of the segment and position information (such as “top” or “bottom right” of the screen).
By using a combination of parameters that specifies “blue” and “80%”, a description requesting a “segment with a representative color of blue occupying 80% of the frame screen” is effected. By specifying a rectangular segment which has a representative color of red and which occupies 20% of the bottom of the screen in the frame, a description indicating the aforementioned red car is effected. A complex search for the video content 111 that includes a “red car” and a “blue sky” can also be made by combining features of a plurality of segments. When the parameter prepared by the client 1 is selected, the result of selection is output as the search key 108 from the search processing unit 9.
The decoded feature 109 output from the feature decoding unit 10 is checked against the search key 108 for a match in the feature identifying unit 11.
The matching processing units 81a-81e are responsible for respective features. For example, the matching processing unit 81a checks the decoded feature 109 to locate the feature “blue”. Likewise, the matching processing unit 81b may check the decoded feature 109 to locate the feature “80%”. In this case, an image with the feature “light blue” or “dark blue” may meet the requirement of the user desiring the image with the feature “blue”. Also, an image with the feature “70%” or “90%” may meet the requirement of the user desiring the image with the feature “80%”. The feature identifying unit 11 not only looks for the perfect match but also considers a feature producing a substantial match with the search key 108 as a candidate feature.
The checking results yielded by the matching processing units 81a-81e are forwarded to the matching determination unit 82, where the degree of matching with respect to the respective features is examined in its entirety. The resultant output from the matching determination unit 82 indicates the degree of matching between the decoded feature 109 and the search key 108 provided as criteria of the search. A threshold value defining a margin for determination of a match may be specified according to a default value standardized in the system. Alternatively, the threshold value may be preset by the user in a manner not shown in the figures.
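The per-feature matchers and the overall determination can be sketched as below; the linear tolerance scoring is an illustrative choice, the text requiring only that near matches count and that an overall degree of match be compared against a threshold:

```python
def overall_match(decoded, search_key, tolerances, threshold):
    """Score each feature of the decoded feature against the search
    key: numeric features earn partial credit within a tolerance
    (so "70%" can partly satisfy a requested "80%"), others require
    an exact match. The scores are averaged into an overall degree
    of matching, compared against the threshold."""
    scores = []
    for name, wanted in search_key.items():
        got = decoded.get(name)
        if isinstance(wanted, (int, float)):
            tol = tolerances.get(name, 0)
            if tol:
                scores.append(max(0.0, 1 - abs(got - wanted) / tol))
            else:
                scores.append(float(got == wanted))
        else:
            scores.append(1.0 if got == wanted else 0.0)
    degree = sum(scores) / len(scores)
    return degree, degree >= threshold
```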
The feature identifying unit 11 transmits the pointer 110 indicating the video content 111 producing the highest degree of match to the server 2. In response, the server 2 outputs the video content 111 to the client 1.
The client 1 displays the video content 111 via the user interface 8. If the video content 111 is the content desired by the user, the search process is terminated. If not, the user selects parameters so that another search key 108 is generated.
The image data delivered to the client 1 may not be the video content 111 itself stored in the video contents storage unit 3. The delivered image may be the reduced image (thumbnail) in the feature stream 103. By using the thumbnail, the data volume of the video content 111 delivered from the server 2 to the client 1 may be reduced. The size of the screen output via the user interface 8 is limited. Simultaneous display of a plurality of candidate images is possible using thumbnail images. With this, the operability of the search process is improved.
If the video contents storage unit 3 stores a limited number of images, thumbnail images in the feature stream 103 stored in the feature stream storage unit 7 may be displayed via the user interface 8 as parameters for initiating the search.
As has been described, according to the fourth embodiment, the client 1, the feature decoding unit 10, the feature identifying unit 11, which are involved in the search, allow the user to automatically and efficiently retrieve the video content 111 that is a candidate for the desired video content 111. The data volume of the feature stream 103 is generally smaller than that of the video content 111 so that the process performed by the feature decoding unit 10 is a process with only limited complexity as compared to the expansion/decoding of the video signal 102.
In accordance with the fourth embodiment, when the feature stream 103 includes thumbnail images, a large number of contents from the video content 111 may be simultaneously displayed for browsing. This helps the user to search for a desired image with an increased efficiency.
In the fourth embodiment, it is assumed that the client 1 performs a search process using the system of
The feature stream 103 may also be transmitted to a remote place over a network. If the receiving end is provided not only with the search processing unit 9 but also with a feature stream generating function of the feature coding unit 5, the receiving end may rewrite the existing feature stream 103 so as to create the new feature stream 103. Given this capability, the receiving end may exercise a control over the video content by changing a rule governing how the video content 111 is displayed. It is of course possible to construct an apparatus in which the functions of the client 1 and the server 2 are provided.
Referring back to
As has been described, the fifth embodiment is adapted to present prioritized search target candidates so that the user can efficiently search for the content that matches his or her search request.
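The prioritized presentation of search-target candidates amounts to sorting candidates by a conformity score between the search key and each decoded feature. A minimal sketch under assumed names (present_candidates, the scalar score function) follows; the patent does not prescribe a particular scoring function.

```python
# Illustrative sketch: rank candidate features by how well they conform
# to the search key, and present the best matches first.

def present_candidates(key, features, score, top_n=5):
    """Return the indices of the top_n candidates, best match first."""
    ranked = sorted(range(len(features)), key=lambda i: score(key, features[i]))
    return ranked[:top_n]

# Toy conformity measure: absolute difference of a scalar feature value
# (smaller is better).
score = lambda k, f: abs(k - f)
print(present_candidates(10, [3, 9, 25, 11, 10], score, top_n=3))  # → [4, 1, 3]
```

Because Python's sort is stable, candidates with equal scores keep their original order, which gives a deterministic candidate list for the user to review.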
A description will now be given of an alternative search criteria input method using the user interface 8 according to a sixth embodiment. The user may input, via the user interface 8, a general outlook of a target image by using, for example, a pointing device such as a mouse to draw a linear figure or color the figure.
As shown in the second candidate segment of
The search processing unit 9 divides the input general outlook into individual segments with reference to color information so as to compute an area filled by the color or determine a position of the segment in the screen. As a result of this process, the color information indicating, for example, “blue” or “red”, the relative area filled by the color, the configuration of the segment filled by the color, and the position of the segment filled by the color are extracted and output as the search key 108.
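The extraction step above can be sketched as follows, assuming the user's rough drawing is represented as a small grid of color labels. The grid representation and the helper name extract_search_key are hypothetical; they stand in for the segmentation performed by the search processing unit 9.

```python
# Minimal sketch: group cells of a color-label grid into segments per color,
# then derive the relative area and position (centroid) of each segment.
from collections import defaultdict

def extract_search_key(grid):
    """Map each color to its relative area and centroid position."""
    h, w = len(grid), len(grid[0])
    cells = defaultdict(list)
    for y, row in enumerate(grid):
        for x, color in enumerate(row):
            cells[color].append((x, y))
    key = {}
    for color, pts in cells.items():
        area = len(pts) / (h * w)                  # relative area filled by the color
        cx = sum(p[0] for p in pts) / len(pts)     # centroid x
        cy = sum(p[1] for p in pts) / len(pts)     # centroid y
        key[color] = {"area": round(area, 2), "position": (cx, cy)}
    return key

grid = [["blue", "blue", "red"],
        ["blue", "blue", "red"]]
print(extract_search_key(grid)["blue"]["area"])   # → 0.67
```

The resulting per-color entries (color, relative area, position) correspond to the components output as the search key 108.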
As has been described, according to the sixth embodiment, by enabling the user to provide an input on an intuitive basis, the video content 111 can be efficiently searched.
When the movement information 171 of the segment as described in the third embodiment is extracted, it is possible to use the movement information 171 as the search key 108. The user is presented via the user interface 8 with selectable parameters in the form of the movement information 171, such as "from left to right", "from top to bottom" and "zoom in". When a time-dependent variation of the image signal is extracted, parameters such as variation in color and variation in brightness may be presented to the user for selection.
The user may also be allowed to input a general outlook of the image twice, instead of only once, and also input the time that elapses between the two images. The search processing unit 9 can extract information relating to the movement of objects and the time-dependent variation of the image signal, by referring to the two input images and the time interval therebetween, so as to generate the search key 108.
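Given the position of the same object in the two input outlines and the elapsed time between them, a velocity can be estimated and mapped to one of the selectable movement parameters. The sketch below is an assumption-laden illustration: movement_key and the centroid inputs are hypothetical names, while the labels mirror parameters such as "from left to right" and "from top to bottom" offered via the user interface 8.

```python
# Illustrative sketch: derive a movement parameter from two object
# positions (centroids) and the elapsed time between the two drawings.

def movement_key(centroid1, centroid2, elapsed):
    """Estimate velocity and classify it into a named movement direction."""
    dx = (centroid2[0] - centroid1[0]) / elapsed   # horizontal speed
    dy = (centroid2[1] - centroid1[1]) / elapsed   # vertical speed (y grows downward)
    if abs(dx) >= abs(dy):
        label = "from left to right" if dx > 0 else "from right to left"
    else:
        label = "from top to bottom" if dy > 0 else "from bottom to top"
    return {"velocity": (dx, dy), "movement": label}

# Object drawn at (10, 50) first, then at (90, 55) two seconds later:
print(movement_key((10, 50), (90, 55), 2.0)["movement"])  # → from left to right
```

The dominant velocity component decides the label, a simple stand-in for however the search processing unit 9 turns the two drawings into movement information.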
As has been described, according to the seventh embodiment, the movement information 171 may be used to search for the video content 111 desired by the user.
The present invention is not limited to the above-described embodiments, and variations and modifications may be made without departing from the scope of the present invention.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US5579471||23 Mar 1994||26 Nov 1996||International Business Machines Corporation||Image query system and method|
|US5619338||28 Oct 1994||8 Apr 1997||Kabushiki Kaisha Toshiba||Reproduction apparatus with a search function|
|US5748789 *||31 Oct 1996||5 May 1998||Microsoft Corporation||Transparent block skipping in object-based video coding systems|
|US5778098 *||22 Mar 1996||7 Jul 1998||Microsoft Corporation||Sprite coding|
|US5787203 *||19 Jan 1996||28 Jul 1998||Microsoft Corporation||Method and system for filtering compressed video images|
|US5802361||30 Sep 1994||1 Sep 1998||Apple Computer, Inc.||Method and system for searching graphic images and videos|
|US5862272||30 Apr 1997||19 Jan 1999||Matsushita Electric Industrial Co., Ltd.||Image detector|
|US5936673 *||26 May 1995||10 Aug 1999||Intel Corporation||Temporal tile staggering for block based video compression|
|US5956026 *||19 Dec 1997||21 Sep 1999||Sharp Laboratories Of America, Inc.||Method for hierarchical summarization and browsing of digital video|
|US5995095 *||21 May 1999||30 Nov 1999||Sharp Laboratories Of America, Inc.||Method for hierarchical summarization and browsing of digital video|
|US6141442||21 Jul 1999||31 Oct 2000||AT&T Corp.||Method and apparatus for coding segmented regions which may be transparent in video sequences for content-based scalability|
|US6246719 *||19 Apr 1999||12 Jun 2001||Intel Corporation||Temporal tile staggering for block based video compression|
|US6611628 *||17 Nov 2000||26 Aug 2003||Mitsubishi Denki Kabushiki Kaisha||Method of image feature coding and method of image search|
|US20010012405||5 Jun 1997||9 Aug 2001||Makoto Hagai||Image coding method, image decoding method, image coding apparatus, image decoding apparatus using the same methods, and recording medium for recording the same methods|
|CN1167399A||30 Apr 1997||10 Dec 1997||Matsushita Electric Industrial Co., Ltd.||Image taking device|
|CN1171018A||6 Jun 1997||21 Jan 1998||Matsushita Electric Industrial Co., Ltd.||Image coding and decoding method, coding and decoding device and recording medium for recording said method|
|JPH10320400A||Title not available|
|1||A. Aydin Alatan et al., "A Rule-based Method of Object Segmentation in Video Sequences," Center for Image Processing Research, ECSE Department, Rensselaer Polytechnic Institute, Troy, NY 12180-3590, 1997.|
|2||A. Müfit Ferman, A. Günsel, and A. Murat Tekalp, "Motion and Shape Signatures for Object-Based Indexing of MPEG-4 Compressed Video," IEEE ICASSP-97, vol. 4, pp. 2601-2604 (Apr. 21-24, 1997).|
|3||Alatan et al., IEEE, vol. 8, No. 7, pp. 802-813 (1998).|
|4||Bilge Günsel, A. Murat Tekalp, Peter J.L. van Beek, "Content-based access to video objects: Temporal segmentation, visual summarization, and feature extraction," Signal Processing, vol. 66, issue 2, Apr. 30, 1998, pp. 261-280.|
|5||D. Zhong and S.-F. Chang, "Video Object Model and Segmentation for Content-Based Video Indexing," 1997 IEEE International Symposium on Circuits and Systems, Jun. 9-12, 1997, Hong Kong.|
|6||Haskell, Barry G., et al., "Image and Video Coding - Emerging Standards and Beyond," IEEE Transactions on Circuits and Systems for Video Technology, vol. 8, No. 7, Nov. 1998.|
|7||Miki, Application of MPEG-7, pp. 283-290 (1998).|
|8||Myron Flickner, et al.; Query by Image and Video Content: The QBIC System; Computer vol. 28, Issue 9; Sep. 1995; pp. 23-32, San Jose, CA.|
|9||Nagasaka et al., Denshi Jouhou Tsushin Gakkai Rombunshi D-II, vol. J79, No. 4, pp. 531-537 (1996).|
|10||Ono et al., Denshi Jouhou Tsushin Gakkai Rombunshi D-II, vol. J79, No. 4, pp. 476-483 (1996).|
|11||Rui, et al., "Digital Image/Video Library and MPEG-7: Standardization and Research Issues," Dept. of ECE & Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL 61801, 1998.|
|12||Tanaka et al., Nikkei Electronics, No. 77, pp. 149-154 (1998).|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US8112376||7 Feb 2012||Cortica Ltd.||Signature based system and methods for generation of personalized multimedia channels|
|US8266185||21 Oct 2009||11 Sep 2012||Cortica Ltd.||System and methods thereof for generation of searchable structures respective of multimedia data content|
|US8312031||10 Aug 2009||13 Nov 2012||Cortica Ltd.||System and method for generation of complex signatures for multimedia data content|
|US8326775||4 Dec 2012||Cortica Ltd.||Signature generation for multimedia deep-content-classification by a large-scale matching system and method thereof|
|US8386400||22 Jul 2009||26 Feb 2013||Cortica Ltd.||Unsupervised clustering of multimedia data using a large-scale matching system|
|US8799195||31 Dec 2012||5 Aug 2014||Cortica, Ltd.||Method for unsupervised clustering of multimedia data using a large-scale matching system|
|US8799196||31 Dec 2012||5 Aug 2014||Cortica, Ltd.||Method for reducing an amount of storage required for maintaining large-scale collection of multimedia data elements by unsupervised clustering of multimedia data elements|
|US8818916||23 Jun 2010||26 Aug 2014||Cortica, Ltd.||System and method for linking multimedia data elements to web pages|
|US8868619||4 Sep 2012||21 Oct 2014||Cortica, Ltd.||System and methods thereof for generation of searchable structures respective of multimedia data content|
|US8959037||5 Jan 2012||17 Feb 2015||Cortica, Ltd.||Signature based system and methods for generation of personalized multimedia channels|
|US8990125||20 Nov 2012||24 Mar 2015||Cortica, Ltd.||Signature generation for multimedia deep-content-classification by a large-scale matching system and method thereof|
|US9009086||18 Jul 2014||14 Apr 2015||Cortica, Ltd.||Method for unsupervised clustering of multimedia data using a large-scale matching system|
|US9031999||13 Feb 2013||12 May 2015||Cortica, Ltd.||System and methods for generation of a concept based database|
|US9087049||21 Feb 2013||21 Jul 2015||Cortica, Ltd.||System and method for context translation of natural language|
|US9104747||18 Jul 2014||11 Aug 2015||Cortica, Ltd.||System and method for signature-based unsupervised clustering of data elements|
|US9191626||21 Sep 2012||17 Nov 2015||Cortica, Ltd.||System and methods thereof for visual analysis of an image on a web-page and matching an advertisement thereto|
|US20090043818 *||21 Aug 2008||12 Feb 2009||Cortica, Ltd.||Signature generation for multimedia deep-content-classification by a large-scale matching system and method thereof|
|US20090112864 *||5 Jan 2009||30 Apr 2009||Cortica, Ltd.||Methods for Identifying Relevant Metadata for Multimedia Data of a Large-Scale Matching System|
|US20090282218 *||22 Jul 2009||12 Nov 2009||Cortica, Ltd.||Unsupervised Clustering of Multimedia Data Using a Large-Scale Matching System|
|US20090313305 *||17 Dec 2009||Cortica, Ltd.||System and Method for Generation of Complex Signatures for Multimedia Data Content|
|US20100262609 *||14 Oct 2010||Cortica, Ltd.||System and method for linking multimedia data elements to web pages|
|U.S. Classification||382/305, 707/E17.028|
|International Classification||H04N19/21, H04N19/90, H04N19/00, H04N19/51, H04N19/513, H04N19/20, G06K9/00, G06T7/20, H04N7/24, G11B27/28, G11B27/10, H04N5/45, G06K9/54, G06F17/30|
|Cooperative Classification||H04N19/20, H04N19/46, G11B2220/41, G11B27/107, G06F17/3025, G06F17/30811, G06F17/30256, G06K9/00711, G11B2220/90, G06F17/30805, G06F17/30802, G06F17/3079, H04N5/45, G11B27/28, G06T7/20, H04N7/24, G06F17/30817, G06T9/001|
|European Classification||G06F17/30V2, G06F17/30V1V1, G06F17/30V1V2, H04N7/26A10S, H04N7/26J, H04N7/24, G11B27/10A2, G11B27/28, G06T7/20, H04N7/26J2, G06K9/00V3, G06F17/30M1C, G06F17/30M1H, G06F17/30V1R, G06F17/30V1V4|
|8 Jul 2008||CC||Certificate of correction|
|29 Jul 2008||CC||Certificate of correction|
|27 Apr 2011||FPAY||Fee payment|
Year of fee payment: 4
|20 May 2015||FPAY||Fee payment|
Year of fee payment: 8