US6005643A - Data hiding and extraction methods - Google Patents

Data hiding and extraction methods

Info

Publication number
US6005643A
US6005643A
Authority
US
United States
Prior art keywords
embedding
prediction
data
embedded
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US08/922,701
Inventor
Norishige Morimoto
Junji Maeda
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
RPX Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION reassignment INTERNATIONAL BUSINESS MACHINES CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MAEDA, JUNJI, MORIMOTO, NORISHIGE
Application granted granted Critical
Publication of US6005643A publication Critical patent/US6005643A/en
Assigned to RPX CORPORATION reassignment RPX CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: INTERNATIONAL BUSINESS MACHINES CORPORATION
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/08Systems for the simultaneous or sequential transmission of more than one television signal, e.g. additional information signals, the signals occupying wholly or partially the same frequency band, e.g. by time division
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0021Image watermarking
    • G06T1/0085Time domain based watermarking, e.g. watermarks spread over several images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process
    • H04N19/467Embedding additional information in the video signal during the compression process characterised by the embedded information being invisible, e.g. watermarking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/238Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N21/2389Multiplex stream processing, e.g. multiplex stream encrypting
    • H04N21/23892Multiplex stream processing, e.g. multiplex stream encrypting involving embedding information at multiplex stream level, e.g. embedding a watermark at packet level
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/835Generation of protective data, e.g. certificates
    • H04N21/8358Generation of protective data, e.g. certificates involving watermark

Definitions

  • the present invention relates to a data hiding method for hiding message data into media data and a data extracting method for extracting hidden data.
  • FIG. 1 shows a half-tone image obtained when digital data is displayed on a display.
  • FIG. 1(a) is a digital image showing, for example, a nurse, a river, kindergarten pupils, and birds.
  • Media data is obtained by segmenting an image (obtained, for example, from a photograph) into very fine parts and numerically expressing brightness and a hue for each part.
  • the original numerical value of the image is slightly changed intentionally. If there is a small change in the numerical value, there will be almost no disturbance in the image and humans will not sense the disturbance. If this nature is skillfully utilized, entirely different information (message data) can be hidden in original video.
  • This message data may be any information, for example, lattice patterns, rulers, or signatures of video creators.
  • the message data hidden in media data can be extracted by processing it with a special program. Therefore, based on the extracted message data, it can be checked whether the media data has been altered.
  • MPEG (Moving Picture Experts Group) video data
  • a method of hiding additional information into a user data field has generally been employed. In such a method, however, the field can be easily separated from the media data, so there is the problem that the detection and removal of additional hidden information are easy.
  • the present invention is related to a data hiding method which embeds information into a motion image constituted by a plurality of frames.
  • the data hiding method comprises the steps of: specifying at least one embedding region in the frame for embedding information; and determining a type of interframe prediction of the embedding region in correspondence with information to be embedded by referring to an embedding rule where a content of data to be embedded is caused to correspond to the type of interframe prediction of the embedding region. It is desirable that the frame in which the embedding region exists is a bidirectionally predictive-coded frame.
  • the type of interframe prediction is selected from forward prediction, backward prediction, bidirectional prediction, and intraframe coding. It is also desirable that the embedding rule causes one of bit values to correspond to the bidirectional prediction and the other bit value to correspond to the forward prediction or the backward prediction. Furthermore, the embedding rule may cause data embedding inhibition to correspond to the intraframe coding.
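As a sketch, the embedding rule described above can be expressed as a lookup from bit value to admissible prediction types. The concrete bit assignment below is illustrative; the patent leaves the exact correspondence open.

```python
# Illustrative embedding rule: which interframe prediction types may encode
# each bit value. Intraframe coding is deliberately absent from the table --
# under the rule it represents "embedding inhibited" and carries no data bit.
EMBEDDING_RULE = {
    1: {"bidirectional"},
    0: {"forward", "backward"},
}

def admissible_types(bit):
    """Return the set of prediction types that may represent `bit`."""
    return EMBEDDING_RULE[bit]
```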
  • a threshold value is provided as a standard of judgment for picture quality, and under the embedding rule it is effective not to embed data at a position where the prediction error exceeds the threshold value.
  • when the number of references of forward prediction or the number of references of backward prediction in the bidirectionally predictive-coded frame is less than a predetermined number, embedding of data into the embedding region of that frame may be inhibited. For example, if a scene change takes place, the number of references of forward prediction or of backward prediction in the frame related to the change will be considerably reduced. In such a case, if the embedding rule is applied and the prediction type is forcibly determined, picture quality may be considerably degraded. Therefore, if the number of references of prediction in a frame is counted and found to be less than a predetermined threshold value, it is desirable that data not be embedded into that frame.
  • another invention provides a data hiding method which embeds information into a motion image constituted by a plurality of frames.
  • the data hiding method comprises the steps of: counting a number of references of forward prediction or a number of references of backward prediction in a frame having an embedding region for embedding information; determining characteristics of the respective embedding regions in correspondence with information to be embedded by referring to an embedding rule where a content of data to be embedded is caused to correspond to a characteristic of the embedding region, when the number of references is greater than a predetermined number; and inhibiting embedding of data to the embedding region of the frame when the number of references is less than the predetermined number.
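A minimal sketch of this inhibition test, assuming the per-macroblock prediction types of a B-frame are already known. The threshold `min_refs` is a hypothetical parameter, not a value taken from the patent.

```python
def allow_embedding(pred_types, min_refs=4):
    """Permit embedding only if the B-frame contains enough forward and
    backward references; a shortage suggests a scene change."""
    n_forward = sum(1 for t in pred_types if t == "forward")
    n_backward = sum(1 for t in pred_types if t == "backward")
    return n_forward >= min_refs and n_backward >= min_refs
```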
  • the present invention further relates to a data hiding method which embeds information with redundancy into an image.
  • a plurality of embedding regions are specified in the image for embedding the same information.
  • the same data is embedded in respective embedding regions in correspondence with information to be embedded by referring to an embedding rule. For example, consider the case where a data bit of 1 is embedded in three embedding regions.
  • an embedding rule which prescribes the value of a data bit and the characteristic (e.g., type of prediction) of an embedding region, three embedding regions are determined so that they have the same characteristic corresponding to the value of the data bit of 1.
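The redundant-embedding step can be sketched as assigning the same rule-determined characteristic to every region. The region names and the bit-to-type mapping below are illustrative.

```python
EMBEDDING_RULE = {1: "bidirectional", 0: "forward"}  # illustrative mapping

def embed_redundant(bit, regions):
    """Give every embedding region the characteristic corresponding to `bit`."""
    return {region: EMBEDDING_RULE[bit] for region in regions}
```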
  • the present invention still further relates to a data extraction method which extracts information embedded into an encoded motion image. At least one embedding region, is specified in a frame. Data, embedded in the specified embedding region, is extracted by referring to an extraction rule where the type of interframe prediction is caused to correspond to a content of data to be extracted.
  • the frame in which the embedding region exists is a bidirectionally predictive-coded frame. Also, it is desirable that the type of the interframe prediction be selected from forward prediction, backward prediction, bidirectional prediction, and intraframe coding.
  • the extraction rule may cause one of bit values to correspond to the bidirectional prediction and the other bit value to correspond to the forward prediction or the backward prediction. Furthermore, the extraction rule may cause the intraframe coding to correspond to data embedding inhibition.
  • the present invention relates to a data extraction method which extracts information with redundancy embedded into an image.
  • the data extraction method comprises the steps of specifying in the image a plurality of embedding regions embedded with certain data (e.g., a data bit of 1) and extracting the embedded data (aforementioned data bit of 1), based on characteristics of the respective embedding regions, by referring to an extraction rule where a characteristic of the embedding region is caused to correspond to a data bit to be extracted.
  • the number of the embedding regions may be compared for each of the extracted different data bits, and the data bit with a greater number may be specified as embedded information (so-called decision by majority).
  • suppose the data bits extracted from three embedding regions A, B, and C, in which the same data bit (a data bit of 1) should have been embedded, are a bit value of 1, a bit value of 1, and a bit value of 0.
  • the number of embedding regions which yielded a bit value of 1 is 2, and the number which yielded a bit value of 0 is 1. Since the bit value recovered from the greater number of regions can be regarded as more reliable, it is recognized that a data bit of 1 has been embedded. If a data bit of 1 has been embedded, the data bit value extracted from all three embedding regions should be 1; however, for some reason, there are cases where embedded information is changed. Extraction employing a statistical method may also be considered. Hence, a method of extracting information embedded with redundancy into an image, such as the fourth invention, is effective.
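The decision by majority described above amounts to picking the bit value recovered from the greater number of regions; a sketch:

```python
from collections import Counter

def decide_by_majority(extracted_bits):
    """Return the bit value extracted from the greatest number of regions."""
    return Counter(extracted_bits).most_common(1)[0][0]
```

For the three-region example, `decide_by_majority([1, 1, 0])` recovers the embedded bit 1.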
  • the present invention relates to a motion image coding system for embedding information into video data which is constituted by a plurality of frames and which employs interframe prediction.
  • the system comprises an error calculator which calculates a first prediction error, based on both an embedding region specified in a first frame for embedding information and a reference region in a second frame which is referred to by the embedding region, by employing forward prediction, which also calculates a second prediction error, based on both the embedding region and a reference region in a third frame which is referred to by the embedding region, by employing backward prediction, and which furthermore calculates a third prediction error, based on the embedding region and reference regions in the second and third frames which are referred to by the embedding region, by employing bidirectional prediction.
  • the system further comprises a decider.
  • the decider decides a type of interframe prediction in the embedding region in correspondence with a content of information to be embedded by referring to an embedding rule which prescribes that when one data bit is embedded in the embedding region, the type of interframe prediction in the embedding region employs either the forward prediction or the backward prediction and which also prescribes that when another data bit is embedded in the embedding region, the type of the interframe prediction employs the bidirectional prediction.
  • the decider also specifies any one of the first, the second, or the third prediction error in correspondence with the decided type of interframe prediction.
  • the aforementioned decider may include first inhibition means which inhibits embedding of data to a certain embedding region when a prediction error in the type of interframe prediction of the certain embedding region, determined based on the embedding rule, exceeds a predetermined threshold value. Also, the decider may include second inhibition means which inhibits embedding of data to the embedding region of the bidirectionally predictive-coded frame when a number of references of the forward prediction or a number of references of the backward prediction in the bidirectionally predictive-coded frame is less than a predetermined number.
  • the present invention is also directed to a motion image decoding system for extracting information embedded into a coded motion image.
  • This system comprises a specifier which specifies at least one region in which information is embedded.
  • the system comprises an extractor which extracts the embedded information from a type of the interframe prediction in the embedding region by referring to an extraction rule where the type of the interframe prediction in the embedding region is caused to correspond to a content of information to be embedded.
  • the present invention is still further directed to a program storage medium for executing a data hiding process, which embeds information into a motion image constituted by a plurality of frames, by a computer.
  • the program storage medium has the steps of: specifying in a frame at least one embedding region into which information is embedded; and deciding a type of the interframe prediction in the embedding region in correspondence with information to be embedded by referring to an embedding rule where a content of information to be embedded is caused to correspond to the type of interframe prediction in the embedding region.
  • the present invention also relates to a program storage medium for executing a data extracting process, which extracts information embedded into an encoded motion image, by a computer.
  • the program storage medium has the steps of: specifying in a frame at least one embedding region in which information is embedded and extracting the embedded information in correspondence with a type of the interframe prediction in the embedding region by referring to an extraction rule where the type of interframe prediction in the embedding region is caused to correspond to a content of data to be extracted.
  • FIG. 1 is a half-tone image obtained when digital data is displayed on a display
  • FIG. 2 is a diagram showing an example of arrangement of type of interframe prediction
  • FIG. 3 is a diagram showing an example of macroblocks in a B-frame
  • FIG. 4 is a diagram for explaining the relationship between the prediction type and the prediction error of a macroblock
  • FIG. 5 is a diagram for explaining reference images in the case when scene change takes place
  • FIG. 6 is a block diagram of a motion image coding system
  • FIG. 7 is a block diagram of a motion image decoding system.
  • the MPEG employs the forward prediction based on a reference frame in the past, the backward prediction based on a reference frame in the future, and the bidirectional prediction based on reference frames both in the past and the future.
  • FIG. 2 shows a sequence of frames. As shown in the figure, the sequence of frames contain three types of frames, an I-frame, P-frames, and B-frames, in order to realize bidirectional prediction.
  • the I-frame is an intracoded frame, and all macroblocks within this frame are compressed by intraframe coding (without interframe prediction).
  • the P-frame is a (forward) predictive-coded frame, and all macroblocks within this frame are compressed by intraframe coding or forward predictive coding.
  • the B-frame is a bidirectionally predicted, interpolative-coded frame.
  • the macroblocks within the B-frame can be basically encoded by employing forward prediction, backward prediction, bidirectional prediction or intraframe coding.
  • the I-frame and P-frames are encoded in the same order as the original motion image.
  • the B-frames are inserted between the I-frame and the P-frames, and after the I- and P-frames are processed the B-frames are encoded.
  • the information (message data) embedding region is the macroblocks of the B-frame, and 1 bit of information can be embedded with respect to 1 macroblock. Therefore, when message data is constituted by a number of bits, there is the need to perform the embedding process with respect to the macroblocks corresponding in number to the bits.
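Since one macroblock carries one bit, a multi-byte message must first be flattened into a bit sequence, one bit per macroblock. A sketch (the MSB-first ordering within each byte is an assumption):

```python
def message_to_bits(message: bytes):
    """Flatten message bytes into a bit list; one macroblock is needed per bit."""
    return [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
```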
  • FIG. 3 is a diagram showing the arrangement of macroblocks in a B-frame.
  • the macroblock is the unit of prediction.
  • the macroblock is the 16×16-pixel unit of motion compensation, which compresses video data by reducing its temporal redundancy.
  • the macroblocks within the B-frame can be classified into the following four groups as prediction types.
  • the intracoded macroblock is a macroblock that is coded only by the information in the macroblock itself without performing interframe prediction.
  • the forward predicted macroblock is a macroblock that is forwardly predicted and encoded by referring to either the intracoded frame (I-frame) in the past or the forward predictive-coded frame (P-frame) in the past.
  • I-frame: intracoded frame; P-frame: forward predictive-coded frame; ΔP: prediction error of the forward prediction
  • the prediction error ΔP is expressed as the brightness difference or the color difference obtained for 16×16 pixels. Note that how a similar square region is selected depends on the encoder.
  • the backward predicted macroblock is a macroblock that is backwardly predicted and encoded by referring to either the intracoded frame (I-frame) in the future or the forward predictive-coded frame (P-frame) in the future.
  • a region, which is most similar in the future reference frame, is retrieved, and this macroblock has a prediction error ( ⁇ N) which is the difference between it and the retrieved region and also has information about a spatial relative position (a motion vector).
  • the bidirectionally predicted macroblock is a macroblock that is bidirectionally predicted and encoded by referring to the past reference frame and the future reference frame. Both a region most similar in the past reference frame and a region most similar in the future reference frame are retrieved, and this macroblock has a prediction error (( ⁇ N+ ⁇ P)/2) which is the difference between it and the average (per pixel) of these two regions and also has information about a spatial relative position (two motion vectors) between them.
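The three prediction errors can be sketched as sums of absolute differences over a block, with the bidirectional error measured against the per-pixel average of the two reference regions, as described above. The flat pixel lists and the SAD measure are simplifying assumptions; real encoders work on 16×16 blocks and may use other error measures.

```python
def sad(block, ref):
    """Sum of absolute differences between two equal-length pixel lists."""
    return sum(abs(a - b) for a, b in zip(block, ref))

def prediction_errors(block, past_ref, future_ref):
    """Return (forward, backward, bidirectional) prediction errors."""
    err_forward = sad(block, past_ref)        # difference vs. past region
    err_backward = sad(block, future_ref)     # difference vs. future region
    average = [(p + f) / 2 for p, f in zip(past_ref, future_ref)]
    err_bidirectional = sad(block, average)   # difference vs. per-pixel average
    return err_forward, err_backward, err_bidirectional
```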
  • At least one macroblock to which the embedding process is applied must first be specified in a B-frame.
  • This may be defined, for example, as the respective macroblocks (embedding regions) which exist between the first line and the third line of the B-frame, or it may be defined as all the macroblocks in a certain frame.
  • Instead of the macroblock positions being previously defined as a fixed format in this way, they can also be determined by employing an algorithm which generates a position sequence. Note that the algorithm for generating a position sequence can employ the algorithm disclosed, for example, in Japanese Patent Application No. 8-159330.
  • This embedding rule is one where bit information is caused to correspond to the prediction type of macroblock. For example, there is the following rule.
  • if the first data bit is a 1, the prediction type of the leftmost macroblock is determined to be bidirectional prediction (B) in accordance with the aforementioned embedding rule.
  • the prediction error in this case becomes a prediction error which is the difference relative to the average of a region which is most similar in the past reference frame and a region which is most similar in the future reference frame.
  • the prediction type of the second macroblock is either forward prediction (P) or backward prediction (N) in accordance with the embedding rule.
  • the prediction error in the forward prediction and the prediction error in the backward prediction are compared to select the type where the prediction error is smaller.
  • the forward prediction (P) is selected for the second macroblock.
  • the prediction type of the third macroblock becomes bidirectional prediction (B), and the prediction type of the fourth macroblock is determined to be backward prediction (N) because the prediction error in the backward prediction is smaller.
  • the interframe prediction types of the first to the fourth embedding regions are BPBN, and 4 data bits 1010 (4 bits of message data) are embedded in these regions. If an attempt is made to embed data bits into a certain embedding region, there will be cases where image quality is considerably degraded. In such cases, the embedding of data bits into the embedding region is not performed, and the prediction type of the embedding region is an intracoded macroblock which represents "embedding inhibition.”
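The worked example above (bits 1010 becoming prediction types BPBN) can be sketched end to end. The per-macroblock forward/backward error values in the test are hypothetical, chosen only to reproduce the BPBN outcome.

```python
def decide_types(bits, errors):
    """Map data bits to prediction types: 1 -> bidirectional (B);
    0 -> forward (P) or backward (N), whichever error is smaller."""
    out = []
    for bit, (err_fwd, err_bwd) in zip(bits, errors):
        if bit == 1:
            out.append("B")
        else:
            out.append("P" if err_fwd <= err_bwd else "N")
    return "".join(out)
```

With hypothetical errors `[(9, 9), (2, 5), (9, 9), (7, 4)]`, `decide_types([1, 0, 1, 0], ...)` yields "BPBN".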
  • information for specifying a macroblock in which the message data has been embedded must first be given.
  • the specifying information may be given by an outside unit. Also, it is possible to previously embed the specifying information in data itself.
  • once the embedding positions are known, the message data can be extracted.
  • the technique disclosed in the aforementioned Japanese Patent Application No. 8-159330, for example can be employed.
  • This extraction rule is a rule where the prediction type of macroblock is caused to correspond to bit information, and this extraction rule has to be given as information when extraction is performed.
  • As this extraction rule, there is the following rule. Note that the correspondence between the prediction type and bit information in this extraction rule is the same as that of the aforementioned embedding rule. Also, in the case where the prediction type is an intracoded macroblock, it is judged that no data bit has been embedded in the embedding region.
  • since the prediction type of the rightmost macroblock is an intracoded macroblock, it will be judged, according to the aforementioned extraction rule, that a data bit has not been embedded in this macroblock. As a consequence, the message data bits become 101.
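The extraction side is the inverse lookup; in the sketch below, an intracoded macroblock ("I") carries no bit and is skipped, so the type sequence B, P, B, I yields the bits 101. The single-letter type codes follow the example above.

```python
EXTRACTION_RULE = {"B": 1, "P": 0, "N": 0}  # "I" (intracoded): no bit embedded

def extract_message(pred_types):
    """Recover data bits from macroblock prediction types, skipping intracoded ones."""
    return [EXTRACTION_RULE[t] for t in pred_types if t in EXTRACTION_RULE]
```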
  • An encoder can freely select the prediction type of macroblock in the range allowed for each frame. Generally, the prediction type of macroblock where the prediction error is smallest is selected. However, the feature of this embodiment is that the prediction type of macroblock is selected according to the aforementioned embedding rule. Since the relationship between the prediction type and bit information in the extraction rule is identical with that prescribed by the embedding rule, embedded data can be accurately extracted by referring to the extraction rule.
  • if the prediction type is determined according to the embedding rule, there is the possibility that a prediction type will be selected whose prediction error is so large that quality degradation in the image can be visually recognized.
  • as the prediction error, the sum of absolute values or the sum of squares of per-pixel prediction errors is employed in many cases, but MPEG does not prescribe what standard is used, so an encoder is free to use any standard for prediction errors.
  • a certain threshold value is previously set to a prediction error.
  • the prediction type of macroblock is made an intracoded macroblock in accordance with the aforementioned embedding rule. This point will be described in further detail in reference to FIG. 4.
  • FIG. 4 is a diagram for explaining the relationship between the prediction type and the prediction error of a macroblock.
  • the axis of ordinate represents the prediction error, and a larger prediction error indicates a larger degradation of picture quality.
  • a threshold value has been set as the degree of an allowable prediction error, that is, a standard of judgment where no perceptible degradation of picture quality occurs.
  • 3 short horizontal bars (i), (ii), and (iii), linked by a single vertical bar, indicate the 3 prediction errors of the forward prediction, backward prediction, and bidirectional prediction of a macroblock. From the relationship between the threshold value and the 3 prediction errors, a macroblock can be classified into 4 types (a), (b), (c), and (d).
  • the 4 types are the case (type (a)) where 3 prediction errors are all less than a threshold value, the case (type (b)) where any one of prediction errors exceeds a threshold value, the case (type (c)) where 2 prediction errors exceed a threshold value, and the case (type (d)) where all prediction errors exceed a threshold value.
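Classifying a macroblock into types (a) through (d) reduces to counting how many of its three prediction errors exceed the threshold; a sketch:

```python
def classify(errors, threshold):
    """errors: (forward, backward, bidirectional) prediction errors.
    Type (a): none exceed the threshold ... type (d): all three exceed it."""
    n_over = sum(1 for e in errors if e > threshold)
    return "abcd"[n_over]
```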
  • in the case of type (a), both a bit value of 1 and a bit value of 0 can be embedded into the block without exceeding the threshold value, and in the case of type (b) at least one of the two bit values still can. Therefore, when the aforementioned embedding rule is employed, data bits can be embedded into the macroblocks of the types (a) and (b) without substantially causing degradation of picture quality.
  • the prediction type of the macroblock into which the embedding of data is inhibited is intraframe coding in accordance with the aforementioned embedding rule.
  • the intracoded macroblocks of type (c) and (d) become invalid bits which cannot be used as a data embedding region.
  • the actual rate of occurrence is low, so such invalid bits can be compensated by error correction coding, in the case where information to be embedded is allowed to have redundancy.
  • the type of macroblock and a data bit to be embedded are correlated and decided in encoding a motion image. Therefore, message data can be embedded into a motion image without substantially having an influence on the compression efficiency of the motion image and also without substantially causing degradation of picture quality. In addition, it is very difficult to remove message data embedded in this way from a motion image. Furthermore, since the quantity of information to be embedded is almost independent of the content of an image, it is possible to efficiently embed message data.
  • FIG. 5 is a diagram for explaining reference images in the case when scene change takes place.
  • FIG. 5(a) shows the case when there is no scene change
  • FIG. 5(b) shows the case when scene change takes place between frame 2 and frame 3.
  • two opposite end frames are I- or P-frames and two center frames are B-frames.
  • an arrow shown in the figures indicates a reference relation between frames.
  • the number of forwardly predicted macroblocks and the number of backwardly predicted macroblocks are monitored, and when these numbers are less than a certain threshold value, it is judged that scene change has taken place. In such a case, it is desirable not to embed data into such frames (i.e., according to the embedding rule, it is desirable that prediction type be made an intracoded macroblock.)
  • Occlusion means that, by moving a certain object, something hidden behind the object suddenly appears or conversely is hidden.
  • the macroblocks related to the occlusion within the entire frame are type (c) shown in FIG. 4.
  • the prediction error of the prediction type determined according to the embedding rule is less than a threshold value, there will be no problem, but in the case other than that, perceptible degradation of picture quality will occur.
  • the degradation can be avoided by employing an error correcting code. That is, 1 bit of information is not expressed with a single macroblock, but rather, redundancy is given to information, and information equivalent to 1 bit is expressed with a plurality of macroblocks.
  • a single embedding region is constituted by a set of macroblocks.
  • a plurality of embedding regions (e.g., 100 regions) are prepared, and 1 bit of information may be expressed with a plurality of regions.
  • the redundancy in the present invention means that 1 bit of information is not caused to correspond to a single region to be processed with a relation of one-to-one, but rather it is caused to correspond to a plurality of regions.
  • the present invention is not limited to MPEG. It is taken as a matter of course that the present invention is applicable to some other image compression methods using an interframe prediction coding technique. In that sense, the embedding region in the present invention is not limited to macroblocks.
  • the aforementioned embedding rule and extraction rule are merely examples; the present invention is not limited to these rules and can employ various other rules.
  • the aforementioned embodiment has been described with reference to the B-frame, that is, a bidirectionally predicted frame, it is also possible to embed data into the aforementioned P-frame. Since the macroblocks constituting the P-frame are forwardly predicted macroblocks and an intracoded macroblock, bit values can be caused to correspond to these macroblocks. However, from the viewpoint of suppressing degradation of picture quality and an increase in a quantity of data, as previously described, it is desirable to embed data into the B-frame rather than into the P-frame. The reason is that if a macroblock which is an intracoded macroblock is forcibly made a forwardly predicted macroblock by the embedding rule, the picture quality will be degraded and, in the opposite case, the data quantity will be increased.
  • FIG. 6 is a block diagram of a motion image coding system employing the present invention.
  • Memory 61 stores motion image data consisting of a plurality of frames.
  • Frame memory 62 stores a past reference frame and frame memory 63 stores a future reference frame in display order.
  • a region specifier 64 specifies a position at which data is embedded as additional information. That is, at least one region is specified in a frame.
  • An error calculator 65 calculates a forward prediction error, a backward prediction error, and a bidirectional prediction error, based on the data stored in the frame memories 62 and 63.
  • the forward prediction error is calculated, from both an embedding region and a reference region in the past reference frame which is referred to by the embedding region, by employing forward prediction.
  • the backward prediction error is calculated, from both an embedding region and a reference region in the future reference frame which is referred to by the embedding region, by employing backward prediction.
  • the bidirectional prediction error is calculated, from an embedding region and reference regions in both the past and future reference frames which are referred to by the embedding region, by employing bidirectional prediction.
  • a decider 66 embeds data to be embedded into an embedding region by controlling the characteristic of the embedding region by referring to an embedding rule. Specifically, the embedding rule prescribes that when one data bit is embedded in an embedding region, the prediction type in the embedding region employs either forward prediction or backward prediction, and that when the other data bit is embedded, the prediction type employs bidirectional prediction.
  • the decider 66 decides the type of interframe prediction in an embedding region in correspondence with the content of information to be embedded, also specifies a reference region which is referred to by an embedding region in correspondence with the decided type of interframe prediction, and furthermore specifies either one of the first, the second, or the third prediction error. Thereafter, an encoder 67 encodes the signal which was output from the decider 66.
  • the decider 66 is designed so that, for a certain embedding region, when the prediction error in the type of interframe prediction decided based on the embedding rule exceeds a predetermined threshold value, the embedding of data to that embedding region is inhibited. With this, picture quality is prevented from being degraded by embedding.
  • the decider 66 is also designed so that when the number of references of forward prediction or the number of references of backward prediction in a bidirectionally predicted frame is less than a predetermined number, the embedding of data to an embedding region in that frame is inhibited. By counting the number of references, scene change can be detected. Thus, when scene change takes place, the embedding of data to the frame related to the change is inhibited. As a consequence, picture quality degradation can be prevented.
  • FIG. 7 is a block diagram of a motion image decoding system employing the present invention.
  • Memory 71 stores encoded motion image data into which additional information is embedded.
  • a region specifier 72 specifies at least one embedding region into which additional information is embedded in a frame.
  • Extracting means 73 extracts additional information, embedded in an embedding region, from the type of interframe prediction in the embedding region by referring to the extraction rule. Then, a decoder 74 decodes the encoded data which was output from the extracting means 73, thereby reconstructing a motion image.
  • a fingerprint can be embedded.
  • the fingerprint is specific information different for each owner.
  • a typical utilization example of the fingerprint is the case where, when motion image data is issued to a third party, the issuer previously embeds a mark in the motion image data so that the third party which is the receiving station can be specified. If done in this way, the source of a copy can be specified when an illegal act, such as illegal copying, is performed. Therefore, if the videodata is illegally circulated, a fee could be charged for any illegal copy.
  • a cryptographed video product is given legal owner's registration information, and a fingerprint can be embedded in correspondence with the registration information.
  • in the case where a fingerprint is embedded, basically both the "bidirectionally predicted macroblock" and the "forwardly predicted macroblock or backwardly predicted macroblock where the prediction error is smaller," which are generated when MPEG encoding is performed, are held. Then, a suitable macroblock is selected in correspondence with the third party which is the receiving station. Even if done in this way, there is no influence on other frames or on a data layer (e.g., a slice layer) which is higher than the macroblock layer of the corresponding frame. According to the present invention, message data can be embedded into a motion image without substantially influencing the compression efficiency of the motion image and without substantially degrading picture quality.

Abstract

A method for embedding additional information into a video movie without substantially having an influence on the compression efficiency of the video movie and also without substantially causing degradation of the picture quality. Particularly, the method of the present invention involves specifying at least one embedding region in the frame of the video movie for embedding information, and determining a type of interframe prediction of the embedding region in correspondence with information to be embedded by referring to an embedding rule where a content of data to be embedded is caused to correspond to the type of interframe prediction of the embedding region. It is desirable that the frame in which the embedding region exists is a bidirectionally predictive-coded frame.

Description

BACKGROUND OF THE INVENTION
1. Technical Field
The present invention relates to a data hiding method for hiding message data into media data and a data extracting method for extracting hidden data.
2. Prior Art
With the development of the multimedia society, large quantities of digital video and audio information have been circulated on internet systems or as CD-ROM software. Because anybody can easily create a perfect copy of digital video and audio information without degradation, illegal use and copyright protection are becoming problematic. In order to prevent a third party from illegally copying media data such as video and audio data, the technique of hiding additional information, such as the signature of a creator (author), in the original media data is attracting attention. When digital video data or similar data is illegally copied, it can be determined whether the copy is an illegal one by confirming the signature hidden in the copy and specifying the source. A hiding technique such as this is called data hiding. FIG. 1 shows a half-tone image obtained when digital data is displayed on a display. In the media data of FIG. 1(a), which is a digital image, messages such as a nurse, a river, kindergarten pupils, and birds have been hidden as shown in FIG. 1(b). Media data is obtained by segmenting an image (obtained, for example, from a photograph) into very fine parts and numerically expressing the brightness and hue of each part. At that time, the original numerical values of the image are slightly and intentionally changed. If the change in a numerical value is small, there will be almost no disturbance in the image, and humans will not sense it. If this nature is skillfully utilized, entirely different information (message data) can be hidden in the original video. This message data may be any information, for example, lattice patterns, rulers, or the signatures of video creators. The message data hidden in media data can be extracted by processing it with a special program. Therefore, based on the extracted message data, it can be checked whether the media data has been altered.
Incidentally, the Moving Picture Experts Group (MPEG) is well known as one of the methods of compression for motion images (video data). In the case where some additional information is put into an MPEG video bitstream, a method of hiding additional information into a user data field has generally been employed. In such a method, however, the field can be easily separated from the media data, so there is the problem that the detection and removal of additional hidden information are easy.
SUMMARY OF THE INVENTION
In view of the aforementioned problem, an objective of the present invention is to provide a novel method for embedding additional information into a motion image compressed by employing interframe prediction. Another objective of the present invention is to provide a method where there will be almost no degradation in picture quality even if additional information is embedded in a motion image. Still another objective of the present invention is to make it difficult to remove embedded information from a motion image.
Specifically, the present invention is related to a data hiding method which embeds information into a motion image constituted by a plurality of frames. The data hiding method comprises the steps of: specifying at least one embedding region in the frame for embedding information; and determining a type of interframe prediction of the embedding region in correspondence with information to be embedded by referring to an embedding rule where a content of data to be embedded is caused to correspond to the type of interframe prediction of the embedding region. It is desirable that the frame in which the embedding region exists is a bidirectionally predictive-coded frame.
In this case, it is desirable that the type of interframe prediction is selected from forward prediction, backward prediction, bidirectional prediction, and intraframe coding. It is also desirable that the embedding rule causes one of bit values to correspond to the bidirectional prediction and the other bit value to correspond to the forward prediction or the backward prediction. Furthermore, the embedding rule may cause data embedding inhibition to correspond to the intraframe coding.
Also, when a prediction error in the type of interframe prediction of the embedding region, determined based on the embedding rule, exceeds a predetermined threshold value, it is desirable to inhibit embedding of data into the embedding region. Since the prediction type of the embedding region is forcibly decided based on the embedding rule, there is the possibility that degradation of picture quality will occur. Hence, a threshold value is provided as a standard of judgment for picture quality, and it is effective not to embed data at a position where the prediction error exceeds the threshold value. In addition, when the number of references of the forward prediction or the number of references of the backward prediction in the bidirectionally predictive-coded frame is less than a predetermined number, embedding of data into the embedding region of the bidirectionally predictive-coded frame may be inhibited. For example, if a scene change takes place, the number of references of forward prediction or the number of references of backward prediction in the frame related to the change will be considerably reduced. In such a case, if the embedding rule is applied and the prediction type is forcibly determined, picture quality may be considerably degraded. Therefore, if the number of references of prediction in a frame is counted and found to be less than a predetermined threshold value, it is desirable not to embed data into the frame. Accordingly, another invention provides a data hiding method which embeds information into a motion image constituted by a plurality of frames.
The data hiding method comprises the steps of: counting a number of references of forward prediction or a number of references of backward prediction in a frame having an embedding region for embedding information; determining characteristics of the respective embedding regions in correspondence with information to be embedded by referring to an embedding rule where a content of data to be embedded is caused to correspond to a characteristic of the embedding region, when the number of references is greater than a predetermined number; and inhibiting embedding of data to the embedding region of the frame when the number of references is less than the predetermined number.
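The counting step described above can be illustrated with a minimal Python sketch. The function name `frame_allows_embedding` and the convention that a bidirectionally predicted macroblock counts as both a forward and a backward reference are assumptions for illustration, not prescribed by the specification:

```python
def frame_allows_embedding(mb_types, min_refs):
    """Count forward and backward references among a B-frame's
    macroblock prediction types ('P' forward, 'N' backward,
    'B' bidirectional, 'I' intracoded).  If either count falls
    below min_refs (as tends to happen after a scene change),
    embedding into this frame is inhibited."""
    # Assumption: a bidirectional macroblock references both the
    # past and the future frame, so it contributes to both counts.
    fwd_refs = sum(t in ("P", "B") for t in mb_types)
    bwd_refs = sum(t in ("N", "B") for t in mb_types)
    return fwd_refs >= min_refs and bwd_refs >= min_refs
```

For example, a frame whose macroblocks are almost all forwardly predicted (as after a scene change) would be rejected because its backward reference count is too small.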
The present invention further relates to a data hiding method which embeds information with redundancy into an image. First, a plurality of embedding regions are specified in the image for embedding the same information. Then, the same data is embedded in respective embedding regions in correspondence with information to be embedded by referring to an embedding rule. For example, consider the case where a data bit of 1 is embedded in three embedding regions. By referring to an embedding rule which prescribes the value of a data bit and the characteristic (e.g., type of prediction) of an embedding region, three embedding regions are determined so that they have the same characteristic corresponding to the value of the data bit of 1.
The present invention still further relates to a data extraction method which extracts information embedded into an encoded motion image. At least one embedding region, is specified in a frame. Data, embedded in the specified embedding region, is extracted by referring to an extraction rule where the type of interframe prediction is caused to correspond to a content of data to be extracted.
It is desirable that the frame in which the embedding region exists is a bidirectionally predictive-coded frame. Also, it is desirable that the type of the interframe prediction be selected from forward prediction, backward prediction, bidirectional prediction, and intraframe coding.
Also, the extraction rule may cause one of bit values to correspond to the bidirectional prediction and the other bit value to correspond to the forward prediction or the backward prediction. Furthermore, the extraction rule may cause the intraframe coding to correspond to data embedding inhibition.
Still even further, the present invention relates to a data extraction method which extracts information with redundancy embedded into an image. The data extraction method comprises the steps of specifying in the image a plurality of embedding regions embedded with certain data (e.g., a data bit of 1) and extracting the embedded data (aforementioned data bit of 1), based on characteristics of the respective embedding regions, by referring to an extraction rule where a characteristic of the embedding region is caused to correspond to a data bit to be extracted. Here, when different data bits are extracted from the respective embedding regions, the number of the embedding regions may be compared for each of the extracted different data bits, and the data bit with a greater number may be specified as embedded information (so-called decision by majority). For example, assume that the data bits, extracted from three embedding regions A, B, and C in which the same data bit (a data bit of 1) should have been embedded, are a bit value of 1, a bit value of 1, and a bit value of 0. In this case, the number of the embedding regions which extracted a bit value of 1 is 2, and the number of the embedding regions which extracted a bit value of 0 is 1. Since it can be said that a bit value, where the aforementioned number is greater, is more accurate, it is recognized that a data bit of 1 has been embedded. If a data bit of 1 has been embedded, the data bit value which is extracted from three embedding regions will be 1. However, for some reason, there are cases where embedded information is changed. Also, extraction employing a statistical method is considered. Hence, a method of extracting information with redundancy embedded into an image, such as the fourth invention, is effective.
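The decision-by-majority extraction described above can be sketched as follows. This is a hypothetical helper; the specification does not prescribe an implementation:

```python
from collections import Counter

def extract_by_majority(extracted_bits):
    """Given the bits extracted from the redundant embedding
    regions, return the bit value extracted from the greater
    number of regions (decision by majority)."""
    counts = Counter(extracted_bits)
    # most_common(1) yields the (bit, count) pair with the
    # highest count; that bit is taken as the embedded data.
    return counts.most_common(1)[0][0]
```

In the example in the text, regions A, B, and C yield bits 1, 1, and 0, so the majority decision recovers the embedded bit 1 despite the one corrupted region.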
Still further, the present invention relates to a motion image coding system for embedding information into videodata which is constituted by a plurality of frames and which employs interframe prediction. The system comprises an error calculator which calculates a first prediction error, based on both an embedding region specified in a first frame for embedding information and a reference region in a second frame which is referred to by the embedding region, by employing forward prediction, which also calculates a second prediction error, based on both the embedding region and a reference region in a third frame which is referred to by the embedding region, by employing backward prediction, and which furthermore calculates a third prediction error, based on the embedding region and reference regions in the second and third frames which are referred to by the embedding region, by employing bidirectional prediction. The system further comprises a decider. The decider decides a type of interframe prediction in the embedding region in correspondence with a content of information to be embedded by referring to an embedding rule which prescribes that when one data bit is embedded in the embedding region, the type of interframe prediction in the embedding region employs either the forward prediction or the backward prediction and which also prescribes that when another data bit is embedded in the embedding region, the type of the interframe prediction employs the bidirectional prediction. The decider also specifies any one of the first, the second, or the third prediction error in correspondence with the decided type of interframe prediction. Here, the aforementioned decider may include first inhibition means which inhibits embedding of data to a certain embedding region when a prediction error in the type of interframe prediction of the certain embedding region, determined based on the embedding rule, exceeds a predetermined threshold value. 
Also, the decider may include second inhibition means which inhibits embedding of data to the embedding region of the bidirectionally predictive-coded frame when a number of references of the forward prediction or a number of references of the backward prediction in the bidirectionally predictive-coded frame is less than a predetermined number.
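The decision logic of such a decider, including the threshold-based inhibition, might be sketched as follows. The function and constant names are hypothetical, and the error values and threshold are illustrative only:

```python
# Prediction-type labels follow the notation used in the text.
BIDIRECTIONAL, FORWARD, BACKWARD, INTRA = "B", "P", "N", "I"

def decide_prediction_type(bit, err_fwd, err_bwd, err_bidir, threshold):
    """Map one data bit to an interframe prediction type per the
    embedding rule; return INTRA (embedding inhibition) when the
    resulting prediction error would exceed the threshold and
    degrade picture quality."""
    if bit == 1:
        chosen, err = BIDIRECTIONAL, err_bidir
    else:
        # For bit 0, pick whichever one-directional prediction
        # has the smaller error, as the embodiment describes.
        if err_fwd <= err_bwd:
            chosen, err = FORWARD, err_fwd
        else:
            chosen, err = BACKWARD, err_bwd
    if err > threshold:
        return INTRA  # first inhibition means: error too large
    return chosen
```

The second inhibition means (scene-change detection by counting references) would be applied per frame, before any per-macroblock decision is made.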
The present invention is also directed to a motion image decoding system for extracting information embedded into a coded motion image. This system comprises a specifier which specifies at least one region in which information is embedded. Also, the system comprises an extractor which extracts the embedded information from a type of the interframe prediction in the embedding region by referring to an extraction rule where the type of the interframe prediction in the embedding region is caused to correspond to a content of information to be embedded.
The present invention is still further directed to a program storage medium for executing a data hiding process, which embeds information into a motion image constituted by a plurality of frames, by a computer. The program storage medium has the steps of: specifying in a frame at least one embedding region into which information is embedded; and deciding a type of the interframe prediction in the embedding region in correspondence with information to be embedded by referring to an embedding rule where a content of information to be embedded is caused to correspond to the type of interframe prediction in the embedding region.
The present invention also relates to a program storage medium for executing a data extracting process, which extracts information embedded into an encoded motion image, by a computer. The program storage medium has the steps of: specifying in a frame at least one embedding region in which information is embedded and extracting the embedded information in correspondence with a type of the interframe prediction in the embedding region by referring to an extraction rule where the type of interframe prediction in the embedding region is caused to correspond to a content of data to be extracted.
BRIEF DESCRIPTION OF THE DRAWINGS
Preferred embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
FIG. 1 is a half-tone image obtained when digital data is displayed on a display;
FIG. 2 is a diagram showing an example of arrangement of type of interframe prediction;
FIG. 3 is a diagram showing an example of macroblocks in a B-frame;
FIG. 4 is a diagram for explaining the relationship between the prediction type and the prediction error of a macroblock;
FIG. 5 is a diagram for explaining reference images in the case when scene change takes place;
FIG. 6 is a block diagram of a motion image coding system; and
FIG. 7 is a block diagram of a motion image decoding system.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT OF THE INVENTION
(1) Data Embedding
Initially, a description will be made of the case where some additional information (message data) is embedded into an MPEG video bitstream (media data).
The MPEG employs forward prediction based on a reference frame in the past, backward prediction based on a reference frame in the future, and bidirectional prediction based on reference frames in both the past and the future. FIG. 2 shows a sequence of frames. As shown in the figure, the sequence contains three types of frames, an I-frame, P-frames, and B-frames, in order to realize bidirectional prediction.
Here, the I-frame is an intracoded frame, and all macroblocks within this frame are compressed by intraframe coding (without interframe prediction). The P-frame is a (forward) predictive-coded frame, and all macroblocks within this frame are compressed by intraframe coding or forward predictive coding. Furthermore, the B-frame is a bidirectionally predicted, interpolative-coded frame. The macroblocks within the B-frame can be basically encoded by employing forward prediction, backward prediction, bidirectional prediction or intraframe coding. The I-frame and P-frames are encoded in the same order as the original motion image. On the other hand, the B-frames are inserted between the I-frame and the P-frames, and after the I- and P-frames are processed the B-frames are encoded.
The embedding regions for information (message data) are the macroblocks of the B-frame, and 1 bit of information can be embedded into 1 macroblock. Therefore, when message data is constituted by a number of bits, the embedding process needs to be performed on a corresponding number of macroblocks. FIG. 3 is a diagram showing the arrangement of macroblocks in a B-frame. The macroblock is the unit of prediction: a 16×16-pixel unit of motion compensation, which compresses videodata by reducing its temporal redundancy.
The macroblocks within the B-frame can be classified into the following four groups as prediction types.
Intracoded Macroblock
The intracoded macroblock is a macroblock that is coded only by the information in the macroblock itself without performing interframe prediction.
Forwardly Predicted Macroblock
The forwardly predicted macroblock is a macroblock that is forwardly predicted and encoded by referring to either the intracoded frame (I-frame) in the past or the forward predictive-coded frame (P-frame) in the past. Specifically, the most similar 16-pixel×16-pixel square region in the past reference frame is retrieved, and the macroblock has a prediction error (ΔP), which is the difference between it and the retrieved square region, and also information about a spatial relative position (a motion vector). Here, the prediction error ΔP is expressed as the brightness difference or the color difference obtained over the 16 pixels×16 pixels. Note that how a similar square region is selected depends upon the encoder.
Backwardly Predicted Macroblock
The backwardly predicted macroblock is a macroblock that is backwardly predicted and encoded by referring to either the intracoded frame (I-frame) in the future or the forward predictive-coded frame (P-frame) in the future. The most similar region in the future reference frame is retrieved, and this macroblock has a prediction error (ΔN), which is the difference between it and the retrieved region, and also information about a spatial relative position (a motion vector).
Bidirectionally Predicted Macroblock
The bidirectionally predicted macroblock is a macroblock that is bidirectionally predicted and encoded by referring to the past reference frame and the future reference frame. Both a region most similar in the past reference frame and a region most similar in the future reference frame are retrieved, and this macroblock has a prediction error ((ΔN+ΔP)/2) which is the difference between it and the average (per pixel) of these two regions and also has information about a spatial relative position (two motion vectors) between them.
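The prediction errors ΔP, ΔN, and (ΔN+ΔP)/2 described for the three predicted macroblock types can be illustrated with a short sketch. A sum-of-absolute-differences error measure over flat lists of pixel values is assumed here for illustration; actual encoders may use other measures:

```python
def prediction_error(block, reference):
    """Error of a one-directional prediction: sum of absolute
    differences between a macroblock and one reference region
    (both given as flat lists of pixel values)."""
    return sum(abs(a - b) for a, b in zip(block, reference))

def bidirectional_error(block, past_ref, future_ref):
    """Error of a bidirectional prediction: the block is compared
    against the per-pixel average of the past and future reference
    regions, as described for the bidirectionally predicted
    macroblock."""
    avg = [(p + f) / 2 for p, f in zip(past_ref, future_ref)]
    return sum(abs(a - b) for a, b in zip(block, avg))
```

A block that lies temporally "between" its two references can thus have a near-zero bidirectional error even when both one-directional errors are nonzero.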
To embed message data, at least one macroblock to be given the embedding process must first be specified in a B-frame. This may be defined, for example, as the respective macroblocks (embedding regions) which exist between the first line and the third line of the B-frame, or as all the macroblocks in a certain frame. In addition to being predefined as a format in this way, the macroblock can also be determined by employing an algorithm which generates a position sequence. Note that the algorithm for generating a position sequence can be, for example, the algorithm disclosed in Japanese Patent Application No. 8-159330.
Next, with respect to the macroblock specified as an object of an embedding process, 1 bit of data is embedded into 1 macroblock, based on an embedding rule. This embedding rule is one where bit information is caused to correspond to the prediction type of macroblock. For example, there is the following rule.
______________________________________
(Embedding Rule)
______________________________________
Bit information        Interframe prediction type
to be embedded         of macroblock
______________________________________
Bit 1                  Bidirectionally predicted macroblock
                       (represented by B)
Bit 0                  Forwardly predicted macroblock
                       (represented by P) or backwardly
                       predicted macroblock (represented by N)
Embedding              Intracoded macroblock
inhibition
______________________________________
For example, consider the case where the message data 1010 is embedded. The 4 bits of data are embedded in sequence in the 4 embedding regions (macroblocks) between the left first macroblock and the left fourth macroblock of the first line shown in FIG. 3. First, the first data bit is a 1, so the prediction type of the leftmost macroblock (the first embedding region) is determined to be bidirectional prediction (B) in accordance with the aforementioned embedding rule. The prediction error in this case is the difference relative to the average of the most similar region in the past reference frame and the most similar region in the future reference frame.
The next data bit is a 0. Therefore, the prediction type of the second macroblock (the second embedding region) is either forward prediction (P) or backward prediction (N) in accordance with the embedding rule. In this case, in order to suppress the quality degradation of an image, the prediction error in the forward prediction and the prediction error in the backward prediction are compared to select the type where the prediction error is smaller. In the example of FIG. 3, since the prediction error in the forward prediction is smaller than that in the backward prediction, the forward prediction (P) is selected for the second macroblock.
A similar procedure is applied to the third embedding region and the fourth embedding region. As a consequence, the prediction type of the third macroblock becomes bidirectional prediction (B), and the prediction type of the fourth macroblock is determined to be backward prediction (N) because the prediction error in the backward prediction is smaller.
In the aforementioned way, the interframe prediction types of the first to the fourth embedding regions are BPBN, and the 4 data bits 1010 (4 bits of message data) are embedded in these regions. If an attempt is made to embed a data bit into a certain embedding region, there will be cases where image quality is considerably degraded. In such cases, the embedding of the data bit into that embedding region is not performed, and the prediction type of the embedding region is made an intracoded macroblock, which represents "embedding inhibition."
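The embedding walkthrough above, which maps the bits 1010 to the prediction types BPBN, can be sketched as follows. The function `embed_bits` and the `(forward error, backward error)` input format are hypothetical conveniences, not part of the specification:

```python
def embed_bits(bits, errors):
    """Embed a sequence of data bits as a sequence of prediction
    types, following the embedding rule: bit 1 -> bidirectional
    (B); bit 0 -> forward (P) or backward (N), whichever has the
    smaller prediction error.  `errors` is one (err_fwd, err_bwd)
    pair per embedding macroblock; the values are illustrative."""
    types = []
    for bit, (err_fwd, err_bwd) in zip(bits, errors):
        if bit == 1:
            types.append("B")
        else:
            types.append("P" if err_fwd <= err_bwd else "N")
    return "".join(types)
```

With the errors of FIG. 3 (forward smaller for the second macroblock, backward smaller for the fourth), the message 1010 yields the type sequence BPBN, as in the text.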
(2) Data Extraction
A description will be made of a method of extracting the message data embedded in the aforementioned procedure. In the case where message data is extracted, information for specifying a macroblock in which the message data has been embedded must first be given. The specifying information may be given by an outside unit. Also, it is possible to previously embed the specifying information in data itself. In addition, in the case where the position of an embedding region is standardized or if an algorithm for generating a position sequence is known, message data can be extracted. For a message data extracting method using a position sequence, the technique disclosed in the aforementioned Japanese Patent Application No. 8-159330, for example, can be employed.
Next, from the prediction type in the specified embedding region, the information embedded in that region is extracted by referring to an extraction rule. This extraction rule is a rule where the prediction type of macroblock is caused to correspond to bit information, and this extraction rule has to be given as information when extraction is performed. For example, there is the following rule. It is noted that the corresponding relation between the prediction type in this extraction rule and bit information is the same as that of the aforementioned embedding rule. Also, in the case where the prediction type is an intracoded macroblock, it is judged that in the embedding region, a data bit has not been embedded.
______________________________________
(Extraction Rule)
______________________________________
Interframe prediction type     Bit information to be
of macroblock                  extracted
______________________________________
Bidirectionally predicted      Bit 1
macroblock (represented by B)
Forwardly predicted            Bit 0
macroblock (represented by P)
or backwardly predicted
macroblock (represented by N)
Intracoded macroblock          A data bit has not been
                               embedded
______________________________________
A description will now be made of the case where message data has been embedded as shown in FIG. 3. Assume it is already known that message data bits have been embedded in the embedding regions from the first (leftmost) macroblock to the fourth macroblock of the first line of FIG. 3. Because the prediction type of the leftmost macroblock is bidirectional prediction (B), a bit value of 1 is extracted by referring to the aforementioned extraction rule. The prediction type of the second macroblock is forward prediction (P), so a bit value of 0 is extracted according to the extraction rule. By applying the same procedure to the two remaining macroblocks, a bit value of 1 and a bit value of 0 are extracted in sequence. As a consequence, the message data bits 1010 are extracted from these regions.
If the rightmost macroblock is an intracoded macroblock, it is judged, according to the aforementioned extraction rule, that a data bit has not been embedded in that macroblock. As a consequence, the message data bits become 101.
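The extraction rule and the worked example above can be sketched as a simple lookup. This is an illustrative sketch, not the patent's implementation; the single-character prediction-type codes ('B', 'P', 'N', 'I') are assumptions for illustration.

```python
# Illustrative sketch of the extraction rule: map each macroblock's
# interframe prediction type to a data bit. Prediction types are
# assumed to be encoded as 'B' (bidirectional), 'P' (forward),
# 'N' (backward), and 'I' (intracoded).

def extract_bits(prediction_types):
    """Extract embedded message bits from a sequence of prediction types.

    'B' -> bit 1, 'P' or 'N' -> bit 0, 'I' -> no bit embedded (skipped).
    """
    bits = []
    for ptype in prediction_types:
        if ptype == 'B':
            bits.append(1)
        elif ptype in ('P', 'N'):
            bits.append(0)
        elif ptype == 'I':
            continue  # intracoded: judged as "no data bit embedded"
        else:
            raise ValueError(f"unknown prediction type: {ptype}")
    return bits

# The worked example of FIG. 3: B, P, B, P yields the bits 1010.
assert extract_bits(['B', 'P', 'B', 'P']) == [1, 0, 1, 0]
# If the fourth macroblock is intracoded, only 101 is extracted.
assert extract_bits(['B', 'P', 'B', 'I']) == [1, 0, 1]
```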
(3) Consideration of Picture Quality (Introduction of a Threshold Value)
An encoder can freely select the prediction type of macroblock in the range allowed for each frame. Generally, the prediction type of macroblock where the prediction error is smallest is selected. However, the feature of this embodiment is that the prediction type of macroblock is selected according to the aforementioned embedding rule. Since the relationship between the prediction type and bit information in the extraction rule is identical with that prescribed by the embedding rule, embedded data can be accurately extracted by referring to the extraction rule.
However, if the prediction type is determined according to the embedding rule, a prediction type may be selected whose prediction error is so large that the resulting degradation in image quality is visually perceptible. For the prediction error, the sum of absolute values or the sum of squares of the per-pixel prediction errors is commonly employed, but MPEG does not prescribe which measure is used, so an encoder is free to adopt any measure. Whatever measure is used, however, selecting a prediction type whose prediction error is too large will degrade picture quality. Hence, a threshold value is set in advance for the prediction error. When the prediction error of the selected prediction type is greater than the threshold value, it is desirable not to embed a data bit in that macroblock. In this case, the macroblock is made an intracoded macroblock in accordance with the aforementioned embedding rule. This point will be described in further detail with reference to FIG. 4.
FIG. 4 is a diagram for explaining the relationship between the prediction type and the prediction error of a macroblock. In the figure, the ordinate represents the prediction error; a larger prediction error indicates a larger degradation of picture quality. A threshold value is set as the degree of allowable prediction error, that is, as a standard of judgment below which no perceptible degradation of picture quality occurs. In FIG. 4, the three short horizontal bars (i), (ii), and (iii), linked by a single vertical bar, represent the three prediction errors of a macroblock under forward prediction, backward prediction, and bidirectional prediction. From the relationship between the threshold value and the three prediction errors, a macroblock can be classified into four types, (a), (b), (c), and (d): the case where all three prediction errors are less than the threshold value (type (a)), the case where exactly one prediction error exceeds the threshold value (type (b)), the case where two prediction errors exceed the threshold value (type (c)), and the case where all three prediction errors exceed the threshold value (type (d)).
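The four-way classification of FIG. 4 amounts to counting how many of the three prediction errors exceed the threshold. The following is an illustrative sketch of that count, with names chosen here for illustration:

```python
# Illustrative sketch: classify a macroblock into the four types of
# FIG. 4 by counting how many of its three prediction errors
# (forward, backward, bidirectional) exceed the threshold value.

def classify_macroblock(errors, threshold):
    """Return 'a', 'b', 'c', or 'd' given the three prediction errors."""
    above = sum(1 for e in errors if e > threshold)
    return {0: 'a', 1: 'b', 2: 'c', 3: 'd'}[above]

assert classify_macroblock((10, 20, 15), threshold=50) == 'a'  # all below
assert classify_macroblock((60, 20, 15), threshold=50) == 'b'  # one above
assert classify_macroblock((60, 70, 15), threshold=50) == 'c'  # two above
assert classify_macroblock((60, 70, 80), threshold=50) == 'd'  # all above
```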
For a macroblock of type (d), whichever type of prediction is selected, the prediction error will exceed the threshold value and there will be considerable degradation in picture quality, so it is undesirable to use this type of macroblock as an embedding region. In this case, the macroblock becomes an intracoded macroblock in accordance with the embedding rule. In MPEG, however, most macroblocks in a B-frame use interframe prediction (forward prediction, backward prediction, or bidirectional prediction), so in practice there is a low probability that this type of macroblock appears.
For a macroblock of type (a), whichever kind of prediction is selected, no prediction error exceeds the threshold value. That is, whatever data is embedded, degradation of picture quality will not be conspicuous, so this type of macroblock can be used as an embedding region. Type (b) can likewise be used as an embedding region without perceptible degradation of picture quality: usually the bidirectional prediction error is not the worst of the three prediction errors (i.e., the bidirectional prediction error is not horizontal bar (i)), so according to the aforementioned embedding rule, both a bit value of 1 and a bit value of 0 can be embedded into this type of block without exceeding the threshold value. Therefore, when the aforementioned embedding rule is employed, data bits can be embedded into macroblocks of types (a) and (b) without substantially degrading picture quality.
For type (c), it is undesirable as a rule to employ this type of block as an embedding region. When the bidirectional prediction error is horizontal bar (iii), the forward prediction error and the backward prediction error both exceed the threshold value; consequently, picture quality degradation will occur depending on the data bit to be embedded. However, even in this case, when the prediction error of the prediction type corresponding to the data bit actually being embedded does not exceed the threshold value (e.g., when the prediction error for the bit value embedded according to the embedding rule is the error below the threshold indicated by horizontal bar (iii)), it is possible to use this type of block as an embedding region.
In view of the aforementioned points, the three prediction errors are obtained for each macroblock into which data is to be embedded. Then, when the prediction error of the prediction type decided based on the embedding rule exceeds the predetermined threshold value, it is desirable to inhibit the embedding of data into that macroblock. In this case, the macroblock into which the embedding of data is inhibited is intraframe coded in accordance with the aforementioned embedding rule.
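The embedding decision with the threshold test can be sketched as follows. This is an illustrative sketch under the embedding rule described earlier (bit 1 corresponds to bidirectional prediction; bit 0 to forward or backward prediction); the choice of the smaller of the forward and backward errors for bit 0 is an assumption for illustration.

```python
# Illustrative sketch of the embedding decision: pick the prediction
# type corresponding to the data bit (bit 1 -> bidirectional 'B';
# bit 0 -> forward 'P' or backward 'N', whichever error is smaller).
# If the chosen prediction error exceeds the threshold, embedding is
# inhibited and the macroblock is intracoded ('I') instead.

def decide_prediction_type(bit, fwd_err, bwd_err, bid_err, threshold):
    """Return the prediction type ('B', 'P', 'N', or 'I') for one bit."""
    if bit == 1:
        ptype, err = 'B', bid_err
    else:
        ptype, err = ('P', fwd_err) if fwd_err <= bwd_err else ('N', bwd_err)
    if err > threshold:
        return 'I'  # inhibit embedding: intracode the macroblock
    return ptype

assert decide_prediction_type(1, 10, 20, 30, threshold=50) == 'B'
assert decide_prediction_type(0, 10, 20, 30, threshold=50) == 'P'
assert decide_prediction_type(0, 25, 20, 30, threshold=50) == 'N'
# Bidirectional error above threshold: embedding is inhibited.
assert decide_prediction_type(1, 10, 20, 90, threshold=50) == 'I'
```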
Note that the intracoded macroblocks of types (c) and (d) become invalid bits which cannot be used as a data embedding region. However, as previously described, their actual rate of occurrence is low, so such invalid bits can be compensated for by error correction coding in the case where the information to be embedded is allowed to have redundancy.
According to the embodiment of the present invention, when encoding a motion image, the type of each macroblock is decided in correlation with the data bit to be embedded. Therefore, message data can be embedded into a motion image without substantially influencing the compression efficiency of the motion image and without substantially degrading picture quality. In addition, it is very difficult to remove message data embedded in this way from the motion image. Furthermore, since the quantity of information to be embedded is almost independent of the content of the image, message data can be embedded efficiently.
(4) Consideration of Picture Quality (Countermeasure to Scene Change)
When a scene change takes place, it is known that most macroblocks in the B-frames between the I-frame and the P-frame, or between the P-frames, before and after the change are of type (c) shown in FIG. 4. FIG. 5 is a diagram for explaining reference images in the case where a scene change takes place. FIG. 5(a) shows the case where there is no scene change, and FIG. 5(b) shows the case where a scene change takes place between frame 2 and frame 3. In the figures, the two end frames are I- or P-frames and the two center frames are B-frames. An arrow in the figures indicates a reference relation between frames.
If there is no scene change, a great number of bidirectionally predicted macroblocks are present in the B-frames. However, if a scene change such as that shown in FIG. 5(b) occurs, the number of backwardly and bidirectionally predicted macroblocks in frame 2 is considerably reduced, and most of its macroblocks become forwardly predicted macroblocks whose prediction errors are smaller than the threshold value of FIG. 4. Likewise, the number of forwardly and bidirectionally predicted macroblocks in frame 3 is considerably reduced, and most of its macroblocks become backwardly predicted macroblocks whose prediction errors are smaller than the threshold value of FIG. 4. It is therefore undesirable to embed data into such frames. Hence, the number of forwardly predicted macroblocks and the number of backwardly predicted macroblocks are monitored, and when either of these numbers is less than a certain threshold value, it is judged that a scene change has taken place. In such a case, it is desirable not to embed data into such frames (i.e., according to the embedding rule, it is desirable that the prediction type be made an intracoded macroblock).
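The monitoring step above can be sketched as a simple count over a frame's macroblock prediction types. This is an illustrative sketch; the function name and the single-character type codes are assumptions.

```python
# Illustrative sketch of the scene-change test: count the forwardly
# ('P') and backwardly ('N') predicted macroblocks in a B-frame and
# judge that a scene change has occurred (so embedding should be
# inhibited) when either count falls below a threshold number.

def scene_change_suspected(prediction_types, min_count):
    forward = sum(1 for p in prediction_types if p == 'P')
    backward = sum(1 for p in prediction_types if p == 'N')
    return forward < min_count or backward < min_count

# A frame just after a cut: almost everything is backwardly predicted,
# so the forward count collapses and a scene change is suspected.
assert scene_change_suspected(['N'] * 90 + ['P'] * 2, min_count=5)
# A normal frame with a healthy mix of prediction types.
assert not scene_change_suspected(['P'] * 30 + ['N'] * 30 + ['B'] * 40,
                                  min_count=5)
```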
(5) Consideration of Picture Quality (Countermeasure to Occlusion)
Occlusion means that, as a certain object moves, something hidden behind the object suddenly appears or, conversely, is hidden. When occlusion takes place, the macroblocks related to the occlusion within the frame are of type (c) shown in FIG. 4. In this case, as previously described, if the prediction error of the prediction type determined according to the embedding rule is less than the threshold value, there is no problem; otherwise, perceptible degradation of picture quality will occur. When picture quality is of great importance, the degradation can be avoided by employing an error correcting code. That is, 1 bit of information is not expressed with a single macroblock; rather, redundancy is given to the information, and information equivalent to 1 bit is expressed with a plurality of macroblocks. In this case, a single embedding region is constituted by a set of macroblocks.
For example, consider the case where information equivalent to 1 bit is expressed with three macroblocks. In this case, even if one of the three macroblocks is of the prediction type opposite to the data bit to be expressed, the data bit can still be accurately expressed with the two remaining macroblocks. If more than a predetermined number of intracoded macroblocks are contained in a set of macroblocks which expresses 1 bit of information, a data bit is not embedded into that set. Conversely, if two or more macroblocks are of type (c) shown in FIG. 4, some of the macroblocks must be made intracoded macroblocks to clearly indicate that data has not been embedded. This can also be utilized in embedding and extraction using a statistical technique. That is, a plurality of embedding regions (e.g., 100 regions) are prepared so that a statistical tendency appears, and 1 bit of information may be expressed with the plurality of regions. In this sense, redundancy in the present invention means that 1 bit of information is not caused to correspond to a single region in a one-to-one relation, but rather is caused to correspond to a plurality of regions.
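The three-macroblock example above amounts to a majority vote at extraction time. The following illustrative sketch (names and the intracoded-count limit are assumptions) shows how one wrong prediction type is tolerated:

```python
# Illustrative sketch of redundant extraction: 1 bit is expressed
# with a set of macroblocks (here three), and a majority vote over
# their prediction types recovers the bit even if one macroblock is
# of the "wrong" type. Too many intracoded macroblocks in the set
# are judged to mean that no bit was embedded there.

def extract_redundant_bit(prediction_types, max_intra=1):
    """Majority-vote one bit from a set of macroblock prediction types."""
    if sum(1 for p in prediction_types if p == 'I') > max_intra:
        return None  # judged as "no data bit embedded in this set"
    ones = sum(1 for p in prediction_types if p == 'B')
    zeros = sum(1 for p in prediction_types if p in ('P', 'N'))
    return 1 if ones > zeros else 0

# One macroblock of the set disagrees, but the bit survives.
assert extract_redundant_bit(['B', 'B', 'P']) == 1
assert extract_redundant_bit(['P', 'N', 'B']) == 0
# Two intracoded macroblocks: no bit was embedded in this set.
assert extract_redundant_bit(['I', 'I', 'B']) is None
```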
When occlusion takes place, a plurality of mutually adjacent macroblocks in the portion related to the occlusion are considered to be of type (c) shown in FIG. 4. In view of this, for the set of macroblocks which constitutes an error correcting code, it is desirable to utilize macroblocks located at positions away from each other in the frame.
While the aforementioned embodiment has been described with reference to MPEG, the present invention is not limited to MPEG. It is, of course, applicable to other image compression methods using an interframe prediction coding technique. In that sense, the embedding region in the present invention is not limited to macroblocks.
The aforementioned embedding rule and extraction rule are merely examples; the present invention is not limited to these rules and can employ various others. For example, it is also possible to embed one of three data values into a single macroblock by causing a forwardly predicted macroblock, a backwardly predicted macroblock, and a bidirectionally predicted macroblock to correspond to values of 0, 1, and 2, respectively.
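The three-value variant just mentioned can be sketched as a pair of lookup tables. This is an illustrative sketch; the particular value-to-type assignment is one possibility, not prescribed by the text.

```python
# Illustrative sketch of the alternative rule: a single macroblock
# carries one of three values (0, 1, or 2) rather than one bit, by
# mapping each value to a distinct interframe prediction type.

TERNARY_EMBED = {0: 'P', 1: 'N', 2: 'B'}   # value -> prediction type
TERNARY_EXTRACT = {v: k for k, v in TERNARY_EMBED.items()}

def embed_ternary(values):
    """Map data values to the prediction types that will encode them."""
    return [TERNARY_EMBED[v] for v in values]

def extract_ternary(prediction_types):
    """Recover the data values from the observed prediction types."""
    return [TERNARY_EXTRACT[p] for p in prediction_types]

# Round trip: embedding then extracting recovers the original values.
assert extract_ternary(embed_ternary([2, 0, 1])) == [2, 0, 1]
```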
Furthermore, while the aforementioned embodiment has been described with reference to the B-frame, that is, a bidirectionally predicted frame, it is also possible to embed data into the aforementioned P-frame. Since the macroblocks constituting a P-frame are forwardly predicted macroblocks and intracoded macroblocks, bit values can be caused to correspond to these macroblocks. However, from the viewpoint of suppressing degradation of picture quality and an increase in the quantity of data, as previously described, it is desirable to embed data into the B-frame rather than the P-frame. The reason is that if an intracoded macroblock is forcibly made a forwardly predicted macroblock by the embedding rule, picture quality is degraded, and in the opposite case, the quantity of data increases.
(6) Motion Image Coding System
FIG. 6 is a block diagram of a motion image coding system employing the present invention. Memory 61 stores motion image data consisting of a plurality of frames. Frame memory 62 stores a past reference frame and frame memory 63 stores a future reference frame, in display order. A region specifier 64 specifies the position at which data is embedded as additional information; at least one region is specified in a frame. An error calculator 65 calculates a forward prediction error, a backward prediction error, and a bidirectional prediction error, based on the data stored in the frame memories 62 and 63. The forward prediction error is calculated, by employing forward prediction, from both an embedding region and a reference region in the past reference frame which is referred to by the embedding region. The backward prediction error is calculated, by employing backward prediction, from both the embedding region and a reference region in the future reference frame which is referred to by the embedding region. Furthermore, the bidirectional prediction error is calculated, by employing bidirectional prediction, from the embedding region and reference regions in both the past and future reference frames which are referred to by the embedding region. A decider 66 embeds the data to be embedded into an embedding region by controlling the characteristic of that region with reference to an embedding rule. Specifically, the embedding rule prescribes that when one data bit value is embedded in an embedding region, the prediction type of the embedding region employs either forward prediction or backward prediction, and that when the other bit value is embedded, the prediction type employs bidirectional prediction.
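The three prediction errors computed by the error calculator 65 can be sketched as follows. This is an illustrative sketch only: the sum of absolute differences is one of the common measures the text mentions (MPEG does not prescribe one), blocks are flattened to lists of pixel values for simplicity, and the bidirectional prediction is assumed here to average the two reference regions.

```python
# Illustrative sketch of the error calculator: compute forward,
# backward, and bidirectional prediction errors for one embedding
# region, using the sum of absolute differences (SAD) as the error
# measure and the average of the two references as the bidirectional
# prediction. Blocks are flat lists of pixel values.

def sad(block, prediction):
    """Sum of absolute differences between a block and its prediction."""
    return sum(abs(a - b) for a, b in zip(block, prediction))

def prediction_errors(block, past_ref, future_ref):
    """Return (forward, backward, bidirectional) prediction errors."""
    forward = sad(block, past_ref)
    backward = sad(block, future_ref)
    # Bidirectional prediction: average the past and future references.
    bidir_pred = [(p + f) / 2 for p, f in zip(past_ref, future_ref)]
    bidirectional = sad(block, bidir_pred)
    return forward, backward, bidirectional

fwd, bwd, bid = prediction_errors([10, 12], [9, 12], [11, 14])
assert (fwd, bwd, bid) == (1, 3, 1.0)
```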
The decider 66 decides the type of interframe prediction in an embedding region in correspondence with the content of the information to be embedded, specifies the reference region which is referred to by the embedding region in correspondence with the decided type of interframe prediction, and furthermore specifies any one of the first, second, and third prediction errors. Thereafter, an encoder 67 encodes the signal output from the decider 66.
Here, the decider 66 is designed so that, for a certain embedding region, when the prediction error of the type of interframe prediction decided based on the embedding rule exceeds a predetermined threshold value, the embedding of data into that embedding region is inhibited. This prevents picture quality from being degraded by the embedding. The decider 66 is also designed so that when the number of references of forward prediction or the number of references of backward prediction in a bidirectionally predicted frame is less than a predetermined number, the embedding of data into any embedding region in that frame is inhibited. By counting the number of references, a scene change can be detected; thus, when a scene change takes place, the embedding of data into the frames related to the change is inhibited. As a consequence, picture quality degradation can be prevented.
(7) Motion Image Decoding System
FIG. 7 is a block diagram of a motion image decoding system employing the present invention. Memory 71 stores encoded motion image data into which additional information has been embedded. A region specifier 72 specifies at least one embedding region in a frame into which additional information has been embedded. An extractor 73 extracts the additional information embedded in the embedding region from the type of interframe prediction in that region by referring to the extraction rule. Then, a decoder 74 decodes the encoded data output from the extractor 73, thereby reconstructing the motion image.
(8) Embedding of Fingerprint
In MPEG, B-frames are not referred to by other frames, so even if the prediction type of a macroblock in a B-frame is changed, the change has no influence on other frames. By utilizing this fact, a fingerprint can be embedded. A fingerprint is specific information that differs for each owner. A typical use of the fingerprint is the case where, when motion image data is issued to a third party, the issuer embeds a mark in the motion image data in advance so that the third party, i.e., the receiving station, can be identified. In this way, the source of a copy can be identified when an illegal act, such as illegal copying, is performed; if the video data is illegally circulated, a fee can be charged for the illegal copy. Also, an encrypted video product can be given the legal owner's registration information, and a fingerprint can be embedded in correspondence with that registration information.
In the case where a fingerprint is embedded, both the "bidirectionally predicted macroblock" and the "forwardly predicted macroblock or backwardly predicted macroblock with the smaller prediction error," which are generated when MPEG encoding is performed, are basically retained. Then, a suitable macroblock is selected in correspondence with the third party which is the receiving station. Even so, there is no influence on other frames or on any data layer (e.g., a slice layer) higher than the macroblock layer of the corresponding frame. According to the present invention, message data can be embedded into a motion image without substantially influencing the compression efficiency of the motion image and without substantially degrading picture quality. In addition, since the message data is embedded in a part essential to the code of the motion image, it is difficult to remove the message data from the motion image without degrading picture quality. Furthermore, since the quantity of information to be embedded is almost independent of the content of the video data, message data can be embedded efficiently.
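The fingerprint scheme above, where both macroblock variants are retained and one is selected per recipient, can be sketched as follows. This is an illustrative sketch; the function name and the representation of retained variants as pairs are assumptions for illustration.

```python
# Illustrative sketch of fingerprint embedding: for each embedding
# region, both prediction-type variants are retained at encoding
# time, and a recipient-specific bit pattern selects between them
# when a copy is issued, yielding a different bit stream per owner.

def fingerprint_stream(variants, recipient_bits):
    """Select, per macroblock, the retained variant matching the bit.

    variants: list of (bit0_macroblock, bit1_macroblock) pairs.
    recipient_bits: the bit pattern identifying the receiving station.
    """
    return [pair[bit] for pair, bit in zip(variants, recipient_bits)]

# Two retained variants per region: bit 0 -> 'P', bit 1 -> 'B'.
variants = [('P', 'B'), ('P', 'B')]
# Recipient with pattern [1, 0] receives B then P; another recipient
# with a different pattern would receive a distinguishable stream.
assert fingerprint_stream(variants, [1, 0]) == ['B', 'P']
assert fingerprint_stream(variants, [0, 1]) == ['P', 'B']
```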
While the invention has been particularly shown and described with respect to preferred embodiments thereof, it will be understood by those skilled in the art that the foregoing and other changes in form and details may be made therein without departing from the spirit and scope of the invention.

Claims (22)

Having thus described our invention, what we claim as new, and desire to secure by Letters Patent is:
1. A data hiding method for embedding information into a motion image constituted by a plurality of frames, comprising the steps of:
specifying at least one embedding region in the frame for embedding information; and
determining a type of interframe prediction of said embedding region in correspondence with information to be embedded by referring to an embedding rule where a content of data to be embedded is caused to correspond to the type of interframe prediction of said embedding region.
2. The data hiding method as set forth in claim 1, wherein said frame in which said embedding region exists is a bidirectionally predictive-coded frame.
3. The data hiding method as set forth in claim 2, wherein said type of interframe prediction is selected from forward prediction, backward prediction, bidirectional prediction, and intraframe coding.
4. The data hiding method as set forth in claim 3, wherein said embedding rule causes one of bit values to correspond to said bidirectional prediction and the other bit value to correspond to said forward prediction or said backward prediction.
5. The data hiding method as set forth in claim 4, wherein said embedding rule further causes data embedding inhibition to correspond to said intraframe coding.
6. The data hiding method as set forth in claim 1, further comprising:
a step of inhibiting embedding of data to a certain embedding region when a prediction error in the type of interframe prediction of said certain embedding region, determined based on said embedding rule, exceeds a predetermined threshold value.
7. The data hiding method as set forth in claim 6, further comprising:
a step of inhibiting embedding of data to said embedding region of said bidirectionally predictive-coded frame when a number of references of said forward prediction or a number of references of said backward prediction in said bidirectionally predictive-coded frame is less than a predetermined number.
8. A data hiding method for embedding information into a motion image constituted by a plurality of frames, comprising the steps of:
counting a number of references of forward prediction or a number of references of backward prediction in a frame having an embedding region for embedding information;
determining characteristics of said respective embedding regions in correspondence with information to be embedded by referring to an embedding rule where a content of data to be embedded is caused to correspond to a characteristic of said embedding region, when said number of references is greater than a predetermined number; and
inhibiting embedding of data to said embedding region of said frame when said number of references is less than said predetermined number.
9. A data hiding method for embedding information into an image, comprising the steps of:
specifying a plurality of embedding regions in said image for embedding the same information; and
determining said respective embedding regions so that they have the same characteristic in correspondence with information to be embedded by referring to an embedding rule where a content of data to be embedded is caused to correspond to a characteristic of said embedding region, and embedding the same data into said respective embedding regions.
10. A data extraction method for extracting information embedded into a motion image, comprising the steps of:
specifying at least one embedding region embedded with information; and
extracting the embedded information from a type of said interframe prediction in the embedding region by referring to an extraction rule where the type of said interframe prediction in said embedding region is caused to correspond to a content of data to be extracted.
11. The data extraction method as set forth in claim 10, wherein said frame in which said embedding region exists is a bidirectionally predictive-coded frame.
12. The data extraction method as set forth in claim 11, wherein the type of said interframe prediction is selected from forward prediction, backward prediction, bidirectional prediction, and intraframe coding.
13. The data extraction method as set forth in claim 12, wherein said extraction rule causes one of bit values to correspond to said bidirectional prediction and the other bit value to correspond to said forward prediction or said backward prediction.
14. The data extraction method as set forth in claim 13, wherein said extraction rule further causes said intraframe coding to correspond to data embedding inhibition.
15. A data extraction method for extracting information with redundancy embedded into an image, comprising the steps of: specifying in said image a plurality of embedding regions embedded with the same data bit;
extracting data bits embedded in said respective embedding regions from characteristics of said respective embedding regions by referring to an extraction rule where a characteristic of said embedding region is caused to correspond to a data bit to be extracted;
when different data bits are extracted from said respective embedding regions, comparing a number of said embedding regions for each of the extracted different data bits, and specifying the data bit with a greater number, as embedded information; and
thereby extracting said information with redundancy embedded into an image.
16. A data extraction method for extracting information embedded into an image, comprising the steps of:
specifying in said image a plurality of embedding regions, embedded with a data bit of 1; and
extracting the embedded data bit of 1, based on characteristics of the specified plurality of embedding regions, by referring to an extraction rule where a characteristic of said embedding region is caused to correspond to a data bit to be extracted.
17. A motion image coding system for embedding information into an image which is constituted by a plurality of frames and which employs interframe prediction, comprising:
an error calculator for calculating a first prediction error, based on both an embedding region specified in a first frame for embedding information and a reference region in a second frame which is referred to by said embedding region, by employing forward prediction, also calculating a second prediction error, based on both said embedding region and a reference region in a third frame which is referred to by the embedding region, by employing backward prediction, and furthermore calculating a third prediction error, based on said embedding region and reference regions in the second and third frames which are referred to by said embedding region, by employing bidirectional prediction; and a decider for deciding a type of interframe prediction in said embedding region in correspondence with a content of information to be embedded by referring to an embedding rule which prescribes that when one data bit is embedded in said embedding region, said type of interframe prediction in said embedding region employs either said forward prediction or said backward prediction and which also prescribes that when another data bit is embedded in said embedding region, the type of said interframe prediction employs said bidirectional prediction, and for specifying any one of the first, the second, or the third prediction error in correspondence with the decided type of interframe prediction.
18. The motion image coding system as set forth in claim 17, wherein said decider includes first inhibition means which inhibits embedding of data to a certain embedding region when a prediction error in the type of interframe prediction of said certain embedding region, determined based on said embedding rule, exceeds a predetermined threshold value.
19. The motion image coding system as set forth in claim 18, wherein said decider includes second inhibition means which inhibits embedding of data to said embedding region of said bidirectionally predictive-coded frame when a number of references of said forward prediction or a number of references of said backward prediction in said bidirectionally predictive-coded frame is less than a predetermined number.
20. A motion image decoding system for extracting information embedded into a coded motion image, comprising:
a specifier for specifying at least one embedding region embedded with information; and
an extractor for extracting said embedded information from a type of said interframe prediction in the embedding region by referring to an extraction rule where the type of said interframe prediction in said embedding region is caused to correspond to a content of data to be embedded.
21. A program storage medium for executing a data hiding process, which embeds information into a motion image constituted by a plurality of frames, by a computer, the program storage medium having the steps of:
specifying in a frame at least one embedding region into which information is embedded; and
deciding a type of interframe prediction in said embedding region in correspondence with information to be embedded by referring to an embedding rule where a content of data to be embedded is caused to correspond to the type of interframe prediction in said embedding region.
22. A program storage medium for executing a data extracting process, which extracts information embedded into a coded motion image, by a computer, the program storage medium having the steps of:
specifying in a frame at least one embedding region embedded with said information; and
extracting the embedded information in correspondence with a type of interframe prediction in said embedding region by referring to an extraction rule where said type of interframe prediction in said embedding region is caused to correspond to a content of data to be extracted.
US08/922,701 1996-10-15 1997-09-02 Data hiding and extraction methods Expired - Lifetime US6005643A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP27272196 1996-10-15
JP8-272721 1996-10-15

Publications (1)

Publication Number Publication Date
US6005643A true US6005643A (en) 1999-12-21

Family

ID=17517865

Family Applications (1)

Application Number Title Priority Date Filing Date
US08/922,701 Expired - Lifetime US6005643A (en) 1996-10-15 1997-09-02 Data hiding and extraction methods

Country Status (17)

Country Link
US (1) US6005643A (en)
EP (1) EP0935392B1 (en)
JP (1) JP3315413B2 (en)
KR (1) KR100368352B1 (en)
CN (1) CN1115876C (en)
AT (1) ATE385649T1 (en)
CA (1) CA2264625C (en)
CZ (1) CZ290838B6 (en)
DE (1) DE69738502T2 (en)
ES (1) ES2297848T3 (en)
HK (1) HK1022066A1 (en)
HU (1) HU223895B1 (en)
MY (1) MY122045A (en)
PL (3) PL183090B1 (en)
RU (1) RU2181930C2 (en)
TW (1) TW312770B (en)
WO (1) WO1998017061A1 (en)

Cited By (62)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6128411A (en) * 1998-08-25 2000-10-03 Xerox Corporation Method for embedding one or more digital images within another digital image
US6226041B1 (en) * 1998-07-28 2001-05-01 Sarnoff Corporation Logo insertion using only disposable frames
WO2001082217A1 (en) * 2000-04-25 2001-11-01 Hewlett-Packard Company Image sequence compression featuring independently coded regions
US20010043750A1 (en) * 2000-04-04 2001-11-22 Tetsujiro Kondo Embedded coding unit and embedded coding method, decoding unit and decoding method, and storage medium
WO2002011036A1 (en) * 2000-08-01 2002-02-07 Driessen James L Retail point of sale (rpos) apparatus for internet merchandising
US20020023058A1 (en) * 2000-05-18 2002-02-21 Masayuki Taniguchi System and method for distributing digital content
US20020037051A1 (en) * 2000-09-25 2002-03-28 Yuuji Takenaka Image control apparatus
US20020048282A1 (en) * 1997-09-02 2002-04-25 Osamu Kawamae Data transmission method for embedded data, data transmitting and reproducing apparatuses and information recording medium therefor
US20020076083A1 (en) * 2000-09-11 2002-06-20 Levy Kenneth L. Time and object based masking for video watermarking
US6434322B1 (en) 1997-09-17 2002-08-13 Hitachi, Ltd. Reproducing method and apparatus, for a video signal having copy control information
US6441859B2 (en) * 1997-01-20 2002-08-27 Sony Corporation Image signal transmitting method, superimposed signal extracting method, image signal output apparatus, image signal receiving apparatus and image signal recording medium
US20020138736A1 (en) * 2001-01-22 2002-09-26 Marc Morin Method and system for digitally signing MPEG streams
US20030001757A1 (en) * 2000-10-19 2003-01-02 Tetsujiro Kondo Data processing device
US20030053657A1 (en) * 2001-09-13 2003-03-20 Canon Kabushiki Kaisha Insertion of a message in a sequence of digital images
US20030064258A1 (en) * 2001-09-28 2003-04-03 Pan Alfred I-Tsung Fuel additives for fuel cell
US6584210B1 (en) 1998-03-27 2003-06-24 Hitachi, Ltd. Digital watermark image processing method
US6728408B1 (en) 1997-09-03 2004-04-27 Hitachi, Ltd. Water-mark embedding method and system
US6771193B2 (en) 2002-08-22 2004-08-03 International Business Machines Corporation System and methods for embedding additional data in compressed data streams
US6798893B1 (en) * 1999-08-20 2004-09-28 Nec Corporation Digital watermarking technique
US6804372B1 (en) * 1998-10-07 2004-10-12 Sony Corporation Coding apparatus and method, decoding apparatus and method, data processing system, storage medium, and signal
US20040236716A1 (en) * 2001-06-12 2004-11-25 Carro Fernando Incerits Methods of invisibly embedding and hiding data into soft-copy text documents
US6826291B2 (en) 1997-09-03 2004-11-30 Hitachi, Ltd. Method and system for embedding information into contents
US20050008192A1 (en) * 1999-06-08 2005-01-13 Tetsujiro Kondo Image processing apparatus, image processing method, and storage medium
US6965697B1 (en) 1998-07-15 2005-11-15 Sony Corporation Coding apparatus and method, decoding apparatus and method, data processing system, storage medium, and signal
US6970510B1 (en) 2000-04-25 2005-11-29 Wee Susie J Method for downstream editing of compressed video
US6973130B1 (en) 2000-04-25 2005-12-06 Wee Susie J Compressed video signal including information for independently coded regions
US20060015464A1 (en) * 2004-07-16 2006-01-19 Dewolde Jeffrey H Program encoding and counterfeit tracking system and method
US20060188129A1 (en) * 2001-03-28 2006-08-24 Mayboroda A L Method of embedding watermark into digital image
US20060193492A1 (en) * 2001-02-21 2006-08-31 Kuzmich Vsevolod M Proprietary watermark system for secure digital media and content distribution
US20070092103A1 (en) * 2005-10-21 2007-04-26 Microsoft Corporation Video fingerprinting using watermarks
US20070100707A1 (en) * 2005-10-31 2007-05-03 Vibme Llc Scart-card (secure consumer advantaged retail trading)
US20070150447A1 (en) * 2005-12-23 2007-06-28 Anish Shah Techniques for generic data extraction
US20070271469A1 (en) * 2001-05-11 2007-11-22 Lg Elextronics Inc. Copy protection method and system for digital media
US20080074277A1 (en) * 2006-09-27 2008-03-27 Mona Singh Methods, systems, and computer program products for presenting a message on a display based on a display based on video frame types presented on the display
US20080098022A1 (en) * 2006-10-18 2008-04-24 Vestergaard Steven Erik Methods for watermarking media data
US20080275763A1 (en) * 2007-05-03 2008-11-06 Thai Tran Monetization of Digital Content Contributions
CN100452884C (en) * 2005-07-14 2009-01-14 上海交通大学 Method for detecting GIF infomration hidden
US20090129625A1 (en) * 2007-11-21 2009-05-21 Ali Zandifar Extracting Data From Images
US20090177891A1 (en) * 2001-06-12 2009-07-09 Fernando Incertis Carro Method and system for invisibly embedding into a text document the license identification of the generating licensed software
US7567721B2 (en) 2002-01-22 2009-07-28 Digimarc Corporation Digital watermarking of low bit rate video
US7577841B2 (en) 2002-08-15 2009-08-18 Digimarc Corporation Watermark placement in watermarking of time varying media signals
US20090232214A1 (en) * 1999-11-26 2009-09-17 British Telecommunications Plc Video coding and decoding
US7751683B1 (en) * 2000-11-10 2010-07-06 International Business Machines Corporation Scene change marking for thumbnail extraction
US7949147B2 (en) 1997-08-26 2011-05-24 Digimarc Corporation Watermarking compressed data
US8094872B1 (en) * 2007-05-09 2012-01-10 Google Inc. Three-dimensional wavelet based video fingerprinting
US8443101B1 (en) * 2005-05-24 2013-05-14 The United States Of America As Represented By The Secretary Of The Navy Method for identifying and blocking embedded communications
US8584182B2 (en) 2000-01-27 2013-11-12 Time Warner Cable Enterprises Llc System and method for providing broadcast programming, a virtual VCR, and a video scrapbook to programming subscribers
CN104125467A (en) * 2014-08-01 2014-10-29 郑州师范学院 Embedding and extracting methods for video steganography information
US8938763B2 (en) 2007-02-28 2015-01-20 Time Warner Cable Enterprises Llc Personal content server apparatus and methods
USRE45406E1 (en) * 2003-09-08 2015-03-03 Deluxe Laboratories, Inc. Program encoding and counterfeit tracking system and method
US9021535B2 (en) 2006-06-13 2015-04-28 Time Warner Cable Enterprises Llc Methods and apparatus for providing virtual content over a network
US9135674B1 (en) 2007-06-19 2015-09-15 Google Inc. Endpoint based video fingerprinting
US9325710B2 (en) 2006-05-24 2016-04-26 Time Warner Cable Enterprises Llc Personal content server apparatus and methods
US9336367B2 (en) 2006-11-03 2016-05-10 Google Inc. Site directed management of audio components of uploaded video files
US9386327B2 (en) 2006-05-24 2016-07-05 Time Warner Cable Enterprises Llc Secondary content insertion apparatus and methods
US9503691B2 (en) 2008-02-19 2016-11-22 Time Warner Cable Enterprises Llc Methods and apparatus for enhanced advertising and promotional delivery in a network
US20170124379A1 (en) * 2015-10-28 2017-05-04 Xiaomi Inc. Fingerprint recognition method and apparatus
US10304052B2 (en) 2000-06-30 2019-05-28 James Leonard Driessen Retail point of sale (RPOS) apparatus for internet merchandising
US10412429B1 (en) * 2015-09-25 2019-09-10 Amazon Technologies, Inc. Predictive transmitting of video stream data
US10778867B1 (en) * 2016-03-23 2020-09-15 Amazon Technologies, Inc. Steganographic camera communication
US10943030B2 (en) 2008-12-15 2021-03-09 Ibailbonding.Com Securable independent electronic document
US11076203B2 (en) 2013-03-12 2021-07-27 Time Warner Cable Enterprises Llc Methods and apparatus for providing and uploading content to personalized network storage

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3601566B2 (en) * 1996-12-18 2004-12-15 日本電信電話株式会社 Information multiplexing method and copyright protection system
US6373530B1 (en) * 1998-07-31 2002-04-16 Sarnoff Corporation Logo insertion based on constrained encoding
GB2347295A (en) * 1999-02-11 2000-08-30 Central Research Lab Ltd Encoding and decoding of watermarks into moving images using edge detection
JP2002539487A (en) * 1999-03-10 2002-11-19 ディジマーク コーポレイション Signal processing method and apparatus
KR20010044804A (en) * 2001-03-27 2001-06-05 왕성현 A method saving and restoring secret data into/from graphic files
CN1913633B (en) * 2001-11-06 2011-06-01 松下电器产业株式会社 Moving picture decoding method
JP4330346B2 (en) * 2002-02-04 2009-09-16 富士通株式会社 Data embedding / extraction method and apparatus and system for speech code
US20030158730A1 (en) * 2002-02-04 2003-08-21 Yasuji Ota Method and apparatus for embedding data in and extracting data from voice code
JP4225752B2 (en) 2002-08-13 2009-02-18 富士通株式会社 Data embedding device, data retrieval device
US6856551B2 (en) * 2003-02-06 2005-02-15 Sandisk Corporation System and method for programming cells in non-volatile integrated memory devices
KR101141897B1 (en) 2004-10-25 2012-05-03 성균관대학교산학협력단 Encoding/Decoding Method for Data Hiding And Encoder/Decoder using the method
JP4762938B2 (en) * 2007-03-06 2011-08-31 三菱電機株式会社 Data embedding device, data extracting device, data embedding method, and data extracting method
KR101006864B1 (en) * 2008-10-15 2011-01-12 고려대학교 산학협력단 Lossless data compression method using data hiding
JP5394212B2 (en) * 2008-12-19 2014-01-22 トムソン ライセンシング How to insert data, how to read the inserted data
CN102223540B (en) * 2011-07-01 2012-12-05 宁波大学 Information hiding method facing to H.264/AVC (automatic volume control) video
CN110430432B (en) 2014-01-03 2022-12-20 庆熙大学校产学协力团 Method and apparatus for deriving motion information between time points of sub-prediction units

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08159330A (en) * 1994-12-01 1996-06-21 Kubota Corp Valve disc open/close device for slide valve
US5689587A (en) * 1996-02-09 1997-11-18 Massachusetts Institute Of Technology Method and apparatus for data hiding in images
US5721788A (en) * 1992-07-31 1998-02-24 Corbis Corporation Method and system for digital image signatures
US5768426A (en) * 1993-11-18 1998-06-16 Digimarc Corporation Graphics processing system employing embedded code signals

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5565921A (en) * 1993-03-16 1996-10-15 Olympus Optical Co., Ltd. Motion-adaptive image signal processing system
JPH07203458A (en) * 1993-12-27 1995-08-04 Olympus Optical Co Ltd Dynamic image coding device
EP0651574B1 (en) * 1993-03-24 2001-08-22 Sony Corporation Method and apparatus for coding/decoding motion vector, and method and apparatus for coding/decoding image signal
JP3729421B2 (en) * 1994-03-18 2005-12-21 富士通株式会社 Unauthorized use prevention method and unauthorized use prevention system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5721788A (en) * 1992-07-31 1998-02-24 Corbis Corporation Method and system for digital image signatures
US5768426A (en) * 1993-11-18 1998-06-16 Digimarc Corporation Graphics processing system employing embedded code signals
JPH08159330A (en) * 1994-12-01 1996-06-21 Kubota Corp Valve disc open/close device for slide valve
US5689587A (en) * 1996-02-09 1997-11-18 Massachusetts Institute Of Technology Method and apparatus for data hiding in images

Cited By (116)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6441859B2 (en) * 1997-01-20 2002-08-27 Sony Corporation Image signal transmitting method, superimposed signal extracting method, image signal output apparatus, image signal receiving apparatus and image signal recording medium
US7949147B2 (en) 1997-08-26 2011-05-24 Digimarc Corporation Watermarking compressed data
US7248607B2 (en) * 1997-09-02 2007-07-24 Hitachi, Ltd. Data transmission method for embedded data, data transmitting and reproducing apparatuses and information recording medium therefor
US7317738B2 (en) * 1997-09-02 2008-01-08 Hitachi, Ltd. Data transmission method for embedded data, data transmitting and reproducing apparatuses and information recording medium therefor
US20020048282A1 (en) * 1997-09-02 2002-04-25 Osamu Kawamae Data transmission method for embedded data, data transmitting and reproducing apparatuses and information recording medium therefor
US20020049953A1 (en) * 1997-09-02 2002-04-25 Osamu Kawamae Data transmission method for embedded data, data transmitting and reproducing apparatuses and information recording medium therefor
US6404781B1 (en) * 1997-09-02 2002-06-11 Hitachi, Ltd. Data transmission method for embedded data, data transmitting and reproducing apparatuses and information recording medium therefor
US6826291B2 (en) 1997-09-03 2004-11-30 Hitachi, Ltd. Method and system for embedding information into contents
US6728408B1 (en) 1997-09-03 2004-04-27 Hitachi, Ltd. Water-mark embedding method and system
US6434322B1 (en) 1997-09-17 2002-08-13 Hitachi, Ltd. Reproducing method and apparatus, for a video signal having copy control information
US6584210B1 (en) 1998-03-27 2003-06-24 Hitachi, Ltd. Digital watermark image processing method
US20050265443A1 (en) * 1998-07-15 2005-12-01 Tetsujiro Kondo Coding apparatus and method, decoding apparatus and method, data processing system, storage medium, and signal
US7738711B2 (en) 1998-07-15 2010-06-15 Sony Corporation Coding apparatus and method, decoding apparatus and method, data processing system, storage medium, and signal
US6965697B1 (en) 1998-07-15 2005-11-15 Sony Corporation Coding apparatus and method, decoding apparatus and method, data processing system, storage medium, and signal
US6226041B1 (en) * 1998-07-28 2001-05-01 Sarnoff Corporation Logo insertion using only disposable frames
US6128411A (en) * 1998-08-25 2000-10-03 Xerox Corporation Method for embedding one or more digital images within another digital image
US7424130B2 (en) 1998-10-07 2008-09-09 Sony Corporation Coding apparatus and method, decoding apparatus and method, data processing system, storage medium, and signal
US6804372B1 (en) * 1998-10-07 2004-10-12 Sony Corporation Coding apparatus and method, decoding apparatus and method, data processing system, storage medium, and signal
US7606431B2 (en) * 1999-06-08 2009-10-20 Sony Corporation Image processing apparatus, image processing method, and storage medium
US20050008192A1 (en) * 1999-06-08 2005-01-13 Tetsujiro Kondo Image processing apparatus, image processing method, and storage medium
US7092546B2 (en) 1999-08-20 2006-08-15 Nec Corporation Digital watermarking technique
US6798893B1 (en) * 1999-08-20 2004-09-28 Nec Corporation Digital watermarking technique
US20040202350A1 (en) * 1999-08-20 2004-10-14 Nec Corporation Digital watermarking technique
US20050013436A1 (en) * 1999-08-20 2005-01-20 Nec Corporation Digital watermarking technique
US8149916B2 (en) * 1999-11-26 2012-04-03 British Telecommunications Plc Video coding and decoding
US20090232214A1 (en) * 1999-11-26 2009-09-17 British Telecommunications Plc Video coding and decoding
US8584182B2 (en) 2000-01-27 2013-11-12 Time Warner Cable Enterprises Llc System and method for providing broadcast programming, a virtual VCR, and a video scrapbook to programming subscribers
US6975770B2 (en) * 2000-04-04 2005-12-13 Sony Corporation Image compression and decompression with predictor selection based on embedding data
US20010043750A1 (en) * 2000-04-04 2001-11-22 Tetsujiro Kondo Embedded coding unit and embedded coding method, decoding unit and decoding method, and storage medium
WO2001082217A1 (en) * 2000-04-25 2001-11-01 Hewlett-Packard Company Image sequence compression featuring independently coded regions
US6970510B1 (en) 2000-04-25 2005-11-29 Wee Susie J Method for downstream editing of compressed video
US6973130B1 (en) 2000-04-25 2005-12-06 Wee Susie J Compressed video signal including information for independently coded regions
US6553150B1 (en) 2000-04-25 2003-04-22 Hewlett-Packard Development Co., Lp Image sequence compression featuring independently coded regions
US6978020B2 (en) * 2000-05-18 2005-12-20 Oki Electric Industry Co., Ltd. System and method for distributing digital content
US20020023058A1 (en) * 2000-05-18 2002-02-21 Masayuki Taniguchi System and method for distributing digital content
US7636695B2 (en) 2000-06-30 2009-12-22 James Leonard Driessen Retail point of sale (RPOS) apparatus for internet merchandising
US10304052B2 (en) 2000-06-30 2019-05-28 James Leonard Driessen Retail point of sale (RPOS) apparatus for internet merchandising
WO2002011036A1 (en) * 2000-08-01 2002-02-07 Driessen James L Retail point of sale (rpos) apparatus for internet merchandising
US7003500B1 (en) 2000-08-01 2006-02-21 James Leonard Driessen Retail point of sale (RPOS) apparatus for internet merchandising
US6961444B2 (en) * 2000-09-11 2005-11-01 Digimarc Corporation Time and object based masking for video watermarking
US20020076083A1 (en) * 2000-09-11 2002-06-20 Levy Kenneth L. Time and object based masking for video watermarking
US7197164B2 (en) 2000-09-11 2007-03-27 Digimarc Corporation Time-varying video watermark
US20020037051A1 (en) * 2000-09-25 2002-03-28 Yuuji Takenaka Image control apparatus
US6914937B2 (en) * 2000-09-25 2005-07-05 Fujitsu Limited Image control apparatus
US20030001757A1 (en) * 2000-10-19 2003-01-02 Tetsujiro Kondo Data processing device
US6859155B2 (en) 2000-10-19 2005-02-22 Sony Corporation Data processing device
US7751683B1 (en) * 2000-11-10 2010-07-06 International Business Machines Corporation Scene change marking for thumbnail extraction
US20020138736A1 (en) * 2001-01-22 2002-09-26 Marc Morin Method and system for digitally signing MPEG streams
US7058815B2 (en) 2001-01-22 2006-06-06 Cisco Technology, Inc. Method and system for digitally signing MPEG streams
US7760904B2 (en) 2001-02-21 2010-07-20 Lg Electronics Inc. Proprietary watermark system for secure digital media and content distribution
US20060193492A1 (en) * 2001-02-21 2006-08-31 Kuzmich Vsevolod M Proprietary watermark system for secure digital media and content distribution
US20060188129A1 (en) * 2001-03-28 2006-08-24 Mayboroda A L Method of embedding watermark into digital image
US7697717B2 (en) * 2001-03-28 2010-04-13 Lg Electronics Inc. Method of embedding watermark into digital image
US20070271469A1 (en) * 2001-05-11 2007-11-22 Lg Elextronics Inc. Copy protection method and system for digital media
US7877813B2 (en) 2001-05-11 2011-01-25 Lg Electronics Inc. Copy protection method and system for digital media
US20040236716A1 (en) * 2001-06-12 2004-11-25 Carro Fernando Incerits Methods of invisibly embedding and hiding data into soft-copy text documents
US7240209B2 (en) 2001-06-12 2007-07-03 International Business Machines Corporation Methods of invisibly embedding and hiding data into soft-copy text documents
US7913313B2 (en) 2001-06-12 2011-03-22 International Business Machines Corporation Method and system for invisibly embedding into a text document the license identification of the generating licensed software
US20090177891A1 (en) * 2001-06-12 2009-07-09 Fernando Incertis Carro Method and system for invisibly embedding into a text document the license identification of the generating licensed software
US7386146B2 (en) * 2001-09-13 2008-06-10 Canon Kabushiki Kaisha Insertion of a message in a sequence of digital images
US20030053657A1 (en) * 2001-09-13 2003-03-20 Canon Kabushiki Kaisha Insertion of a message in a sequence of digital images
US20030064258A1 (en) * 2001-09-28 2003-04-03 Pan Alfred I-Tsung Fuel additives for fuel cell
US7567721B2 (en) 2002-01-22 2009-07-28 Digimarc Corporation Digital watermarking of low bit rate video
US8638978B2 (en) 2002-01-22 2014-01-28 Digimarc Corporation Digital watermarking of low bit rate video
US7577841B2 (en) 2002-08-15 2009-08-18 Digimarc Corporation Watermark placement in watermarking of time varying media signals
US6771193B2 (en) 2002-08-22 2004-08-03 International Business Machines Corporation System and methods for embedding additional data in compressed data streams
USRE45406E1 (en) * 2003-09-08 2015-03-03 Deluxe Laboratories, Inc. Program encoding and counterfeit tracking system and method
US20060015464A1 (en) * 2004-07-16 2006-01-19 Dewolde Jeffrey H Program encoding and counterfeit tracking system and method
USRE46918E1 (en) * 2004-07-16 2018-06-26 Deluxe Laboratories Llc Program encoding and counterfeit tracking system and method
US7818257B2 (en) * 2004-07-16 2010-10-19 Deluxe Laboratories, Inc. Program encoding and counterfeit tracking system and method
US8443101B1 (en) * 2005-05-24 2013-05-14 The United States Of America As Represented By The Secretary Of The Navy Method for identifying and blocking embedded communications
CN100452884C (en) * 2005-07-14 2009-01-14 上海交通大学 Method for detecting GIF infomration hidden
US20100202652A1 (en) * 2005-10-21 2010-08-12 Mircosoft Corporation Video Fingerprinting Using Watermarks
US7912244B2 (en) 2005-10-21 2011-03-22 Microsoft Corporation Video fingerprinting using watermarks
US7702127B2 (en) * 2005-10-21 2010-04-20 Microsoft Corporation Video fingerprinting using complexity-regularized video watermarking by statistics quantization
US20070092103A1 (en) * 2005-10-21 2007-04-26 Microsoft Corporation Video fingerprinting using watermarks
US20070100707A1 (en) * 2005-10-31 2007-05-03 Vibme Llc Scart-card (secure consumer advantaged retail trading)
US7742993B2 (en) 2005-10-31 2010-06-22 James Leonard Driessen SCART-card (secure consumer advantaged retail trading)
US20070150447A1 (en) * 2005-12-23 2007-06-28 Anish Shah Techniques for generic data extraction
US7860903B2 (en) 2005-12-23 2010-12-28 Teradata Us, Inc. Techniques for generic data extraction
US11082723B2 (en) 2006-05-24 2021-08-03 Time Warner Cable Enterprises Llc Secondary content insertion apparatus and methods
US9325710B2 (en) 2006-05-24 2016-04-26 Time Warner Cable Enterprises Llc Personal content server apparatus and methods
US9386327B2 (en) 2006-05-24 2016-07-05 Time Warner Cable Enterprises Llc Secondary content insertion apparatus and methods
US9832246B2 (en) 2006-05-24 2017-11-28 Time Warner Cable Enterprises Llc Personal content server apparatus and methods
US10623462B2 (en) 2006-05-24 2020-04-14 Time Warner Cable Enterprises Llc Personal content server apparatus and methods
US9021535B2 (en) 2006-06-13 2015-04-28 Time Warner Cable Enterprises Llc Methods and apparatus for providing virtual content over a network
US10129576B2 (en) 2006-06-13 2018-11-13 Time Warner Cable Enterprises Llc Methods and apparatus for providing virtual content over a network
US11388461B2 (en) 2006-06-13 2022-07-12 Time Warner Cable Enterprises Llc Methods and apparatus for providing virtual content over a network
US20080074277A1 (en) * 2006-09-27 2008-03-27 Mona Singh Methods, systems, and computer program products for presenting a message on a display based on a display based on video frame types presented on the display
US8732744B2 (en) 2006-09-27 2014-05-20 Scenera Technologies, Llc Methods, systems, and computer program products for presenting a message on a display based on a type of video image data for presentation on the display
US20110210981A1 (en) * 2006-09-27 2011-09-01 Mona Singh Methods, Systems, And Computer Program Products For Presenting A Message On A Display Based On A Type Of Video Image Data For Presentation On The Display
US7962932B2 (en) 2006-09-27 2011-06-14 Scenera Technologies, Llc Methods, systems, and computer program products for presenting a message on a display based on a display based on video frame types presented on the display
US9679574B2 (en) 2006-10-18 2017-06-13 Destiny Software Productions Inc. Methods for watermarking media data
US20080098022A1 (en) * 2006-10-18 2008-04-24 Vestergaard Steven Erik Methods for watermarking media data
US9165560B2 (en) 2006-10-18 2015-10-20 Destiny Software Productions Inc. Methods for watermarking media data
US7983441B2 (en) 2006-10-18 2011-07-19 Destiny Software Productions Inc. Methods for watermarking media data
US8300885B2 (en) 2006-10-18 2012-10-30 Destiny Software Productions Inc. Methods for watermarking media data
US9336367B2 (en) 2006-11-03 2016-05-10 Google Inc. Site directed management of audio components of uploaded video files
US9769513B2 (en) 2007-02-28 2017-09-19 Time Warner Cable Enterprises Llc Personal content server apparatus and methods
US8938763B2 (en) 2007-02-28 2015-01-20 Time Warner Cable Enterprises Llc Personal content server apparatus and methods
US8924270B2 (en) 2007-05-03 2014-12-30 Google Inc. Monetization of digital content contributions
US10643249B2 (en) 2007-05-03 2020-05-05 Google Llc Categorizing digital content providers
US20080275763A1 (en) * 2007-05-03 2008-11-06 Thai Tran Monetization of Digital Content Contributions
US8094872B1 (en) * 2007-05-09 2012-01-10 Google Inc. Three-dimensional wavelet based video fingerprinting
US9135674B1 (en) 2007-06-19 2015-09-15 Google Inc. Endpoint based video fingerprinting
US20090129625A1 (en) * 2007-11-21 2009-05-21 Ali Zandifar Extracting Data From Images
US8031905B2 (en) * 2007-11-21 2011-10-04 Seiko Epson Corporation Extracting data from images
US9503691B2 (en) 2008-02-19 2016-11-22 Time Warner Cable Enterprises Llc Methods and apparatus for enhanced advertising and promotional delivery in a network
US10943030B2 (en) 2008-12-15 2021-03-09 Ibailbonding.Com Securable independent electronic document
US11076203B2 (en) 2013-03-12 2021-07-27 Time Warner Cable Enterprises Llc Methods and apparatus for providing and uploading content to personalized network storage
CN104125467B (en) * 2014-08-01 2015-06-17 郑州师范学院 Embedding and extracting methods for video steganography information
CN104125467A (en) * 2014-08-01 2014-10-29 郑州师范学院 Embedding and extracting methods for video steganography information
US10412429B1 (en) * 2015-09-25 2019-09-10 Amazon Technologies, Inc. Predictive transmitting of video stream data
US9904840B2 (en) * 2015-10-28 2018-02-27 Xiaomi Inc. Fingerprint recognition method and apparatus
US20170124379A1 (en) * 2015-10-28 2017-05-04 Xiaomi Inc. Fingerprint recognition method and apparatus
US10778867B1 (en) * 2016-03-23 2020-09-15 Amazon Technologies, Inc. Steganographic camera communication

Also Published As

Publication number Publication date
HUP0100650A2 (en) 2001-06-28
CA2264625C (en) 2002-04-02
DE69738502D1 (en) 2008-03-20
HUP0100650A3 (en) 2002-02-28
EP0935392A1 (en) 1999-08-11
ES2297848T3 (en) 2008-05-01
JP3315413B2 (en) 2002-08-19
PL332701A1 (en) 1999-09-27
EP0935392A4 (en) 2001-04-18
CA2264625A1 (en) 1998-04-23
CN1115876C (en) 2003-07-23
CZ290838B6 (en) 2002-10-16
KR100368352B1 (en) 2003-01-24
DE69738502T2 (en) 2009-01-29
KR20000036133A (en) 2000-06-26
TW312770B (en) 1997-08-11
WO1998017061A1 (en) 1998-04-23
EP0935392B1 (en) 2008-02-06
PL183090B1 (en) 2002-05-31
HK1022066A1 (en) 2000-07-21
PL183593B1 (en) 2002-06-28
HU223895B1 (en) 2005-03-29
CN1233371A (en) 1999-10-27
ATE385649T1 (en) 2008-02-15
MY122045A (en) 2006-03-31
CZ128999A3 (en) 1999-08-11
PL183642B1 (en) 2002-06-28
RU2181930C2 (en) 2002-04-27

Similar Documents

Publication Publication Date Title
US6005643A (en) Data hiding and extraction methods
US7159117B2 (en) Electronic watermark data insertion apparatus and electronic watermark data detection apparatus
US6850567B1 (en) Embedding supplemental data in a digital video signal
US6687384B1 (en) Method and apparatus for embedding data in encoded digital bitstreams
EP1139660B1 (en) System for embedding additional information in video data, and embedding method
US20120237078A1 (en) Watermarking and Fingerprinting Digital Content Using Alternative Blocks to Embed Information
US6553070B2 (en) Video-data encoder and recording media wherein a video-data encode program is recorded
US9100654B2 (en) Method and apparatus for inserting video watermark in compression domain
US20020126762A1 (en) Moving-picture data reproducing system
JP2001292428A (en) Data hiding method of moving picture and data extract method
Sherly et al. A novel approach for compressed video steganography
Echizen et al. Use of statistically adaptive accumulation to improve video watermark detection

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MORIMOTO, NORISHIGE;MAEDA, JUNJI;REEL/FRAME:008784/0580

Effective date: 19970807

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

AS Assignment

Owner name: RPX CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:022951/0408

Effective date: 20090619

FPAY Fee payment

Year of fee payment: 12