CN100452028C - Data processing device and method - Google Patents


Publication number
CN100452028C
CN100452028C · CNB2004100566330A · CN200410056633A
Authority
CN
China
Prior art keywords
data
section
media content
data processing
scene
Prior art date
Legal status
Expired - Fee Related
Application number
CNB2004100566330A
Other languages
Chinese (zh)
Other versions
CN1821996A (en)
Inventor
宗续敏彦
荣藤稔
荒木昭一
江村恒一
Current Assignee
Panasonic Holdings Corp
Original Assignee
Matsushita Electric Industrial Co Ltd
Priority date
Filing date
Publication date
Application filed by Matsushita Electric Industrial Co Ltd
Publication of CN1821996A
Application granted
Publication of CN100452028C


Abstract

The context of media content is represented by context description data arranged in a hierarchy. The context description data comprises a highest hierarchical layer, a lowest hierarchical layer, and intermediate hierarchical layers. The highest layer is formed from a single element representing the content as a whole. The lowest layer is formed from elements each representing a segment of the media content, which corresponds to a change between scenes of video data or a change in audible tones. The intermediate layers are formed from elements each representing a scene or a collection of scenes. A score corresponding to the context of the scene of interest is appended, as an attribute, to each element of the intermediate layers; a score relating to context, together with time information about the corresponding media segment, is appended as an attribute to each element of the lowest layer. In a selection step of the data processing method, one or more scenes of the media content are selected on the basis of the scores in the context description data. In an extraction step of the data processing method, only data pertaining to the scenes selected in the selection step are extracted.
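As a concrete illustration, the hierarchy described in the abstract could be encoded in XML along the following lines. This is a hypothetical sketch: the element names `contents`, `section`, and `segment`, the attribute names, and the time format are illustrative and not taken from the patent itself.

```xml
<!-- Highest layer: one element for the whole content.
     Intermediate layers: scenes / scene collections, each with a score.
     Lowest layer: media segments with score plus time information. -->
<contents title="News Program">
  <section score="60">
    <section score="80">
      <segment score="70" start="00:00:10" end="00:00:25"/>
      <segment score="90" start="00:00:25" end="00:00:40"/>
    </section>
    <section score="40">
      <segment score="40" start="00:00:40" end="00:01:05"/>
    </section>
  </section>
</contents>
```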

Description

Data processing device and method
This application is a divisional of Chinese patent application No. 99122988.6, filed December 25, 1999, entitled "Data processing device and method, medium, and program for executing the method."
Technical field
The present invention relates to a media-content data processing device, a data processing method, a medium, and a program, all of which concern the viewing, playback, and transmission of continuous audio-visual data (media content) such as moving images, video programs, or audio programs, in which only a summary of the media content, its highlight scenes, or the scenes the viewer desires are played back or transmitted.
Background art
Traditionally, media content has been played back, transmitted, or stored on the basis of a single file holding the media content.
According to the method described in Japanese Unexamined Patent Publication No. Hei-10-111872 for extracting a particular scene from a moving image, changes between scenes of the moving image (hereinafter "scene cuts") are detected, and additional data, such as the time code of the start frame, the time code of the end frame, and a keyword for the scene, is appended to each scene cut.
As an alternative method, Carnegie Mellon University (CMU) has attempted to summarize a moving image by detecting scene cuts, detecting faces and captions, and indexing key phrases through speech recognition [Michael A. Smith and Takeo Kanade, "Video Skimming and Characterization through the Combination of Image and Language Understanding Techniques," CMU-CS-97-111, February 3, 1997].
When a moving image is played back on a per-file basis, viewing only a summary of the moving image is impossible. Moreover, even when a highlight scene or scenes desired by the user are to be extracted, those scenes must be searched for from the head of the media content. Further, when a moving image is transmitted, the entire data set of the file must be transmitted, which requires a long transmission time.
With the method of Japanese Unexamined Patent Publication No. Hei-10-111872, scenes can be extracted by using a keyword that helps identify the scenes the user desires. The additional data, however, does not include the relations or connections between scenes, so the method has great difficulty extracting, for example, a sub-plot of a story. Further, when scenes are extracted on the basis of a keyword alone, the user has difficulty gaining the sense of context that is essential to understanding a scene, so preparing a summary or the highlight scenes becomes very difficult.
The method developed by CMU can summarize a moving image, but it yields only a single, fixed summary. It is therefore difficult to summarize a moving image into summaries of different playback times, say a three-minute or a five-minute summary. It is likewise difficult to produce a summary matching the user's wishes, for example one selecting the scenes that contain a specific character.
Summary of the invention
An object of the present invention is to provide a device that can select, play back, and transmit, within the playback time of the media content, only a summary, the highlight scenes, or the scenes the viewer desires.
Another object of the present invention is to provide a device that can play back a summary, the highlight scenes, or the scenes the viewer desires within a time period the user desires, at a time of the user's choosing.
A further object of the present invention is to provide a device that, when the user so requests during transmission of media content, transmits only the summary, the highlight scenes, or the set of scenes the user desires, within the time period the user desires.
A further object of the present invention is to provide a device that controls the amount of data to be transmitted in accordance with the congestion of the line over which the user communicates with the server.
To solve the problems of the prior art, according to one aspect of the present invention there is provided a data processing device comprising: an input unit for inputting content description data, the content description data describing a plurality of segments and attribute information of the segments, each segment representing a scene of media content composed of a plurality of scenes, the attribute information including time information representing scene boundaries and a score based on the context of the media content, the score representing the significance of the segment; a selection unit for selecting segments according to the score and the time information; a content input unit for inputting the media content; and an extraction unit for extracting, from the input media content, the portions corresponding to the time information associated with the selected segments.
The data processing device of the present invention may further comprise a storage unit that stores the content description data and the media content.
In the data processing device of the present invention, the content description data may include link information to the media content, and the extraction unit extracts the media content at the link destination.
In the data processing device of the present invention, the time information may include the start time and the end time of each scene.
In the data processing device of the present invention, the time information may include the start time and the duration of each scene.
In the data processing device of the present invention, the segments may be described hierarchically.
In the data processing device of the present invention, the selection unit may select those segments whose assigned score is greater than a predetermined threshold.
In the data processing device of the present invention, the selection unit may select segments in descending order of score so that the total duration of the selected segments is a maximum while remaining below a predetermined threshold.
In the data processing device of the present invention, the selection unit may select segments in descending order of score so that the total duration of the selected segments is approximately equal to a predetermined threshold.
The data processing device of the present invention may further comprise a playback unit that plays back the extracted media content.
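The two threshold-based selection rules described above can be sketched in Python. This is a minimal illustration under stated assumptions: the `Segment` class and the function names are inventions for the sketch, not from the patent, and the duration-budget rule is implemented as a simple greedy pass over segments in descending score order.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float     # seconds from the head of the content
    duration: float  # seconds
    score: int       # context-based significance

def select_by_threshold(segments, threshold):
    """Keep every segment whose score exceeds the threshold."""
    return [s for s in segments if s.score > threshold]

def select_within_budget(segments, max_total):
    """Greedily take the highest-scoring segments while the summed
    duration stays at or below max_total (one reading of 'a maximum
    while remaining below a predetermined threshold')."""
    chosen, total = [], 0.0
    for s in sorted(segments, key=lambda s: s.score, reverse=True):
        if total + s.duration <= max_total:
            chosen.append(s)
            total += s.duration
    # return the summary in playback (time) order
    return sorted(chosen, key=lambda s: s.start)
```

With this encoding, a three-minute or five-minute summary of the same content is obtained just by changing `max_total`, which is exactly the flexibility the fixed-summary prior art lacked.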
According to another aspect of the present invention, there is provided a data processing device comprising: an input unit for inputting content description data, the content description data describing a plurality of segments and attribute information of the segments, each segment representing a scene of media content composed of a plurality of scenes, the attribute information including time information representing scene boundaries, a viewpoint expressed by at least one keyword describing the scene, and a score representing the significance of each segment from that viewpoint; a selection unit for selecting segments according to the time information and at least one of the viewpoint and the score; a content input unit for inputting the media content; and an extraction unit for extracting, from the input media content, the portions corresponding to the time information associated with the selected segments.
In the data processing device, a plurality of pairs of the viewpoint and the score may be described in the content description data as the attribute information of each segment.
The data processing device may further comprise a playback unit that plays back the extracted media content.
The data processing device may further comprise a storage unit that stores the content description data and the media content.
In the data processing device, the content description data may include link information to the media content, and the extraction unit extracts the media content at the link destination.
In the data processing device, the time information may include the start time and the end time of each scene.
In the data processing device, the time information may include the start time and the duration of each scene.
In the data processing device, the segments may be described hierarchically.
In the data processing device, the selection unit may select those segments whose score for the viewpoint is greater than a predetermined threshold.
In the data processing device, the selection unit may select segments in descending order of the score for the selected viewpoint so that the total duration of the segments is a maximum while remaining below a predetermined threshold.
In the data processing device, the selection condition applied when the selection unit selects the viewpoint and the score may be input from a user profile in which that condition is described.
In the data processing device, the selection unit may select segments according to the result of a logical operation over the scores of two or more of the viewpoints.
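Viewpoint-based selection, including the combination of several viewpoints by a logical operation over their scores, can be pictured as follows. The names, the per-segment score dictionary, and the encoding of the combination as a predicate function are assumptions made for this sketch; the patent does not fix a representation.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float
    end: float
    scores: dict  # viewpoint keyword -> significance score

def select_by_viewpoint(segments, viewpoint, threshold):
    """Segments whose score for one keyword viewpoint exceeds threshold."""
    return [s for s in segments if s.scores.get(viewpoint, 0) > threshold]

def select_by_expression(segments, predicate):
    """Combine two or more viewpoints: keep segments for which the
    logical expression over their per-viewpoint scores holds."""
    return [s for s in segments if predicate(s.scores)]
```

For example, `select_by_expression(segs, lambda sc: sc.get("goal", 0) > 50 and sc.get("team-A", 0) > 30)` would keep only scenes that are important both as goal scenes and with respect to a particular team, mirroring a user-profile condition over plural viewpoints.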
According to an aspect of the present invention, there is provided a data processing method comprising the steps of: inputting content description data, the content description data describing a plurality of segments and attribute information of the segments, each segment representing a scene of media content composed of a plurality of scenes, the attribute information including time information representing scene boundaries and a score based on the context of the media content, the score representing the significance of the segment; selecting segments according to the score and the time information; inputting the media content; and extracting, from the input media content, the portions corresponding to the time information associated with the selected segments.
According to an aspect of the present invention, there is provided a data processing method comprising the steps of: inputting content description data, the content description data describing a plurality of segments and attribute information of the segments, each segment representing a scene of media content composed of a plurality of scenes, the attribute information including time information representing scene boundaries, a viewpoint expressed by at least one keyword describing the scene, and a score, based on the viewpoint, representing the significance of the segment; selecting segments according to the time information and at least one of the viewpoint and the score; inputting the media content; and extracting, from the input media content, the portions corresponding to the time information associated with the selected segments.
According to a further aspect of the present invention, there is provided a data processing device comprising: an input unit for inputting content description data, the content description data describing a plurality of segments and attribute information of the segments, each segment representing a scene of media content composed of a plurality of scenes, the attribute information including a score based on the context of the media content, the score representing the significance of the segment; and a selection unit for selecting at least one segment from the plurality of segments according to the score.
In the above data processing device of the present invention, each segment may be described hierarchically.
In the above data processing device of the present invention, the content description data may include additional information about the context.
In the above data processing device of the present invention, a link destination of representative data for each segment may be appended to that segment.
In the above data processing device of the present invention, the representative data may be video information and/or audio information.
According to a further aspect of the present invention, there is provided a data processing method comprising: an input step of inputting content description data, the content description data describing a plurality of segments and attribute information of the segments, each segment representing a scene of media content composed of a plurality of scenes, the attribute information including a score based on the context of the media content, the score representing the significance of the segment; and a selection step of selecting at least one segment from the plurality of segments according to the score.
The data processing device preferably further comprises a playback unit for playing back, from the media content, only the data corresponding to the segments selected by the selection unit.
The score preferably represents the contextual importance of the media content.
The score preferably represents the contextual significance of a scene of interest from the viewpoint of a keyword, and the selection unit selects a scene by using the score from at least one viewpoint.
The media content preferably corresponds to video data or audio data.
The media content preferably corresponds to data comprising mutually synchronized video data and audio data.
The description data preferably describes the structure of the video data or the audio data.
The description data preferably describes the structure of each of the video data sets and the audio data sets.
The selection unit preferably selects a scene by referring to the description data relating to the video data or the audio data.
The selection unit preferably includes a video selection unit for selecting a video scene by referring to the description data of the video data, or an audio selection unit for selecting an audio scene by referring to the description data of the audio data.
The selection unit preferably includes a video selection unit for selecting a video scene by referring to the description data of the video data, and an audio selection unit for selecting an audio scene by referring to the description data of the audio data.
The data extracted by the extraction unit preferably corresponds to video data or audio data.
The data extracted by the extraction unit preferably corresponds to data comprising mutually synchronized video data and audio data.
The media content preferably includes a plurality of different media data sets arranged within a single time period. The data processing device further comprises a determination unit which receives structure description data describing the data structure of the media content and determines, on the basis of a condition for defining selection targets, which of the media data sets are to be taken as selection targets. The selection unit then selects data, by reference to the structure description data, only from the data sets determined as selection targets by the determination unit.
The media content preferably includes a plurality of different media sets arranged within a single time period, and the determination unit receives the structure description data describing the data structure of the media content and determines which of the video data sets and/or audio data sets are to be taken as selection targets. The selection unit then selects data, by reference to the structure description data, only from the data sets determined as selection targets by the determination unit.
Representative data relating to the respective media segments are preferably appended, as attributes, to the elements of the description data in the lowest hierarchical layer, and the selection unit selects the whole data relating to a media segment and/or the representative data relating to the respective media segment.
The whole data relating to a media segment preferably corresponds to the media data, and the media content preferably includes a plurality of different media data sets arranged within a single time period. The data processing device preferably further comprises a determination unit which receives the structure description data describing the data structure of the media content and determines which of the media data sets and/or representative data sets are to be taken as selection targets; the selection unit selects data, by reference to the structure description data, only from the data sets determined as selection targets by the determination unit.
The determination condition preferably includes at least one of the capability of the receiving terminal, the degree of congestion of the transmission line, the user's request, and the user's taste and interest, or a combination thereof.
The data processing device preferably further comprises a formation unit for forming a data stream of the media content from the data extracted by the extraction unit.
The data processing device preferably further comprises a transmission unit for transmitting, over a line, the data stream formed by the formation unit.
The data processing device preferably further comprises a recording unit for recording the data stream formed by the formation unit onto a data recording medium.
The data processing device preferably further comprises a recording-medium management unit for reorganizing the stored media content and/or newly recorded media content in accordance with the available space of the data recording medium.
The data processing device preferably further comprises a stored-content management unit for reorganizing the media content stored on the data recording medium in accordance with the storage period of the media content.
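The determination unit's role, picking which media data sets become selection targets from conditions such as receiving-terminal capability and line congestion, might look like the following sketch. The boolean condition flags and the dictionary encoding of the structure description are assumptions made for illustration; the patent leaves the concrete determination rule open.

```python
def determine_targets(structure, terminal_video_ok, line_congested):
    """Return the media data sets to be used as selection targets.

    structure: list of dicts describing the media data sets, e.g.
               {"media": "video", "res": "high"} -- a stand-in for
               the patent's structure description data.
    """
    targets = []
    for stream in structure:
        if stream["media"] == "video":
            if not terminal_video_ok:
                continue  # terminal cannot display video at all
            if line_congested and stream.get("res") == "high":
                continue  # drop high-resolution video on a busy line
        targets.append(stream)
    return targets
```

The selection unit would then consult only the returned subset, so an audio-only terminal, for instance, never has video segments selected or transmitted for it.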
The data processing method preferably further comprises a playback step of playing back, from the media content, only the data corresponding to the segments selected in the selection step.
The score preferably represents the contextual importance of the media content.
The score preferably represents the contextual significance of a scene of interest from the viewpoint of a keyword, and in the selection step a scene is selected by using the score from at least one viewpoint.
The media content preferably corresponds to video data or audio data.
The media content preferably corresponds to data comprising mutually synchronized video data and audio data.
The description data preferably describes the structure of the video data or the audio data.
The description data preferably describes the structure of each of the video data sets and the audio data sets.
The media content preferably includes a plurality of different media data sets arranged within a single time period. The data processing method preferably includes a determination step of receiving structure description data describing the data structure of the media content and determining, on the basis of a condition for defining selection targets, which of the media data sets are to be taken as selection targets. In the selection step, data are then selected, by reference to the structure description data, only from the data sets determined as selection targets in the determination step.
The data processing method preferably further comprises a determination step of receiving the structure description data describing the data structure of the media content and determining, on the basis of a condition for defining selection targets, whether only video data, only audio data, or both video and audio data are to be taken as selection targets; in the selection step, data are selected, by reference to the structure description data, only from the data determined as selection targets in the determination step. Where the media content includes a plurality of different media data sets arranged within a single time period, the determination step receives the structure description data and also determines which of the video data sets and/or audio data sets are to be taken as selection targets; in the selection step, data are then selected, by reference to the structure description data, only from the data sets determined as selection targets in the determination step.
Representative data relating to the respective media segments are preferably appended, as attributes, to the elements of the description data in the lowest hierarchical layer; in the selection step, the whole data relating to a media segment and/or the representative data relating to the respective media segment are selected.
The whole data relating to a media segment preferably corresponds to the media data, and the media content preferably includes a plurality of different media data sets arranged within a single time period. The data processing method preferably includes a determination step of receiving the structure description data describing the data structure of the media content and determining which of the media data sets and/or representative data sets are to be taken as selection targets; in the selection step, data are then selected, by reference to the structure description data, only from the data sets determined as selection targets in the determination step.
The determination condition preferably includes at least one of the capability of the receiving terminal, the degree of congestion of the transmission line, the user's request, and the user's taste and interest, or a combination thereof.
The data processing method preferably further comprises a formation step of forming a data stream of the media content from the data extracted in the extraction step.
The data processing method preferably further comprises a transmission step of transmitting, over a line, the data stream formed in the formation step.
The data processing method preferably further comprises a recording step of recording the data stream formed in the formation step onto a data recording medium.
The data processing method preferably further comprises a recording-medium management step of reorganizing the already recorded media content and/or newly recorded media content in accordance with the available space of the data recording medium.
The data processing method preferably further comprises a stored-content management step of reorganizing the media content stored on the data recording medium in accordance with the storage period of the media content.
According to a further aspect of the invention, there is provided a computer-readable recording medium on which the data processing method described above is recorded in a computer-executable form.
In the data processing device, data processing method, recording medium, and program of the present invention, the selection unit (corresponding to the selection step) selects at least one segment from the media content by using the description data obtained by the input unit (corresponding to the input step), whose hierarchy comprises the highest layer, the lowest layer, and the other layers, on the basis of the scores appended as attributes to the lowest layer or to the other layers.
The extraction unit (corresponding to the extraction step) preferably extracts only the data relating to the segments selected by the selection unit (corresponding to the selection step).
The playback unit (corresponding to the playback step) preferably plays back only the data relating to the segments selected by the selection unit (corresponding to the selection step).
Accordingly, any of the more important scenes can be selected from the media content, and the important segments so selected can be extracted or played back. Further, since the hierarchy of the description data comprises the highest layer, the lowest layer, and the other layers, scenes can be selected in arbitrary units, for example per chapter or per section, and various selection styles can be used, such as selecting certain chapters and deleting unneeded paragraphs from them.
In the data processing device, data processing method, recording medium, and program of the present invention, a score represents the contextual significance of the media content. Simply by setting this score so that important scenes are selected, the important scenes of, for example, a program can easily be collected.
Further, by defining a keyword so that the score represents importance from the viewpoint of a scene of interest, a plurality of segments can be selected with great flexibility. For example, once a keyword is defined from a specific viewpoint, such as a particular character or event, only the scenes the user desires can be selected.
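Per-chapter or per-section selection over the hierarchy can be pictured as a tree walk in which a node scoring at or below the threshold prunes its whole subtree. The `Node` structure and function below are a hypothetical sketch of that idea, not the patent's representation.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    score: int
    children: list = field(default_factory=list)
    start: float = 0.0  # meaningful only at the lowest layer
    end: float = 0.0

def collect_segments(node, threshold):
    """Walk the hierarchy; a low-scoring chapter or section prunes
    everything beneath it, so deletion can happen in chapter-sized
    or section-sized units."""
    if node.score <= threshold:
        return []
    if not node.children:  # lowest layer: an actual media segment
        return [(node.start, node.end)]
    out = []
    for child in node.children:
        out.extend(collect_segments(child, threshold))
    return out
```

Note how a chapter scored 30 drops out entirely, even if it contains a high-scoring leaf, which is precisely the "delete unneeded paragraphs from a chapter / drop a whole chapter" flexibility described above.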
In data processing equipment of the present invention, data processing method, recording medium and program, described media content is corresponding to video data and/or voice data, and described description data is described the structure of each sets of video data and/or audio data set.Described video selecting arrangement (selecting step corresponding to described video) is by selecting a scene with reference to the description data relevant with video data.Described audio selection device (corresponding to described audio selection step) is by selecting a scene with reference to the description data relevant with voice data.
In addition, described extraction element (corresponding to described extraction step) extracts video data and/or voice data.
From video data and/or voice data, can select an important section, can extract and relevant video data and/or the voice data of so selecting of section.
In the data processing device, data processing method, recording medium, and program of the present invention, in a case where the media content comprises a plurality of different data sets provided within a single time period, the determining means (corresponding to the determining step) determines, on the basis of a determination condition used for designating data as a selection target, which media data set is to be taken as the selection target. The selecting means (corresponding to the selecting step) selects data only from the data sets determined by the determining means (corresponding to the determining step).

The determination condition includes at least one of the capability of the receiving terminal, the transfer capacity of the delivery line, a user request, and a user preference, or a combination thereof. For example, the capability of the receiving terminal corresponds to a video display capability, an audio playback capability, or the decompression speed for compressed data. The transfer capacity of the delivery line corresponds to the degree of congestion of the line.

In a case where the media content is divided into, for example, a plurality of channels and a plurality of layers, and different media data sets are assigned to the channels and layers, the determining means (corresponding to the determining step) can determine the media data pertaining to the optimum segment in accordance with the determination condition. Accordingly, the selecting means (corresponding to the selecting step) can select an appropriate amount of media data. In a case where a plurality of channels and layers are provided, video data of standard resolution may be assigned to channel-1/layer-1 for transmitting a moving image, and video data of high resolution may be assigned to channel-1/layer-2. Further, stereo data may be assigned to channel-1 for delivering audio data, and monaural data may be assigned to channel-2.
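As an illustration of the channel/layer determination just described, the following Python sketch chooses media data sets from a hypothetical stream table on the basis of simple determination conditions (terminal display capability and line congestion). The stream table, function names, and decision rules are invented for illustration and are not part of the patent text.

```python
# Hypothetical sketch of the channel/layer determination described above.
# Keys are (channel, layer) pairs; None means the stream has no layers.
STREAMS = {
    ("channel-1", "layer-1"): {"kind": "video", "resolution": "standard"},
    ("channel-1", "layer-2"): {"kind": "video", "resolution": "high"},
    ("channel-1", None):      {"kind": "audio", "mode": "stereo"},
    ("channel-2", None):      {"kind": "audio", "mode": "monaural"},
}

def determine_streams(display_ok: bool, line_congested: bool) -> list:
    """Pick the media data sets to take as selection targets."""
    chosen = []
    if display_ok:
        # Use high-resolution video only when the line is not congested.
        layer = "layer-1" if line_congested else "layer-2"
        chosen.append(("channel-1", layer))
        chosen.append(("channel-1", None))   # stereo audio alongside video
    else:
        chosen.append(("channel-2", None))   # audio-only fallback
    return chosen
```

The selecting means would then operate only on the data sets returned by `determine_streams`, which is how the determination step narrows the selection work.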
In the data processing device, data processing method, recording medium, and program of the present invention, the determining means (corresponding to the determining step) determines, on the basis of the determination condition, whether only video data, only audio data, or both video and audio data are to be taken as the selection target.

Before the selecting means (corresponding to the selecting step) selects a segment, the determining means (corresponding to the determining step) determines which media data set is to be taken as the selection target, i.e., whether only video data, only audio data, or both video and audio data are to be taken as the selection target. As a result, the time required by the selecting means (corresponding to the selecting step) to select a segment can be shortened.
In the data processing device, data processing method, recording medium, and program of the present invention, representative data are appended, as an attribute, to the elements in the lowest hierarchical layer of the context description data, and the selecting means selects the entire data pertaining to a media segment and/or the representative data pertaining to the corresponding media segment.

In the data processing device, data processing method, recording medium, and program of the present invention, the entire data pertaining to a media segment correspond to media data, and the media content comprises a plurality of different media data sets provided within a single time period. The determining means (corresponding to the determining step) determines, on the basis of the structure description data and the determination condition, which of the media data sets and/or representative data sets is to be taken as the selection target.

The media content is divided into, for example, a plurality of channels and a plurality of layers, and different media data sets are assigned to the channels and layers. The determining means can determine the media data pertaining to the optimum segment (channel or layer) in accordance with the determination condition.

In the data processing device, data processing method, recording medium, and program of the present invention, the determining means (corresponding to the determining step) determines, on the basis of the determination condition, whether only the entire data pertaining to the corresponding media segment, only the representative data pertaining to the corresponding media segment, or both the entire data and the representative data are to be taken as the selection target.

Before the selecting means (corresponding to the selecting step) selects a segment, the determining means (corresponding to the determining step) determines which media data set is to be taken as the selection target, or whether only the entire data, only the representative data, or both the entire data and the representative data are to be taken as the selection target. As a result, the time required by the selecting means (corresponding to the selecting step) to select a segment can be shortened.
In the data processing device, data processing method, recording medium, and program of the present invention, the forming means (corresponding to the forming step) forms a data stream of the media content from the data extracted by the extracting means (corresponding to the extracting step). Accordingly, a data stream or a file describing the content of the thus-selected segments can be prepared.

In the data processing device, data processing method, recording medium, and program of the present invention, the delivering means (corresponding to the delivering step) delivers, over a line, the data stream formed by the forming means (corresponding to the forming step). Accordingly, only the data pertaining to important segments can be delivered to the user.

In the data processing device, data processing method, recording medium, and program of the present invention, the data medium managing means (corresponding to the data medium managing step) reorganizes the media content already stored and/or the media content to be newly stored, in accordance with the available disk space of the data medium. Further, in the data processing device, data processing method, recording medium, and program of the present invention, the stored-content managing means (corresponding to the stored-content managing step) reorganizes the media content stored in the data medium in accordance with the storage period of the content. Accordingly, a large amount of media content can be stored in the data medium.
Brief Description of the Drawings
Fig. 1 is a block diagram showing a data processing method according to a first embodiment of the present invention;
Fig. 2 shows the structure of the context description data according to the first embodiment;
Fig. 3 shows part of an example document type definition (DTD) used for describing the context description data in XML on a computer according to the first embodiment, and part of an example of context description data described using the DTD according to the first embodiment;
Figs. 4-9 show continuations of the context description data of the example shown in Fig. 3;
Fig. 10 shows part of an example XML file formed by appending representative data to the context description data shown in Figs. 3-9, and part of an example DTD used for describing the context description data in Extensible Markup Language (XML) on a computer;
Figs. 11-21 show continuations of the context description data shown in Fig. 10;
Fig. 22 is a view for describing a method of assigning a degree of importance according to the first embodiment;
Fig. 23 is a flowchart showing processing pertaining to the selecting step according to the first embodiment;
Fig. 24 is a block diagram showing the configuration of the extracting step according to the first embodiment;
Fig. 25 is a flowchart showing processing performed by a demultiplexer in the extracting step according to the first embodiment;
Fig. 26 is a flowchart showing processing performed by a video clipping device in the extracting step according to the first embodiment;
Fig. 27 shows the structure of an MPEG-1 video stream;
Fig. 28 is a flowchart showing processing performed by an audio clipping device in the extracting step according to the first embodiment;
Fig. 29 shows the structure of an AAU of an MPEG-1 audio stream;
Fig. 30 is a block diagram showing an application of the media processing method according to the first embodiment;
Fig. 31 is a view for describing degrees of importance according to a second embodiment of the present invention;
Fig. 32 is a flowchart showing processing pertaining to the selecting step according to the second embodiment;
Fig. 33 is a flowchart showing processing pertaining to the selecting step according to a third embodiment of the present invention;
Fig. 34 is a view for describing a method of assigning a degree of importance according to a fourth embodiment of the present invention;
Fig. 35 is a flowchart showing processing pertaining to the selecting step according to the fourth embodiment;
Fig. 36 is a block diagram showing a media processing method according to a fifth embodiment of the present invention;
Fig. 37 shows the structure of the structure description data according to the fifth embodiment;
Fig. 38 shows the structure of the context description data according to the fifth embodiment;
Fig. 39 shows part of an example DTD used for describing the structure description data in XML on a computer according to the fifth embodiment, and an example XML file according to the fifth embodiment;
Fig. 40 shows part of an example DTD used for describing the context description data in XML on a computer according to the fifth embodiment, and the first half of an example XML file according to the fifth embodiment;
Figs. 41-45 show continuations of the context description data shown in Fig. 40;
Fig. 46 shows an example of the output of the selecting step according to the fifth embodiment;
Fig. 47 is a block diagram showing the extracting step according to the fifth embodiment;
Fig. 48 is a flowchart showing processing performed by an interface device in the extracting step according to the fifth embodiment;
Fig. 49 shows an example of the result obtained when the interface device provided in the extracting step changes the output of the selecting step according to the fifth embodiment;
Fig. 50 is a flowchart showing processing performed by the demultiplexer in the extracting step according to the fifth embodiment;
Fig. 51 is a flowchart showing processing performed by the video clipping device in the extracting step according to the fifth embodiment;
Fig. 52 is a flowchart showing processing performed by the audio clipping device in the extracting step according to the fifth embodiment;
Fig. 53 is another flowchart showing processing performed by the video clipping device in the extracting step according to the fifth embodiment;
Fig. 54 is a block diagram showing a data processing method according to a sixth embodiment of the present invention;
Fig. 55 is a block diagram showing the forming step and the delivering step according to the sixth embodiment;
Fig. 56 is a block diagram showing a media processing method according to a seventh embodiment of the present invention;
Fig. 57 shows the structure of the context description data according to the seventh embodiment;
Fig. 58 shows part of an example DTD used for describing the context description data in XML on a computer according to the seventh embodiment, and part of an example of context description data described in XML according to the seventh embodiment;
Figs. 59-66 show continuations of the context description data shown in Fig. 58;
Fig. 67 shows part of an example XML file formed by appending representative data to the context description data shown in Figs. 58-66, and part of an example DTD used for describing the context description data in XML on a computer;
Figs. 68-80 show continuations of the context description data shown in Fig. 67;
Fig. 81 is a flowchart showing processing pertaining to the selecting step according to the seventh embodiment;
Fig. 82 is a block diagram showing an application of the media processing method according to the seventh embodiment;
Fig. 83 is a flowchart showing processing pertaining to the selecting step according to an eighth embodiment of the present invention;
Fig. 84 is a flowchart showing processing pertaining to the selecting step according to a ninth embodiment of the present invention;
Fig. 85 is a flowchart showing processing pertaining to the selecting step according to a tenth embodiment of the present invention;
Fig. 86 is a block diagram showing a data processing method according to a twelfth embodiment of the present invention;
Fig. 87 shows the structure of the context description data according to the twelfth embodiment;
Fig. 88 shows part of an example DTD used for describing the context description data in XML on a computer according to the twelfth embodiment, and part of an example XML file according to the twelfth embodiment;
Figs. 89-96 show continuations of the context description data shown in Fig. 88;
Fig. 97 is a block diagram showing a data processing method according to a thirteenth embodiment of the present invention;
Fig. 98 is a block diagram showing a data processing method according to a fourteenth embodiment of the present invention;
Fig. 99 is a block diagram showing a data processing method according to a fifteenth embodiment of the present invention;
Fig. 100 is a block diagram showing a data processing method according to a sixteenth embodiment of the present invention;
Fig. 101 is a block diagram showing a data processing method according to a seventeenth embodiment of the present invention;
Fig. 102 shows a plurality of channels and a plurality of layers;
Fig. 103 shows part of an example DTD used for describing the structure description data in XML, and part of an example of structure description data described with the DTD;
Fig. 104 shows a continuation of the structure description data shown in Fig. 103;
Fig. 105 is a flowchart showing processing pertaining to the determining step in example 1 according to the seventeenth embodiment of the present invention;
Fig. 106 is a flowchart showing determination processing performed in response to a user request in the determining step of example 1 according to the seventeenth embodiment;
Fig. 107 is a flowchart showing determination processing pertaining to video data in the determining step of example 1 according to the seventeenth embodiment;
Fig. 108 is a flowchart showing processing pertaining to audio data in the determining step of example 1 according to the seventeenth embodiment;
Fig. 109 is a flowchart showing the first half of the processing pertaining to the determining step in example 2 according to the seventeenth embodiment of the present invention;
Fig. 110 is a flowchart showing the second half of the processing pertaining to the determining step in example 2 according to the seventeenth embodiment of the present invention;
Fig. 111 is a flowchart showing processing pertaining to the determining step in example 3 according to the seventeenth embodiment of the present invention;
Fig. 112 is a flowchart showing determination processing pertaining to video data in the determining step of example 3 according to the seventeenth embodiment;
Fig. 113 is a flowchart showing determination processing pertaining to audio data in the determining step of example 3 according to the seventeenth embodiment;
Fig. 114 is a flowchart showing the first half of the processing pertaining to the determining step in example 4 according to the seventeenth embodiment of the present invention;
Fig. 115 is a flowchart showing the second half of the processing pertaining to the determining step in example 4 according to the seventeenth embodiment of the present invention;
Fig. 116 is a flowchart showing determination processing performed in response to a user request in the determining step of example 4 according to the seventeenth embodiment;
Fig. 117 is a flowchart showing determination processing pertaining to video data in the determining step of example 4 according to the seventeenth embodiment;
Fig. 118 is a flowchart showing determination processing pertaining to audio data in the determining step of example 4 according to the seventeenth embodiment;
Fig. 119 is a flowchart showing the first half of the processing pertaining to the determining step in example 5 according to the seventeenth embodiment;
Fig. 120 is a flowchart showing the second half of the processing pertaining to the determining step in example 5 according to the seventeenth embodiment;
Fig. 121 is a flowchart showing determination processing performed in response to a user request in the determining step of example 5 according to the seventeenth embodiment;
Fig. 122 is a block diagram showing a data processing method according to an eighteenth embodiment of the present invention;
Fig. 123 is a block diagram showing a data processing method according to a nineteenth embodiment of the present invention;
Fig. 124 is a block diagram showing a data processing method according to a twentieth embodiment of the present invention;
Fig. 125 is a block diagram showing a data processing method according to a twenty-first embodiment of the present invention;
Fig. 126 is a block diagram showing a data processing method according to a twenty-second embodiment of the present invention;
Fig. 127 shows an example of a DTD for associating the context description data with the structure description data, and an example XML file;
Figs. 128-132 show continuations of the XML file shown in Fig. 127;
Fig. 133 shows the structure of the context description data according to an eleventh embodiment of the present invention;
Fig. 134 shows a viewpoint used in the eleventh embodiment;
Fig. 135 shows degrees of importance according to the eleventh embodiment;
Fig. 136 shows an example DTD used for expressing the context description data of the eleventh embodiment in XML on a computer, and an example of part of the context description data described in XML;
Figs. 137-163 show continuations of the context description data shown in Fig. 136;
Fig. 164 shows another example DTD used for expressing the context description data of the eleventh embodiment in XML on a computer, and an example of part of the context description data described in XML;
Figs. 165-196 show continuations of the context description data shown in Fig. 164;
Fig. 197 shows another structure of the context description data according to the eleventh embodiment;
Fig. 198 shows an example DTD used for expressing the context description data of the eleventh embodiment (corresponding to Fig. 197) in XML on a computer, and an example of part of the context description data described in XML;
Figs. 199-222 show continuations of the context description data shown in Fig. 198;
Fig. 223 shows another example DTD used for expressing the context description data of the eleventh embodiment (corresponding to Fig. 197) in XML on a computer, and an example of part of the context description data described in XML; and
Figs. 224-252 show continuations of the context description data shown in Fig. 223.
Embodiments

Embodiments of the present invention will be described below with reference to the accompanying drawings.

First Embodiment
A first embodiment of the present invention will now be described. In this embodiment, a moving image of an MPEG-1 system stream is taken as the media content. In this case, a media segment corresponds to a single scene cut, and a score represents the objective degree of contextual importance of a scene of interest.

Fig. 1 is a block diagram showing the data processing method according to the first embodiment of the present invention. In Fig. 1, reference numeral 101 designates a selecting step, and reference numeral 102 designates an extracting step. In the selecting step 101, a scene of the media content is selected on the basis of the context description data, and the start time and end time of the scene are output. In the extracting step 102, the data of the media content segment defined by the start time and end time output in the selecting step 101 are extracted.
Fig. 2 shows the structure of the context description data according to the first embodiment. In this embodiment, the context is described in the form of a tree structure. The elements of the tree are arranged from left to right in chronological order. In Fig. 2, the root <contents> of the tree represents a single piece of content, and the title of the content is assigned to the root as an attribute.

Children of the root are designated by the element <section>. A priority representing the degree of contextual importance of a scene of interest is appended, as an attribute, to the element <section>. The degree of importance assumes an integer value from 1 to 5, where 1 represents the lowest degree of importance and 5 represents the highest.

Children of <section> are designated by <section> or <segment>. Here, an element <section> can itself serve as a child <section> of another <section>. However, a single element <section> cannot have a mixture of child <section> elements and child <segment> elements.

An element <segment> represents a single scene cut, and the priority assigned to it is identical to the priority assigned to its parent <section>. The attributes appended to <segment> are "start", representing the start time, and "end", representing the end time. Scene cutting may be performed by means of commercially available software or software available over a network, or it may be performed manually. Although in the present embodiment the time information is expressed by the start time and end time of a scene cut, similar results can be achieved when the time information is expressed by the start time of the scene of interest and the duration of that scene. In this case, the end time of the scene of interest is obtained by adding the duration to the start time.
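The conversion between the two forms of time information mentioned above can be sketched as follows; the "HH:MM:SS" time format and the helper names are assumptions made for illustration.

```python
# Minimal sketch: converting (start, duration) time information into the
# (start, end) form used by the <segment> attributes. An "HH:MM:SS" time
# format is assumed for illustration.

def hms_to_seconds(t: str) -> int:
    h, m, s = (int(x) for x in t.split(":"))
    return h * 3600 + m * 60 + s

def seconds_to_hms(sec: int) -> str:
    return f"{sec // 3600:02d}:{sec % 3600 // 60:02d}:{sec % 60:02d}"

def end_time(start: str, duration: str) -> str:
    # The end time is obtained by adding the duration to the start time.
    return seconds_to_hms(hms_to_seconds(start) + hms_to_seconds(duration))
```

For example, a scene starting at 00:59:30 with a duration of 00:01:45 ends at 01:01:15.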
In the case of, for example, a movie or drama, by using the element <section> in multiple hierarchical layers, the chapters, sections, and paragraphs of the story can be described on the basis of the context description data. In another example, when describing baseball, the elements <section> in the highest level may be used to describe innings, and their child <section> elements may be used to describe half-innings. Further, the second-generation <section> elements may be used to describe the at-bats of the respective batters, and the third-generation <section> elements may be used to describe each pitch, the interval between pitches, and the result of each swing.

Context description data having this structure can be expressed on a computer by use of, for example, Extensible Markup Language (XML). XML is a data description language whose standardization is pursued by the World Wide Web Consortium; recommendation version 1.0 was issued on February 10, 1998. The specification of XML version 1.0 can be obtained from http://www.w3.org/TR/1998/REC-xml-19980210. Figs. 3 to 9 show an example of a document type definition (DTD) used for describing the context description data in XML according to the present embodiment, and an example of context description data described using the DTD. Figs. 10 to 21 show an example of context description data prepared by appending, to the context description data shown in Figs. 3 to 9, representative data of the media segments, such as representative images (video data) and keywords (audio data), and an example of a DTD used for describing the context description data in XML.
Processing pertaining to the selecting step 101 will now be described. This processing relates in particular to the form of the context description data and to the method by which a score is assigned to the context of each scene. In the present embodiment, the processing pertaining to the selecting step 101 is performed only on elements <section> that have child <segment> elements, as shown in Fig. 22 (steps S1, S4, and S5 in Fig. 23). Elements <section> whose priority exceeds a certain threshold value are selected (step S2 in Fig. 23), and the start time and end time of each thus-selected element <section> are output (step S3 in Fig. 23). The priority assigned to an element <section> having child <segment> elements corresponds to a degree of importance shared among all such elements <section> in the content; specifically, a degree of importance shared among the elements <section> surrounded by the dotted line in Fig. 22 is set as the priority. The priorities assigned to the elements <section> and <segment> other than the foregoing elements <section> may be set arbitrarily. Thus, a degree of importance need not assume a unique value, and the same degree of importance may be assigned to different elements. Fig. 23 is a flowchart showing the processing pertaining to the selecting step 101 according to the first embodiment. For a thus-selected element <section>, the start time and end time of the scene represented by that element can be determined from its child elements <segment>, and the thus-determined start time and end time are output.

Although in the present embodiment the selection is performed on elements <section> each of which has child <segment> elements, the selection may instead be performed on elements <segment>. In this case, the priority corresponds to a degree of importance shared among all the elements <segment> in the content. The selection may also be performed on elements <section> of a higher level that do not have child <segment> elements, provided that the elements belong to the same hierarchical level; specifically, the selection may be performed on elements <section> located at the same depth, i.e., reached by paths of the same length counted from the root <contents>.
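A minimal sketch of the selection just described is given below, under the assumption of element names <contents>, <section>, and <segment> with "priority", "start", and "end" attributes as in the structure above; the sample data is invented for illustration.

```python
# Hedged sketch of selecting step 101: pick <section> elements whose
# "priority" exceeds a threshold and emit the (start, end) span covered
# by their child <segment> elements.
import xml.etree.ElementTree as ET

SAMPLE = """
<contents title="news">
  <section priority="4">
    <segment start="00:00:00" end="00:01:00"/>
    <segment start="00:01:00" end="00:02:30"/>
  </section>
  <section priority="2">
    <segment start="00:02:30" end="00:03:00"/>
  </section>
</contents>
"""

def select_scenes(xml_text: str, threshold: int):
    root = ET.fromstring(xml_text)
    out = []
    for sec in root.iter("section"):
        segs = sec.findall("segment")
        if not segs:                 # only sections with child <segment>s
            continue
        if int(sec.get("priority")) > threshold:
            # Start of the first segment, end of the last segment.
            out.append((segs[0].get("start"), segs[-1].get("end")))
    return out

print(select_scenes(SAMPLE, 3))      # → [('00:00:00', '00:02:30')]
```

Lowering the threshold admits more scenes, which is how a user can trade summary length against completeness.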
Processing pertaining to the extracting step 102 will now be described with reference to Fig. 24. Fig. 24 is a block diagram showing the extracting step 102 according to the first embodiment. As shown in Fig. 24, the extracting step 102 according to the first embodiment is implemented by a demultiplexer 601, a video clipping device 602, and an audio clipping device 603. In the present embodiment, an MPEG-1 system stream is taken as the media content. The MPEG-1 system stream is formed by multiplexing a video stream and an audio stream into a single stream. The demultiplexer 601 separates the video stream and the audio stream from the multiplexed system stream. The video clipping device 602 receives the thus-separated video stream and the segments selected in the selecting step 101, and outputs, from the received video stream, only the data pertaining to the thus-selected segments. The audio clipping device 603 receives the separated audio stream and the segments selected in the selecting step 101, and outputs, from the received audio stream, only the data pertaining to the selected segments.
The processing performed by the demultiplexer 601 will now be described with reference to the drawings. Fig. 25 is a flowchart showing the processing performed by the demultiplexer 601. The method of multiplexing an MPEG-1 system stream is standardized by the international standard ISO/IEC IS 11172-1. The video and audio streams are divided into data units of appropriate length, called packets, and additional information such as a header is appended to each packet, whereby the video stream and the audio stream are multiplexed on a packet-by-packet basis. A plurality of video streams and a plurality of audio streams can also be multiplexed into a single stream in the same manner. In the header of each packet there are described a stream ID, used for identifying the packet as belonging to a video stream or an audio stream, and a time stamp, used for synchronizing the video data with the audio data. The use of the stream ID is not limited to identifying a packet as a video or audio packet. When a plurality of video streams are multiplexed, the stream ID can be used to identify, from among the plurality of video streams, the video stream to which a packet of interest belongs. Similarly, when a plurality of audio streams are multiplexed, the stream ID can be used to identify, from among the plurality of audio streams, the audio stream to which a packet of interest belongs. In the MPEG-1 system, a plurality of packets are combined into a single pack, and a multiplexing rate serving as a reference time for synchronized playback, together with other additional information, is appended to the pack as a header. Further, additional information about the number of multiplexed video and audio streams is appended to the first pack as a system header. The demultiplexer 601 reads the numbers of multiplexed video and audio streams from the system header of the first pack (S1 and S2), and reserves data areas for storing the data of each stream (S3 and S4). Then, the demultiplexer 601 checks the stream ID of each packet and writes the data contained in the packet to the data area reserved for the stream designated by that stream ID (S5 and S6). The above processing is performed on all packets (S8, S9, and S10). After all packets have been processed, the video streams are output to the video clipping device 602 on a per-stream basis, and the audio streams are likewise output to the audio clipping device 603 (S11).
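The packet-dispatch loop at the heart of the demultiplexer can be sketched as follows; packets are modeled abstractly as (stream ID, payload) pairs, and the parsing of real MPEG-1 pack and packet headers is omitted.

```python
# Illustrative sketch of the packet-dispatch loop of demultiplexer 601.
# Real pack/system-header parsing is omitted; each packet is modeled as a
# (stream_id, payload) tuple.

def demultiplex(packets):
    """Group packet payloads into per-stream byte buffers by stream ID."""
    streams = {}
    for stream_id, payload in packets:
        # Append the payload to the buffer reserved for this stream.
        streams.setdefault(stream_id, bytearray()).extend(payload)
    return streams
```

In MPEG-1, video streams conventionally carry stream IDs 0xE0-0xEF and audio streams 0xC0-0xDF, so the caller can route each reassembled buffer to the video or audio clipping device accordingly.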
The operation of the video clipping device 602 will now be described. Fig. 26 is a flowchart showing the processing performed by the video clipping device 602. The MPEG-1 video stream is standardized by the international standard ISO/IEC IS 11172-2. As shown in Fig. 27, the video stream comprises a sequence layer, a group-of-pictures (GOP) layer, a picture layer, a slice layer, a macroblock layer, and a block layer. Random access is performed on the basis of the GOP layer, which is the minimum unit for random access, and each element of the picture layer corresponds to a single frame. The video clipping device 602 processes data on a per-GOP basis. A counter C used for counting the number of output frames is initialized to 0 (S3). First, the video clipping device 602 confirms that the header of the video stream corresponds to the header of the sequence layer (S2 and S4) and stores the data contained in the header (S5). The video clipping device then outputs the data. The header of the sequence layer may appear again during subsequent processing. The values of the header are not permitted to change, except for the values relating to the quantization matrices. Therefore, each time a sequence header is input, the values of the input header are compared with those of the stored header (S8 and S14). If the input header differs from the stored header in any value other than those relating to the quantization matrices, the input header is deemed erroneous (S15). Next, the video clipping device 602 detects the header of the GOP layer in the input data (S9). The header of the GOP layer contains data relating to a time code (S10), which describes the time that has elapsed since the start of the sequence. The video clipping device 602 compares the time code with the segments output in the selecting step 101 (S11). If the time code is determined not to be contained in any of the segments, the video clipping device 602 discards all data sets appearing before the next GOP layer or sequence layer. On the contrary, if the time code is contained in a selected segment, the video clipping device 602 outputs all data sets appearing before the next GOP layer or sequence layer (S13). To ensure continuity between the data sets already output and the data set currently being output, the time code of the GOP layer must be changed (S12). The value to which the time code of the GOP layer is to be changed is calculated by use of the counter C, which holds the number of frames already output. According to Equation 1, the display time Tv of the leading frame of the GOP layer currently being output is calculated from the counter C and the picture rate "Pr", described in the sequence header, which represents the number of frames displayed per second.
Tv = C / Pr ... (1)
" Tv " is that unit specifies a value with 1/ per second, and then, the value of described Tv is changed according to the timing code form of MPEG-1.The value of so being changed is arranged on quilt in the timing code of the described GOP layer of this time output.When exporting the data relevant with described GOP layer, the quantity of described image layer is added on the value of described counter C.Repeat previously described processing, finish (S7 and S16) up to described video-frequency band.Export under the situation of a plurality of video data streams at the described multiplex machine 601 that goes, carry out the above-mentioned processing relevant with each video data stream.
The processing of the audio clipping device 603 will now be described. The flowchart of Figure 28 relates to the processing performed by the audio clipping device 603. The MPEG-1 audio stream is standardized by international standard ISO/IEC IS 11172-3. The audio stream is formed from a series of frames called audio access units (AAUs). Figure 29 shows the structure of an AAU. The AAU is the minimum unit for which audio data can be decoded independently, and it comprises a predetermined number Sn of sample data sets. The playback time of a single AAU can be calculated from the bit rate "br" expressing the transfer rate, the sampling frequency "Fs", and the number of bits L of the AAU. First, the header of an AAU contained in the audio stream is detected (S2 and S5), whereby the number of bits L of a single AAU is obtained. Further, the bit rate "br" and the sampling frequency "Fs" are described in the header of the AAU. The number of samples Sn of a single AAU is calculated according to equation 2.
Sn = (L × Fs) / br ... (2)
The playback time of a single AAU is calculated according to equation 3.
Tu = Sn / Fs = L / br ... (3)
Once the value of Tu has been calculated, the time that has elapsed from the start of the stream can be obtained by counting the AAUs. The audio clipping device 603 counts the number of AAUs that have appeared and calculates the time elapsed from the start of the stream (S7). The time thus calculated is compared with the segments output in the selection step (S8). If the time at which an AAU appears is included in a selected segment, the audio clipping device 603 outputs all of the data sets relating to that AAU (S9). Conversely, if the time at which the AAU appears is not included in any selected segment, the audio clipping device 603 discards the data sets relating to that AAU. The foregoing processing is repeated until the audio stream ends (S6 and S11). When the demultiplexer 601 outputs a plurality of audio streams, the foregoing processing is performed for each of the audio streams.
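Equations 2 and 3 and the keep/discard decision per AAU can be sketched as follows; the segment list of (start, end) pairs in seconds is an assumed interface, and the sample values (3072-bit AAUs at 48 kHz and 128 kbit/s) are invented for illustration.

```python
def aau_sample_count(L, Fs, br):
    # Equation (2): Sn = L * Fs / br
    return L * Fs / br

def aau_duration(L, br):
    # Equation (3): Tu = Sn / Fs = L / br
    return L / br

def keep_aau(aau_index, Tu, segments):
    """An AAU is output when the time at which it appears, counted from
    the start of the stream, falls inside a selected (start, end)
    segment; otherwise it is discarded."""
    elapsed = aau_index * Tu
    return any(start <= elapsed <= end for start, end in segments)

# 3072-bit AAUs at 48 kHz / 128 kbit/s: 1152 samples, 24 ms each.
print(aau_sample_count(3072, 48000, 128000), aau_duration(3072, 128000))
```

Because Tu is constant for a stream of fixed bit rate, the elapsed time is obtained simply by multiplying the AAU count by Tu, exactly as step S7 describes.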
As shown in Figure 30, the video stream output in the extraction step 102 is input to a video playback device, and the audio stream output in the extraction step 102 is input to an audio playback device. The video stream and the audio stream are played in synchronism, whereby a summary or the highlight scenes of the media content can be played. Further, the video and audio streams thus produced can be multiplexed, whereby an MPEG-1 system stream relating to a summary of the media content or to its highlight scenes can be prepared.
Second embodiment
A second embodiment of the present invention will now be described. The second embodiment differs from the first embodiment only in the processing relating to the selection step.
The processing relating to the selection step 101 according to the second embodiment will now be described with reference to the drawings. In the selection step 101 according to the second embodiment, use is made of the priority values assigned to all of the elements, from the <section> elements of the highest hierarchical level down to the lowest <segment> elements. The priority assigned to each <section> and <segment> element represents the objective degree of contextual importance. The processing relating to the selection step 101 is described below with reference to Figure 31. In Figure 31, reference numeral 1301 designates one of the <section> elements of the highest hierarchical level contained in the description data; 1302 designates a child <section> element of the element 1301; 1303 designates a child <section> element of the element 1302; and 1304 designates a child <segment> element of the element 1303. In the selection step 101 according to the second embodiment, the arithmetic mean of all the priority values assigned along the path extending from a leaf <segment> element up to its ancestor <section> element of the highest hierarchical level is calculated. When the arithmetic mean for the path exceeds a threshold value, the <segment> element is selected. In the example shown in Figure 31, the arithmetic mean "pa" of the priority values p4, p3, p2, and p1, which are the priority attributes of the elements 1304, 1303, 1302, and 1301, is calculated. The mean value "pa" is calculated according to equation 4.
pa = (p1 + p2 + p3 + p4) / 4 ... (4)
" pa " that is so calculated and described threshold (S1 and S2).If " pa " surpasses described threshold value, selections<section〉1304 (S3), with<section property value that 1304 " beginning " is relevant with " end " start time and concluding time of being used as selected scene export (S4).All element<sections〉all be carried out aforementioned processing (S1 and S6).The flow process of Figure 32 shows according to this second embodiment and the relevant processing of selection step 101.
In the second embodiment, the arithmetic mean of the priority values from the <segment> element of the lowest hierarchical level up to its ancestor <section> element of the highest hierarchical level is calculated, and the leaf <segment> element is selected on the basis of the arithmetic mean thus calculated. Alternatively, the arithmetic mean of the priority values from a <section> element having child <segment> elements up to its ancestor <section> element of the highest hierarchical level may be calculated, and the <section> element having the child <segment> elements may be selected by comparing the arithmetic mean thus calculated with the threshold value. Similarly, at any other hierarchical level, the arithmetic mean of the priority values from a <section> element up to its ancestor <section> element of the highest hierarchical level may be calculated and compared with the threshold value, whereby the <section> elements at that hierarchical level can be selected.
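The path-average selection rule of the second embodiment can be sketched as below. The tree node type and its field names are assumptions made for illustration; the description data themselves are XML, not Python objects.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    priority: float
    start: float = 0.0
    end: float = 0.0
    children: list = field(default_factory=list)

def select_segments(root, threshold):
    """Walk from a top-level <section> down to each leaf <segment>,
    keeping the priorities found along the path; select the leaf when
    the arithmetic mean of the path (equation 4) exceeds the threshold.
    Returns (start, end) pairs of the selected scenes."""
    selected = []
    def visit(node, path):
        path = path + [node.priority]
        if not node.children:                      # leaf <segment>
            if sum(path) / len(path) > threshold:  # pa > threshold
                selected.append((node.start, node.end))
        else:
            for child in node.children:
                visit(child, path)
    visit(root, [])
    return selected
```

For a leaf with path priorities 4, 3, 5, 4 the mean pa is 4.0, so with a threshold of 3.5 that scene is selected, whereas a sibling leaf with priorities 4, 3, 5, 1 (pa = 3.25) is not.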
Third embodiment
A third embodiment in accordance with the present invention will now be described. The third embodiment differs from the first embodiment only in the processing relating to the selection step.
The processing relating to the selection step 101 according to the third embodiment will now be described with reference to the drawings. As in the processing described in connection with the first embodiment, in the selection step 101 according to the third embodiment the selection is performed only on the <section> elements each of which has child <segment> elements. In the third embodiment, a threshold value is set with regard to the sum of the durations of the scenes to be selected. Specifically, the <section> elements are selected in descending order of priority so long as the sum of the durations of the <section> elements selected so far remains at a maximum but still below the threshold value. The flowchart of Figure 33 shows the processing relating to the selection step 101 according to the third embodiment. The set of <section> elements each of which has child <segment> elements is taken as a set Ω (S1). The <section> elements of the set Ω are sorted in descending order of the priority attribute (S2). The <section> element having the highest priority value is selected from the set Ω (S4 and S5), and the element thus selected is deleted from the set Ω. The start time and the end time of the selected <section> element are obtained by examining all of its child <segment> elements, and the duration of the <section> element is calculated (S6). The sum of the durations of the <section> elements selected so far is calculated (S7). If the sum exceeds the threshold value, the processing ends (S8). If the sum is below the threshold value, the start time and the end time of the <section> element selected this time are output (S9). The processing then returns to the step of selecting, from the set Ω, the <section> element having the highest priority value. The foregoing processing is repeated until the sum of the durations of the selected <section> elements exceeds the threshold value or the set Ω becomes empty (S4 and S8).
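The greedy loop above can be sketched as follows; the dictionary representation of a <section> (priority, start, end) is an assumption for illustration, with the duration taken as end minus start.

```python
def select_by_priority(sections, duration_threshold):
    """Greedy selection of the third embodiment: take <section>s in
    descending order of priority, accumulating their durations, and
    stop as soon as adding the next one would push the running sum of
    durations past the threshold."""
    chosen, total = [], 0.0
    for sec in sorted(sections, key=lambda s: s["priority"], reverse=True):
        duration = sec["end"] - sec["start"]
        if total + duration > duration_threshold:
            break                      # sum would exceed threshold: stop
        chosen.append((sec["start"], sec["end"]))
        total += duration
    return chosen
```

With sections of durations 60, 90, and 20 seconds (in descending priority) and a 160-second threshold, the first two are output and the loop stops before the third.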
In the third embodiment, the selection is performed on the <section> elements having child <segment> elements. However, the selection may also be performed on both <section> elements and <segment> elements. In this case, a priority value corresponds to the degree of importance shared among all of the elements in the media content. Further, the selection may also be performed on <section> elements, at the same hierarchical level, that do not have child <segment> elements. Specifically, the selection may be performed on <section> elements located in the same path counted from the ancestor <contents> element or from a leaf <segment> element.
As in the case of the second embodiment, the priority value assigned to each <section> and <segment> element may be taken as the objective degree of contextual importance, and the mean value "pa" of all the priorities from the element of interest up to its ancestor <section> element of the highest hierarchical level may be calculated. The <section> elements each having child <segment> elements, or the <segment> elements, are then selected in descending order of "pa" until the sum of the durations is at a maximum but still below the threshold value. Even in this case, the same advantageous result as that of the second embodiment can be obtained.
Fourth embodiment
A fourth embodiment of the present invention will now be described. The fourth embodiment differs from the first embodiment only in the processing relating to the selection step.
The processing relating to the selection step 101 according to the fourth embodiment will now be described with reference to the drawings. As in the case of the selection performed in the selection step 101 of the first embodiment, the selection relating to the selection step 101 of the fourth embodiment is performed on the <segment> elements and on the <section> elements having child <segment> elements. As in the case of the first embodiment, a threshold value is set in the present embodiment in consideration of the sum of the durations of all of the scenes to be selected. As in the case of the first embodiment, the priority value assigned to a <section> element having child <segment> elements corresponds to the degree of importance shared among all of the <section> elements, in the media content, each of which has child <segment> elements; specifically, the priority value is taken as the degree of importance shared among the <section> elements enclosed by the dotted line in Figure 34. Further, the priority value assigned to a <segment> element corresponds to the degree of importance shared among the <segment> elements sharing the same parent element; that is, the degree of importance shared among the <segment> elements enclosed by a dotted line shown in Figure 34.
The flowchart of Figure 35 shows the processing relating to the selection step according to the fourth embodiment. The set of <section> elements each of which has child <segment> elements is taken as a set Ω (S1). The <section> elements in the set Ω are sorted in descending order of priority (S2). Then, the <section> element having the highest priority value in the set Ω is selected (S3, S4, and S5). If a plurality of <section> elements have the same highest priority value, all of those elements are selected. The <section> elements thus selected are taken as elements of another set Ω' and are deleted from the set Ω. The start time, the end time, and the duration of the scene represented by each selected <section> element are obtained by examining its child <segment> elements and are stored (S6). If a plurality of <section> elements have been selected, the start time, the end time, and the duration of each of the scenes represented by those elements are obtained and stored. The sum of the durations of the <section> elements of the set Ω' is obtained (S7 and S8) and is compared with the threshold value (S9). If the sum of the durations equals the threshold value, all of the data sets stored so far relating to the start times and the end times are output, and the processing then ends (S10). Conversely, if the sum of the durations is below the threshold value, the processing returns again to the step of selecting a <section> element from the set Ω (S4 and S5). If the set Ω is empty, all of the data sets relating to the stored start times and end times are output, and the processing then ends (S4). If the sum of the durations exceeds the threshold value, the following processing is performed. Specifically, the <section> element having the lowest priority is selected from the set Ω' (S11). At this time, if a plurality of <section> elements have the lowest priority, all of those elements are selected. Among the child <segment> elements of the <section> elements thus selected, the child <segment> element having the lowest priority is deleted (S12). The start time, the end time, and the duration of the <section> element corresponding to the child <segment> element thus deleted are changed (S13). If a scene becomes discontinuous as a result of the deletion of the <segment> element, the start time, the end time, and the duration are stored for each of the discontinuous scenes. Further, if all of the child <segment> elements of a <section> element have been deleted as a result of the deletion, that <section> element is deleted from the set Ω'. If a plurality of <section> elements have been selected, all of those elements are subjected to the foregoing processing. As a result of the deletion of the child <segment> elements, the duration of the <section> element from which they have been deleted becomes shorter, thereby reducing the sum of the durations. This deletion processing is repeated until the sum of the durations of the elements of the set Ω' falls below the threshold value. When the sum of the durations of the elements of the set Ω' has fallen below the threshold value (S14), all of the stored data sets relating to the start times and the end times are output, and the processing then ends (S15).
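The shortening phase (S11 through S14) can be sketched as below. The data layout, each <section> as a list of (priority, start, end) child <segment> triples, is an assumption for illustration, and the sketch flattens all segments rather than re-selecting the lowest-priority <section> at each pass.

```python
def trim_to_threshold(sections, threshold):
    """Sketch of the fourth embodiment's shortening phase: all child
    <segment>s of the selected <section>s are kept at first, then the
    lowest-priority <segment>s are deleted one at a time until the
    total duration no longer exceeds the threshold."""
    segments = [(p, e - s, (s, e)) for sec in sections for (p, s, e) in sec]
    total = sum(dur for _, dur, _ in segments)
    segments.sort(key=lambda seg: seg[0])      # ascending priority
    while total > threshold and segments:
        _, dur, _ = segments.pop(0)            # delete least important segment
        total -= dur                           # sum of durations shrinks
    return total, sorted(span for _, _, span in segments)
```

Each deletion shortens the duration of the affected <section>, so the loop is guaranteed to terminate once the sum of durations drops to the threshold or all segments are gone.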
Although in the fourth embodiment the selection is performed on the <section> elements each having child <segment> elements, or on those child <segment> elements, the selection may also be performed on a <section> element and its child <section> elements, or on a <section> element and its child <segment> elements. Even in this case, the same advantageous result as that of the fourth embodiment can be achieved.
With regard to the deletion of the <segment> elements performed when the sum of the durations exceeds the threshold value, in the present embodiment the deletion starts from the element of the lowest priority, in ascending order of priority. However, a threshold value may be set for the priority of the <section> elements, and the child <segment> element having the lowest priority may be deleted from each of the <section> elements whose priority is below that threshold value. Further, another threshold value may be set for the priority of the <segment> elements, and the <segment> elements whose priority is below that threshold value may be deleted.
Fifth embodiment
A fifth embodiment of the present invention will now be described with reference to the drawings. In this embodiment, a moving picture of the MPEG-1 format is taken as the media content. In this case, a media segment corresponds to a single scene. A score corresponds to the objective degree of contextual importance of the scene of interest.
The block diagram of Figure 36 shows a data processing method according to the fifth embodiment of the present invention. In Figure 36, reference numeral 1801 designates a selection step; 1802 designates an extraction step; 1803 designates a formation step; 1804 designates a transfer step; and 1805 designates a database. In the selection step 1801, a scene of the media content is selected from the description data, and the data relating to the start time and the end time of the scene thus selected, as well as the data representing the file used for storing the data, are output. In the extraction step 1802, the data sets representing the start time and the end time of the scene and the data representing the file output in the selection step 1801 are received. Further, in the extraction step 1802, the data relating to the segments defined by the start times and the end times output in the selection step 1801 are extracted from the media content by reference to the structure description data. In the formation step 1803, the data output in the extraction step 1802 are multiplexed, thus constituting a system stream of the MPEG-1 format. In the transfer step 1804, the MPEG-1 system stream prepared in the formation step 1803 is transmitted over a line. Reference numeral 1805 designates a database used for storing the media content, its structure description data, and the description data.
Figure 37 shows the structure of the structure description data according to the fifth embodiment. In this embodiment, the physical content of the data is described in a tree structure. Owing to the manner in which the media content is stored in the database 1805, a single piece of media content is not necessarily stored in the form of a single file; in some cases, a single piece of media content may be stored in a plurality of separate files. The root of the tree of the structure description data is described as <contents>, which represents a single piece of content. The title of the piece of content in question is appended, as an attribute, to <contents>. A child <mediaobject> element of <contents> corresponds to a file storing the media content. A link "locator" representing a link to the file storing the media content and an identifier "id" representing a link to the description data are appended, as attributes, to the child <mediaobject> element. When the media content is made up of a plurality of files, an attribute "seq" is appended to the <mediaobject> element and is used to indicate the order of the file of interest within the media content.
Figure 38 shows the structure of the description data according to the fifth embodiment. The description data of the present embodiment correspond to the description data of the first embodiment to which links to the <mediaobject> elements of the structure description data have been appended. Specifically, the root <contents> of the description data has child <mediaobject> elements, and each <mediaobject> element has child <section> elements. The <section> and <segment> elements are identical with the <section> and <segment> elements used in the first embodiment. A <mediaobject> element of the structure description data and a <mediaobject> element of the description data are related to each other by means of the attribute "id". The scene of the media content described by the descendants of a <mediaobject> element of the description data is stored in the file specified by the <mediaobject> element of the structure description data having the same value of the attribute "id". Further, the time information "start" and "end" assigned to a <segment> element expresses the time that has elapsed from the head of each file. Specifically, when a single piece of media content comprises a plurality of files, the time at the head of each file corresponds to 0, and the start time of each scene is represented by the time that elapses from the head of the file to the scene of interest.
On a computer, the structure description data and the description data can be expressed by use of, for example, Extensible Markup Language (XML). Figure 39 shows an example of a document type definition (DTD) used for describing the structure description data shown in Figure 37 in XML, and an example of structure description data described by use of that DTD. Figures 40 to 45 show an example of a DTD used for describing the description data shown in Figure 38 in XML, and an example of description data described by use of that DTD.
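A minimal sketch of such structure description data, and of resolving a selected segment's "id" to the file that stores it, is shown below. The element and attribute names follow the description above, but the title, ids, and file names are invented for illustration, and this is not the DTD of Figure 39.

```python
import xml.etree.ElementTree as ET

# Hypothetical structure description data in the spirit of Figure 37:
# one piece of content stored across two files, ordered by "seq".
structure = ET.fromstring("""
<contents title="news-digest">
  <mediaobject id="MO1" seq="1" locator="file1.mpg"/>
  <mediaobject id="MO2" seq="2" locator="file2.mpg"/>
</contents>
""")

def locate(structure, media_id):
    """Return the locator (file name) of the <mediaobject> whose id
    matches the id attached to a selected segment."""
    for obj in structure.findall("mediaobject"):
        if obj.get("id") == media_id:
            return obj.get("locator")
    return None

print(locate(structure, "MO2"))  # -> file2.mpg
```

This id-to-locator lookup is exactly the step the interface device performs in the extraction step described below.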
The processing relating to the selection step 1801 will now be described. In the selection step 1801, any of the methods described in connection with the first through fourth embodiments can be used as the method of selecting a scene. The link to the <mediaobject> element of the structure description data is output in synchronism with the output of the start time and the end time of the selected scene. Figure 46 shows an example of the data output from the selection step 1801 in a case where the structure description data are described in XML by use of the DTD shown in Figure 39 and the description data are described in XML by use of the DTDs shown in Figures 40 to 45. In Figure 46, "id" is followed by the id of the <mediaobject> element of the structure description data; "start" is followed by the start time; and "end" is followed by the end time.
The processing relating to the extraction step 1802 will now be described. The block diagram of Figure 47 shows the extraction step 1802 according to the fifth embodiment. In Figure 47, the extraction step 1802 according to the fifth embodiment is performed by an interface device 2401, a demultiplexer 2402, a video clipping device 2403, and an audio clipping device 2404. The interface device 2401 receives the structure description data and the segments output in the selection step 1801, extracts a media content file from the database 1805, outputs the file thus extracted to the demultiplexer 2402, and outputs the start times and the end times output in the selection step 1801 to the video clipping device 2403 and the audio clipping device 2404. The media content of the present embodiment corresponds to a system stream of the MPEG-1 format in which a video stream and an audio stream are multiplexed. Accordingly, the demultiplexer 2402 splits the MPEG-1 system stream into the video stream and the audio stream. The video stream thus split off and the segments output from the interface device 2401 are input to the video clipping device 2403. From the input video stream, the video clipping device 2403 outputs the data relating to the selected segments. Similarly, the audio stream split off by the demultiplexer 2402 and the segments are input to the audio clipping device 2404. From the input audio stream, the audio clipping device 2404 outputs the data relating to the selected segments.
The processing relating to the interface device 2401 will now be described. The flowchart of Figure 48 shows the processing performed by the interface device 2401. The structure description data relating to the corresponding content and the segments, such as those shown in Figure 46, output in the selection step 1801 are input to the interface device 2401. The order of the files is obtained from the attribute "id" assigned to the <mediaobject> elements of the structure description data; accordingly, the segments output in the selection step 1801 are stored in order of "id" (S1). Further, the segments are converted into data such as those shown in Figure 49; the segments having an identical "id" are grouped together and arranged in order of start time. The interface device 2401 then performs the following processing on the data sets shown in Figure 49, in order from top to bottom. First, the interface device 2401 refers, by use of "id", to the corresponding <mediaobject> element of the structure description data, and reads a file name on the basis of the attribute "locator" of that <mediaobject> element. The data relating to the file corresponding to that file name are read from the database, and the data thus read are output to the demultiplexer 2402 (S2 and S3). The start times and the end times of the segments of the selected file, described after the "id", are output to the video clipping device 2403 and the audio clipping device 2404 (S4). After all of the data sets have been subjected to the foregoing processing, the processing ends (S5). If some data sets remain unprocessed, the foregoing processing is repeated after completion of the processing performed by the demultiplexer 2402, the processing performed by the video clipping device 2403, and the processing performed by the audio clipping device 2404 (S6 and S7).
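The conversion of the selection-step output into the Figure 49 layout, grouping by "id" and ordering by start time, can be sketched as follows. Sorting lexicographically by the id string is an assumption for illustration; in the actual data the file order would come from the "seq" attribute.

```python
from itertools import groupby

def arrange_segments(segments):
    """Arrange the (id, start, end) triples output by the selection step
    into the Figure 49 layout: segments with an identical id are grouped
    together, and each group is ordered by start time."""
    ordered = sorted(segments, key=lambda seg: (seg[0], seg[1]))
    return {file_id: [(start, end) for _, start, end in group]
            for file_id, group in groupby(ordered, key=lambda seg: seg[0])}
```

The interface device can then walk the resulting groups top to bottom, fetching each file once and handing its time spans to the clipping devices.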
The processing relating to the demultiplexer 2402 will now be described. The flowchart of Figure 50 shows the processing performed by the demultiplexer 2402. The demultiplexer 2402 receives, from the interface device 2401, the system stream of the MPEG-1 format corresponding to the media content, and splits the system stream thus received into a video stream and an audio stream. The video stream is output to the video clipping device 2403, and the audio stream is output to the audio clipping device 2404 (S1 to S10). After completion of the output of the video stream and the audio stream (S9 and S11), the completion of the processing performed by the demultiplexer 2402 is reported to the interface device 2401 (S12). As indicated by the flowchart of Figure 50, except for the transmission of the processing-completion notification, the processing performed by the demultiplexer 2402 is identical with the processing performed by the demultiplexer of the first embodiment.
The processing performed by the video clipping device 2403 will now be described. The flowchart of Figure 51 shows the processing performed by the video clipping device 2403. As indicated by that flowchart, except that a processing-completion notification is transmitted to the interface device 2401 when the processing ends (S15 and S17), the processing performed by the video clipping device 2403 is identical with the processing performed by the video clipping device according to the first embodiment.
The processing performed by the audio clipping device 2404 will now be described. The flowchart of Figure 52 shows the processing performed by the audio clipping device 2404. As indicated by the flowchart of Figure 52, except that a processing-completion notification is transmitted to the interface device 2401 when the processing ends (S11 and S12), the processing performed by the audio clipping device 2404 is identical with the processing performed by the audio clipping device described in connection with the first embodiment.
In the formation step 1803, the video data and the audio data output in the extraction step 1802 are time-division multiplexed by means of the method standardized for the MPEG-1 format under international standard ISO/IEC IS 11172-1. When the media content is stored in a plurality of separate files, the video and audio streams are multiplexed for each of the files, in the order in which the streams are output in the extraction step 1802.
In the transfer step 1804, the system stream of the MPEG-1 format multiplexed in the formation step 1803 is transmitted over the line. When a plurality of MPEG-1 system streams are output in the formation step 1803, all of the system streams are transmitted in the order of their output.
In the present embodiment, in a case where the media content is stored in a plurality of separate files each of which is processed in the extraction step 1802, the video streams and the audio streams relating to the plurality of files of the media content may be concatenated in the formation step 1803, and the streams thus concatenated may be multiplexed into a single system stream of the MPEG-1 format; even then, the same advantageous result as that obtained in the formation step 1803 can be achieved. In this case, the time codes must be changed by the video clipping device 2403 in such a way that the counter C used for counting the number of output frames continues to increase across the video streams; the counter C is initialized only at the start of the first file (S3 shown in Figure 51, and S18). The processing performed by the video clipping device 2403 in this case is provided in the flowchart of Figure 53. Although in the fifth embodiment the description data and the structure description data are described separately from each other, these data can also be described as a single set of data by appending the attributes "seq" and "locator" of the <mediaobject> element of the structure description data to the <mediaobject> element of the description data.
Sixth embodiment
A sixth embodiment of the present invention will now be described with reference to the drawings. In the present embodiment, a moving picture of the MPEG-1 format is taken as the media content. In this case, a media segment corresponds to a single scene. Further, a score corresponds to the objective degree of contextual importance of a scene of interest.
The block diagram of Figure 54 shows a data processing method according to the sixth embodiment of the present invention. In Figure 54, reference numeral 3101 designates a selection step; 3102 designates an extraction step; 3103 designates a formation step; 3104 designates a transfer step; and 3105 designates a database. In the selection step 3101, a scene of the media content is selected from the description data, and the data relating to the start time and the end time of the scene thus selected, as well as the data representing a file storing the data, are output. Thus, the processing relating to the selection step 3101 is identical with the processing performed in the selection step of the fifth embodiment. In the extraction step 3102, the data sets representing the start time and the end time of the scene and the data representing the file, output in the selection step 3101, are received. Further, the data relating to the segments defined by the start times and the end times output in the selection step 3101 are extracted from the media content file by reference to the structure description data. The processing relating to the extraction step 3102 is identical with the processing performed in the extraction step described in connection with the fifth embodiment. In the formation step 3103, some or all of the streams output in the extraction step 3102 are multiplexed in accordance with the degree of congestion determined in the transfer step 3104, whereby a system stream of the MPEG-1 format is constituted. In the transfer step 3104, the degree of congestion of the line used for transmitting the MPEG-1 system stream is determined, and the result of the determination is transmitted to the formation step 3103. Further, in the transfer step 3104, the MPEG-1 system stream prepared in the formation step 3103 is transmitted over the line. Reference numeral 3105 designates a database used for storing the media content, its structure description data, and the description data.
The block diagram of Figure 55 shows the processing performed in formation step 3103 and transfer step 3104 according to the sixth embodiment. In Figure 55, formation step 3103 is carried out by a stream selection device 3201 and a multiplexing device 3202. Transfer step 3104 is carried out by a congestion determination device 3203 and a transmission device 3204. The stream selection device 3201 receives the video and audio streams output in extraction step 3102 and the degree of congestion output by the congestion determination device 3203. If the degree of congestion of the line is low enough to permit transmission of all the data sets, all the streams are output to the multiplexing device 3202. If the line is busy, or has so high a degree of congestion that transmitting all the data sets would require a long time, only some of the video and audio streams are selected and output to the multiplexing device 3202. In this case, the selection may be performed in more than one way, namely: selecting only the base layer of the video streams; selecting only the monaural channel of the audio streams; selecting only the left stereo channel of the audio streams; selecting only the right stereo channel of the audio streams; or combinations thereof. Here, if there is only a single video stream and a single audio stream, those streams are output irrespective of the degree of congestion. The multiplexing device 3202 time-division multiplexes the video and audio streams output from the stream selection device 3201 by the multiplexing method of the MPEG-1 format standardized as international standard ISO/IEC IS 11172-1. The congestion determination device 3203 checks the current state, i.e. the degree of congestion, of the line used for transmitting the streams, and outputs the result of the check to the stream selection device 3201. The transmission device 3204 transmits over the line the MPEG-1 system stream multiplexed by the multiplexing device 3202.
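By way of illustration, the decision made by the stream selection device of Figure 55 can be sketched as follows. This is a hypothetical sketch only: the stream records, the `layer`/`channel` fields, and the 0.0–1.0 congestion scale with a 0.5 cut-off are assumptions made for the example, not part of the embodiment.

```python
# Hypothetical sketch of the Figure 55 stream-selection rule: given a reported
# congestion level for the line, decide which elementary streams to pass to the
# MPEG-1 multiplexer.  Field names and the congestion scale are illustrative.

def select_streams(video_streams, audio_streams, congestion):
    """Return the subset of streams to multiplex under the given congestion.

    congestion: 0.0 (idle line) .. 1.0 (fully busy), an assumed scale.
    """
    # A single video and a single audio stream are always output as-is,
    # irrespective of congestion, as the embodiment states.
    if len(video_streams) <= 1 and len(audio_streams) <= 1:
        return video_streams + audio_streams
    if congestion < 0.5:          # line free enough: send everything
        return video_streams + audio_streams
    # Busy line: keep only the base layer of the video and a mono audio track.
    chosen = [s for s in video_streams if s.get("layer") == "base"]
    chosen += [s for s in audio_streams if s.get("channel") == "mono"]
    return chosen
```

Under low congestion every stream reaches the multiplexer; under high congestion only the reduced subset does, mirroring the text above.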
In the present embodiment, when there is only a single video stream, the stream selection device 3201 outputs that video stream irrespective of the degree of congestion. However, if transmitting all the data relating to the video stream over the line would require a large amount of time, only the representative images of the video stream may be selected for transmission. When representative images are selected in this way, their time codes are described in the context description data. Alternatively, only intra-coded frames, i.e. I-pictures, which can be decoded independently as single frames, may be selected.
The 7th embodiment
The seventh embodiment of the present invention will now be described with reference to the accompanying drawings. In the seventh embodiment, a moving picture of MPEG-1 format is used as the media content. In this case, a media segment corresponds to a single scene cut. Further, in the present embodiment, the score corresponds to the objective degree of contextual importance of a scene of interest viewed from the angle of a keyword relating to a character or an event selected by the user.
The block diagram of Figure 56 shows the processing method according to the seventh embodiment of the invention. In Figure 56, reference numeral 3301 designates a selection step, and 3302 designates an extraction step. In selection step 3301, a scene of the media content is selected from the context description data by means of a keyword and the score appended with it to the context description data, and data relating to the start time and end time of the scene thus selected are output. In extraction step 3302, the data relating to the segments defined by the start and end times output in selection step 3301 are extracted.
Figure 57 shows the structure of the context description data according to the seventh embodiment. In the present embodiment, the context is described in the form of a tree structure. The elements of the tree are arranged from left to right in chronological order. In Figure 57, the root of the tree, designated <contents>, represents a single item of content, and its title is assigned to it as an attribute.
The children of <contents> are specified by <section> elements. A keyword representing the content or a character of a scene, together with a priority representing the degree of importance of that keyword, is appended to each <section> element as an attribute in the form of a (keyword, priority) pair. The priority is assumed to be an integer ranging from 1 to 5, where 1 denotes the lowest and 5 the highest degree of importance. The (keyword, priority) pairs are set so that they can serve as indices for retrieving a particular scene or character desired by the user. For this reason, a plurality of (keyword, priority) pairs may be appended to a single <section> element. For instance, when characters are being described, a number of pairs equal to the number of characters appearing in the scene of interest is appended to a single <section> element. The value of the priority appended to a scene is set so as to become greater as more characters appear in the scene of interest.
The children of a <section> element are specified by <section> or <segment> elements. Here, a <section> element may itself serve as a child of another <section> element. However, a single <section> element cannot have a mixture of child <section> and child <segment> elements.
A <segment> element represents a single scene cut. In addition to (keyword, priority) pairs similar to those appended to <section> elements, time information relating to the scene of interest, namely an attribute "start" representing the start time and an attribute "end" representing the end time, is appended to each <segment> element. Scene cutting can be performed using commercially available software or software available over a network; alternatively, scenes may be cut by hand. The attribute representing the start time of a scene can also specify the start frame of the scene of interest. Although in the present embodiment the time information is expressed by the start time and end time of a scene cut, a similar result can be achieved when the time information is expressed by the start time of the scene of interest and the duration of that scene. In this case, the end time of the scene of interest is obtained by adding the duration to the start time.
In the case of a story, such as that of a film, its chapters, sections, and paragraphs can be described on the basis of <section> elements in the context description data. In another example, when baseball is described, the top-level <section> elements can be used to describe innings, and their child <section> elements to describe half-innings. Further, the grandchild <section> elements can be used to describe the at-bat of each batter, and the great-grandchild <section> elements to describe each pitch, the interval between two pitches, and the result of each swing.
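The tree just described can be sketched as a small XML instance. This is a hypothetical example: the tag and attribute names below are back-translated approximations of those in the Figures 58 to 66 DTD rather than the exact DTD, and the standard-library parser merely stands in for whatever XML tooling would process it.

```python
# A minimal, hypothetical instance of the Figure 57 tree: a <contents> root,
# a nested <section> with a (keyword, priority) attribute pair, and leaf
# <segment> elements carrying the start/end times of each scene cut.
import xml.etree.ElementTree as ET

DESCRIPTION = """\
<contents title="baseball game">
  <section keyword="inning-1" priority="3">
    <segment keyword="home-run" priority="5" start="00:12:00" end="00:13:10"/>
    <segment keyword="strike-out" priority="2" start="00:15:00" end="00:15:40"/>
  </section>
</contents>
"""

root = ET.fromstring(DESCRIPTION)
# Each leaf <segment> carries its own (keyword, priority) pair and the
# start/end times of the scene cut it represents.
scenes = [(s.get("keyword"), int(s.get("priority")), s.get("start"), s.get("end"))
          for s in root.iter("segment")]
print(scenes)
```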
Context description data having such a structure can be expressed in a computer by using, for example, Extensible Markup Language (XML). XML is a data description language whose standardization is pursued by the World Wide Web Consortium; version 1.0 was recommended on February 10, 1998. The specification of XML version 1.0 can be obtained from http://www.w3.org/TR/1998/REC-xml-19980210. Figures 58 to 66 show an example of a Document Type Definition (DTD) used for describing the context description data of the present embodiment in XML, together with an example of context description data described with that DTD. Figures 67 to 80 show an example of description data obtained by adding to the context description data shown in Figures 58 to 66 representative data (dominant data) of the media segments, for example a representative image (video data) and a keyword (audio data), together with a DTD used for describing that description data in XML. The processing relating to selection step 3301 will now be described. In the present embodiment, the processing relating to selection step 3301 is performed on the <segment> elements and on the <section> elements having child <segment> elements. Figure 81 is a flowchart showing the processing relating to selection step 3301 of the seventh embodiment. In selection step 3301, a keyword serving as an index for scene selection and a threshold for its priority are input. From the <section> elements of the context description data that have child <segment> elements, those <section> elements whose keyword matches the input index and whose priority exceeds the threshold are selected (S2 and S3). Next, from the child <segment> elements of the <section> elements thus selected, only those whose keyword matches the index and whose priority exceeds the threshold are selected (S5 and S6). The start time and end time of each scene thus chosen are determined from the attributes "start" and "end" of the selected <segment> elements, and these start and end times are output (S7, S8, S9, S10, S11, S1, and S4).
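The two-stage test of Figure 81 can be sketched as follows. This is an illustrative sketch under assumed data structures (each element reduced to a list of `(keyword, priority)` pairs plus, for segments, start/end strings), not the patented implementation.

```python
# Hedged sketch of the Figure 81 selection: keep a <section> whose keyword
# matches the search index and whose priority exceeds the threshold, then
# keep only its child <segment> elements that also satisfy both tests.

def select_scenes(sections, index_keyword, threshold):
    chosen = []
    for section in sections:
        pairs = section["pairs"]                  # list of (keyword, priority)
        if not any(k == index_keyword and p > threshold for k, p in pairs):
            continue                              # steps S2/S3: section rejected
        for seg in section["segments"]:           # steps S5/S6: test each child
            if any(k == index_keyword and p > threshold
                   for k, p in seg["pairs"]):
                chosen.append((seg["start"], seg["end"]))
    return chosen                                 # start/end pairs, steps S7-S11
```

Note that a matching <segment> under a non-matching <section> is never reached, since the parent is filtered first.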
Although in the present embodiment the selection is performed on the <segment> elements and on the <section> elements having child <segment> elements, the selection may also be performed on other parent-child relations, for example a <section> element in a certain hierarchical layer and its child <section> elements. Moreover, the parent-child relation is not limited to two hierarchical layers; the number of layers may be greater than two, and the same processing can be applied down to the leaves of the tree structure, i.e. the <segment> elements. Furthermore, the search index may be set as a pair comprising a plurality of keywords and a condition determining the relation between those keywords. Conditions determining the relation between keywords include, for example, combinations such as "either of the two", "both of the two", or "either or both of the two". As for the thresholds used for selection, in the case of a plurality of keywords the processing may be performed for each keyword. The keywords serving as search indices may be input by the user, or may be set automatically by the system from a user profile.
The processing relating to extraction step 3302 is identical with that performed in the extraction step described in the first embodiment.
As shown in Figure 82, the advantage of the present embodiment is that, by inputting the video stream output in extraction step 3302 to a video playback device and the audio stream output in the same step to an audio playback device, and by playing these video and audio streams in synchronism with each other, only the scenes of the media content in which the viewer is interested can be played. Moreover, by multiplexing these video and audio streams, a system stream of MPEG-1 format relating to the set of scenes of interest to the viewer can also be prepared.
The 8th embodiment
The eighth embodiment of the present invention will now be described. The eighth embodiment differs from the seventh embodiment only in the processing relating to the selection step.
The processing relating to selection step 3301 will now be described. In the present embodiment, the processing relating to selection step 3301 is performed only on the <segment> elements. Figure 83 is a flowchart showing the processing relating to selection step 3301 of the eighth embodiment. As shown in Figure 83, in selection step 3301 a keyword serving as an index for scene selection and a threshold for its priority are input. From the <segment> elements of the context description data, those elements whose keyword matches the index and whose priority exceeds the threshold are selected (S1 and S6).
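The eighth embodiment flattens the test of Figure 83 to the leaf level: every <segment> is examined directly, with no preliminary filter on its parent. A sketch under the same assumed data layout as before (each segment holding `(keyword, priority)` pairs and start/end strings):

```python
# Hypothetical sketch of the Figure 83 selection: every <segment> whose
# keyword matches the search index with priority above the threshold is
# kept, regardless of which <section> it belongs to.

def select_segments(segments, index_keyword, threshold):
    return [(s["start"], s["end"]) for s in segments
            if any(k == index_keyword and p > threshold for k, p in s["pairs"])]
```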
Although in the eighth embodiment the selection is performed only on the <segment> elements, it may also be performed on the <section> elements of a certain hierarchical layer. Moreover, the search index may be set as a pair comprising a plurality of keywords and a condition determining the relation between those keywords, such as "either of the two", "both of the two", or "either or both of the two". As for the thresholds used for selection, in the case of a plurality of keywords the processing may be performed for each keyword.
The 9th embodiment
The ninth embodiment of the present invention will now be described. The ninth embodiment differs from the seventh embodiment only in the processing relating to the selection step.
The processing relating to selection step 3301 will now be described with reference to the accompanying drawings. As in the processing described for the seventh embodiment, in the selection step 3301 of the ninth embodiment the selection is performed on the <segment> elements and on the <section> elements having child <segment> elements. In the present embodiment, a threshold is set on the sum of the durations of all the scenes to be selected; specifically, the selection is made in such a way that the sum of the durations of the scenes selected so far is a maximum while still remaining below the threshold. Figure 84 is a flowchart showing the processing relating to the selection step of the ninth embodiment. In selection step 3301, a keyword serving as a search index is received. From the <section> elements having child <segment> elements, all the elements having a keyword identical with the search index are then extracted. The <section> elements thus selected are taken as a set Ω (S1 and S2). The elements of the set Ω are stored in descending order of priority (S3). From the elements of the set Ω thus selected, the <section> element whose keyword, i.e. search index, has the maximum priority value is then selected (S5), and the element thus selected is deleted from the set Ω (S6). In this case, if a plurality of <section> elements all have the maximum priority value, all of them are extracted. Among the child <segment> elements of the <section> elements thus selected, only those having the search index are selected, and the <segment> elements thus selected are added to another set Ω′, whose initial value is the empty set (S2). The sum of the durations of the scenes relating to the set Ω′ is then obtained (S8) and compared with the threshold (S9). If the sum of the durations is equal to the threshold, data relating to all the segments of the <segment> elements included in the set Ω′ are output and the processing ends (S14). If, on the contrary, the sum of the durations is less than the threshold, the processing returns to selecting from the set Ω the <section> element whose search index, i.e. keyword, has the highest priority, and the above selection processing is repeated. If the set Ω is empty, data relating to all the segments of the <segment> elements of the set Ω′ are output and the processing ends (S4). If the sum of the durations of the scenes relating to the set Ω′ is greater than the threshold, the following processing is performed. The <segment> element whose search index, i.e. keyword, has the minimum priority is deleted from the set Ω′ (S11). If at this time a plurality of <segment> elements all have the minimum priority, all of them are deleted. The sum of the durations of the <segment> elements of the set Ω′ is obtained (S12) and compared with the threshold (S13). If the sum of the durations is still greater than the threshold, the processing returns to deleting a <segment> element from the set Ω′, and such deletion processing is repeated. Here, if the set Ω′ becomes empty, the processing ends (S10). If, on the contrary, the sum of the durations is less than the threshold, data relating to all the segments of the <segment> elements of the set Ω′ are output and the processing ends (S14).
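The Figure 84 procedure can be condensed into a short sketch. This is an illustrative simplification under assumed structures: each element carries a single `keyword` and `priority` rather than pair lists, durations are plain numbers, and the accumulate-then-trim loop stands in for the step-by-step flowchart.

```python
# Condensed sketch of the Figure 84 procedure: sections matching the search
# keyword are visited in descending priority; their matching segments are
# accumulated until the total duration reaches the threshold, after which
# the lowest-priority segments are discarded until the total fits.

def select_within_duration(sections, keyword, limit):
    # Omega: matching sections, visited in descending priority (S1-S3, S5)
    omega = sorted((s for s in sections if s["keyword"] == keyword),
                   key=lambda s: s["priority"], reverse=True)
    chosen = []                                    # Omega' starts empty (S2)
    for section in omega:
        chosen += [g for g in section["segments"] if g["keyword"] == keyword]
        if sum(g["dur"] for g in chosen) >= limit:
            break                                  # duration check (S8, S9)
    # Drop the lowest-priority segments while the total exceeds the limit
    chosen.sort(key=lambda g: g["priority"], reverse=True)  # (S11-S13)
    while chosen and sum(g["dur"] for g in chosen) > limit:
        chosen.pop()
    return chosen                                  # output Omega' (S14)
```

The result is the highest-priority set of matching segments whose total duration does not exceed the threshold, which is the stated aim of the embodiment.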
Although in the present embodiment the selection is performed on the <segment> elements and on the <section> elements having child <segment> elements, the selection may also be performed on other parent-child relations, for example a <section> element and its child <segment> elements in another layer. Moreover, the parent-child relation is not limited to a two-level hierarchy; the number of levels may be increased. For example, when the processing covers the elements ranging from the top-level <section> element down to its descendant <segment> elements, the top-level <section> element is selected first. The children of the <section> element thus selected are then selected, and then the grandchildren of the elements thus chosen, and this round of selection is repeated until <segment> elements have been selected. The <segment> elements thus selected constitute the set Ω′.
In the present embodiment, the elements are stored in descending order of the priority of the search index, i.e. keyword. A threshold may instead be set on the priority values, and elements may be selected in descending order of priority. Thresholds may be set separately for the <section> elements and for the <segment> elements.
In the present embodiment, the search index is defined as a single keyword. However, the search index may be set as a pair comprising a plurality of keywords and a condition determining the relation between those keywords, such as "either of the two", "both of the two", or "either or both of the two". In this case, a rule must be determined as to which keyword's priority is used when <section> and <segment> elements are selected or deleted. An example of such a rule is as follows: if the condition is "either of the two", the maximum of the priority values corresponding to the respective keywords is taken as the priority; if the condition is "both of the two", the minimum of the priority values corresponding to the respective keywords is taken as the priority. Even when the condition is "either or both of the two", the priority value can be determined by this rule. Further, when there are a plurality of search indices, i.e. keywords, a threshold may be set on the priority of the keywords serving as search indices, and only those elements whose priority value exceeds the threshold may be processed.
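The priority rule just given can be sketched directly. The function below is an illustrative sketch under assumed names; it returns `None` when the condition is not met at all, a convention chosen for the example rather than taken from the embodiment.

```python
# Sketch of the stated rule for compound search indices: under an "either"
# condition the largest matching keyword priority governs the element,
# under "both" the smallest does (and all keywords must be present).

def effective_priority(pairs, keywords, condition):
    """pairs: the (keyword, priority) pairs of one element."""
    found = {k: p for k, p in pairs if k in keywords}
    if condition == "either":
        return max(found.values()) if found else None
    if condition == "both":
        if set(keywords) <= set(found):
            return min(found.values())
        return None                 # some required keyword is absent
    raise ValueError("unknown condition")
```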
The tenth embodiment
The tenth embodiment of the present invention will now be described. The tenth embodiment differs from the seventh embodiment only in the processing relating to the selection step.
The processing relating to selection step 3301 will now be described with reference to the accompanying drawings. As in the processing described for the eighth embodiment, in the selection step 3301 of the tenth embodiment the selection is performed only on the <segment> elements. Further, as in the ninth embodiment, a threshold is set on the sum of the durations of all the scenes to be selected; specifically, elements are selected in such a way that the sum of the durations of the scenes selected so far is a maximum while still remaining below the threshold. Figure 85 is a flowchart showing the processing relating to the selection step of the tenth embodiment.
In selection step 3301, a keyword serving as a search index is received. The set Ω′ is initialized to the empty set (S2). From the <segment> elements, all the elements having a keyword identical with the search index are then extracted; the <segment> elements thus selected are taken as a set Ω. The <segment> elements whose keyword matches the search index are then stored in descending order of priority (S3). From the elements of the set Ω thus ordered, the <segment> elements whose keyword, i.e. search index, has the maximum priority value are extracted (S5) and deleted from the set Ω. In this case, if a plurality of <segment> elements all have the maximum priority value, all of them are selected. If the set Ω is empty, data relating to all the elements of the <segment> elements of the set Ω′ are output and the processing ends (S4). The sum T1 of the durations of the <segment> elements thus extracted is computed (S6), as is the sum T2 of the durations of the scenes of the set Ω′ (S7). The sum of T1 and T2 is compared with the threshold (S8). If the sum of T1 and T2 exceeds the threshold, data relating to all the segments of the <segment> elements included in the set Ω′ are output and the processing ends (S11). If the sum of T1 and T2 is equal to the threshold, all the extracted elements are added to the set Ω′ (S9 and S10), data relating to all the segments of the <segment> elements included in the set Ω′ are output, and the processing ends (S11). If, on the contrary, the sum of T1 and T2 is less than the threshold, all the extracted elements are added to the set Ω′, and the processing then returns to selecting <segment> elements from the set Ω.
Although in the present embodiment the selection is performed on the <segment> elements, it may also be performed on the <section> elements of another layer. In the present embodiment, the elements are sorted in descending order of the priority of the keyword serving as the search index. A threshold may instead be set on the priority values, and elements may be selected in descending order of priority as long as their priority value is greater than the threshold.
Further, in the present embodiment the search index is defined as a single keyword. However, the search index may be set as a pair comprising a plurality of keywords and a condition determining the relation between those keywords, such as "either of the two", "both of the two", or "either or both of the two". In this case, a rule must be determined as to which keyword's priority is used when <section> and <segment> elements are selected or deleted. An example of such a rule is as follows: if the condition is "either of the two", the maximum of the priority values corresponding to the respective keywords is taken as the priority; if the condition is "both of the two", the minimum is taken. Even when the condition is "either or both of the two", the priority value can be determined by this rule. Further, when there are a plurality of search indices, i.e. keywords, a threshold may be set on the priority of the keywords serving as search indices, and only those elements whose priority value exceeds the threshold may be processed.
The 11 embodiment
The eleventh embodiment of the present invention will now be described. The context description data of the present embodiment differ from those of the seventh to tenth embodiments in the manner of describing the viewpoint, which serves as the keyword used for selecting scenes, and the degree of importance of that viewpoint. As shown in Figure 57, in the seventh to tenth embodiments the viewpoint and its degree of importance are described by assigning to the <section> or <segment> elements combinations of a keyword and a degree of importance, i.e. (keyword, priority) pairs. In contrast, as shown in Figure 133, in the eleventh embodiment the viewpoint and its degree of importance are described by assigning an attribute "povlist" to the root <contents> and an attribute "povvalue" to the <section> or <segment> elements.
As shown in Figure 134, the attribute "povlist" corresponds to the viewpoints expressed in vector form. As shown in Figure 135, the attribute "povvalue" corresponds to the degrees of importance expressed in vector form. The viewpoints and the degrees of importance are placed in one-to-one correspondence and arranged in a given order, thus forming the attributes "povlist" and "povvalue". For example, as shown in Figures 134 and 135, viewpoint 1 has an importance value of 5, viewpoint 2 an importance value of 0, viewpoint 3 an importance value of 2, and viewpoint "n" ("n" being a positive integer) an importance value of 0. In terms of the seventh embodiment, the importance value of 0 for viewpoint 2 indicates that no keyword is assigned to viewpoint 2; that is, there is no (keyword, priority) pair for it.
Figures 136 to 163 and Figures 164 to 196 show some examples of the Document Type Definitions (DTDs) used for describing the context description data of the present embodiment in a computer by means of Extensible Markup Language (XML), together with examples of context description data described with those DTDs. With these context description data, the same processing operations as those described in the seventh to tenth embodiments can also be realized in the present embodiment.
In the present embodiment, the attribute "povlist" is assigned to the root <contents>, and the attribute "povvalue" is attached to the <section> or <segment> elements. As shown in Figure 197, the attribute "povlist" may also be attached to the <section> or <segment> elements. For a <section> or <segment> element to which the attribute "povlist" has been assigned, the attribute "povvalue" corresponds to the "povlist" assigned to that element itself. For a <section> or <segment> element to which no attribute "povlist" is assigned, the attribute "povvalue" corresponds either to the "povlist" assigned to the root <contents>, or to the "povlist" of the nearest <section> element that has been assigned the attribute "povlist" among the ancestors of the element in question.
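The nearest-ancestor rule of Figure 197 can be sketched as a short resolution routine. This is an illustrative sketch under an assumed node layout (plain dictionaries with a `parent` link); the viewpoint/importance pairing mirrors the vector correspondence of Figures 134 and 135.

```python
# Sketch of resolving which "povlist" governs an element's "povvalue":
# the element's own povlist wins, otherwise the nearest ancestor's,
# otherwise the one on the root <contents>.

def governing_povlist(node):
    while node is not None:
        if node.get("povlist") is not None:
            return node["povlist"]
        node = node.get("parent")       # walk up toward the root
    return None

def importance_of(node, viewpoint):
    """Pair the viewpoint vector with the node's importance vector."""
    povlist = governing_povlist(node)
    povvalue = node["povvalue"]         # same length and order as povlist
    return dict(zip(povlist, povvalue)).get(viewpoint, 0)
```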
Figures 198 to 252 show examples of the DTDs, corresponding to the context description data shown in Figure 197, that are used for describing the context description data of the present embodiment in a computer by means of XML, together with examples of context description data described with those DTDs. In these examples, the attribute "povvalue" assigned to the <section> or <segment> elements corresponds to the attribute "povlist" assigned to the root <contents>.
The 12 embodiment
The twelfth embodiment of the present invention will now be described with reference to the accompanying drawings. In the present embodiment, a moving picture comprising a system stream of MPEG-1 format is used as the media content. In this case, a media segment corresponds to a single scene cut.
Figure 86 is a block diagram showing the media processing method according to the twelfth embodiment of the invention. In Figure 86, reference numeral 4101 designates a selection step; 4102 designates an extraction step; 4103 designates a formation step; 4104 designates a delivery step; and 4105 designates a database. In selection step 4101, a scene of the media content is selected on the basis of the context description data, and there are output data relating to the start time and end time of the scene thus selected, together with data representing the file in which those data are stored. In extraction step 4102, there are received the data sets representing the start time and end time of the scene and the data set representing the file, both output in selection step 4101. Referring to the structure description data, the data relating to the segments determined by the start and end times received in selection step 4101 are extracted from the file of the media content. In formation step 4103, the data output in extraction step 4102 are multiplexed, whereby a system stream of MPEG-1 format is formed. In delivery step 4104, the MPEG-1 system stream formed in formation step 4103 is delivered over a line. Reference numeral 4105 designates a database storing the media content, the structure description data, and the context description data.
The structure of the structure description data employed in the twelfth embodiment is identical with that of the fifth embodiment. Specifically, structure description data having the structure shown in Figure 37 are used.
Figure 87 shows the structure of the context description data of the twelfth embodiment. The context description data of the present embodiment correspond to the context description data of the seventh embodiment with links to the <mediaobject> elements of the structure description data added. Specifically, the root <contents> of the context description data has child <mediaobject> elements, and each <mediaobject> element has child <section> elements. The <section> and <segment> elements are identical with those used in the seventh embodiment. An attribute "id" is added to the <mediaobject> elements of the context description data. By means of this attribute "id", the <mediaobject> elements of the structure description data are associated with the <mediaobject> elements of the context description data. The scenes of the media content described by the descendants of a <mediaobject> element of the context description data are stored in the file specified by the <mediaobject> element of the structure description data having an attribute "id" of the same value. Further, the time information "start" and "end" assigned to the <segment> elements is determined from the time elapsed since the beginning of each file. Specifically, when one item of media content comprises a plurality of files, the instant at which each file begins corresponds to time 0, and the start time of each scene is expressed as the time that has elapsed from the beginning of the file up to the scene of interest.
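The attribute-"id" link and the per-file time base can be sketched together. This is a hypothetical example: the file table and field names are assumptions standing in for the structure description data of Figure 37.

```python
# Sketch of the Figure 87 linkage: the start/end times carried by a
# <segment> are elapsed times from the head of the file that its
# <mediaobject> ancestor names, so locating a scene means pairing the
# "id" with the structure description's file table.

FILE_TABLE = {"obj1": "part1.mpg", "obj2": "part2.mpg"}  # assumed file table

def locate(segment):
    """Map a selected segment to (file, start, end); times are file-relative,
    with time 0 at the beginning of the named file."""
    return (FILE_TABLE[segment["mediaobject_id"]],
            segment["start"], segment["end"])
```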
The structure description data and the context description data can be expressed in a computer by using, for example, Extensible Markup Language (XML). Figure 39, relating to the fifth embodiment, shows an example of the structure description data. Further, Figures 88 to 96 show an example of a Document Type Definition (DTD) used for describing in XML the context description data shown in Figure 87, together with an example of context description data described with that DTD.
The processing relating to selection step 4101 will now be described. In selection step 4101, any of the methods described in the seventh to tenth embodiments is adopted as the method of selecting scenes. The "id" corresponding to the <mediaobject> element of the structure description data is output at the same time as the start time and end time of the selected scene. When the structure description data are described in the form of an XML file with the DTD shown in Figure 39, and the context description data are described in the form of an XML file with the DTD shown in Figures 88 to 96, an example of the data output from selection step 4101 is identical with that of the fifth embodiment shown in Figure 6.
The processing relating to extraction step 4102 is identical with that of the extraction step described in the fifth embodiment. The processing relating to formation step 4103 is likewise identical with that of the formation step described in the fifth embodiment. Further, the processing relating to delivery step 4104 is identical with that of the delivery step described in the fifth embodiment.
The 13 embodiment
The thirteenth embodiment of the present invention will now be described with reference to the accompanying drawings. In the present embodiment, a moving picture comprising a system stream of MPEG-1 format is used as the media content. In this case, a media segment corresponds to a single scene cut.
Figure 97 is a block diagram showing the media processing method according to the thirteenth embodiment of the invention. In Figure 97, reference numeral 4401 designates a selection step; 4402 designates an extraction step; 4403 designates a formation step; 4404 designates a delivery step; and 4405 designates a database. In selection step 4401, a scene of the media content is selected on the basis of the context description data, and there are output data relating to the start time and end time of the scene thus selected, together with data representing the file in which those data are stored. The processing relating to selection step 4401 is identical with that relating to the selection step described in the twelfth embodiment. In extraction step 4402, there are received the data sets representing the start time and end time of the scene and the data set representing the file, both output in selection step 4401. Referring to the structure description data, the data relating to the segments determined by the start and end times received in selection step 4401 are extracted from the file of the media content. The processing relating to extraction step 4402 is identical with that relating to the extraction step described in the twelfth embodiment. In formation step 4403, part or all of the system streams output in extraction step 4402 are multiplexed in accordance with the deliverable capacity determined in delivery step 4404, whereby a system stream of MPEG-1 format is formed. The processing relating to formation step 4403 is identical with that relating to the formation step described in the sixth embodiment. In delivery step 4404, the deliverable capacity of the line is determined, and the result of the determination is sent to formation step 4403. Further, the MPEG-1 system stream formed in formation step 4403 is delivered over the line. The processing relating to delivery step 4404 is identical with that relating to the transfer step described in the sixth embodiment. Reference numeral 4405 designates a database storing the media content, the structure description data, and the context description data.
Although the thirteenth embodiment employs a system stream of MPEG-1 format as the media content, any other format from which a time code can be obtained for each screen may be used, and the same advantageous results as those achieved with the MPEG-1 system stream can be obtained.
The following embodiments describe summaries of modes corresponding to the technical solutions of the present invention. In the descriptions below, "audio data" designates data relating to sound, which includes audible tones, silence, speech, music, quiet, external noise, and the like. "Video data" designates audible and visual data, such as moving pictures, still pictures, or characters (captions). "Score" designates a score computed from the content of the audio data, for example audible tones, silence, speech, music, quiet, or external noise, and a score assigned according to the presence of characters in the video data and combinations thereof. Scores other than those described above may also be used.
The Fourteenth Embodiment
The fourteenth embodiment of the present invention will now be described. Figure 98 is a block diagram showing the processing of the data processing method of the present embodiment. In the figure, reference numeral 501 designates a selection step; 503, an extraction step. In the selection step 501, at least one segment or scene of the media content is selected on the basis of the score of the context description data, and the segment or scene thus selected is output. A selected segment is represented by, for example, the start time and end time of the selected segment. In the extraction step 503, only the data pertaining to the segments of the media content demarcated by the segments selected in the selection step 501 (hereinafter called "media segments"), that is, the data pertaining to the selected segments, are extracted. Here, the score represents the degree of contextual importance of the scene of interest, viewed objectively from the standpoint of a keyword relating to a character or an event selected by the user.
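As a rough illustration, the selection and extraction steps described above can be sketched as follows. The `Segment` record, the score threshold, and the list of time-stamped frames standing in for the media content are assumptions made for this sketch only; the embodiment itself does not prescribe them.

```python
# Minimal sketch of the selection and extraction steps of the fourteenth
# embodiment. Segments carry a contextual-importance score taken from the
# context description data; selection keeps those whose score meets a
# threshold, and extraction keeps only the media data inside chosen ranges.
from dataclasses import dataclass

@dataclass
class Segment:
    start: float  # start time of the media segment
    end: float    # end time of the media segment
    score: int    # contextual importance score (an assumed representation)

def select(segments, threshold):
    """Selection step: return (start, end) pairs of sufficiently scored segments."""
    return [(s.start, s.end) for s in segments if s.score >= threshold]

def extract(media, selected):
    """Extraction step: keep only the frames lying inside a selected range."""
    return [frame for frame in media
            if any(start <= frame["t"] < end for start, end in selected)]

segments = [Segment(0, 10, 3), Segment(10, 20, 5), Segment(20, 30, 1)]
media = [{"t": t} for t in range(0, 30, 5)]   # toy stand-in for media data
picked = select(segments, threshold=4)        # [(10, 20)]
clip = extract(media, picked)                 # frames at t = 10 and t = 15
```

The same pair of functions underlies the later embodiments; only the source of the scores and the medium being cut changes.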
The Fifteenth Embodiment
The fifteenth embodiment of the present invention will now be described. Figure 99 is a block diagram showing the processing of the data processing method of the present embodiment. In the figure, reference numeral 501 designates a selection step; 505, a playback step. In the playback step 505, only the data pertaining to the media segments demarcated by the selected segments output in the selection step 501 are played back. The processing of the selection step 501 is identical with that described in the first through thirteenth embodiments and, for brevity, is not described again here.
The Sixteenth Embodiment
The sixteenth embodiment of the present invention will now be described. Figure 100 is a block diagram showing the processing of the data processing method of the sixteenth embodiment. In the figure, reference numeral 507 designates a video selection step; 509, an audio selection step. The video selection step 507 and the audio selection step 509 are both included in the selection step 501 described in the fourteenth and fifteenth embodiments.
In the video selection step 507, video data segments or scenes are selected with reference to the context description data pertaining to the video data, and the segments thus selected are output. In the audio selection step 509, audio data segments are selected with reference to the context description data pertaining to the audio data, and the segments thus selected are output. Here, a selected segment is represented by, for example, the start time and end time of the selected period. In the extraction step 503 described in the fourteenth embodiment, only the data of the video data segments selected in the video selection step 507 are extracted. In the playback step 505, only the data of the audio data segments selected in the audio selection step 509 are played back.
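The split between the two selection steps can be sketched as below; the dictionary form of the description data and the shared threshold are assumptions for illustration. Note that the video and audio cuts need not fall on the same boundaries.

```python
# Sketch of the sixteenth embodiment: video segments are chosen from the
# context description data of the video data, and audio segments are chosen
# independently from that of the audio data.
def select_video(video_description, threshold):
    """Video selection step (507): pick (start, end) of video scenes by score."""
    return [(s["start"], s["end"]) for s in video_description
            if s["score"] >= threshold]

def select_audio(audio_description, threshold):
    """Audio selection step (509): pick (start, end) of audio segments by score."""
    return [(s["start"], s["end"]) for s in audio_description
            if s["score"] >= threshold]

video_desc = [{"start": 0, "end": 8, "score": 5},
              {"start": 8, "end": 16, "score": 2}]
audio_desc = [{"start": 0, "end": 4, "score": 1},
              {"start": 4, "end": 16, "score": 4}]
video_cut = select_video(video_desc, 3)   # [(0, 8)]
audio_cut = select_audio(audio_desc, 3)   # [(4, 16)]
```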
The Seventeenth Embodiment
The seventeenth embodiment of the present invention will now be described. Figure 101 is a block diagram showing the processing of the data processing method of the present embodiment. In the figure, reference numeral 511 designates a determination step; 513, a selection step; 503, an extraction step; 505, a playback step.
(example 1)
In the present example, the media content comprises a plurality of different media data sets within a single time period. In the determination step 511, structure description data describing the data structure of the media content are received. In this step, the data serving as selection candidates are determined according to determination conditions, such as the capability of the receiving end, the transfer volume of the transmission line, and a user request. In the selection step 513, the data determined as selection candidates in the determination step 511, the structure description data, and the context description data are received. Further, media data sets are selected only from among the data determined as selection candidates in the determination step 511. Since the extraction step 503 is identical with the extraction step of the fourteenth embodiment, and the playback step 505 is identical with the playback step of the fifteenth embodiment, their descriptions are omitted here. The media data comprise several data sets, such as video data, audio data, and text data. In the descriptions of the following examples, the media data include at least one of video data and audio data.
In the present example, as shown in Figure 102, within a time period of the media content, different video data or audio data sets are assigned to channels, and these video or audio data sets are further assigned to hierarchical sets of layers. For example, of the channels transmitting moving pictures, channel 1/layer 1 is assigned video data of standard resolution, and channel 1/layer 2 is assigned video data of high resolution. Of the channels transmitting audio data, channel 1 is assigned stereo data, and channel 2 is assigned monaural data. Figures 103 and 104 show an example of a Document Type Definition (DTD) for describing the structure description data in XML, and an example of structure description data described by means of this DTD.
In the case where the media content is made up of such channels and layers, the processing of the determination step 511 of this example will be described with reference to Figures 105 through 108. As shown in Figure 105, whether a user request exists is determined in step S101. If in step S101 a user request is determined to exist, judgment processing SR-A relating to the user request, shown in Figure 106, is performed.
If in step S101 no user request is determined to exist, processing proceeds to step S103, where it is further determined whether the receivable data are video data, audio data, or both video and audio data. If in step S103 the receivable data are determined to be video data, judgment processing SR-B relating to video data, shown in Figure 107, is performed. If the receivable data are determined to be audio data, judgment processing SR-C relating to audio data, shown in Figure 108, is performed. If both video and audio data are receivable, processing proceeds to step S105. In step S105, the capability of the receiving end that receives the video and audio data is determined; for example, its video display capability, its playback capability, and the speed at which it decompresses compressed data. If the capability of the receiving end is determined to be strong, processing proceeds to step S107; conversely, if the capability of the receiving end is determined to be weak, processing proceeds to step S109. In step S107, the transfer volume of the line over which the video and audio data are to be transmitted is determined. If the transfer volume of the line is determined to be large, processing proceeds to step S109; if the transfer volume of the line is determined to be small, processing proceeds to step S111.
When the capability of the receiving end is weak or the transfer volume of the line is large, the processing of step S109 is performed. In this processing, the receiving end receives the video data of standard resolution over channel 1/layer 1 and receives audio data over channel 2. When the capability of the receiving end is strong and the transfer volume of the line is small, the processing of step S111 is performed. In this processing, the receiving end receives the video data of high resolution over channel 1/layer 2 and receives stereo data over channel 1.
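Under the stated conditions (no user request, both media receivable), the branch through steps S105 to S111 amounts to the small decision function below; the boolean inputs and the returned channel/layer labels are illustrative assumptions.

```python
# Sketch of steps S105-S111 of example 1: a weak receiving end or a heavily
# loaded line falls back to standard video and monaural audio (S109); a
# strong receiving end on a lightly loaded line gets high resolution and
# stereo (S111).
def determine_streams(receiver_strong, transfer_volume_large):
    if not receiver_strong or transfer_volume_large:
        return {"video": ("channel 1", "layer 1"),  # S109: standard resolution
                "audio": "channel 2"}               # monaural
    return {"video": ("channel 1", "layer 2"),      # S111: high resolution
            "audio": "channel 1"}                   # stereo

determine_streams(receiver_strong=True, transfer_volume_large=False)
# high-resolution video and stereo audio
```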
Judgment processing SR-A relating to the user request, shown in Figure 106, will now be described. In this example, the user request is assumed to be a selection of a video layer and an audio channel. In step S151, whether the user requests video data is determined. If in step S151 the user is determined to request video data, processing proceeds to step S153; if the user is determined not to request video data, processing proceeds to step S159. In step S153, whether the user's request for video data corresponds to selection of layer 2 is determined. If "Yes" is selected in step S153, processing proceeds to step S155, where layer 2 is selected for the video data. If "No" is selected in step S153, processing proceeds to step S157, where layer 1 is selected for the video data. In step S159, whether the user requests audio data is determined. If in step S159 the user is determined to request audio data, processing proceeds to step S161; if the user is determined not to request audio data, processing ends. In step S161, whether the user's request for audio data corresponds to selection of channel 1 is determined. If "Yes" is selected in step S161, processing proceeds to step S162, where channel 1 is selected for the audio data. If "No" is selected in step S161, processing proceeds to step S163, where channel 2 is selected for the audio data.
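Judgment processing SR-A reduces to the following sketch; the request dictionary and its keys are assumptions made for illustration only.

```python
# Sketch of SR-A (example 1): the user's request directly selects a video
# layer and/or an audio channel.
def judge_user_request(request):
    selection = {}
    if request.get("video"):                                          # S151
        # S153-S157: layer 2 if requested, otherwise layer 1.
        selection["video_layer"] = 2 if request.get("layer") == 2 else 1
    if request.get("audio"):                                          # S159
        # S161-S163: channel 1 if requested, otherwise channel 2.
        selection["audio_channel"] = 1 if request.get("channel") == 1 else 2
    return selection
```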
Judgment processing SR-B relating to video data, shown in Figure 107, will now be described. In step S171, the capability of the receiving end that receives the video data is determined. If the receiving end is determined to have a strong capability, processing proceeds to step S173; if the receiving end is determined to have a weak capability, processing proceeds to step S175. In step S173, the transfer volume of the line is determined. If the transfer volume of the line is determined to be large, processing proceeds to step S175; conversely, if the transfer volume of the line is determined to be small, processing proceeds to step S177.
When the capability of the receiving end is weak or the transfer volume of the line is large, the processing of step S175 is performed. In this processing, the receiving end receives only the video data of standard resolution over channel 1/layer 1. When the capability of the receiving end is strong and the transfer volume of the line is small, the processing of step S177 is performed. In this processing, the receiving end receives only the video data of high resolution over channel 1/layer 2.
Judgment processing SR-C relating to audio data, shown in Figure 108, will now be described. In step S181, the capability of the receiving end that receives the audio data is determined. If the receiving end is determined to have a strong capability, processing proceeds to step S183; if the receiving end is determined to have a weak capability, processing proceeds to step S185. In step S183, the transfer volume of the line is determined. If the transfer volume of the line is determined to be large, processing proceeds to step S185; conversely, if the transfer volume of the line is determined to be small, processing proceeds to step S187.
When the capability of the receiving end is weak or the transfer volume of the line is large, the processing of step S185 is performed. In this processing, the receiving end receives monaural audio data over channel 2. When the capability of the receiving end is strong and the transfer volume of the line is small, the processing of step S187 is performed. In this processing, the receiving end receives only stereo data over channel 1.
(example 2)
The invention described in this example differs from that of example 1 only in the processing relating to the determination step 511. In the determination step 511, structure description data describing the data structure of the media content are received. In this step, whether only video data, only audio data, or both video and audio data are to be selected is determined according to determination conditions, such as the capability of the receiving end, the transfer volume of the transmission line, and a user request. Since the selection step 513, the extraction step 503, and the playback step 505 are identical with those described above, their descriptions are omitted here.
The processing of the determination step 511 of this example will now be described with reference to Figures 109 and 110. As shown in Figure 109, whether a user request exists is determined in step S201. If in step S201 a user request is determined to exist, processing proceeds to step S203; if no user request is determined to exist, processing proceeds to step S205. In step S203, whether the user requests only video data is determined. If "Yes" is selected in step S203, processing proceeds to step S253, where only video data are determined as the object of selection. If "No" is selected in step S203, processing proceeds to step S207. In step S207, whether the user requests only audio data is determined. If "Yes" is selected in step S207, processing proceeds to step S255, where only audio data are determined as the object of selection. If "No" is selected in step S207, processing proceeds to step S251, where both video and audio data are determined as the objects of selection.
In step S205, reached when no user request exists, whether only video data, only audio data, or both video and audio data are receivable is determined. If in step S205 only video data are determined to be receivable, processing proceeds to step S253, where only video data are determined as the object of selection. If in step S205 only audio data are determined to be receivable, processing proceeds to step S255, where only audio data are determined as the object of selection. If in step S205 both video and audio data are determined to be receivable, processing proceeds to step S209.
In step S209, the transfer volume of the line is determined. If the transfer volume of the line is small, processing proceeds to step S251, where both video and audio data are determined as the objects of selection. If the transfer volume of the line is large, processing proceeds to step S211. In step S211, whether the data to be transmitted over the line are to include audio data is determined. If "Yes" is selected in step S211, processing proceeds to step S255, where audio data are determined as the object of selection. If "No" is selected in step S211, processing proceeds to step S253, where video data are determined as the object of selection.
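The whole of example 2's determination step (S201 through S255) can be sketched as one function; the argument names and the `prefer_audio` stand-in for step S211's choice are assumptions made for illustration.

```python
# Sketch of example 2: decide whether video, audio, or both become selection
# candidates. user_request is "video", "audio", or None; receivable is
# "video", "audio", or "both".
def determine_candidates(user_request, receivable, transfer_volume_large,
                         prefer_audio):
    if user_request == "video":
        return {"video"}                    # S253
    if user_request == "audio":
        return {"audio"}                    # S255
    if receivable != "both":
        return {receivable}                 # S253 or S255
    if not transfer_volume_large:
        return {"video", "audio"}           # S251: the line can carry both
    # S211: on a loaded line, keep only one medium.
    return {"audio"} if prefer_audio else {"video"}
```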
(example 3)
In the present example, the media content comprises a plurality of different video and/or audio data sets within a single time period. In the determination step 511, in addition to the determination performed in the second example of whether only video data, only audio data, or both video and audio data are to be selected, which of these video data sets and audio data sets are to serve as selection candidates is determined according to determination conditions, such as the capability of the receiving end, the transfer volume of the transmission line, and a user request. Since the selection step 513, the extraction step 503, and the playback step 505 are identical with those described above, their descriptions are not repeated here.
As in example 1, within a time period of the media content, different video data or audio data sets are assigned to channels or layers. For example, of the channels transmitting moving pictures, channel 1/layer 1 is assigned video data of standard resolution, and channel 1/layer 2 is assigned video data of high resolution. Of the channels transmitting audio data, channel 1 is assigned stereo data, and channel 2 is assigned monaural data. Figures 103 and 104 show an example of a DTD for describing the structure description data in XML, and an example of structure description data described by means of this DTD.
The processing of the determination step 511 of the third example will now be described with reference to Figures 111 through 113. As shown in Figure 111, in the present example, the determination made in example 2 is first performed, to determine the data serving as selection candidates (selection-candidate determination SR-D). In step S301, the data determined by the selection-candidate determination processing SR-D are examined. If in step S301 only video data are determined to be selection candidates, judgment processing SR-E relating to video data, shown in Figure 112, is performed. If in step S301 only audio data are determined to be selection candidates, judgment processing SR-F relating to audio data, shown in Figure 113, is performed. If in step S301 both video and audio data are determined to be selection candidates, processing proceeds to step S303, where the receiving capability of the receiving end that receives the video and audio data is determined. If the capability of the receiving end is determined to be strong, processing proceeds to step S305; if the capability of the receiving end is determined to be weak, processing proceeds to step S307. In step S305, the capability of the line, such as its transfer rate, is determined. If the capability of the line is determined to be strong, processing proceeds to step S309; conversely, if the capability of the line is determined to be weak, processing proceeds to step S307. In step S309, the transfer volume of the line is determined. If the transfer volume of the line is determined to be large, processing proceeds to step S307; if the transfer volume of the line is determined to be small, processing proceeds to step S311.
When the capability of the receiving end is weak, the capability of the line is weak, or the transfer volume of the line is large, the processing of step S307 is performed. In this processing, the receiving end receives the video data of standard resolution over channel 1/layer 1 and receives monaural data over channel 2. Conversely, when the capability of the receiving end is strong, the capability of the line is strong, and the transfer volume of the line is small, the processing of step S311 is performed. In this processing, the receiving end receives the video data of high resolution over channel 1/layer 2 and receives stereo data over channel 1.
Judgment processing SR-E relating to video data, shown in Figure 112, will now be described. In step S351, the capability of the receiving end that receives the video data is determined. If the capability of the receiving end is determined to be strong, processing proceeds to step S353; if the capability of the receiving end is determined to be weak, processing proceeds to step S355. In step S353, the capability of the line is determined. If the capability of the line is determined to be strong, processing proceeds to step S357; conversely, if the capability of the line is determined to be weak, processing proceeds to step S355. In step S357, the transfer volume of the line is determined. If the transfer volume of the line is determined to be large, processing proceeds to step S355; conversely, if the transfer volume of the line is determined to be small, processing proceeds to step S359.
When the capability of the receiving end is weak, the capability of the line is weak, or the transfer volume of the line is large, the processing of step S355 is performed. In this processing, the receiving end receives only the video data of standard resolution over channel 1/layer 1. Conversely, when the capability of the receiving end is strong, the capability of the line is strong, and the transfer volume of the line is small, the processing of step S359 is performed. In this processing, the receiving end receives only the video data of high resolution over channel 1/layer 2.
Judgment processing SR-F relating to audio data, shown in Figure 113, will now be described. In step S371, the capability of the receiving end that receives the audio data is determined. If the capability of the receiving end is determined to be strong, processing proceeds to step S373; if the capability of the receiving end is determined to be weak, processing proceeds to step S375. In step S373, the capability of the line is determined. If the capability of the line is determined to be strong, processing proceeds to step S377; conversely, if the capability of the line is determined to be weak, processing proceeds to step S375. In step S377, the transfer volume of the line is determined. If the transfer volume of the line is determined to be large, processing proceeds to step S375; conversely, if the transfer volume of the line is determined to be small, processing proceeds to step S379.
When the capability of the receiving end is weak, the capability of the line is weak, or the transfer volume of the line is large, the processing of step S375 is performed. In this processing, the receiving end receives only monaural data over channel 2. Conversely, when the capability of the receiving end is strong, the capability of the line is strong, and the transfer volume of the line is small, the processing of step S379 is performed. In this processing, the receiving end receives only stereo data over channel 1.
(example 4)
In the present example, representative data relating to the corresponding media segment are appended, as an attribute, to each element of the context description data in the lowest hierarchical layer. The media content comprises a plurality of different media data sets within a single time period. In the determination step 511, structure description data describing the data structure of the media content are received. In this step, which of the media data sets and/or representative data sets are to serve as selection candidates is determined according to determination conditions, such as the capability of the receiving end, the transfer volume of the transmission line, the capability of the line, and a user request.
Since the selection step 513, the extraction step 503, and the playback step 505 are identical with those described above, their descriptions are not repeated here. The media data comprise video data, audio data, or text data. In the present example, the media data include at least one of video data and audio data. In the case where the representative data correspond to video data, the representative data comprise, for example, representative image data or low-resolution video data of each media segment. In the case where the representative data correspond to audio data, the representative data comprise, for example, key-phrase data of each media segment.
As in example 3, within a time period of the media content, different video data or audio data sets are assigned to channels or layers. For example, of the channels transmitting moving pictures, channel 1/layer 1 is assigned video data of standard resolution, and channel 1/layer 2 is assigned video data of high resolution. Of the channels transmitting audio data, channel 1 is assigned stereo data, and channel 2 is assigned monaural data.
The processing of the determination step 511 of this example will now be described with reference to Figures 114 through 118. As shown in Figure 114, whether a user request exists is determined in step S401. If in step S401 a user request is determined to exist, judgment processing SR-G relating to the user request, shown in Figure 116, is performed.
If in step S401 no user request is determined to exist, processing proceeds to step S403, where whether only video data, only audio data, or both video and audio data are receivable is determined. If in step S403 only video data are determined to be receivable, judgment processing SR-H relating to video data, shown in Figure 117, is performed. Conversely, if only audio data are determined to be receivable, judgment processing SR-I relating to audio data, shown in Figure 118, is performed. If both video and audio data are determined to be receivable, processing proceeds to step S405, shown in Figure 115.
In step S405, the capability of the receiving end is determined. After the processing of step S405 has been performed, the processing of step S407, which determines the capability of the line, and the processing of step S409, which determines the transfer volume of the line, are performed in that order. On the basis of the results of the processing operations performed in steps S405, S407, and S409, the determination step 511 of this example determines the channel or layer over which video data or audio data are to be received, or whether representative data are to be received, as shown in Table 1.
Table 1

Receiving-end capability | Line capability | Transfer volume of line large? | Data received
Strong | Strong | No  | Video: channel 1, layer 2; Audio: channel 1 (S411)
Strong | Strong | Yes | Video: channel 1, layer 1; Audio: channel 1 (S413)
Strong | Weak   | No  | Video: channel 1, layer 1; Audio: channel 2 (S413)
Weak   | Strong | Yes | Video: channel 1, layer 1; Audio: channel 2 (S415)
Weak   | Strong | No  | Video: channel 1, layer 1; Audio: channel 2 (S415)
Weak   | Strong | Yes | Video: representative data; Audio: channel 2 (S417)
Weak   | Weak   | No  | Video: representative data; Audio: channel 2 (S417)
Weak   | Weak   | Yes | Video: representative data; Audio: representative data (S419)
Judgment processing SR-G relating to the user request, shown in Figure 116, will now be described. In step S451, whether the user requests only video data is determined. If "Yes" is selected in step S451, judgment processing SR-H relating to video data is performed. If "No" is selected in step S451, processing proceeds to step S453. In step S453, whether the user requests only audio data is determined. If "Yes" is selected in step S453, judgment processing SR-I relating to audio data is performed. If "No" is selected in step S453, processing proceeds to step S405.
Judgment processing SR-H relating to video data, shown in Figure 117, will now be described. In step S461, the capability of the receiving end is determined. After the processing of step S461 has been performed, the processing of step S463, which determines the capability of the line, and the processing of step S465, which determines the transfer volume of the line, are performed in that order. When the processing operations of steps S461, S463, and S465 have been completed, provided that the capability of the receiving end is strong, the capability of the line is strong, and the transfer volume of the line is small, only the video data of high resolution are received over channel 1/layer 2 during the judgment processing SR-H relating to the video data of this example (step S471). Conversely, if the capability of the receiving end is weak, the capability of the line is weak, and the transfer volume of the line is large, only representative video data are received (step S473). If neither of the above sets of conditions is met, only the video data of standard resolution are received over channel 1/layer 1 (step S475).
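Judgment processing SR-H collapses to three cases, sketched below with assumed boolean inputs and return labels.

```python
# Sketch of SR-H (example 4, video): high-resolution video only when every
# condition is favourable, representative data only when every condition is
# unfavourable, standard-resolution video otherwise.
def judge_video(receiver_strong, line_strong, transfer_volume_small):
    if receiver_strong and line_strong and transfer_volume_small:
        return ("channel 1", "layer 2")    # S471: high resolution
    if not (receiver_strong or line_strong or transfer_volume_small):
        return "representative data"       # S473
    return ("channel 1", "layer 1")        # S475: standard resolution
```

Judgment processing SR-I for audio has the same three-way shape, with channel 1 (stereo), representative data, and channel 2 (monaural) in place of the layers.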
Judgment processing SR-I relating to audio data, shown in Figure 118, will now be described. In step S471, the capability of the receiving end is determined. After the processing of step S471 has been performed, the processing of step S473, which determines the capability of the line, and the processing of step S475, which determines the transfer volume of the line, are performed in that order. When the processing operations of steps S471, S473, and S475 have been completed, provided that the capability of the receiving end is strong, the capability of the line is strong, and the transfer volume of the line is small, only stereo audio data are received over channel 1 during the judgment processing SR-I relating to the audio data of this example (step S491). Conversely, if the capability of the receiving end is weak, the capability of the line is weak, and the transfer volume of the line is large, only representative audio data are received (step S493). If neither of the above sets of conditions is met, only audio data are received over channel 2 (step S495).
(example 5)
In the present example, which of the entirety of the data relating to a corresponding media segment, only the representative data relating to the segment, or both the entirety of the data and the representative data are to serve as selection candidates is determined according to determination conditions, such as the capability of the receiving end, the capability of the transmission line, the transfer volume of the line, and a user request.
As in example 4, representative data relating to the corresponding media segment are appended, as an attribute, to each element of the context description data in the lowest hierarchical layer. In the case where the representative data correspond to video data, the representative data comprise, for example, representative image data or low-resolution video data of each media segment. In the case where the representative data correspond to audio data, the representative data comprise, for example, key-phrase data of each media segment.
The processing of the determination step 511 of this example will now be described with reference to Figures 119 through 121. As shown in Figure 119, whether a user request exists is determined in step S501. If in step S501 a user request is determined to exist, judgment processing SR-J relating to the user request, shown in Figure 121, is performed.
If in step S501 no user request is determined to exist, processing proceeds to step S503, where whether only the representative data relating to a media segment, only the entirety of the data relating to the media segment, or both the representative data and the entirety of the data relating to the media segment are receivable is determined. If in step S503 only the representative data are determined to be receivable, processing proceeds to step S553, shown in Figure 120, where only the representative data are taken as selection candidates. If only the entirety of the data is receivable, processing proceeds to step S555, where only the entirety of the data is taken as a selection candidate. If both the representative data and the entirety of the data are receivable, processing proceeds to step S505.
In step S505, the capability of the line is determined. If the capability of the line is strong, processing proceeds to step S507; conversely, if the capability of the line is weak, processing proceeds to step S509. In each of steps S507 and S509, the transfer volume of the line is determined. If in step S507 the transfer volume of the line is determined to be small, processing proceeds to step S551, where both the entirety of the data and the representative data are taken as selection candidates. If in step S509 the transfer volume of the line is determined to be large, processing proceeds to step S553, where only the representative data are taken as selection candidates. If the transfer volume of the line is determined to be large in step S507, or to be small in step S509, processing proceeds to step S555, where only the entirety of the data is taken as a selection candidate.
In judgment processing SR-J relating to the user request, whether the user request corresponds to only the representative data is determined in step S601. If "Yes" is selected in step S601, processing proceeds to step S553, where only the representative data are taken as selection candidates. If "No" is selected in step S601, processing proceeds to step S603, where whether the user request corresponds to only the entirety of the data is determined. If "Yes" is selected in step S603, processing proceeds to step S555, where only the entirety of the data is taken as a selection candidate. If "No" is selected in step S603, processing proceeds to step S551, where both the entirety of the data and the representative data corresponding to the media segment are taken as selection candidates.
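For the no-user-request case, example 5's determination step (S503 through S555) can be sketched as follows; the string-valued `receivable` argument and the set-valued result are assumptions made for illustration.

```python
# Sketch of example 5: choose between the entirety of the data, the
# representative data only, or both, as selection candidates. receivable is
# "representative", "all", or "both".
def determine_candidates_ex5(receivable, line_strong, transfer_volume_large):
    if receivable == "representative":
        return {"representative"}              # S553
    if receivable == "all":
        return {"all"}                         # S555
    if line_strong and not transfer_volume_large:
        return {"all", "representative"}       # S551
    if not line_strong and transfer_volume_large:
        return {"representative"}              # S553
    return {"all"}                             # S555
```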
The Eighteenth Embodiment
The eighteenth embodiment of the invention will now be described. Figure 122 is a block diagram showing the processing of the data processing method of the present embodiment. In the drawing, reference numeral 501 designates a selection step; 503, an extraction step; 515, a formation step. Since the selection step 501 and the extraction step 503 are identical with those of the fourteenth embodiment, their descriptions are not repeated here.
In the formation step 515, a stream of the media content is formed from the data relating to the selected segments extracted in the extraction step 503. Specifically, in the formation step, the stream is formed by multiplexing the data output in the extraction step 503.
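The multiplexing idea of the formation step can be sketched as a timestamp-ordered merge of the extracted packet lists. A real MPEG-1 system stream additionally carries pack headers and clock references, which this toy sketch omits; the (timestamp, payload) tuples are assumptions.

```python
# Toy sketch of the formation step: interleave the extracted video and audio
# packets into a single stream ordered by timestamp.
import heapq

def form_stream(video_packets, audio_packets):
    """Merge two timestamp-sorted packet lists into one multiplexed stream."""
    return list(heapq.merge(video_packets, audio_packets, key=lambda p: p[0]))

video = [(0, "V0"), (2, "V1"), (4, "V2")]
audio = [(1, "A0"), (3, "A1")]
stream = form_stream(video, audio)
# video and audio packets alternate in presentation order
```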
The Nineteenth Embodiment
Nineteenth embodiment of the invention is described now.Figure 123 is expression and the block scheme of the processing of the data processing method of present embodiment.In the figure, step is selected in label 501 expressions; 503 expression extraction steps; 515 expressions form step; 517 expression transfer step.Owing to select step 501 and extraction step 503 with described identical, so do not repeat them here referring to the 14 embodiment.In addition, it is identical with the formation step of 18 embodiment to form step 515, so also omit the description to it.
In the transfer step 517, the stream formed in the formation step is transmitted over a line. The transfer step 517 may include a step of determining the transfer capacity of the line, and the formation step 515 may then include a step of adjusting the amount of data constituting the file in accordance with the transfer capacity so determined.
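The capacity-dependent adjustment can be sketched as a budgeted selection: keep the highest-scoring segments while the resulting stream still fits the measured line capacity. The function name, segment fields, and bytes-per-second model are all assumptions for illustration.

```python
# Sketch of adjusting the amount of data to the determined line capacity:
# greedily keep highest-scoring segments within the transferable byte budget.

def adjust_to_capacity(segments, capacity_bps: float, duration_s: float):
    """Return the segments that fit within capacity_bps over duration_s."""
    budget = capacity_bps * duration_s / 8          # bytes transferable
    chosen, used = [], 0
    for seg in sorted(segments, key=lambda s: s["score"], reverse=True):
        if used + seg["bytes"] <= budget:
            chosen.append(seg)
            used += seg["bytes"]
    return chosen
```

A slow line thus yields a shorter, summary-like stream, while a fast line carries the full selection, which matches the intent of combining the capacity check with the formation step.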
Twentieth Embodiment
A twentieth embodiment of the present invention is now described. Figure 124 is a block diagram showing the processing relating to the data processing method of the present embodiment. In the figure, reference numeral 501 denotes a selection step; 503 denotes an extraction step; 515 denotes a formation step; 519 denotes a recording step; and 521 denotes a data medium. In the recording step 519, the stream formed in the formation step 515 is recorded onto the data medium 521. The data medium 521 stores the media content, the content description data relating to the media content, and the structure description data relating to the media content. The data medium 521 may be, for example, a hard disk, a memory, or a DVD-ROM. Since the selection step 501 and the extraction step 503 are identical to those described in connection with the fourteenth embodiment, their description is not repeated here. Further, the formation step 515 is identical to that of the eighteenth embodiment, so its description is also omitted.
Twenty-first Embodiment
A twenty-first embodiment of the present invention is now described. Figure 125 is a block diagram showing the processing relating to the data processing method of the present embodiment. In the figure, reference numeral 501 denotes a selection step; 503 denotes an extraction step; 515 denotes a formation step; 519 denotes a recording step; 521 denotes a data medium; and 523 denotes a data medium management step. In the data medium management step 523, the media content already stored and the media content to be newly stored are reorganized in accordance with the available space on the data medium 521. Specifically, at least one of the following operations is carried out in the data medium management step 523. When the available space on the data medium 521 is small, the media content to be newly stored is edited before being stored. The content description data and the structure description data relating to all of the stored media content are passed to the selection step 501. The media content and the structure description data are passed to the extraction step 503. The media content is reorganized, and the content thus reorganized is recorded on the data medium 521. In addition, the media content that has not been reorganized is deleted.
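The space-driven management above can be sketched as a small store that summarizes incoming content when free space is low and evicts old, unreorganized content to make room. The `MediaStore` class and its policy details are illustrative assumptions, not the patent's specification.

```python
# Sketch of data medium management step 523: when free space is small,
# new content is edited (summarized) before storing, and content that was
# not reorganized is deleted to make room.

class MediaStore:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = {}                       # name -> stored size

    def free(self) -> int:
        return self.capacity - sum(self.items.values())

    def store(self, name: str, size: int, summarize):
        if self.free() < size:                # available space is small:
            size = summarize(size)            # edit before storing
        # delete oldest stored content until the new item fits
        while self.free() < size and self.items:
            oldest = next(iter(self.items))
            del self.items[oldest]
        self.items[name] = size
```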
Since the selection step 501 and the extraction step 503 are identical to those of the fourteenth embodiment, their description is not repeated here. The formation step 515 is identical to that of the eighteenth embodiment, and its description is omitted. Further, since the recording step 519 and the data medium 521 are identical to those of the twentieth embodiment, their description is also omitted.
Twenty-second Embodiment
A twenty-second embodiment of the present invention is now described. Figure 126 is a block diagram showing the processing relating to the data processing method of the present embodiment. In the figure, reference numeral 501 denotes a selection step; 503 denotes an extraction step; 515 denotes a formation step; 519 denotes a recording step; 521 denotes a data medium; and 525 denotes a stored-content management step. In the stored-content management step 525, the media content already stored on the data medium 521 is reorganized in accordance with the storage period of the media content. Specifically, the stored-content management step 525 comprises the steps of: managing the media content stored on the data medium 521; passing to the selection step 501 the content description data and the physical content data, both relating to media content that has been stored for a predetermined period; passing the media content and the structure description data to the extraction step 503; reorganizing the media content; recording the media content thus reorganized on the data medium 521; and deleting the media content that has not been reorganized.
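A minimal sketch of this period-based management: content stored longer than a predetermined period is replaced by its reorganized form (for instance, a summary built from its selected important segments), while recent content is left untouched. The field names and the `reorganize` callback are assumptions for illustration.

```python
# Sketch of stored-content management step 525: reorganize content whose
# storage period has exceeded max_age_s, keep recent content as-is.

import time

def manage_by_period(store: dict, max_age_s: float, reorganize, now=None):
    now = time.time() if now is None else now
    for name in list(store):
        item = store[name]
        if now - item["stored_at"] > max_age_s:
            # replace the original with its reorganized (summarized) form
            store[name] = {"stored_at": now,
                           "data": reorganize(item["data"])}
    return store
```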
Since the selection step 501 and the extraction step 503 are identical to those of the fourteenth embodiment, their description is not repeated here. The formation step 515 is identical to that of the eighteenth embodiment, and its description is omitted. Further, since the recording step 519 and the data medium 521 are identical to those of the twentieth embodiment, their description is also omitted.
In the thirteenth through twenty-second embodiments described above, the selection step 501 may be embodied as selection means and the extraction step 503 as extraction means; the video selection step 507 may be embodied as video selection means; the audio selection step 511 as audio selection means; the determination step as determination means; the formation step 515 as formation means; the transfer step 517 as transfer means; the recording step 519 as recording means; the data medium management step 523 as data medium management means; and the stored-content management step 525 as stored-content management means. A data processing apparatus comprising some or all of these means can therefore be realized.
In the embodiments described above, the media content may include a data stream other than video and audio data, for example text data. Further, each step of the above embodiments may be implemented in software, by means of a program, stored on a program storage medium, that causes a computer to execute the processing relating to all or some of the steps, or by means of purpose-designed hardware circuits embodying the features of those steps.
Although the content description data and the structure description data have been described separately in the above embodiments, they may be merged into a single data set, as shown in Figures 127 to 132.
As described above, in the data processing apparatus, data processing method, recording medium, and program of the present invention, at least one segment is selected from the media content by the selection means (corresponding to the selection step) in accordance with the scores appended to the hierarchical content description data. The extraction means (corresponding to the extraction step) extracts only the data relating to the segments selected by the selection means; alternatively, the playback means (corresponding to the playback step) plays back only the data relating to the segments selected by the selection means.
With this arrangement, the more important scenes can be freely selected from the media content, and the important segments so selected can be extracted or played back. Moreover, since the hierarchical content description data comprises a highest layer, a lowest layer, and other layers, scenes can be selected in arbitrary units, for example by chapter or by section. Various forms of selection can be adopted, such as selecting a certain chapter and deleting the unnecessary segments from within that chapter.
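The hierarchical selection described above can be sketched as a recursive walk over a tree whose inner nodes are chapters, sections, or scenes and whose leaves are media segments, each carrying a score. The node layout below is an assumption for illustration; the patent only requires the layered structure and per-element scores.

```python
# Sketch of selection over hierarchical content description data:
# collect the leaf segments whose score exceeds a threshold.

def select(node, threshold):
    """Return the segments under `node` whose score exceeds `threshold`."""
    if "children" not in node:                     # leaf = media segment
        return [node] if node["score"] > threshold else []
    picked = []
    for child in node["children"]:                 # scene / section / chapter
        picked.extend(select(child, threshold))
    return picked

content = {"children": [
    {"score": 5, "children": [{"score": 4, "start": 0, "end": 10},
                              {"score": 1, "start": 10, "end": 20}]},
    {"score": 2, "children": [{"score": 3, "start": 20, "end": 30}]},
]}
print([(s["start"], s["end"]) for s in select(content, 2)])  # [(0, 10), (20, 30)]
```

Because inner nodes also carry scores, the same walk could instead stop at the chapter level (keeping or dropping whole chapters), which is the "arbitrary unit" flexibility noted above.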
In the data processing apparatus, data processing method, recording medium, and program of the present invention, the score represents the degree of contextual importance of the media content. Provided that the score is determined so as to select important scenes, a collection of the important scenes of a program can easily be prepared. Moreover, provided that the score is determined so as to represent importance from the viewpoint expressed by a keyword assigned to the scenes of interest, segments can be selected with a great degree of freedom. For example, once a keyword has been determined, only the scenes the user requires from a certain viewpoint, such as a particular character or event, can be selected.
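The keyword (viewpoint) based selection can be sketched as follows: each segment carries a score per viewpoint, and segments are picked in descending order of the score for the requested viewpoint until a total-duration cap would be exceeded. The segment fields, viewpoint names, and greedy policy are illustrative assumptions.

```python
# Sketch of viewpoint-based selection: pick segments by descending score
# for a given keyword, subject to a maximum total duration, then return
# them in playback (time) order.

def select_by_viewpoint(segments, viewpoint, max_duration):
    ranked = sorted(segments,
                    key=lambda s: s["scores"].get(viewpoint, 0),
                    reverse=True)
    chosen, total = [], 0.0
    for seg in ranked:
        d = seg["end"] - seg["start"]
        if seg["scores"].get(viewpoint, 0) > 0 and total + d <= max_duration:
            chosen.append(seg)
            total += d
    return sorted(chosen, key=lambda s: s["start"])    # playback order

segs = [{"start": 0,  "end": 30, "scores": {"goal": 5}},
        {"start": 30, "end": 60, "scores": {"goal": 1, "player_A": 4}},
        {"start": 60, "end": 80, "scores": {"player_A": 2}}]
print([s["start"] for s in select_by_viewpoint(segs, "goal", 60)])  # [0, 30]
```

Querying the same data with the viewpoint `"player_A"` would instead return the segments relevant to that character, illustrating how one description supports many user-specific digests.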
In the data processing apparatus, data processing method, recording medium, and program of the present invention, where the media content is made up of a plurality of different media data sets over a period of time, the determination means (corresponding to the determination step) determines, in accordance with determination conditions, which of the media data sets are to be selection candidates. The selection means (corresponding to the selection step) selects media data only from the data determined by the determination means. Since the determination means can determine the media data relating to the optimum segments in accordance with the determination conditions, the selection means can select an appropriate amount of media data.
In the data processing apparatus, data processing method, recording medium, and program of the present invention, the determination means (corresponding to the determination step) determines, in accordance with determination conditions, whether only the video data, only the audio data, or both the video and audio data are to be selection candidates. The time required for the selection means (corresponding to the selection step) to select segments can therefore be shortened.
In the data processing apparatus, data processing method, recording medium, and program of the present invention, representative data are appended to the content description data as an attribute, and the determination means can determine, in accordance with determination conditions, the media data or the representative data of the optimum segments.
In the data processing apparatus, data processing method, recording medium, and program of the present invention, the determination means (corresponding to the determination step) determines, in accordance with determination conditions, whether only the whole data relating to the respective media segment, only the representative data, or both the whole data and the representative data are to be selection candidates. The time required for the selection means (corresponding to the selection step) to select segments can therefore be shortened.

Claims (30)

1. A data processing apparatus comprising:
input means for inputting content description data, the content description data describing a plurality of segments and attribute information of the segments, each of the segments representing a scene of media content composed of a plurality of scenes, and the attribute information of the segments including time information representing scene boundaries and a score based on the context of the media content, the score representing the degree of importance of the segment;
selection means for selecting segments in accordance with the score and the time information;
content input means for inputting the media content; and
extraction means for extracting a portion of the input media content in accordance with the time information associated with the selected segments.
2. The data processing apparatus according to claim 1, further comprising storage means for storing the content description data and the media content.
3. The data processing apparatus according to claim 1, wherein the content description data includes link information to the media content, and the extraction means extracts the media content at the link destination.
4. The data processing apparatus according to claim 1, wherein the time information includes a start time and an end time of each scene.
5. The data processing apparatus according to claim 1, wherein the time information includes a start time and a duration of each scene.
6. The data processing apparatus according to claim 1, wherein the segments are described hierarchically.
7. The data processing apparatus according to claim 1, wherein the selection means selects those segments whose assigned score is greater than a predetermined threshold.
8. The data processing apparatus according to claim 1, wherein the selection means selects segments in descending order of score so that the total duration of the selected segments is at a maximum while not exceeding a predetermined threshold.
9. The data processing apparatus according to claim 1, wherein the selection means selects segments in descending order of score so that the total duration of the selected segments is close to a predetermined threshold.
10. The data processing apparatus according to claim 1, further comprising playback means for playing back the extracted media content.
11. A data processing apparatus comprising:
input means for inputting content description data, the content description data describing a plurality of segments and attribute information of the segments, each of the segments representing a scene of media content composed of a plurality of scenes, and the attribute information of the segments including time information representing scene boundaries, a viewpoint represented by at least one keyword describing a scene, and a score representing each segment on the basis of the viewpoint, the score representing the degree of importance of the segment;
selection means for selecting segments in accordance with at least one of the viewpoint and the score, and the time information;
content input means for inputting the media content; and
extraction means for extracting a portion of the input media content in accordance with the time information associated with the selected segments.
12. The data processing apparatus according to claim 11, wherein a plurality of pairs of the viewpoint and the score are described in the content description data as the attribute information of the segments.
13. The data processing apparatus according to claim 11, further comprising playback means for playing back the extracted media content.
14. The data processing apparatus according to claim 11, further comprising storage means for storing the content description data and the media content.
15. The data processing apparatus according to claim 11, wherein the content description data includes link information to the media content, and the extraction means extracts the media content at the link destination.
16. The data processing apparatus according to claim 11, wherein the time information includes a start time and an end time of each scene.
17. The data processing apparatus according to claim 11, wherein the time information includes a start time and a duration of each scene.
18. The data processing apparatus according to claim 11, wherein the segments are described hierarchically.
19. The data processing apparatus according to claim 11 or 12, wherein the selection means selects those segments whose score for the viewpoint is greater than a predetermined threshold.
20. The data processing apparatus according to claim 11 or 12, wherein the selection means selects segments in descending order of the score for the selected viewpoint so that the total duration of the selected segments is at a maximum while not exceeding a predetermined threshold.
21. The data processing apparatus according to claim 11 or 12, wherein the selection conditions used by the selection means in selecting the viewpoint and the score are input from a user profile in which those conditions are described.
22. The data processing apparatus according to claim 11 or 12, wherein the selection means selects segments in accordance with the result of a logical operation on the scores of two or more of the viewpoints.
23. A data processing method comprising the steps of:
inputting content description data, the content description data describing a plurality of segments and attribute information of the segments, each of the segments representing a scene of media content composed of a plurality of scenes, and the attribute information of the segments including time information representing scene boundaries and a score based on the context of the media content, the score representing the degree of importance of the segment;
selecting segments in accordance with the score and the time information;
inputting the media content; and
extracting a portion of the input media content in accordance with the time information associated with the selected segments.
24. A data processing method comprising the steps of:
inputting content description data, the content description data describing a plurality of segments and attribute information of the segments, each of the segments representing a scene of media content composed of a plurality of scenes, and the attribute information of the segments including time information representing scene boundaries, a viewpoint represented by at least one keyword describing a scene, and a score based on the viewpoint, the score representing the degree of importance of the segment;
selecting segments in accordance with at least one of the viewpoint and the score, and the time information;
inputting the media content; and
extracting a portion of the input media content in accordance with the time information associated with the selected segments.
25. A data processing apparatus comprising:
input means for inputting content description data, the content description data describing a plurality of segments and attribute information of the segments, each of the segments representing a scene of media content composed of a plurality of scenes, and the attribute information including a score based on the context of the media content, the score representing the degree of importance of the segment; and
selection means for selecting at least one segment from the plurality of segments in accordance with the score.
26. The data processing apparatus according to claim 25, wherein each of the segments is described hierarchically.
27. The data processing apparatus according to claim 25 or 26, wherein the content description data includes additional information relating to the context.
28. The data processing apparatus according to claim 25 or 26, wherein a link destination of representative data representing each segment is appended to that segment.
29. The data processing apparatus according to claim 28, wherein the representative data is video information and/or audio information.
30. A data processing method comprising:
an input step of inputting content description data, the content description data describing a plurality of segments and attribute information of the segments, each of the segments representing a scene of media content composed of a plurality of scenes, and the attribute information including a score based on the context of the media content, the score representing the degree of importance of the segment; and
a selection step of selecting at least one segment from the plurality of segments in accordance with the score.
CNB2004100566330A 1998-12-25 1999-12-25 Data processing device and method Expired - Fee Related CN100452028C (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP37148398 1998-12-25
JP371483/98 1998-12-25
JP271404/99 1999-09-24
JP350479/99 1999-12-09

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CNB991229886A Division CN1170237C (en) 1998-12-25 1999-12-25 Data processing unit and method, and media and program of executing same method

Related Child Applications (2)

Application Number Title Priority Date Filing Date
CNB200610100673XA Division CN100433015C (en) 1998-12-25 1999-12-25 Data processing method and device
CNB2006101006725A Division CN100428239C (en) 1998-12-25 1999-12-25 Data processing method and device

Publications (2)

Publication Number Publication Date
CN1821996A CN1821996A (en) 2006-08-23
CN100452028C true CN100452028C (en) 2009-01-14

Family

ID=36923364

Family Applications (3)

Application Number Title Priority Date Filing Date
CNB2004100566330A Expired - Fee Related CN100452028C (en) 1998-12-25 1999-12-25 Data processing device and method
CNB2006101006725A Expired - Fee Related CN100428239C (en) 1998-12-25 1999-12-25 Data processing method and device
CNB200610100673XA Expired - Fee Related CN100433015C (en) 1998-12-25 1999-12-25 Data processing method and device

Family Applications After (2)

Application Number Title Priority Date Filing Date
CNB2006101006725A Expired - Fee Related CN100428239C (en) 1998-12-25 1999-12-25 Data processing method and device
CNB200610100673XA Expired - Fee Related CN100433015C (en) 1998-12-25 1999-12-25 Data processing method and device

Country Status (1)

Country Link
CN (3) CN100452028C (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120197630A1 (en) * 2011-01-28 2012-08-02 Lyons Kenton M Methods and systems to summarize a source text as a function of contextual information
IN2014CN02383A (en) * 2011-10-13 2015-06-19 Koninkl Philips Nv

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0737930A1 (en) * 1995-04-12 1996-10-16 Sun Microsystems, Inc. Method and system for comicstrip representation of multimedia presentations
US5572728A (en) * 1993-12-24 1996-11-05 Hitachi, Ltd. Conference multimedia summary support system and method
US5664227A (en) * 1994-10-14 1997-09-02 Carnegie Mellon University System and method for skimming digital audio/video data
WO1998014895A2 (en) * 1996-09-30 1998-04-09 Philips Electronics N.V. A method for organizing and presenting the structure of a multimedia system and for presenting this structure to a person involved, in particular a user person or an author person, and a software package having such organization and presentation facility


Also Published As

Publication number Publication date
CN100428239C (en) 2008-10-22
CN1821996A (en) 2006-08-23
CN1945573A (en) 2007-04-11
CN1945572A (en) 2007-04-11
CN100433015C (en) 2008-11-12


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090114

Termination date: 20100125