WO2005050645A1 - Real-time store and playback of working screen and audio signal - Google Patents

Real-time store and playback of working screen and audio signal

Info

Publication number
WO2005050645A1
Authority
WO
WIPO (PCT)
Prior art keywords
event
file
moving picture
information
information processing
Prior art date
Application number
PCT/KR2004/003003
Other languages
French (fr)
Inventor
Sun-Hyung Ryu
Sung-Been Cho
Original Assignee
Athena Korea Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Athena Korea Co., Ltd. filed Critical Athena Korea Co., Ltd.
Publication of WO2005050645A1 publication Critical patent/WO2005050645A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11B: INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 20/00: Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B 20/10: Digital recording or reproducing

Abstract

The present invention relates to real-time recording of a working screen and audio signals, a replay method, and a system therefor. The present invention comprises a real-time recording process and a replay process. The recording process comprises the steps of: capturing a capture region of the information processing apparatus screen in real time and recording it as an image file; recording a first audio signal inputted to the information processing device through a sound input device and a second audio signal generated in the information processing apparatus; and monitoring input of an event through a user interface device and, if an event is inputted, saving event information of the inputted event as an event file. The replay process comprises the steps of: deriving time information of each moving picture frame from the moving picture file; deriving time information of each event from the event file; synchronizing each event with the corresponding moving picture frame by comparing the time information of each moving picture frame with the time information of each event; and replaying the synchronized event and moving picture frame. According to the present invention, multimedia data including the screen of the terminal, the audio signals and the events can be recorded and replayed simultaneously, in real time, with a simple process.

Description

REAL-TIME STORE AND PLAYBACK OF WORKING SCREEN AND AUDIO SIGNAL
Field of the invention
The present invention relates to a method and system for recording and replaying, in real time, an image or moving picture displayed on the screen of an information processing apparatus such as a PDA (Portable Digital Assistant), together with audio signals including voice and/or sound.
Background of the invention
Screen recording technology is classified into event recording technology and image capturing technology. Event recording technology detects an event whenever one is inputted through a user interface of the information processing apparatus and records the detected event. Image capturing technology, on the other hand, captures the image as it is displayed on the screen of the information processing apparatus and records the captured image. Because event recording technology stores the location information of mouse events, keyboard input values, and so on, it requires the same configuration for recording and replaying, although the amount of stored information is relatively small. Different results are therefore output depending on the user terminal and application program configuration, which restricts the use of event recording technology. Image capturing technology includes methods using a specified capture board, a graphic device interface (GDI), a virtual drive, and the GUI. The methods using a specified capture board, GDI hooking, or a virtual drive are disadvantageous in that they depend on the user terminal configuration and thus cannot be used generally. The method of capturing the GUI is the most general method, being independent of the operating system and the user terminal configuration. Conventional image capturing systems, however, place a heavy load on the CPU (Central Processing Unit) while capturing the screen, which causes gaps in mouse movement. These obstacles have hindered the use of conventional image capturing systems. Further, conventional image capturing systems do not provide a function for inserting explanations into an application program, in other words a blackboard writing function. Blackboard writing must therefore be performed after image recording is completed, which doubles the work of inserting explanations. Furthermore, with conventional image capturing systems, voice had to be recorded separately from the screen because recording the voice and the screen image simultaneously was difficult, and sound output from the terminal (including sound output by an application program) was also difficult to record in real time. Moreover, in terms of user interface, a conventional image capturing system is installed on a PC through an install shield like other application programs and is used by clicking an icon, which is inconvenient when the image capturing system must be executed immediately. A more varied and comfortable user interface is therefore required.
Detailed description of the invention
Technical objects
Accordingly, an object of the present invention is to provide a method and system which can record and replay, in real time, multimedia data including the screen of a terminal and audio signals (including a voice signal inputted externally and sound generated from the terminal). Another object of the present invention is to provide a method and system which can record and replay, in real time, audio signals and the working screen (the current screen produced by the user's execution), enabling real-time blackboard writing on the screen or on an application program executed on the screen while the screen of a terminal is being recorded. Still another object of the present invention is to provide a method and system for recording and replaying audio signals and the working screen with a screen capturing technique that reduces the load on the central processing unit (CPU). Still another object of the present invention is to provide a method and system for recording and replaying audio signals and the working screen with a more comfortable user interface.
Technical solution
In order to achieve the above-mentioned objects, according to an aspect of the present invention, there is provided a computer-readable medium including a program containing computer-executable instructions for performing a method for recording, in real time, multimedia data including the working screen and audio signals of an information processing apparatus, the method comprising the steps of: capturing a capture region of the information processing apparatus screen in real time and recording it as an image file; recording a first audio signal inputted to the information processing apparatus through a sound input device and a second audio signal generated in the information processing apparatus; and detecting input of an event through a user interface device and, if an event is inputted, saving event information of the inputted event as an event file.
According to another aspect of the present invention, there is provided a computer-readable medium including a program containing computer-executable instructions for performing a method for replaying the screen produced by a user's execution and audio signals using a moving picture file and an event file separate from the moving picture file, the moving picture file including information on an image captured from the working screen of the information processing apparatus, a first audio signal inputted to the information processing apparatus through a sound input device, and a second audio signal generated from the information processing apparatus, the method comprising the steps of: deriving time information of each moving picture frame from the moving picture file; deriving time information of each event from the event file; synchronizing each event with the corresponding moving picture frame by comparing the time information of each moving picture frame with the time information of each event; and replaying the synchronized event and moving picture frame.
According to still another aspect of the present invention, there is provided a system for recording audio signals and a working screen of an information processing apparatus, the system being loaded into the information processing apparatus and comprising: an image generator for capturing capture regions of the information processing apparatus in real time and recording them as an image file; an audio generator for recording a first audio signal inputted through the information processing apparatus and a second audio signal generated by the information processing apparatus; and an event hooking part for saving event information of an inputted event as an event file by monitoring event input through a user interface device.
According to still another aspect of the present invention, there is provided an information processing apparatus comprising: a screen; a sound input device for receiving a first audio signal from outside; a user interface device for receiving an event from a user; and a real-time recording system for recording audio signals and the screen information currently displayed on the screen, wherein the real-time recording system comprises an image generator for capturing a capture region of the screen in real time and recording it as an image file, an audio generator for recording the first audio signal and a second audio signal generated from the information processing apparatus, and an event hooking part for recording event information of an inputted event as an event file by monitoring event input.
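As a rough, non-authoritative illustration of the three recording components named in these aspects, the following Python sketch defines minimal interfaces for an image generator, an audio generator and an event hooking part. All class and method names are hypothetical; the patent does not prescribe any implementation.

```python
# Minimal sketch (not from the patent) of the three recording components named in
# the system aspect above: an image generator, an audio generator and an event
# hooking part. All class and method names are hypothetical placeholders.
from abc import ABC, abstractmethod


class ImageGenerator(ABC):
    @abstractmethod
    def capture_region(self) -> bytes:
        """Capture the configured capture region of the screen in real time."""

    @abstractmethod
    def append_to_image_file(self, frame: bytes, timestamp_ms: int) -> None:
        """Record a captured frame into the image file."""


class AudioGenerator(ABC):
    @abstractmethod
    def record(self) -> None:
        """Record the externally inputted (first) audio signal and the internally
        generated (second) audio signal, e.g. mixed into one audio file."""


class EventHookingPart(ABC):
    @abstractmethod
    def on_event(self, timestamp_ms: int, input_type: str, location: tuple) -> None:
        """Save event information (time, input type, location) to the event file."""
```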
Brief description of the drawings
FIG. 1 is a drawing for explaining a configuration for use of the system for recording/replaying a working screen and audio signals according to an embodiment of the present invention.
FIG. 2 is a schematic block diagram of the system for recording/replaying a working screen and audio signals according to an embodiment of the present invention.
FIG. 3 is a flowchart of a process that records a working screen and audio signals in real time and synthesizes them according to an embodiment of the present invention.
FIG. 4 is a flowchart of a process that replays the recorded screen and audio signals according to an embodiment of the present invention.
FIG. 5 is a flowchart for compressing images according to an embodiment of the present invention.
FIG. 6 is an exemplary drawing showing a capture location selected by a user according to an embodiment of the present invention.
FIG. 7 is an exemplary drawing of divided capture locations on the screen according to an embodiment of the present invention.
Mode of invention
Hereinafter, preferred embodiments of the present invention will be described with reference to the accompanying drawings.
FIG. 1 is a drawing for explaining a configuration for use of the system for recording/replaying a working screen (the current screen produced by the user's execution) and audio signals according to an embodiment of the present invention. Referring to FIG. 1, the system for recording/replaying in real time according to an embodiment of the present invention may be loaded in an information processing apparatus such as a notebook PC (Personal Computer) 110 or a PDA (Portable Digital Assistant) 120. The system for recording/replaying in real time can also be loaded in other kinds of information processing apparatuses (hereinafter referred to as a 'terminal'), for example a desktop PC, a Mac, and the like. When the system according to an embodiment of the present invention is loaded in the PC 1 or the PDA 3, multimedia data can be stored with one click of an appropriate hot key 6, 8 on the PC 1 or the PDA 3. The multimedia data includes the working screen, that is, the image signals displayed on the working screen, and audio signals. The appropriate hot key 6, 8 is a combination of one or more general keys on the PC 1 or the PDA 3. Alternatively, private keys 7, 9 are provided on the PC 1 or the PDA 3, and multimedia data may be recorded with one click of the private key 7, 9. That is to say, the system according to an embodiment of the present invention can be controlled by the appropriate hot keys 6, 8 or the private keys 7, 9 of the PC 1 or the PDA 3, as sketched below. In order to execute the system with one click of the specified hot keys 6, 8 or private keys 7, 9, the system according to an embodiment of the present invention is preferably loaded in the main memory of the PC 1 or the PDA 3. The system according to an embodiment of the present invention can also be controlled by a button of a wireless LAN remote controller 2. That is, recording and replaying of multimedia data can be controlled from the PC 1, the PDA 3, or the remote controller 2 coupled over a wireless LAN. For example, through a Bluetooth interface, which is one kind of wireless LAN, a control signal is transmitted to the terminal, and recording of the screen and audio data is started and stopped when the remote controller button is pushed. The system according to an embodiment of the present invention may be executed in stand-alone form or in cooperation with external devices. Examples of the external devices include a CD-RW drive 4, a USB drive 5, and the like. The system according to an embodiment of the present invention may be implemented in the form of software or hardware, but software is preferable. If the system is implemented as software, the system for recording/replaying multimedia data of the present invention may be loaded on the CD-RW or the memory stick of the USB drive 5, etc. In this case, the system for recording/replaying multimedia data is executed on the PC 1 or the PDA 3 coupled to the CD-RW drive 4 or the USB drive 5, and the multimedia data may be recorded on the CD-RW or the memory stick of the USB drive.
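The one-click hot-key control described above could be wired up along the lines of the following sketch. It is illustrative only: hotkey_pressed and the recorder object are hypothetical stand-ins, since the patent does not name a specific key-handling API.

```python
# Hypothetical sketch of one-click start/stop control via a hot key or private key,
# as described above. hotkey_pressed() and the recorder object are placeholders;
# the patent does not name a specific key-handling API.
import time


def control_loop(recorder, hotkey_pressed):
    """Toggle recording each time the configured hot key (or private key) fires."""
    recording = False
    while True:
        if hotkey_pressed():
            if not recording:
                recorder.start()   # begin capturing the screen, voice and sound
            else:
                recorder.stop()    # finish recording; files go on to the media synthesizer
            recording = not recording
        time.sleep(0.05)           # poll roughly 20 times per second
```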
FIG. 2 is a schematic block diagram of the system for recording/replaying a working screen and audio signals in real time according to an embodiment of the present invention. It is assumed that the system for recording/replaying in real time according to an embodiment of the present invention is loaded in the PC 1 of FIG. 1. Referring to FIG. 2, the system may comprise a media generator 10, a media synthesizer 20 and a media player 30.
The media generator 10 receives the screen of the PC, voice signals inputted to the PC, sounds generated by the application program being executed on the PC, and so on, and records them in real time. The media synthesizer 20 synthesizes the several kinds of data files generated by the media generator 10 into one moving picture file. The media player 30 replays the synthesized media.
In particular, the media generator 10 may comprise an external device interface and detector 11, an event hooking part 12, an image generator 13 and an audio generator 14. The external device interface and detector 11 detects whether an external device is attached and, if so, processes control signals from and to that device. Examples of the external device include the remote controller 2, the CD-RW drive 4 and the USB drive 5 shown in FIG. 1. The external device interface and detector 11 also detects whether the specified hot key (6, 8 in FIG. 1) or private key (7, 9 in FIG. 1) is inputted. That is, the external device interface and detector 11 detects whether a user has requested recording, stopping, replaying, etc. using the specified hot key or private key, and performs control according to the detected request.
The image generator 13 captures the image as it is displayed on the screen of the PC 1 in real time and saves it in the form of an image file 15. Therefore, all contents on the screen, changes in screen status, and so on are included in the image file 15.
The audio generator 14 receives and records the user's voice around the PC 1 and captures and records all sounds generated from the terminal. The audio generator 14 is provided separately from the image generator and thus captures and records audio signals such as voices and sounds. In particular, the audio generator 14 receives voice signals inputted to the PC 1 through a voice input device of the PC 1; besides voice signals, surrounding noises or sounds can also be inputted. The audio generator 14 receives sounds generated from the PC 1, especially sounds generated by an application program (for example, an alarm sound), mixes them with the external voice signals, and records the mixed signal in the form of a 'voice+sound' file 17, that is, an audio file 17. Alternatively, the audio generator 14 can also record the externally inputted voice signal and the internally generated sound of the PC 1 separately, each as an independent file.
The event hooking part 12 detects whether a blackboard writing event is inputted by the user while the audio generator 14 and the image generator 13 capture and record the audio signal and the image signal, respectively. If a blackboard writing event is inputted, the event hooking part 12 saves information corresponding to the event. The blackboard writing event refers to the function of inserting an explanation, annotation, marking, etc. in order to explain the application program or contents being executed on the PC by the user. The blackboard writing event can occur through the user interface device of the PC 1; in the case of the PC 1, it can be inputted through a user interface device such as a mouse, keyboard, touch panel, or touch screen. Information related to the event is saved in the form of an independent event file 16. The event information includes the input time, input type, location information, etc. The input type information identifies the event input device, for example a mouse or a keyboard. When a blackboard writing event is inputted, the user's configuration setting determines whether it is recorded in the form of an event or of an image. If the user sets the blackboard writing to be saved in the form of an event, as explained above, the event hooking part 12 hooks the event separately from the image information on the screen and saves only the information related to the event. On the other hand, if the user sets the blackboard writing event to be saved in the form of an image, the image generator 13, instead of the event hooking part 12, captures the screen image as it is, including the blackboard writing event, and saves it in the form of an image file. That is, in this case the event information is not saved separately but is saved in the form of an image.
When the above unit files, namely the image file 15, the event file 16 and the audio file 17, are generated by the media generator 10, the media synthesizer synthesizes them into a unified file format automatically. It is preferable that the unit files each have a file format that can be reused, so that the unit files can be reused in different systems.
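The event information listed above (input time, input type and location) maps naturally to a small record. The sketch below shows one possible way the event hooking part 12 could append such records to the event file 16; the field names, the JSON-lines layout and save_event are assumptions, not a format defined by the patent.

```python
# Hypothetical sketch of the event information saved by the event hooking part 12:
# input time, input type and location, appended to an event file. The JSON-lines
# layout and field names are assumptions, not a format defined by the patent.
import json
from dataclasses import dataclass, asdict


@dataclass
class BlackboardEvent:
    time_ms: int        # event generation time, relative to the start of recording
    input_type: str     # "mouse", "keyboard", "touch_panel", "touch_screen", ...
    location: tuple     # (x, y) position on screen, or keyboard input location
    value: str          # mouse click value or keyboard value


def save_event(event: BlackboardEvent, event_file_path: str = "record.event") -> None:
    """Append one event record to the event file (one JSON object per line)."""
    with open(event_file_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(event)) + "\n")


# Example: a mouse annotation at (120, 240), 3.5 seconds into the recording.
save_event(BlackboardEvent(time_ms=3500, input_type="mouse",
                           location=(120, 240), value="left_click"))
```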
Further, the unit files can be automatically encoded into a single medium so that they can be used in any kind of configuration and service (for example, download, streaming, etc.).
The media synthesizer 20 includes a synthesizer 21 and a compressor 22. The synthesizer 21 synthesizes the above unit files into a unified file format. In this case, if the blackboard writing event is set, through the user configuration setting, to be saved in the form of an event, the synthesizing process for the event file is not required. The unit files are inputted to the synthesizer 21, and the synthesizer generates a synthesized file, for example an AVI (Audio Video Interleave) file 23. The compressor 22 compresses the synthesized file and generates a compressed file in the form the user wants. That is, the compressor 22 receives a synthesized file (for example, an AVI file) and then generates a compressed file (for example, an SWF or ASF file) by compressing the received file. Through the compression process, a final moving picture file in the form of ASF 24 or SWF 25 is generated.
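As a loose illustration of this synthesize-then-compress pipeline, the sketch below passes the unit files through two placeholder helpers; mux_to_avi and compress_to are hypothetical callables, not functions of the patent or of any particular codec library.

```python
# Hypothetical sketch of the media synthesizer 20: the synthesizer 21 muxes the unit
# files into one AVI file, and the compressor 22 converts it into ASF or SWF.
# mux_to_avi and compress_to are placeholder callables, not real encoder APIs.
def synthesize_and_compress(image_file, audio_file, mux_to_avi, compress_to,
                            target_format="ASF"):
    """Return the path of the final moving picture file.

    mux_to_avi(image_file, audio_file) -> avi_path      (synthesizer 21, AVI file 23)
    compress_to(avi_path, target_format) -> final_path  (compressor 22, ASF 24 / SWF 25)
    """
    avi_path = mux_to_avi(image_file, audio_file)
    return compress_to(avi_path, target_format)
```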
The media player 30 replays the multimedia data using the final moving picture file 24, 25 generated by the media synthesizer 20. In particular, the media player 30 includes an event generator 33, a moving picture generator 34, a media synchronizer 32 and a player 31. If the blackboard writing event is saved in the form of an event rather than an image, the event generator 33 forwards the time information, the input type (for example, whether the input device is a mouse or a keyboard), the location information of the event, and so on, to the media synchronizer 32 using the event file 16. The moving picture generator 34 forwards moving picture data to the media synchronizer 32 using the final moving picture file in the form of SWF or ASF. The media synchronizer 32 compares the times of the moving picture information and the event information, synchronizes the moving picture information with the event information according to time, and outputs the result to the player 31. The player 31 replays the moving picture information and the event information through the screen and the speaker. A commercial media player (for example, Windows Media Player from Microsoft) can be used as the player 31; in this case, the function for synchronizing the moving picture information with the event information can be provided to the commercial media player or to an interface part with the commercial media player.
FIG. 3 is a flowchart showing an example of the process for recording and synthesizing an audio signal and a working screen in real time, performed by the media generator 10 and the media synthesizer 20. It is assumed that the blackboard writing event has been set to be saved in the form of an event, not an image, before the user pushes the save start button/key. When multimedia data saving is started by the user 40, the event hooking part 12 of FIG. 2 monitors all events generated in the corresponding terminal 41, and if an event is inputted, the event hooking part 12 determines whether the event is to be saved as an image 44. If the event is to be saved as an image, the image capture process captures the image including the event 45. If the event is to be saved in the form of an event, the corresponding event information is saved 47. At this time, if the event is inputted by the keyboard, event information including the event generation time, the keyboard value and the keyboard input location is saved; if the event is inputted by the mouse, event information including the event generation time, the mouse click value and the location is saved. The terminal screen is captured through continuous execution of an image capture thread, and the GUI is captured according to the screen capture attribute value 45. The screen capture attribute value determines whether the region to be captured is the whole screen of the terminal, a specified region, or a specified application program region. The captured image is generated and saved as a frame-unit AVI file. The image capture thread performs part or all of the function of the image generator 13 of FIG. 2. Further, voice recording and sound recording 42, 43 are performed along with the event hooking and the image capture. The voice recording process 42 detects the voice input device, receives voice from it, and records the voice in the form of a voice file (WAV). The sound recording process 43 records the sound generated from the terminal using a sound mixing function. The recorded sound is synthesized with the voice file 46 and then saved as an audio signal 48. When recording is completed, the image file and the voice file are inputted to the media synthesizer 20 of FIG. 2 to be synthesized into one moving picture file (AVI file) 51, and a final moving picture file format (ASF file, SWF file) is generated by compressing the synthesized file 52. The moving picture compressing process 52 includes ASF encoding and SWF encoding. Hereby, the multimedia data recording and synthesizing process is completed 53.
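Read as pseudocode, the FIG. 3 flow amounts to concurrent activities: an image capture thread and audio (voice plus sound) recording running while the event hooking part monitors input. The threaded sketch below illustrates only that structure; capture_frame, record_voice_chunk, record_sound_chunk, mix and the append callbacks are hypothetical placeholders.

```python
# Hypothetical sketch of the FIG. 3 recording flow: an image capture thread and an
# audio (voice + sound) thread run until recording is stopped; the event hooking
# part 12 runs alongside them as sketched earlier. All callbacks are placeholders.
import threading

stop_flag = threading.Event()


def image_capture_loop(capture_frame, append_frame):
    while not stop_flag.is_set():
        append_frame(capture_frame())      # step 45: capture the GUI per the attribute value


def audio_loop(record_voice_chunk, record_sound_chunk, mix, append_audio):
    while not stop_flag.is_set():
        voice = record_voice_chunk()       # step 42: voice from the sound input device
        sound = record_sound_chunk()       # step 43: sound generated inside the terminal
        append_audio(mix(voice, sound))    # steps 46, 48: mix and save as the audio signal


def start_recording(cb):
    """cb is a dict of the placeholder callbacks named above."""
    threads = [
        threading.Thread(target=image_capture_loop,
                         args=(cb["capture_frame"], cb["append_frame"])),
        threading.Thread(target=audio_loop,
                         args=(cb["record_voice_chunk"], cb["record_sound_chunk"],
                               cb["mix"], cb["append_audio"])),
    ]
    for t in threads:
        t.start()
    return threads                         # call stop_flag.set() to end recording
```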
FIG. 4 is a flowchart of an example of the working screen and audio signal replay process. When replay is started by a replay selection key/button or the like, it is checked whether the blackboard writing event was saved as an image or as an event 61. If it was saved as an image, the user simply opens the compressed final moving picture file and it is replayed through the player 31 of FIG. 2 67; in this case, the multimedia data replay through the moving picture file, which already includes the blackboard writing event, is performed until the moving picture replay is completed 69. If the event was saved in the event format, event information including the event time is extracted from the saved event file while the moving picture file is being replayed 62. Further, the frame time of the moving picture frame currently being replayed is derived 63. The event time is compared with the moving picture frame time 68. If the event time is smaller than or equal to the moving picture frame time, the derived event information is executed and the corresponding event is output along with the moving picture 70. If the event time is larger than the moving picture frame time, the process returns to the event time information deriving process 62. Until the moving picture replay is completed 71, the steps from the event time information deriving process 62 to the event execution process 70 are repeated.
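The time comparison described for FIG. 4 can be written as a small replay loop. The sketch below is illustrative only: frames, events, show and execute_event are hypothetical stand-ins for the moving picture generator 34, the event file contents and the player 31.

```python
# Hypothetical sketch of the FIG. 4 synchronization: an event is executed once its
# time is smaller than or equal to the current moving picture frame time.
def replay(frames, events, show, execute_event):
    """frames: iterable of (frame_time_ms, frame); events: list of (event_time_ms, event).
    show() and execute_event() stand in for the player 31."""
    pending = sorted(events)                                      # by event time
    i = 0
    for frame_time, frame in frames:                              # step 63: frame time
        while i < len(pending) and pending[i][0] <= frame_time:   # step 68: compare
            execute_event(pending[i][1])                          # step 70: output event
            i += 1
        show(frame)                                               # until replay is done (71)
```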
FIG. 5 is an exemplary flowchart of the image generation process performed in the image generator 13. In particular, FIG. 5 shows an exemplary compression process, which is one of the cores of the image generation process. The image generator shown in FIG. 2 includes an image compression module (not shown). The image compression module, the core module of the image recording, does not record every captured image; it records only the part that has changed compared with the previous screen. In this way the amount of data is minimized, because an image that is the same as the previous screen is not recorded. Further, mouse movement or screen image movement may not be smooth, owing to the CPU load, if the whole region of the screen is captured at once. To solve this problem, a method of automatically dividing the terminal screen into two or more regions and capturing them is employed. If the screen of the terminal is divided into two or more regions and each region is captured sequentially, the unnatural movement of the mouse or the screen can be eliminated because the CPU load can be reduced significantly, by roughly 40% or more.
Referring to FIG. 5, an exemplary compression process is as follows. First, the user's selection information about the capture region is checked before the image capture 81. The user's selection information can be classified into appointed region capture 82, where only the region selected by the user is captured, whole screen region capture 83, and specified application program capture 84, where only the application program selected by the user from among the running application programs is captured. FIG. 6 is an example of the capture region selected by a user. Referring to FIG. 6, in the case of the whole screen region capture 83, the whole screen 101 of the terminal becomes the capture region. In the case of the specified application program capture 84, for example, the PowerPoint screen 102 of FIG. 6 becomes the capture region. In the case of the region capture 82, for example, the specified region 103 selected by the user, as shown in FIG. 6, becomes the capture region. The capture division regions are then computed according to the user selection information. A capture division region is one of two or more parts into which the capture region is divided, and the number of division regions is preferably determined automatically; however, it can also be set by the user. FIG. 7 is an example of the capture division regions. In FIG. 7, the capture region is the whole screen 101, which is divided into four division regions 111, 112, 113, 114.
Referring to FIG. 5 again, the image is captured division region by division region using the computed division regions 86. Then it is checked whether a comparison target for the division region exists 87. The comparison target is used to determine whether the image information in the division region is the same as the previous image information; that is, the comparison target refers to the image information previously captured and recorded. If the comparison target does not exist, the captured division region image is saved as the comparison target 88. If a comparison target for the corresponding division region exists, the captured division region image information is compared with the image information of the comparison target 89. If they are the same, the image information of the captured division region is not saved and the process returns to the division region capture process 86. If they are not the same, the image information of the captured division region is recorded 90. The above processes 86 to 90 are performed for all the division regions; therefore, if the number of division regions is four, processes 86 to 90 are performed four times. If there is at least one division region whose image information differs from its comparison target, the captured image data of the division regions is synthesized and one image data is generated 91. The generated image data is saved in the data file 92. Referring to FIG. 7, the image of the first division region 111 is captured 86, and it is checked whether a comparison target of the captured image, that is, a previously captured and recorded image, exists 87. If the comparison target exists, the presently captured image is compared with the comparison target 89. If they are different, the presently captured image is recorded 90. As with the first division region 111, processes 86 to 90 are performed for the second division region 112, the third division region 113 and the fourth division region 114. After capture has been performed from the first division region 111 to the fourth division region 114, the captured division region images are synthesized if at least one image of the four division regions is different from that of the corresponding region 91. In this way, a new capture image for the whole screen 101 is generated. When screen recording is completed by the user's click of the finish button 93, the AVI file is generated (not shown).
According to the present invention, the contents displayed on the screen can be recorded as they are, in the form of multimedia data, on window-based terminal configurations (Windows, Mac, PDA, etc.) and replayed at any time.
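The divided-region comparison of FIGS. 5 to 7 is essentially a dirty-region check: capture each division region, compare it with the last recorded copy, and emit a new frame only when at least one region has changed. The sketch below illustrates that logic under assumptions; grab_region and save_frame are hypothetical helpers, and the two-by-two split merely mirrors the four-region example of FIG. 7.

```python
# Hypothetical sketch of the FIG. 5 compression: the capture region is split into
# division regions (four in the FIG. 7 example), each is compared with its previously
# recorded copy, and a new frame is saved only when at least one region has changed.
def split_into_regions(x, y, w, h, cols=2, rows=2):
    """Divide the capture region into cols x rows division regions."""
    rw, rh = w // cols, h // rows
    return [(x + c * rw, y + r * rh, rw, rh) for r in range(rows) for c in range(cols)]


def capture_frame(grab_region, regions, comparison_targets, save_frame):
    """grab_region(region) -> bytes is a placeholder for the real screen capture;
    comparison_targets maps a region to its previously recorded image (steps 87-89)."""
    changed = False
    current = {}
    for region in regions:                              # step 86: capture each division region
        image = grab_region(region)
        current[region] = image
        if comparison_targets.get(region) != image:     # steps 87-90: compare, record changes
            comparison_targets[region] = image
            changed = True
    if changed:                                         # steps 91-92: synthesize and save
        save_frame(current)
```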
Further, memo or marking/annotation can be inserted to the terminal screen and voice and sound can be mixed simultaneously. Furthermore, through the cooperation with the various external devices, execution in the external device is possible, and execution result data can be saved in the external device after the execution, and the result data can be used by a user easily using specified keys or specified hot keys. The saved result data can be replayed in any circumstance, because the moving picture file is generated regardless of user platforms. As various contents can be generated by the present invention, the field where the present invention can be applied is very wide, as will be described in the following. Firstly, educational contents can be generated easily using the present invention. For example, the educational contents can be generated using the application program (for example, word, excel, etc.) in PC without employing any particular device. Further, since the contents can be regenerated in any place, the education is possible without direct meeting. The present invention can be also applied to the product marketing and presentation. The present invention is useful in making introduction materials of products in internet shopping malls and companies and making presentation materials for corporate public relations. Further, the present invention can be also applied to editing electronic manuals which is easier to understand as the all data related to the manual such as picture including text, voice explanation about text, and sounds, etc. can be recorded in manual. Further, as the record result of the present invention can be generated in the form of streaming, download, the present invention can be applied to production of contents which can be provided through wire or wireless network. Besides above mentioned application examples, the present invention can be also useful in the record of conferences or interviews and contents production for electronic books. Since the accompanying drawings and the detailed description are only examples of the present invention, it is only for describing the present invention not for limiting the scope of the present invention. Therefore, those who skilled in the art will understand that many changes and equivalent embodiments can be made without departing from the present invention. Thus, the true scope of the present invention must be determined by the accompanying claims. Industrial applicability As described above, multimedia data including screen of the terminal and audio signals (including voice signals inputted externally and sounds generated from the terminal) can be recorded simultaneously in real time with a simple process and replayed. Therefore, according to the present invention, as the user's voice and sound included in an application program can be recorded using sound mixing technique, it allows to deliver vivid images while replaying and create the best results for a specific situation while recording. Further, according to the present invention, it allows to write marking/explanation/annotation on screen or on an application program executed on the screen while recording screen on terminal. Furthermore, according to the present invention, as the user can select screen capture region among whole screen, specified region, specified application program region, etc., it allows to improve minimization of system load and comfort of the user . through various capturing regions. 
Further, CPU load is reduced because, after the capture region is divided, a captured image is recorded only when it differs from the previously captured image. Moreover, the present invention is convenient to use, since recording and replaying can be performed with one click of specified keys, buttons of the terminal or a remote controller.
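As an illustration of the sound mixing technique mentioned above, the sketch below (Python, like the previous example) mixes an externally inputted voice signal with sound generated from the terminal and saves the result as one audio file. It assumes both signals are already captured as 16-bit mono PCM buffers at the same sample rate; the buffer names and the WAV container are assumptions made for this example, not the audio format used by the invention.

import struct
import wave


def mix_pcm16(voice, terminal_sound):
    # Sum the two 16-bit PCM streams sample by sample, clipping to the int16 range.
    n = min(len(voice), len(terminal_sound)) // 2
    a = struct.unpack("<%dh" % n, voice[:2 * n])
    b = struct.unpack("<%dh" % n, terminal_sound[:2 * n])
    mixed = [max(-32768, min(32767, x + y)) for x, y in zip(a, b)]
    return struct.pack("<%dh" % n, *mixed)


def save_audio_file(path, pcm, sample_rate=22050):
    # Save the mixed signal as a mono 16-bit WAV file (one possible audio file format).
    with wave.open(path, "wb") as wav_file:
        wav_file.setnchannels(1)
        wav_file.setsampwidth(2)
        wav_file.setframerate(sample_rate)
        wav_file.writeframes(pcm)


voice = struct.pack("<4h", 1000, 2000, -1500, 30000)         # first audio signal (external voice)
terminal_sound = struct.pack("<4h", 500, -2500, 400, 10000)  # second audio signal (terminal sound)
save_audio_file("mixed.wav", mix_pcm16(voice, terminal_sound))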

Claims

1. A method for recording multimedia data including a working screen and audio signals of an information processing device in real time, comprising the steps of: capturing a capture region of the information processing device screen in real time to record as an image file; recording a first audio signal inputted to the information processing device through a sound input device and a second audio signal generated within the information processing device; and detecting input of an event through a user interface device, and if an event is inputted, saving event information of the inputted event as an event file.
2. The method of claim 1, wherein said step of capturing the capture region in real time to record as an image comprises the steps of: dividing the capture region into at least two division regions; capturing each of the division regions; and synthesizing the captured images of the division regions into one image.
3. The method of claim 2, wherein said step of capturing the capture region in real time to record as an image further comprises the steps of: comparing each captured image of the division regions with the previously captured image of the corresponding division region; and recording the image of the division region where the currently captured image is different from the previously captured image.
4. The method of claim 2, wherein said capture region can be selected by a user.
5. The method of claim 2, wherein said capture region is one selected from the group consisting of the whole screen region of the information processing device, a region appointed by a user and a region of a predetermined application program.
6. The method of claim 1, wherein said step of recording a first audio signal inputted to the information processing device through a sound input device and a second audio signal generated in the information processing apparatus comprises the steps of: mixing the first audio signal and the second audio signal; and saving the mixed signal as an audio file.
7. The method of claim 1, wherein said user interface device includes at least one selected from the group consisting of a mouse, a keyboard, a touch panel and a touch screen.
8. The method of claim 1, wherein said event information includes input type, input time and location information.
9. The method of claim 1, further comprising the steps of: synthesizing the audio file and the image file into one moving picture file; and compressing the synthesized moving picture file.
10. The method of claim 1, further comprising the step of detecting a request for saving from a user, wherein the request for saving is performed by input with a specified key or a specified button of the information processing device.
11. A method for replaying a screen by user's execution and audio signals using a moving picture file and an additional event file, the moving picture file including image information that is captured in real time by capturing capture regions of the working screen of the information processing apparatus, a first audio signal inputted to the information processing apparatus through a sound input device and a second audio signal generated from the information processing apparatus, comprising the steps of: deriving time information of each moving picture frame from the moving picture file; deriving time information of each event from the event file; synchronizing each event with the corresponding moving picture frame by comparing the time information of each moving picture frame with the time information of each event; and replaying the synchronized event and moving picture frame.
12. The method of claim 11, wherein said event file includes time information of the event inputted by the user interface device, location information and input type.
13. The method of claim 11, wherein said moving picture file is an ASF format file or an SWF format file.
14. A system for recording audio signals and a working screen of an information processing apparatus, the system being loaded to the information processing apparatus, comprising: an image generator for capturing capture regions of the information processing apparatus in real time to record as an image file; an audio generator for recording a first audio signal inputted through the information processing apparatus and a second audio signal generated by the information processing apparatus; and an event hooking part for saving event information of an inputted event as an event file, by determining event input through a user interface device, when an event is inputted.
15. The system of claim 14, further comprising: a synthesizer for synthesizing the audio file and the image file into one moving picture file; and a compressor for compressing the synthesized moving picture file.
16. The system of claim 15, further comprising: a media synchronizer for synchronizing the event with the corresponding moving picture frame by comparing time information of each moving picture frame and time information of the event.
17. An information processing apparatus comprising: a screen; a sound input device for receiving a first audio signal externally; a user interface device for receiving an event from a user; and a real time recording system for recording audio signals and screen information currently displayed on the screen, wherein the real time recording system comprises an image generator for capturing a capture region of the screen in real time to record as an image file; an audio generator for recording the first audio signal and a second audio signal generated from the information processing apparatus; and an event hooking part for recording event information on an inputted event as an event file by monitoring event input, when an event is inputted.
18. The apparatus of claim 17, further comprising a memory for storing data and/or a program, wherein the real time recording system is loaded into the memory.
19. The apparatus of claim 17, further comprising a private key for the real time recording system.
20. A computer-readable medium including a program containing computer-executable instructions for performing a method for recording multimedia data including a working screen and audio signals of an information processing apparatus in real time, comprising the steps of: capturing a capture region of the information processing apparatus screen in real time to record as an image file; recording a first audio signal inputted to the information processing apparatus through a sound input device and a second audio signal generated in the information processing apparatus; and detecting input of an event through a user interface device, and if an event is inputted, saving event information of the inputted event as an event file.
21. A computer-readable medium including a program containing computer-executable instructions for performing a method for replaying a screen by user's execution and audio signals using a moving picture file and an event file separate from the moving picture file, the moving picture file including information on an image that is captured by capturing the working screen of the information processing apparatus, a first audio signal inputted to the information processing apparatus through a sound input device and a second audio signal generated from the information processing apparatus, comprising the steps of: deriving time information of each moving picture frame from the moving picture file; deriving time information of each event from the event file; synchronizing each event with the corresponding moving picture frame by comparing the time information of each moving picture frame with the time information of each event; and replaying the synchronized event and moving picture frame.
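For illustration only, the following Python sketch shows one way to realize the synchronization recited in claims 11 and 21: time information is derived for each moving picture frame and for each event, and each event is matched to the frame being displayed at that moment. The frame and event records and the millisecond time base are assumptions made for the example; the claims do not prescribe a particular moving picture or event file layout.

import bisect
from dataclasses import dataclass


@dataclass
class Event:
    time_ms: int   # time information derived from the event file
    kind: str      # input type, e.g. a mouse or touch event
    x: int         # location information
    y: int


def synchronize(frame_times_ms, events):
    # Map each event to the index of the last frame whose time is not later than the event.
    schedule = {}
    for event in sorted(events, key=lambda e: e.time_ms):
        index = bisect.bisect_right(frame_times_ms, event.time_ms) - 1
        if index >= 0:
            schedule.setdefault(index, []).append(event)
    return schedule


# During replay, the events scheduled for a frame are rendered over that frame
# (for example, drawing an annotation at (event.x, event.y)) while it is displayed.
frame_times = [0, 40, 80, 120]                  # frame times in milliseconds
events = [Event(85, "mouse_down", 10, 20)]
assert synchronize(frame_times, events) == {2: [events[0]]}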
PCT/KR2004/003003 2003-11-19 2004-11-19 Real-time store and playback of working screen and audio signal WO2005050645A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2003-0082087 2003-11-19
KR1020030082087A KR20050048215A (en) 2003-11-19 2003-11-19 Method for real-time storing and playing of working screen and audio signal, and system there-of

Publications (1)

Publication Number Publication Date
WO2005050645A1 true WO2005050645A1 (en) 2005-06-02

Family

ID=34617250

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2004/003003 WO2005050645A1 (en) 2003-11-19 2004-11-19 Real-time store and playback of working screen and audio signal

Country Status (2)

Country Link
KR (1) KR20050048215A (en)
WO (1) WO2005050645A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104580888A (en) * 2014-12-17 2015-04-29 广东欧珀移动通信有限公司 Picture processing method and terminal
CN105630462A (en) * 2014-10-27 2016-06-01 任子行网络技术股份有限公司 Violation audio/video program evidence collection method and device

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100765803B1 (en) * 2006-07-26 2007-10-12 주식회사 엔터기술 Video music accompaniment apparatus having real time saving and playing function
KR101111956B1 (en) * 2011-10-24 2012-02-15 (주)지란지교소프트 System for providing substance about monitoring of computing device and method thereof

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5805173A (en) * 1995-10-02 1998-09-08 Brooktree Corporation System and method for capturing and transferring selected portions of a video stream in a computer system
US5959622A (en) * 1996-05-31 1999-09-28 Intel Corporation Still image capture under computer control in response to user-instructed trigger
JP2003085527A (en) * 2001-09-14 2003-03-20 Sony Corp Method for producing information, device for producing information, and system for processing network information
US6774939B1 (en) * 1999-03-05 2004-08-10 Hewlett-Packard Development Company, L.P. Audio-attached image recording and playback device

Also Published As

Publication number Publication date
KR20050048215A (en) 2005-05-24

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 69(1) EPC - FORM EPO 1205A DATED 27-07-2006

122 Ep: pct application non-entry in european phase