WO2002031769A1 - Data processing method and system, computer program, and recorded medium (Procédé et système de traitement de données, programme informatique, et support enregistré) - Google Patents
Data processing method and system, computer program, and recorded medium
- Publication number
- WO2002031769A1 (PCT/JP2001/008862)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- processing
- execution
- image
- data
- result
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
Definitions
- the present invention relates to a data processing technique for efficiently displaying a large-screen moving image by linking a plurality of processing devices, for example, an image processing device.
- the present invention provides a data processing system comprising control means for controlling the operation of a plurality of processing devices, wherein each processing device starts execution of the process assigned to it upon receipt of an execution permission signal sent from the control means, and sends a processing result and an execution end signal to the control means after the execution of the process.
- the control means is provided, for each application, with a processing table storing in a predetermined order the identification information of the one or more processing devices to which the execution permission signal is to be sent and of the one or more processing devices from which the processing result and the execution end signal are to be received. Upon receiving a processing request from an application, the control means sends the execution permission signal to the corresponding processing devices in the order stored in the processing table for that application, and receives the execution end signal and the processing result from the corresponding processing devices.
- the execution permission signal is a type of control signal for permitting the execution of the process
- the execution end signal is a type of notification signal indicating that the execution of the process has been completed.
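The claimed control flow can be sketched in a few lines. This is an illustrative simulation, not the patent's implementation: the class and method names (`ProcessingDevice`, `Controller`, `handle_request`) and the signal representations are assumptions introduced here.

```python
# Hypothetical sketch of the claimed control flow: a per-application
# "processing table" lists, in order, which processing devices are sent the
# execution permission signal; each device then returns a processing result
# together with an execution end signal.

class ProcessingDevice:
    def __init__(self, device_id):
        self.device_id = device_id

    def execute(self):
        # On receiving the execution permission signal, run the assigned
        # process, then return (processing result, execution end signal).
        return f"result-from-{self.device_id}", "ExecutionEnd"

class Controller:
    def __init__(self, processing_tables, devices):
        self.tables = processing_tables            # app name -> ordered device IDs
        self.devices = {d.device_id: d for d in devices}

    def handle_request(self, app_name):
        results = []
        # Send the execution permission signal in the order stored in the
        # processing table for this application, and collect each result
        # together with its execution end signal.
        for device_id in self.tables[app_name]:
            result, end_signal = self.devices[device_id].execute()
            assert end_signal == "ExecutionEnd"
            results.append(result)
        return results

devices = [ProcessingDevice(i) for i in range(4)]
controller = Controller({"app-A": [0, 1, 2, 3]}, devices)
print(controller.handle_request("app-A"))
```

The point of the table-driven dispatch is that the order of execution permission is data, not code, so the same controller serves any application that registers a table.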
- the present invention also provides first arbitration means for arbitrating the operation of N (N is a natural number greater than 1) processing devices that cooperate with one another, M (M is a natural number greater than 1) such first arbitration means being provided,
- second arbitration means for arbitrating the operation of the first arbitration means, and
- control means for controlling the operation of the first arbitration means.
- the present invention provides a data processing system in which each processing device starts execution of the process assigned to it and, after the execution, sends a processing result and an execution end signal to the control means.
- the control means is provided, for each application, with a processing table storing in a predetermined order the identification information of the one or more processing devices to which the execution permission signal is to be sent and of the one or more processing devices from which the processing result and the execution end signal are to be received.
- the system is characterized in that, upon receiving a processing request from an application, the execution permission signal is sent to the corresponding processing devices in the order stored in the processing table for that application, and the execution end signal and the processing result are received from those processing devices.
- Each processing device is configured to generate frame image data for a divided image obtained by subdividing a predetermined image in cooperation with another processing device, and to output the generated frame image data as the processing result.
- each processing device may include drawing processing means for performing drawing processing of a predetermined image, a plurality of geometry processing means for performing geometry processing based on a predetermined image display command, and an image interface interposed between them.
- the drawing processing means includes a buffer for storing, together with identification information, a plurality of drawing contexts, each being a set of parameters differing for each geometry processing means, and performs drawing processing according to drawing instructions input from the image interface.
- each geometry processing means performs its geometry processing independently and issues an image transfer request that includes the identification information of the drawing context obtained as a result of that processing.
- the image interface sends each drawing instruction together with information indicating its priority, accepting image transfer requests in descending order of priority and inputting the corresponding drawing instructions to the drawing processing means, which outputs its drawing result as the processing result.
- Another data processing system provided by the present invention controls the operation of a plurality of processing devices, each of which starts execution of the process assigned to it when triggered by an execution permission signal and, after the execution, sends a processing result and an execution end signal. The system comprises: first means for holding, for each application, a processing table storing in a predetermined order the identification information of the one or more processing devices to which the execution permission signal is to be sent and of the one or more processing devices from which the processing result and the execution end signal are to be received; second means, triggered by receipt of a processing request from an application, for identifying the processing table for that application; means for sending the execution permission signal to the corresponding processing devices in the order stored in the identified processing table; and means for receiving the processing result and the execution end signal from the corresponding processing devices.
- the present invention also provides a method of controlling a plurality of processing devices, each of which starts execution of the drawing process assigned to it upon receipt of an execution permission signal and outputs a processing result and an execution end signal after the execution, so that the processing results from some or all of the plurality of processing devices are displayed on a predetermined display device.
- in this method, the identification information of the one or more processing devices to which the execution permission signal is to be sent and of the one or more processing devices from which the processing result and the execution end signal are to be received is determined for each application in a predetermined order. Upon receiving a processing request from an application, the execution permission signal is sent to the corresponding processing devices in the order determined for that application, the execution end signal and the processing result are received from the corresponding processing devices, and the received processing results are displayed on the display device at a predetermined timing. The invention thus provides a data processing method using a plurality of processing devices.
- the present invention further provides a computer program for causing a computer to operate as a data processing system that controls a plurality of processing devices, each of which starts execution of the process assigned to it upon receipt of an execution permission signal and sends a processing result and an execution end signal after the execution of the process.
- in the data processing system implemented by the computer and this computer program, the identification information of the one or more processing devices to which the execution permission signal is to be sent and of the one or more processing devices from which the processing result and the execution end signal are to be received is held for each application.
- This computer program is typically embodied by being recorded on a computer-readable recording medium.
BRIEF DESCRIPTION OF THE FIGURES
- FIG. 1 is a block diagram of an integrated image processing apparatus according to an embodiment of the present invention.
- FIG. 2 is a functional configuration diagram of the GSM.
- FIG. 3 is an illustration of the signals exchanged between the GSM and the main SYNC.
- FIG. 4 is an explanatory diagram of the contents of the display order table, in which (a) shows an example of a single buffer and (b) shows an example of a double buffer.
- FIG. 5 is an explanatory diagram of image processing procedures, in which (a) shows the procedure in the case of the single-buffer method and (b) shows the procedure in the case of the double-buffer method.
- FIG. 6 is an explanatory diagram of the overall processing procedure by the main MG and the like.
- FIG. 7 is an example of image display when performing area synthesis.
- FIG. 8 is an image display example when performing scene anti-aliasing.
- FIG. 9 is an example of image display when performing layer composition.
- FIG. 10 is an example of an image display when a flip animation is performed.
- in the present embodiment, the processing device is an image processing device, and the data processing performed by it is image (generation) processing.
- the execution permission signal is a drawing permission signal (DrawNext) output to the image processing apparatus, and the execution end signal is a drawing completion signal (DrawDone) output from the image processing apparatus.
- FIG. 1 is a block diagram showing the overall configuration of the integrated image processing device according to the present embodiment.
- This integrated image processing device consists of four image processing blocks (hereinafter "GSB") 100; an integrating device (hereinafter "main MG") 200 located downstream of the GSBs 100 that integrates their output data; a synchronization circuit (hereinafter "main SYNC") 300 that supplies the synchronization signal (V-SYNC) and the drawing permission signal (DrawNext) to each GSB 100; a control device (hereinafter "main CP") 400; and a network control circuit (hereinafter "main NET") 500 for linking all the GSBs 100.
- a display device DP is connected to the output side of the main MG 200 so that the image processing results from the integrated image processing device are displayed in an integrated manner.
- the timing at which data is issued from the main SYNC 300 to the GSMs 1 (described later) and the timing at which data is issued from each GSM 1 to the main MG 200 are controlled by the main MG 200 in cooperation with the main CP 400.
- the main MG 200, the external storage device 410, and the main NET 500 are connected to the main CP 400.
- each GSB 100 contains four information processing devices (hereinafter "GSM") 1 that generate frame image data corresponding to the input image data sequence; a merger (hereinafter "slave MG") 3 that merges the frame image data output from the GSMs 1 into one frame of image data and outputs it to subsequent processing; a synchronization circuit (hereinafter "slave SYNC") 4 that sends V-SYNC and the drawing permission signal (DrawNext) to each GSM 1 and supplies the drawing completion signal (DrawDone) issued by each GSM 1 to the main SYNC 300; a control device (hereinafter "slave CP") 5 that controls the operation of each GSM 1; and a network control circuit (hereinafter "slave NET") 6 for coordinating with all the GSMs 1 in the same GSB and with the GSMs 1 in other GSBs.
- each GSM 1 is provided with a synchronization circuit (hereinafter "SYNC-GSM") 2, through which V-SYNC and the drawing permission signal (DrawNext) are supplied to its internal circuits.
- each of the slave MGs 3 and the main MG 200 has a register for temporarily storing frame image data to be output.
- the slave CP 5 controls the operation of the entire GSB.
- the slave CP 5 has a demultiplexer (not shown) that splits the input data four ways, distributing the image data sequence of the moving image to be generated to each of the four GSMs 1.
- there are various distribution schemes depending on the application using the device. For example, the image to be finally displayed may be divided into four regions, each GSM handling one region; the image may be divided into four layers, each displayed superimposed on the final image; or the image data may be grouped four frames at a time, one frame to each GSM.
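The three distribution schemes can be made concrete with a small sketch. This is purely illustrative (the function names and the fixed fan-out of four are assumptions, not the patent's code):

```python
# Illustrative sketch of three ways a slave CP might distribute an image
# data sequence across four GSMs: by screen region, by layer, or by
# consecutive frames.

def split_by_region(width, height):
    # Divide the final screen into four quadrants (x0, y0, x1, y1),
    # one per GSM.
    w2, h2 = width // 2, height // 2
    return [(0, 0, w2, h2), (w2, 0, width, h2),
            (0, h2, w2, height), (w2, h2, width, height)]

def split_by_layer(layers):
    # Give each GSM one layer to draw; the layers are superimposed later.
    return [[layer] for layer in layers]

def split_by_frame(frames):
    # Hand out consecutive frames round-robin, one per GSM per group of four.
    return [frames[i::4] for i in range(4)]

print(split_by_region(640, 480))
print(split_by_frame(list(range(8))))
```

Which scheme is chosen determines how the slave MG must merge the four results back together: spatially, by blending, or by sequencing.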
- the slave NET 6 is a circuit for mutually transferring part or all of the image data sequence to other GSBs.
- the transfer of the image data sequence is mainly performed to balance the processing load between the GSBs in the image processing.
- the merge in the slave MG 3 is performed in synchronization with the absolute time axis that governs the operation of the entire GSB. That is, a plurality of frame image data input at timings synchronized with the absolute time axis are merged to generate one frame of image data.
- each GSB 100 is supplied with the image data sequence (from the main CP 400 via the slave CP 5) and with V-SYNC and the drawing permission signal (DrawNext) (from the main SYNC 300 via the slave SYNC 4).
- the GSM 1 having received the drawing permission signal (DrawNext) starts image processing on the image data sequence.
- the SYNC-GSM 2, the slave SYNC 4, and the main SYNC 300 each have a built-in data register and multiple counters. Each counter also has a built-in register to hold a count value, so that an interrupt occurs when the count reaches a specific value.
- the first counter is a counter for synchronizing the multiple GSMs 1. It counts up at the falling edge of the input synchronization signal (V-SYNC). Because the counter clock is asynchronous with V-SYNC, the count-up timing may shift between GSMs by one clock.
- the count value is reset by a reset signal from the main CP 400, which is connected to the asynchronous clear terminal of the counter module. Therefore, a fluctuation of one clock may occur between the GSMs when this clock is used as a reference.
- the second counter is an up-counter for more accurate time measurement between V-SYNCs.
- it is forced to zero each time the falling edge of V-SYNC is detected.
- the GSM 1 operates at the V-SYNC timing supplied by the SYNC-GSM 2, performs image processing upon receiving a drawing permission signal (DrawNext), and generates frame image data corresponding to the image data sequence.
- the individual image data forming the image data sequence are read out and supplied from the external storage device 410 connected to the main CP 400, and become frame image data through predetermined image processing.
- the frame image data allows an image to be displayed on the display device DP.
- after executing the processing assigned to it, the GSM 1 sends the processing result to the main MG 200 via the slave MG 3, and sends a drawing completion signal (DrawDone) to the main SYNC 300 via the SYNC-GSM 2 and the slave SYNC 4.
- FIG. 2 shows the functional configuration of the GSM 1 according to the present embodiment in detail.
- the GSM 1 has two buses, a main bus B1 and a sub-bus B2. These buses B1 and B2 are connected to or disconnected from each other via a bus interface INT.
- connected to the main bus B1 are a main CPU 10 including a microprocessor, a first vector processing unit (hereinafter "first VPU") 20, a second VPU 21, and a GIF (Graphics Synthesizer Interface) 30 that functions as an arbiter for the first VPU 20 and the second VPU 21; rendering processing means (hereinafter "GS") 31 is further connected via the GIF 30.
- the main CPU 10 reads a startup program from the ROM 17 on the sub-bus B2 via the bus interface INT and executes it to start the operating system. For 3D object data composed of multiple basic figures (polygons), it performs geometry processing on the coordinates of the polygon vertices (representative points) in cooperation with the first VPU 20.
- the main CPU 10 is provided with a high-speed memory called SPR (Scratch Pad RAM) for temporarily storing the results of cooperative processing with the first VPU 20.
- the first VPU 20 has a plurality of arithmetic elements for calculating floating-point real numbers and performs floating-point arithmetic in parallel on these elements. The main CPU 10 and the first VPU 20 perform the part of the geometry processing that requires fine-grained operation in polygon units, generating a display list containing polygon definition information such as the vertex coordinate sequences and shading mode information obtained by this processing.
- the polygon definition information includes drawing area setting information and polygon information.
- the drawing area setting information includes the offset coordinates in the frame buffer address of the drawing area and the coordinates of the drawing clipping area for canceling the drawing when there is a polygon coordinate outside the drawing area.
- the polygon information is composed of polygon attribute information and vertex information.
- the polygon attribute information specifies the shading mode, the α-blending mode, the texture mapping mode, and the like, and the vertex information includes the coordinates in the vertex drawing area, the coordinates in the vertex texture area, the vertex colors, and the like.
- like the first VPU 20, the second VPU 21 has a plurality of arithmetic elements for calculating floating-point real numbers and performs floating-point arithmetic in parallel on these elements, generating a display list containing the operation results.
- the first VPU 20 and the second VPU 21 have the same configuration, but each function as a geometry engine sharing arithmetic processing of different contents.
- the first VPU 20 is assigned processing that requires complex behavior calculations, such as character movements (atypical geometry processing),
- while the second VPU 21 is assigned objects that are simple but require a large number of polygons, for example background buildings (standard geometry processing).
- the first VPU 20 performs macro arithmetic processing in synchronization with the video rate, while the second VPU 21 can operate in synchronization with the GS 31.
- the second VPU 21 has a direct path directly connected to the GS 31.
- the first VPU 20 is tightly coupled with the microprocessor in the main CPU 10 to facilitate programming of complex processing.
- the display list generated by the first VPU 20 and the second VPU 21 is transferred to GS31 via GIF30.
- the GIF 30 is an arbiter that avoids collisions when the display lists generated by the first VPU 20 and the second VPU 21 are transferred to the GS 31. In the present embodiment, the GIF 30 is additionally given the function of examining these display lists in descending order of priority and transferring them to the GS 31 starting from the highest priority.
- the information indicating the priority of a display list is usually written into its tag area when the VPUs 20 and 21 generate the display list, but it may also be determined independently by the GIF 30.
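The priority-ordered transfer described above amounts to a priority queue in front of the GS. A minimal sketch, assuming integer priorities taken from the tag area (the class name and FIFO tie-breaking within one priority are assumptions made here, not details from the patent):

```python
import heapq

# Hedged sketch of the GIF's added function: hold pending display lists and
# forward them to the GS in descending order of priority.

class DisplayListArbiter:
    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker keeps FIFO order within one priority

    def submit(self, priority, display_list):
        # heapq is a min-heap, so negate the priority to pop highest first.
        heapq.heappush(self._queue, (-priority, self._seq, display_list))
        self._seq += 1

    def transfer_next(self):
        # Transfer the highest-priority pending display list to the GS.
        return heapq.heappop(self._queue)[2] if self._queue else None

gif = DisplayListArbiter()
gif.submit(1, "background polygons")   # e.g. from the second VPU
gif.submit(5, "character polygons")    # e.g. from the first VPU
print(gif.transfer_next())             # highest priority first
```

This also shows why the GIF can override the tag-area priority: the ordering is decided at `submit` time, wherever the priority value comes from.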
- the GS 31 holds drawing contexts; it reads out the corresponding drawing context based on the drawing context identification information included in a display list notified from the GIF 30, uses it to perform rendering, and draws polygons into the frame memory 32. Since the frame memory 32 can also be used as a texture memory, a pixel image in the frame buffer can be pasted as a texture onto a polygon to be drawn.
- the main DMAC 12 performs DMA transfer control for each circuit connected to the main bus B1 and, depending on the state of the bus interface INT, also performs DMA transfer control for each circuit connected to the sub-bus B2.
- the MDEC 13 operates in parallel with the main CPU 10 and decompresses data compressed in formats such as MPEG (Moving Picture Experts Group) or JPEG (Joint Photographic Experts Group).
- connected to the sub-bus B2 are a sub CPU 14 including a microprocessor, a sub memory 15 including a RAM, a sub DMAC 16, a ROM 17 storing programs such as the operating system, a sound processing unit (SPU) 40 that reads sound data stored in the sound memory 41 and outputs it as an audio output, a communication control unit (ATM) 50 for transmitting and receiving data, and an input unit 70.
- the SYNC-GSM 2 is also connected to this sub-bus B2, and the slave NET 6 is connected to the ATM 50.
- the input section 70 has a video input circuit 73 for externally inputting image data and an audio input circuit 74 for externally inputting audio data.
- an image data string is input from the slave CP 5 (distributed from the master CP 400) via the video input circuit 73.
- the sub CPU 14 performs various operations according to the program stored in the ROM 17.
- the sub DMAC 16 performs DMA transfer control and the like for each circuit connected to the sub-bus B2 only while the bus interface INT separates the main bus B1 from the sub-bus B2.
- FIG. 3 is an explanatory diagram of the signals exchanged between the GSM 1 and its subsequent processing devices, the main SYNC 300 and the main MG 200.
- the main SYNC 300 refers to a display order table TB in which the IDs of the GSMs 1 to which the drawing permission signal (DrawNext) should be sent, and the IDs of the GSMs 1 from which the processing result and the drawing completion signal (DrawDone) are to be received, are stored in a predetermined order.
- the display order table TB may be provided in the external storage device 410 on the main CP 400 side, in the data register in the main MG 200, or in the data register in the main SYNC 300; in short, it is provided in an area that the main SYNC 300 can reference.
- for storing and reading out frame image data as the image processing result, each GSM 1 uses either the "single-buffer method", realized with one frame memory 32, or the "double-buffer method", realized by switching between two frame memories 32.
- Fig. 4 shows an example of the contents of the display order table TB.
- Fig. 4 (a) shows an example of the single buffer system and (b) shows an example of the double buffer system.
- each application is given an individual number; when an application is specified, the contents of the display order table TB corresponding to that application are identified.
- GSM 1-0 to GSM 1-3 are the four GSMs provided in the first GSB, GSM 2-0 to GSM 2-3 are the four GSMs provided in the second GSB, GSM 3-0 to GSM 3-3 are the four GSMs provided in the third GSB, and GSM 4-0 to GSM 4-3 are the four GSMs provided in the fourth GSB.
- each group is defined to be handled together at one V-SYNC timing.
- the main SYNC 300 points the display order table TB by two indexes of “display start” and “display end”.
- "display start" points at the GSM whose processing result is to be displayed on the display device DP once its drawing is complete (a drawing completion signal (DrawDone) has been received); "display end" points at the GSM that may be issued a drawing permission signal for the next frame once the display period for one frame on the display device DP has ended.
- in the single-buffer method, display starts only after the previous display has ended.
- in the double-buffer method, display end and display start occur at the same time. Therefore, as shown in FIGS. 4(a) and 4(b), the display timing in the single-buffer method is delayed by one V-SYNC compared with the double-buffer method.
- it is assumed that the application is stored in the external storage device 410 and that the image data sequence can be supplied to each GSM 1 through the main CP 400 and the slave CP 5 of each GSB 100. When the application is started by the main CP 400 and a processing request is issued by the application, the main CP 400 issues a drawing instruction to the main SYNC 300 via the main MG 200. The main SYNC 300 then sends a drawing permission signal (DrawNext) to the corresponding GSMs 1 in the order stored in the display order table TB for that application.
- GSM 1 performs image processing as follows.
- in the single-buffer method, processing follows the procedure shown in FIG. 5(a). Upon receiving the drawing permission signal (DrawNext), the GSM performs the drawing processing assigned to it, that is, it updates the contents of the frame buffer (step S101). After the drawing processing, it outputs a drawing completion signal (DrawDone) (step S102). While the drawn image is being displayed on the display device DP, it waits to receive a drawing permission signal (DrawNext) (steps S103 and S104); in other words, the period from the output of DrawDone to the reception of DrawNext is the image display period (at least one V-SYNC long). If there is no more image data to draw, the processing ends (step S105: Yes); if there is, the processing from step S101 is repeated (step S105: No).
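The single-buffer loop of FIG. 5(a) can be simulated in a few lines. The signal names follow the text; the function signature and the callbacks are illustrative assumptions:

```python
# Minimal simulation of the single-buffer procedure of FIG. 5(a).

def single_buffer_gsm(frames, send, wait_for_draw_next):
    for frame in frames:
        # S101: on DrawNext, perform the assigned drawing into the single
        # frame buffer (represented here by rendering the frame value).
        rendered = f"drawn:{frame}"
        send(("DrawDone", rendered))        # S102: report drawing completion
        # S103/S104: the image stays on screen until the next DrawNext;
        # this wait is the display period (at least one V-SYNC long).
        wait_for_draw_next()
    # S105: no more image data, so the loop ends.

log = []
single_buffer_gsm([0, 1], log.append, lambda: log.append("wait DrawNext"))
print(log)
```

The simulation makes the single-buffer cost visible: drawing and displaying share the one buffer, so the GSM cannot begin the next frame during the display wait.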
- in the double-buffer method, the GSM 1 performs processing according to the procedure shown in FIG. 5(b). As in the single-buffer method, it performs the drawing processing assigned to it upon receiving the drawing permission signal (DrawNext).
- the drawing processing is performed into the frame buffer (FIG. 2: frame memory 32) that was switched to from the frame buffer used for drawing in the previous V-SYNC period (steps S204, S201).
- a drawing completion signal (DrawDone) is then output (step S202), and the GSM waits to receive a drawing permission signal (DrawNext) (step S203).
- the frame buffers are then switched for the next drawing process (step S204). If there is no more image data to draw, the processing ends (step S205: Yes); if there is, the processing from step S201 is repeated (step S205: No).
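The buffer alternation of FIG. 5(b) can likewise be simulated; the trace records which of the two buffers each frame was drawn into (the function and variable names are illustrative):

```python
# Minimal simulation of the double-buffer procedure of FIG. 5(b): drawing
# alternates between two frame buffers, so the next frame can be drawn
# while the previous one is still being displayed.

def double_buffer_gsm(frames):
    buffers = [None, None]   # the two frame memories 32
    current = 0              # index of the buffer being drawn into
    trace = []
    for frame in frames:
        buffers[current] = f"drawn:{frame}"        # S201: draw into current buffer
        trace.append((current, buffers[current]))  # S202: DrawDone; buffer displayed
        current = 1 - current                      # S204: switch for next drawing
    return trace                                   # S205: no more data, end

print(double_buffer_gsm(["a", "b", "c"]))
```

The alternating buffer index in the trace is what lets display end and display start coincide in the double-buffer timing described above.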
- the main MG 200 stores the accumulated frame image data.
- the main MG 200 performs processing in accordance with the procedure of FIG. 6 in cooperation with the main SYNC 300 and the main CP 400.
- first, it is confirmed that drawing has been completed by all the GSMs on the entry pointed to by "display start" (step S301). If completion is confirmed, the processing result (the frame image data whose drawing is complete) is output to the display device DP (step S302: Yes, S303). If drawing has not completed, an abnormality of some kind is assumed and the processing is terminated (step S302: No).
- next, the drawing permission signal (DrawNext) is output from the main SYNC 300 (step S304), and the "display start" and "display end" indexes are each advanced by one (step S305). If the last entry has been reached, the index returns to the first entry (step S306: Yes, S307). If it is not the last entry, or if there is further data after returning to the first entry, the processing from step S301 is repeated (step S308: Yes); if there is no further data, the processing ends (step S308: No).
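The FIG. 6 loop is essentially a ring walk over the display order table with two indexes. A hedged sketch (the callbacks, the single-pass termination condition, and the entry format are simplifying assumptions; the real table distinguishes "display start" and "display end" entries and timing):

```python
# Sketch of the FIG. 6 procedure: walk the display order table, outputting
# each completed frame and permitting the next drawing, wrapping the
# indexes at the last entry.

def run_display_loop(table, drawing_done, display, send_draw_next):
    start = end = 0
    while True:
        entry = table[start]
        # S301/S302: confirm all GSMs on the "display start" entry are done;
        # anything else is treated as an abnormality and ends the loop.
        if not all(drawing_done(gsm) for gsm in entry):
            return False
        display(entry)                 # S303: output the frame image data
        send_draw_next(table[end])     # S304: permit drawing of a next frame
        # S305-S307: advance both indexes, wrapping at the last entry.
        start = (start + 1) % len(table)
        end = (end + 1) % len(table)
        if start == 0:                 # S308: stop when there is no next data
            return True                #       (a single pass in this sketch)

shown = []
ok = run_display_loop([["GSM1-0"], ["GSM1-1"]],
                      drawing_done=lambda g: True,
                      display=shown.append,
                      send_draw_next=lambda e: None)
print(ok, shown)
```

Keeping both indexes in one loop is what enforces the invariant from FIG. 4: in the double-buffer case they coincide, while a single-buffer table simply places the "display end" entry one slot behind.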
- the drawing results of the GSMs 1 may be integrated and displayed simultaneously on one screen of the display device DP, or the drawing result of each GSM 1 may be displayed on the screen sequentially.
- FIGS. 7 to 9 show examples of displaying on a single screen at the same time
- FIG. 10 shows an example of displaying on a screen sequentially.
- Fig. 7 shows an example in which the processing results of four GSM la to lb are synthesized on the display device DP, and each GSM displays a different effective area on one screen. have.
- the effective area is identified by the value of the frame image data, and the main MG 200 implements the area composition by α-blending each screen and outputs the result integrated as one screen.
- FIG. 8 shows an example of realizing scene anti-aliasing from the processing results of four GSMs 1a to 1d.
- GSMs 1a to 1d each hold the same image, shifted on a sub-pixel basis. By blending these images for each screen, averaging is performed, and scene anti-aliasing is realized.
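The averaging step can be modeled minimally as a per-pixel mean over the sub-pixel-shifted renderings; plain grayscale value grids stand in here for the frame image data, purely for illustration.

```python
def average_images(images):
    """Per-pixel average of equally sized grayscale images.

    A minimal model of scene anti-aliasing: each input is the same scene
    rendered with a slightly different sub-pixel offset, and averaging the
    samples smooths jagged edges.
    """
    n = len(images)
    h, w = len(images[0]), len(images[0][0])
    return [[sum(img[y][x] for img in images) / n for x in range(w)]
            for y in range(h)]
```

With four offset renderings, each output pixel is effectively supersampled four times, which is the averaging the text describes.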
- FIG. 9 shows an example of layer composition of the processing results of four GSMs 1a to 1d.
- the images of GSMs 1a and 1b are combined as layers having a fixed linear order, and the values are combined in the layer order using the threshold value.
- the order of the layers can be selected depending on the registered settings.
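One way to model this layer composition is back-to-front overwriting keyed on a threshold. The pixel representation, the exact role of the threshold, and the function name are simplifying assumptions for this sketch, not details from the patent.

```python
def compose_layers(layers, threshold=0):
    """Compose per-GSM images back-to-front in a fixed layer order.

    Pixels whose value does not exceed `threshold` are treated as
    transparent, so an upper layer only overwrites the result where it
    actually has content.
    """
    height, width = len(layers[0]), len(layers[0][0])
    result = [[0] * width for _ in range(height)]
    for layer in layers:                     # back-to-front in layer order
        for y in range(height):
            for x in range(width):
                if layer[y][x] > threshold:  # opaque pixel wins
                    result[y][x] = layer[y][x]
    return result
```

Reordering the `layers` list corresponds to selecting a different registered layer order.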
- FIG. 10 shows an example in which flip animation is performed based on the processing results of four GSMs 1a to 1d.
- the processing results of GSMs 1a to 1d are displayed sequentially, frame by frame, as a flip animation.
- each of the above display modes can be realized very easily by defining the display order and the like in the display order table TB.
- that is, the display order table TB is provided, and the drawing permission signal (DrawNext) is transmitted to the corresponding GSM 1 in the order specified in the display order table TB. Since the processing result is output to the display device DP upon receipt of the drawing end signal (DrawDone) from a GSM 1, drawing processing can be performed consistently even if the number of linked GSMs 1 increases.
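The table-driven handshake above can be sketched as follows. `StubGSM` and the `draw_next`/`wait_draw_done` method names are hypothetical stand-ins for the DrawNext/DrawDone exchange; only the ordering discipline comes from the text.

```python
class StubGSM:
    """Hypothetical drawing device reacting to DrawNext/DrawDone."""

    def __init__(self, name):
        self.name, self.permitted = name, 0

    def draw_next(self):
        self.permitted += 1              # drawing permission granted

    def wait_draw_done(self):
        return f"frame-{self.name}"      # result arrives with DrawDone


def run_display_order(table, gsms, display):
    """Drive GSMs in the order given by a display order table TB.

    For each table entry the controller grants DrawNext, waits for
    DrawDone plus the processing result, and forwards the result to the
    display. Linking more GSMs only means extending the table.
    """
    for gsm_id in table:
        gsm = gsms[gsm_id]
        gsm.draw_next()                  # permit drawing
        display(gsm.wait_draw_done())    # output on DrawDone
```

Because the loop consults only the table, the flip-animation mode of FIG. 10 and the simultaneous modes differ only in what the table contains.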
- the data processing technique in the case of performing image processing has been described.
- the data processing technique of the present invention can be applied to other types of information processing, for example, sound generation. This makes it possible to generate high-definition, high-quality sound, such as the performance of an orchestra.
- data for sound generation is also processed individually by each GSM 1.
- a form in which image processing and sound generation are linked to perform composite processing is also conceivable; as shown in FIG. 2, the GSM 1 of the present embodiment can perform such processing.
- the sound data becomes a signal for outputting sound from a predetermined speaker, and is output in synchronization with the above-mentioned frame image data by the above-mentioned sub MG 3 and main MG 200.
- the input of audio data to each GSM 1 is performed from the audio input circuit 74 in FIG. 2, and the output of audio data is performed from the SPU 40.
- so far, an example of a data processing system incorporated in an integrated image processing apparatus comprising a plurality of image processing apparatuses that cooperate with each other has been described; however, the present invention can also be implemented as a network-based data processing system.
- that is, a plurality of information processing terminals, each installed in a completely different place, are connected via a computer network such as the Internet and operate as the processing devices, arbitration means, and control means according to the present invention.
- these information processing terminals can realize this by mutually transmitting and receiving various signals, for example, the above-described drawing permission signal (DrawNext) and drawing end signal (DrawDone), through the computer network.
- some of the plurality of information processing terminals in this case operate as the GSB 100 described in the first embodiment. Other information processing terminals share and provide: the function of the main MG 200, which integrates the output data of the terminals operating as each GSB 100; the function of the main SYNC 300, which supplies the synchronization signal (V-SYNC) and other operation data to each GSB 100; the function of the main CP 400, which comprehensively controls image processing and communication procedures; and the function of the main NET 500, which links all the GSBs 100.
- in addition, a display device is connected to the output side of the information processing terminal that operates as the main MG 200. The timing of issuing various data from the main SYNC 300 to each GSB 100 is controlled by the main MG 200. Further, the information processing terminal operating as the main CP 400 is connected to the information processing terminal operating as the main MG 200, an external storage device, and the information processing terminal operating as the main NET 500.
- it is also possible to implement the present invention as a data processing system that controls, through a computer network, a plurality of processing devices (for example, the above-described GSMs 1 in the case of image processing) that perform joint processing.
- Such a data processing system includes, for example, a server connectable to a computer network, and an external recording device accessible by the server.
- the server (more precisely, the CPU mounted on it) reads and executes a computer program recorded on the external recording device or on a portable recording medium such as a CD-ROM, thereby forming the function of a main control unit.
- the main control unit has the following three function modules.
- the first functional module has a function of storing, in the external recording device for each application, a processing table recording, in a predetermined order, the identification information of the one or more processing devices to which an execution permission signal (for example, the above-mentioned drawing permission signal (DrawNext)) is to be transmitted and from which a processing result and an execution end signal (for example, the above-mentioned drawing end signal (DrawDone)) are to be received.
- the second functional module has a function of specifying the processing table for an application in response to receiving a processing request from that application.
- the third functional module has a function of transmitting the execution permission signal (DrawNext) to the corresponding processing devices in the order recorded in the processing table specified by the second functional module, and of receiving the processing result and the execution end signal (DrawDone) from those processing devices.
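The three functional modules just described can be sketched as a single server-side class. The storage representation (a dict standing in for the external recording device), the stub device, and all method names are hypothetical; only the register/specify/execute division and the DrawNext/DrawDone ordering come from the text.

```python
class StubDevice:
    """Hypothetical networked processing device."""

    def __init__(self, result):
        self.result, self.inbox = result, []

    def send(self, signal):
        self.inbox.append(signal)        # e.g. the DrawNext permission

    def receive(self):
        return (self.result, "DrawDone")  # processing result + end signal


class MainControlUnit:
    """Sketch of the server's main control unit and its three modules."""

    def __init__(self, store):
        self.store = store               # stands in for the external recording device

    # first module: keep a per-application processing table
    def register(self, app, processing_table):
        self.store[app] = processing_table   # ordered device identifiers

    # second module: specify the table when an app requests processing
    def specify(self, app):
        return self.store[app]

    # third module: drive the devices in the table's order
    def execute(self, app, devices):
        results = []
        for dev_id in self.specify(app):
            dev = devices[dev_id]
            dev.send("DrawNext")             # execution permission signal
            results.append(dev.receive())    # result + execution end signal
        return results
```

The point of the split is that the ordering policy lives entirely in the stored table, so the third module stays identical as applications and device counts change.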
Description
Claims
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP01974746.8A EP1326204B1 (en) | 2000-10-10 | 2001-10-09 | Data processing system and method, computer program, and recorded medium |
BR0107325-7A BR0107325A (pt) | 2000-10-10 | 2001-10-09 | Sistema e método de processamento de dados, programa de computador, e mìdia de gravação |
AU94208/01A AU9420801A (en) | 2000-10-10 | 2001-10-09 | Data processing system and method, computer program, and recorded medium |
CA002392541A CA2392541A1 (en) | 2000-10-10 | 2001-10-09 | Data processing system and method, computer program, and recorded medium |
MXPA02005310A MXPA02005310A (es) | 2000-10-10 | 2001-10-09 | Sistema de procesamiento de datos, programa de computadora y medio de registro. |
KR1020027007390A KR20020064928A (ko) | 2000-10-10 | 2001-10-09 | 데이터 처리 시스템과 방법, 컴퓨터 프로그램 및 기록 매체 |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2000-309787 | 2000-10-10 | ||
JP2000309787 | 2000-10-10 | ||
JP2001306962A JP3688618B2 (ja) | 2000-10-10 | 2001-10-02 | データ処理システム及びデータ処理方法、コンピュータプログラム、記録媒体 |
JP2001-306962 | 2001-10-02 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2002031769A1 true WO2002031769A1 (fr) | 2002-04-18 |
Family
ID=26601818
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2001/008862 WO2002031769A1 (fr) | 2000-10-10 | 2001-10-09 | Procede et systeme de traitement de donnees, programme informatique, et support enregistre |
Country Status (11)
Country | Link |
---|---|
US (1) | US7212211B2 (ja) |
EP (1) | EP1326204B1 (ja) |
JP (1) | JP3688618B2 (ja) |
KR (1) | KR20020064928A (ja) |
CN (1) | CN1236401C (ja) |
AU (1) | AU9420801A (ja) |
BR (1) | BR0107325A (ja) |
CA (1) | CA2392541A1 (ja) |
MX (1) | MXPA02005310A (ja) |
TW (1) | TWI244048B (ja) |
WO (1) | WO2002031769A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007043130A1 (ja) * | 2005-10-03 | 2007-04-19 | Fujitsu Limited | 描画装置、半導体集積回路装置及び描画方法 |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7395538B1 (en) * | 2003-03-07 | 2008-07-01 | Juniper Networks, Inc. | Scalable packet processing systems and methods |
US20080211816A1 (en) * | 2003-07-15 | 2008-09-04 | Alienware Labs. Corp. | Multiple parallel processor computer graphics system |
US7996585B2 (en) * | 2005-09-09 | 2011-08-09 | International Business Machines Corporation | Method and system for state tracking and recovery in multiprocessing computing systems |
JP2007233450A (ja) * | 2006-02-27 | 2007-09-13 | Mitsubishi Electric Corp | 画像合成装置 |
US8166165B1 (en) | 2007-03-13 | 2012-04-24 | Adobe Systems Incorporated | Securing event flow in a user interface hierarchy |
US8984446B1 (en) * | 2007-03-13 | 2015-03-17 | Adobe Systems Incorporated | Sharing display spaces |
CN101369345B (zh) * | 2008-09-08 | 2011-01-05 | 北京航空航天大学 | 一种基于绘制状态的多属性对象绘制顺序优化方法 |
US8601192B2 (en) * | 2009-06-08 | 2013-12-03 | Panasonic Corporation | Arbitration device, arbitration system, arbitration method, semiconductor integrated circuit, and image processing device |
CN108711179B (zh) * | 2018-05-21 | 2022-07-19 | 杭州多技教育科技有限公司 | 绘图还原方法和系统 |
CN108898985B (zh) * | 2018-08-01 | 2021-09-28 | 大连海事大学 | 一种主从式光纤视频播放系统 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS62118479A (ja) * | 1985-11-19 | 1987-05-29 | Sony Corp | 情報処理システム |
JPH04131960A (ja) * | 1990-09-25 | 1992-05-06 | Hitachi Ltd | 計算機の運転方法及び計算機システム |
US5237686A (en) * | 1989-05-10 | 1993-08-17 | Mitsubishi Denki Kabushiki Kaisha | Multiprocessor type time varying image encoding system and image processor with memory bus control table for arbitration priority |
JPH07248750A (ja) * | 1994-03-11 | 1995-09-26 | Hitachi Ltd | マルチ画面表示システム |
JPH08138060A (ja) * | 1994-11-04 | 1996-05-31 | Hitachi Ltd | 並列プロセッサを用いる表示処理装置 |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4344134A (en) * | 1980-06-30 | 1982-08-10 | Burroughs Corporation | Partitionable parallel processor |
JPS63148372A (ja) * | 1986-12-12 | 1988-06-21 | Agency Of Ind Science & Technol | プログラム変換装置 |
US5010515A (en) * | 1987-07-28 | 1991-04-23 | Raster Technologies, Inc. | Parallel graphics processor with workload distributing and dependency mechanisms and method for distributing workload |
JP2836902B2 (ja) * | 1989-05-10 | 1998-12-14 | 三菱電機株式会社 | マルチプロセッサ型動画像符号化装置及びバス制御方法 |
JPH05258027A (ja) * | 1992-03-12 | 1993-10-08 | Toshiba Corp | 画像処理装置 |
JPH05274400A (ja) * | 1992-03-25 | 1993-10-22 | Toshiba Corp | 画像処理装置 |
JPH06214555A (ja) * | 1993-01-20 | 1994-08-05 | Sumitomo Electric Ind Ltd | 画像処理装置 |
US5493643A (en) * | 1994-05-03 | 1996-02-20 | Loral Aerospace Corp. | Image generator architecture employing tri-level fixed interleave processing and distribution buses |
GB2302743B (en) * | 1995-06-26 | 2000-02-16 | Sony Uk Ltd | Processing apparatus |
US5768594A (en) * | 1995-07-14 | 1998-06-16 | Lucent Technologies Inc. | Methods and means for scheduling parallel processors |
US5821950A (en) * | 1996-04-18 | 1998-10-13 | Hewlett-Packard Company | Computer graphics system utilizing parallel processing for enhanced performance |
JPH11338606A (ja) * | 1998-05-21 | 1999-12-10 | Dainippon Printing Co Ltd | 仮想空間共有システム |
US6329996B1 (en) * | 1999-01-08 | 2001-12-11 | Silicon Graphics, Inc. | Method and apparatus for synchronizing graphics pipelines |
JP2000222590A (ja) * | 1999-01-27 | 2000-08-11 | Nec Corp | 画像処理方法及び装置 |
US6753878B1 (en) * | 1999-03-08 | 2004-06-22 | Hewlett-Packard Development Company, L.P. | Parallel pipelined merge engines |
US6384833B1 (en) * | 1999-08-10 | 2002-05-07 | International Business Machines Corporation | Method and parallelizing geometric processing in a graphics rendering pipeline |
JP3325253B2 (ja) * | 2000-03-23 | 2002-09-17 | コナミ株式会社 | 画像処理装置、画像処理方法、記録媒体及びプログラム |
2001
- 2001-10-02 JP JP2001306962A patent/JP3688618B2/ja not_active Expired - Fee Related
- 2001-10-09 CN CNB018030750A patent/CN1236401C/zh not_active Expired - Fee Related
- 2001-10-09 MX MXPA02005310A patent/MXPA02005310A/es unknown
- 2001-10-09 AU AU94208/01A patent/AU9420801A/en not_active Abandoned
- 2001-10-09 CA CA002392541A patent/CA2392541A1/en not_active Abandoned
- 2001-10-09 TW TW090124997A patent/TWI244048B/zh not_active IP Right Cessation
- 2001-10-09 EP EP01974746.8A patent/EP1326204B1/en not_active Expired - Lifetime
- 2001-10-09 WO PCT/JP2001/008862 patent/WO2002031769A1/ja active Application Filing
- 2001-10-09 KR KR1020027007390A patent/KR20020064928A/ko not_active Application Discontinuation
- 2001-10-09 BR BR0107325-7A patent/BR0107325A/pt not_active Application Discontinuation
- 2001-10-10 US US09/974,608 patent/US7212211B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS62118479A (ja) * | 1985-11-19 | 1987-05-29 | Sony Corp | 情報処理システム |
US5237686A (en) * | 1989-05-10 | 1993-08-17 | Mitsubishi Denki Kabushiki Kaisha | Multiprocessor type time varying image encoding system and image processor with memory bus control table for arbitration priority |
JPH04131960A (ja) * | 1990-09-25 | 1992-05-06 | Hitachi Ltd | 計算機の運転方法及び計算機システム |
JPH07248750A (ja) * | 1994-03-11 | 1995-09-26 | Hitachi Ltd | マルチ画面表示システム |
JPH08138060A (ja) * | 1994-11-04 | 1996-05-31 | Hitachi Ltd | 並列プロセッサを用いる表示処理装置 |
Non-Patent Citations (1)
Title |
---|
See also references of EP1326204A4 * |
Also Published As
Publication number | Publication date |
---|---|
MXPA02005310A (es) | 2002-12-11 |
EP1326204B1 (en) | 2018-08-08 |
TWI244048B (en) | 2005-11-21 |
JP3688618B2 (ja) | 2005-08-31 |
US7212211B2 (en) | 2007-05-01 |
KR20020064928A (ko) | 2002-08-10 |
CN1236401C (zh) | 2006-01-11 |
JP2002244646A (ja) | 2002-08-30 |
CA2392541A1 (en) | 2002-04-18 |
US20020052955A1 (en) | 2002-05-02 |
EP1326204A1 (en) | 2003-07-09 |
BR0107325A (pt) | 2002-08-27 |
AU9420801A (en) | 2002-04-22 |
CN1393000A (zh) | 2003-01-22 |
EP1326204A4 (en) | 2007-06-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP3580789B2 (ja) | データ通信システム及び方法、コンピュータプログラム、記録媒体 | |
TW487900B (en) | Image display system, host device, image display device and image display method | |
US7737982B2 (en) | Method and system for minimizing an amount of data needed to test data against subarea boundaries in spatially composited digital video | |
JP3681026B2 (ja) | 情報処理装置および方法 | |
JP4372043B2 (ja) | コマンド実行制御装置、コマンド実行指示装置およびコマンド実行制御方法 | |
US20020130870A1 (en) | Information processing system, integrated information processing system, method for calculating execution load, and computer program | |
JP2001243481A (ja) | 画像生成装置 | |
JP3688618B2 (ja) | データ処理システム及びデータ処理方法、コンピュータプログラム、記録媒体 | |
US20200167119A1 (en) | Managing display data | |
US8447035B2 (en) | Contract based memory management for isochronous streams | |
JP4011082B2 (ja) | 情報処理装置、グラフィックプロセッサ、制御用プロセッサおよび情報処理方法 | |
US7017065B2 (en) | System and method for processing information, and recording medium | |
JP2001255860A (ja) | 映像データ転送装置及び映像データの転送方法 | |
JP3468985B2 (ja) | グラフィック描画装置、グラフィック描画方法 | |
JPH1153528A (ja) | デジタル画像処理装置及び方法 | |
JPH0916807A (ja) | マルチスクリーン表示回路 | |
JPH06274155A (ja) | 画像の合成表示装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AU BR CA CN IN KR MX NZ RU SG |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BE CH DE DK ES FI FR GB IT NL SE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 94208/01 Country of ref document: AU |
|
WWE | Wipo information: entry into national phase |
Ref document number: IN/PCT/2002/00629/MU Country of ref document: IN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 519057 Country of ref document: NZ |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2392541 Country of ref document: CA Ref document number: 2001974746 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: PA/a/2002/005310 Country of ref document: MX |
|
ENP | Entry into the national phase |
Ref document number: 2002 2002115642 Country of ref document: RU Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020027007390 Country of ref document: KR Ref document number: 018030750 Country of ref document: CN |
|
WWP | Wipo information: published in national office |
Ref document number: 1020027007390 Country of ref document: KR |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWP | Wipo information: published in national office |
Ref document number: 2001974746 Country of ref document: EP |