US20100328354A1 - Networked Computer Graphics Rendering System with Multiple Displays


Info

Publication number
US20100328354A1
Authority
US
United States
Prior art keywords
server
client
view
display device
state information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/492,883
Inventor
Brian Watson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Interactive Entertainment Inc
Sony Network Entertainment Platform Inc
Original Assignee
Sony Computer Entertainment Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Computer Entertainment Inc filed Critical Sony Computer Entertainment Inc
Priority to US12/492,883
Assigned to SONY COMPUTER ENTERTAINMENT INC. (assignment of assignors interest; assignor: WATSON, BRIAN)
Priority to PCT/US2010/039371
Priority to JP2012517631A
Priority to EP10792556A
Publication of US20100328354A1
Assigned to SONY NETWORK ENTERTAINMENT PLATFORM INC. (change of name; assignor: SONY COMPUTER ENTERTAINMENT INC.)
Assigned to SONY COMPUTER ENTERTAINMENT INC. (assignment of assignors interest; assignor: SONY NETWORK ENTERTAINMENT PLATFORM INC.)
Assigned to SONY INTERACTIVE ENTERTAINMENT INC. (change of name; assignor: SONY COMPUTER ENTERTAINMENT INC.)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14: Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06F3/1423: controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F3/1438: controlling a plurality of local displays using more than one graphics controller
    • G06F3/1446: controlling a plurality of local displays composed of modules, e.g. video walls
    • G06F3/1454: involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2300/00: Aspects of the constitution of display devices
    • G09G2300/02: Composition of display devices
    • G09G2300/026: Video wall, i.e. juxtaposition of a plurality of screens to create a display screen of bigger dimensions

Definitions

  • the present invention relates generally to computer graphics rendering systems, and more particularly to graphics rendering systems with multiple displays.
  • Immersion can contribute to the realism of rendered scenes.
  • Immersion refers to moving the perspective of the graphics application user from that of an outsider looking into a scene to the perspective of being part of the scene.
  • Immersion is difficult to achieve in a resource-efficient manner within the hardware and processing limitations of the most popular computing platforms. For example, many first and third person computer games render a wide field of view (FOV) of about 60° for display on a display device. While increasing the FOV reduces a feeling of "tunnel vision," the wide FOV yields a "fish-eyed" view, sacrifices detail, and fails to achieve the immersion effect.
  • a graphics application may also require a platform with high computing power, for example with one or more multi-core central processing units (CPUs) and a plurality of graphics processing units (GPUs) downstream of the CPU.
  • the CPU typically generates a single graphical realm model which is then processed with each GPU performing rendering passes to separately drive one of a plurality of display devices connected to the platform.
  • This method is extremely hardware intensive, so the graphics platform market is small and, as such, few application titles are authored to take advantage of such a platform's capabilities.
  • FIG. 1 illustrates a schematic diagram of a multiple display device graphics rendering system, in accordance with an embodiment of the present invention
  • FIG. 2 illustrates a flow diagram of an exemplary method of rendering graphics with a plurality of computing platforms and display devices, in accordance with an embodiment of the present invention
  • FIG. 3 illustrates a plurality of camera viewing frustums each generated by a computing platform and each angularly offset for display by multiple display devices, in accordance with one embodiment of the present invention
  • FIG. 4 illustrates a plan view of a multiple display graphics rendering system, in accordance with an embodiment of the present invention
  • FIG. 5 shows a schematic diagram of a multi-system configuration, according to one embodiment of the present invention.
  • FIG. 6 illustrates hardware and user interfaces that may be used to generate and render an application camera view, in accordance with one embodiment of the present invention.
  • FIG. 7 illustrates additional hardware that may be used to process instructions, in accordance with one embodiment of the present invention.
  • Described herein is a system having multiple displays for rendering and displaying graphics and a method for rendering and displaying graphics on the system.
  • numerous specific details are set forth in order to provide a thorough understanding of embodiments of the invention. However, it will be understood by those skilled in the art that other embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components and circuits have not been described in detail so as not to obscure the present invention.
  • Some portions of the detailed description that follows are presented in terms of algorithms and symbolic representations of operations on data bits or binary digital signals within a computer memory. These algorithmic descriptions and representations may be the techniques used by those skilled in the data processing arts to convey the substance of their work to others skilled in the art.
  • the present invention also relates to an apparatus for performing the operations herein.
  • This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer.
  • the apparatus for performing the operations herein includes a game console (e.g., a Sony Playstation®, a Nintendo Wii®, a Microsoft Xbox®, etc.).
  • a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks (e.g., compact disc read only memory (CD-ROMs), digital video discs (DVDs), Blu-Ray Discs™, etc.), and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions.
  • "Connected" may be used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" may be used to indicate that two or more elements are in either direct or indirect (with other intervening elements between them) physical or electrical contact with each other, and/or that the two or more elements co-operate or interact with each other (e.g., as in a cause and effect relationship).
  • a multiple display graphics rendering system includes a plurality of computing platforms with each computing platform to generate and render graphics for display on a display device of the system.
  • FIG. 1 illustrates a perspective view of a multiple display graphics rendering system 100 , in accordance with one embodiment of the present invention where each computing platform is a game console.
  • the depicted game consoles may each be replaced by any conventional computer having a programmable processor.
  • the system 100 includes three game consoles, a view server console 101 and view client consoles 102 and 103 , although any number of consoles may be included in a system.
  • each computing platform in a multiple display graphics rendering system is connected to a single display device.
  • a one-to-one ratio of computing platforms to display devices is maintained regardless of the number of computing platforms included in a system.
  • Each display device in the system is to display graphics generated and rendered by the computing platform to which it is coupled.
  • a computing platform may be connected to more than one display device, whereby the computing platform would generate and render graphics for the two or more connected display devices.
  • the view server console 101 is coupled to a view server display device 111 by video data I/O link 151 .
  • the view server display device 111 is therefore to display graphical output, such as graphics objects 171 and 172 , generated by the view server console 101 .
  • the view server display device 111 may be of any display technology known in the art, such as, but not limited to, an LCD, a plasma display, a projection display, or a cathode ray tube (CRT).
  • the video data I/O link 151 may utilize any technology known in the art for transmitting video, such as, but not limited to, component video, S-video, composite video, or a High Definition Multimedia Interface (HDMI).
  • the view client consoles 102 and 103 are each similarly coupled through a video data I/O link 152 or 153 to a view client display device, 112 or 113 , respectively.
  • the view client display device 112 displays graphical output, such as graphics object 172 , generated by the view client console 102 while the view client display device 113 displays graphical output, such as graphics object 173 , generated by the view client console 103 .
  • the display devices in a multiple display graphics rendering system have a known physical position with respect to each other.
  • each of the view client display devices 112 - 113 is disposed on a table 109 with the display devices having a linear position and an orientation (i.e. angular position) relative to a reference coordinate system of the view server display device 111 .
  • view client display device 113 has an angular position rotated by θ about the Y-Axis of the reference coordinate system of the view server display device 111 so that the vectors N normal to the screens of the display devices 111 and 113 converge at a same reference center, C, with an angular position offset of approximately 45°.
  • View client display device 112 also has a linear position along the X-Axis of the reference coordinate system of the view server display device 111. Because the view client display device 112 is not also rotated about the Y-Axis, the vector N normal to its screen does not converge at the reference center C. Nonetheless, a ray from the reference center C may be made to the centerline of the view client display device 112 to define the position of the view client display device 112 relative to the view server display device 111 in terms of a physical angular position offset (i.e. viewing angle offset).
  • display devices may also be rotated about one or more of the X-Axis and Z-Axis of the reference coordinate system. For example, a view client display disposed above the view server display device 111 may be tilted down with an angular position rotated about the X-Axis of the reference coordinate system.
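  • As an illustration of this geometry, the short sketch below (an assumed example, not code from the patent) computes a display's physical angular position offset as the angle, about the Y-Axis, of the ray from the reference center C to the display's centerline:

```python
# Illustrative sketch only: derive a display's viewing-angle offset from its
# placement in the X-Z plane of the view server display's reference coordinate
# system. All names here are hypothetical.
import math

def viewing_angle_offset_deg(display_center_x, display_center_z,
                             reference_center=(0.0, 0.0)):
    """Angle about the Y-Axis between the reference Z-Axis and the ray from
    the reference center C to the centerline of the display."""
    dx = display_center_x - reference_center[0]
    dz = display_center_z - reference_center[1]
    return math.degrees(math.atan2(dx, dz))

# A display placed as far to the side (X) as it is in front (Z) of C has a
# physical angular position offset of approximately 45 degrees.
print(viewing_angle_offset_deg(1.0, 1.0))  # 45.0
```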
  • each of the computing platforms in a multiple display graphics rendering system is coupled together over a network of communication links.
  • the network may be any network known in the art capable of providing sufficiently high bandwidth to achieve good performance of the graphics rendering methods described elsewhere herein, which update a view state between about 20 and 100 times per second, for example.
  • the network is a local area network (LAN) 110 .
  • LAN 110 may be any conventional LAN common to the art, such as, but not limited to one of the Ethernet family of frame-based short haul computer networking technologies.
  • the only computing platforms which are physical nodes on the LAN 110 are those actively generating and rendering one of the multiple displays of the system 100.
  • the view server and the view clients all share a same local high speed network switch.
  • Such an embodiment is therefore exclusive of wide area networks (WAN) which include many physical nodes merely forwarding and not processing data packets specific to the multiple display graphics rendering system.
  • most contemporary WANs offer insufficient bandwidth (allowing perhaps fewer than 10 view state updates per second) to achieve good performance of the graphics rendering methods described herein.
  • At least one of the computing platforms in a multiple display graphics rendering system is coupled to a controller designed to be used by a system user to control the graphics displayed as a view on at least one of the multiple display devices.
  • only one computing platform in a multiple display graphics rendering system is coupled to a controller and that controller controls the graphics displayed as separate views on all of the display devices in the graphics rendering system.
  • view server console 101 is coupled to the controller 125 through the server controller data I/O link 126 .
  • the controller 125 may be any pointing device, such as, but not limited to, a conventional game controller having a joystick, and the server controller data I/O link 126 may be any conventional wired or wireless link.
  • FIG. 2 illustrates a flow diagram of an exemplary method 200 of rendering graphics with a plurality of computing platforms and display devices, such as provided in the multiple display graphics rendering system 100.
  • Method 200 begins with each of a first, second, and nth computing platform, connected by a communication link, launching an application configured to establish an intra-system server-client relationship.
  • intra-system refers to a view client-server relationship which spans only the computing platforms making up a multiple display graphics rendering system.
  • a first application is launched on the first computing platform.
  • the first application launched may be any type of graphics intensive computer application known in the art, such as, but not limited to, computer game applications, CAD applications, video editing applications, and the like.
  • the application launched is a first or third person 3D computer game application, such as, but not limited to, the third person 3D game SOCOM™ published by Sony Computer Entertainment, Inc.
  • a second application is launched on the second computing platform.
  • the second application launched may be of any type as that described for the first application.
  • the second application launched on the second computing platform is the same application as the first application launched on the first computing platform so that both platforms are separately executing a same application (e.g., SOCOM™).
  • the first computing platform is nominated as a view server and the second platform becomes a view client.
  • a similar nomination of one view server may be made in systems having three or more computing platforms, whereby all remaining platforms become client platforms.
  • the nomination of the first computing platform may be made by a configuration routine (performed either automatically or manually, requiring user input) that nominates based on one or more of a comparison of processing capability of the first and second computing platforms, a physical relationship between display devices coupled to the computing platforms, or a comparison of display device characteristics. For example, a configuration routine may determine which system display device has a larger display area or higher display resolution and nominate the computing platform coupled to that display device to be the view server.
  • a configuration setting may be set to indicate which display a user desires to be a center of a field of view (FOV).
  • a computing platform having a higher computing power, such as a Sony® Playstation 3® computing platform, may be nominated the view server over a Sony® Playstation 2® computing platform of the same system.
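  • A minimal sketch of such a configuration routine follows; the platform attributes and their weighting are assumptions for illustration only:

```python
# Hypothetical nomination routine: pick as view server the platform with the
# highest computing power, breaking ties by display area and then resolution.
def nominate_view_server(platforms):
    return max(platforms, key=lambda p: (p["compute_power"],
                                         p["display_area_in2"],
                                         p["display_pixels"]))

platforms = [
    {"name": "PS3 console", "compute_power": 3, "display_area_in2": 700,
     "display_pixels": 1920 * 1080},
    {"name": "PS2 console", "compute_power": 1, "display_area_in2": 400,
     "display_pixels": 720 * 480},
]
print(nominate_view_server(platforms)["name"])  # PS3 console -> view server
```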
  • the view server sets a view angular position offset (i.e. view orientation offset) to a default of zero.
  • With a view angular position offset of zero, the camera view rendered by the view server defaults to the camera view rendered by the application when run on a computing platform in the absence of an intra-system view client-server relationship (i.e., on a single computing platform).
  • the view client sets a view orientation offset to be non-zero.
  • a configuration setting associated with the view orientation offset is accessed from a storage location on either the view client or the view server.
  • the view orientation offset is view client-specific and is predetermined based on the physical orientation offset between the display device coupled to the view server and the display device coupled to a particular view client.
  • the view orientation offset for the view client console 103 is set to be approximately equal to the physical orientation offset of θ between the view client display device 113 and the view server display device 111.
  • the view orientation offset for the view client console 102 is similarly set to be approximately equal to the physical orientation offset between the view client display device 112 and the view server display device 111 .
  • the view orientation offset is predetermined based on a calibration routine run on either or both the view server and view client to deduce the physical relative orientation between display devices.
  • Method 200 proceeds to operation 225 where a FOV value for the server is reduced to less than 50° in response to being networked with another platform driving another display device.
  • the greater FOV provided across the plurality of display devices may thereby enable a reduction in the fish-eye effect on the view server display while the secondary (and tertiary, etc.) view client display devices placed proximate to the view server display device combine to provide a user with an immersive experience.
  • a FOV value of the view client may also be reduced to less than 50° and communicated to the view server.
  • each of the view server and view client has a FOV value set to between 20° and 50°, with an exemplary embodiment having both the view server and the view client FOV values set to approximately 25°.
  • the view server executes a geometry pipeline in operations 230 , 235 and 250 to display a server camera view at operation 255 on a view server display device.
  • each view client executes a geometry pipeline in operations 245 , 260 and 265 to display a client camera view at operation 270 on a view client display device.
  • execution of these geometry pipelines by the view server and view client(s) is coordinated based on internal view state information transmitted by the view server and on one or more of the view orientation offset and FOV value to collectively render and display graphics spanning a plurality of display devices at substantially the same time.
  • input vertex data associated with input received from a view control input device is converted into model space using any technique conventional in the art.
  • the view server generates a world space from a model space.
  • a “space” is a coordinate system for positioning objects relative to a frame of reference.
  • a model space is a frame of reference that defines a modeled object's vertices relative to an origin local to that model.
  • Each object rendered in a world scene such as graphics objects 171 , 172 and 173 , has a corresponding model with each model including a plurality of vertices in model space.
  • for example, for a model of a human form, the model space may define vertices relative to an origin located at the center of the human form.
  • a global origin is defined as a common point of reference for determining different locations in the scene, and all modeled object vertices are defined relative to that global origin.
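  • The sketch below illustrates this promotion of model-space vertices into world space with a per-object transform; it is a generic numpy example under assumed conventions, not code from the patent:

```python
# Generic illustration: transform model-space vertices (defined relative to a
# model-local origin) into world space (defined relative to the global origin).
import numpy as np

def model_to_world(vertices_model, world_transform):
    """vertices_model: (N, 3) array; world_transform: (4, 4) row-major matrix."""
    n = vertices_model.shape[0]
    homogeneous = np.hstack([vertices_model, np.ones((n, 1))])
    return (homogeneous @ world_transform.T)[:, :3]

# Place a human-form model, whose local origin is at its center, at world
# position (10, 0, 5); a vertex near its head moves with it.
T = np.eye(4)
T[:3, 3] = [10.0, 0.0, 5.0]
print(model_to_world(np.array([[0.0, 0.9, 0.0]]), T))  # [[10.   0.9  5. ]]
```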
  • FIG. 3 depicts an exemplary server viewing frustum 300 corresponding to the system 100 depicted in FIG. 1 .
  • the server viewing frustum 300 has a frame of reference in which the viewer is at the origin, looking in the direction of the positive Z-axis into the conical viewing volume clipped by a front plane 301 and the back plane 302 .
  • the outer boundaries of the server viewing frustum 300 converge at the server camera 305 .
  • the server viewing frustum 300 assumes a truncated pyramidal shape.
  • the server camera 305 defines the vantage point in world space from which a viewer observes the world scene. Accordingly, the world space coordinates are relocated and oriented to a world space position vantage point to provide the server viewing frustum 300 at the server camera 305 .
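  • For illustration, a textbook perspective projection of the kind that yields such a truncated-pyramid viewing volume may be written as follows (a standard formulation, assumed rather than taken from the patent):

```python
# Textbook right-handed perspective projection for a viewing frustum clipped
# by a front (near) plane and a back (far) plane; not the patent's own code.
import math
import numpy as np

def perspective_matrix(fov_deg, aspect, near, far):
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0,  0.0,                          0.0],
        [0.0,        f,    0.0,                          0.0],
        [0.0,        0.0,  (far + near) / (near - far),  2.0 * far * near / (near - far)],
        [0.0,        0.0, -1.0,                          0.0],
    ])

# A narrow 25 degree FOV frustum, as in the exemplary embodiment above.
P = perspective_matrix(25.0, 16.0 / 9.0, 0.1, 1000.0)
```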
  • the view server scans through its internal state of all world-space objects and transmits the state of each object in a manner that allows all of the view clients to accurately represent the same world scene.
  • the transport mechanism may be the user datagram protocol (UDP) or the transmission control protocol (TCP), depending on the choice of the title.
  • With UDP transport, additional state information is preserved by the view server console so that if an update packet has been lost, the information can be retransmitted with more up-to-date information.
  • Where TCP is used instead, additional latency variation may be expected.
  • TCP is reliable, so no application interaction is required to confirm data delivery. On a local network, such as LAN 110, it is unlikely that latency will be an issue, especially when the view server/client consoles share a local high speed network switch.
  • a view state indicative of a world space position and world space orientation of the server camera 305 is transmitted over the communication link to the view client(s).
  • the server camera view matrix is communicated to the view clients.
  • the view server multicasts game view state information between about 20 and 100 times/second.
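  • A hedged sketch of such a periodic multicast follows; the multicast group, port, packet layout, and update rate are all assumptions chosen for illustration:

```python
# Illustrative only: pack the server camera's world-space position and
# orientation into a small datagram and multicast it to the view clients on
# the LAN. Group address, port, and packet format are hypothetical.
import socket
import struct
import time

MCAST_GROUP, MCAST_PORT = "239.255.0.1", 5005  # hypothetical values
UPDATE_RATE_HZ = 60                            # within the 20-100/s range

def multicast_view_state(position, orientation_ypr):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    packet = struct.pack("!6f", *position, *orientation_ypr)
    sock.sendto(packet, (MCAST_GROUP, MCAST_PORT))
    sock.close()

# e.g., called once per update tick:
multicast_view_state((10.0, 1.7, 5.0), (45.0, 0.0, 0.0))
time.sleep(1.0 / UPDATE_RATE_HZ)  # pace updates at the configured rate
```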
  • Object animation information may further be communicated as part of the internal state data transmitted by the view server. Additional information may also be sent by the view server to synchronize locally-generated objects (e.g., objects to be generated by the view client consoles) such as particle effects. For a conventional online-game configuration, such locally-generated objects may be only roughly approximated as it is unnecessary for a remote player to see such effects completely synchronized.
  • a close synchronization of locally-generated objects is achieved based on view state information sent by the view server to advantageously provide the most desirable view to a system user.
  • a view client receives the transmitted state information and generates a substantially same world space as that generated by the view server.
  • the view client regenerates the world space to render a secondary view of the same world scene, replicating graphics objects generated by the view server in response to each view state update received.
  • this secondary view has a vantage point at the server camera 305 .
  • the server and client share a common camera as established by the view server and communicated to the view client.
  • the view client may then employ the view orientation offset determined at operation 220 to angularly offset or rotate the client viewing frustum from the world space orientation so that the client viewing frustum does not overlap the server viewing frustum. In this manner, each client-specific view orientation is applied to the received server state information to render a client view dependent on the server view.
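  • A minimal sketch of applying the client-specific offset follows (the matrix convention and function names are assumed): the client rotates the received server camera about the world Y-axis by its view orientation offset before building its own frustum:

```python
# Illustrative only: derive the client camera from the received server camera
# by rotating it about the Y-axis by the client's view orientation offset.
import math
import numpy as np

def offset_client_view(server_view_matrix, view_orientation_offset_deg):
    a = math.radians(view_orientation_offset_deg)
    rot_y = np.array([
        [ math.cos(a), 0.0, math.sin(a), 0.0],
        [ 0.0,         1.0, 0.0,         0.0],
        [-math.sin(a), 0.0, math.cos(a), 0.0],
        [ 0.0,         0.0, 0.0,         1.0],
    ])
    # Pre-multiplying rotates the viewing direction while keeping the shared
    # vantage point established by the view server.
    return rot_y @ server_view_matrix

client_view = offset_client_view(np.eye(4), 45.0)  # e.g., console 103's offset
```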
  • a client viewing frustum 310 is generated by a view client based on received server view state information and the view orientation offset specific to the client to have a view orientation offset 350 from the Z-AXIS central to the server viewing frustum 300 (as defined per the world space orientation).
  • the client viewing frustum 310 is offset to be adjacent and to the right of the server viewing frustum 300 .
  • the graphics object 172 is replicated by the view client and a portion of the graphics object 172 outside of the server viewing frustum 300 is rendered.
  • the portion of the graphics object 172 within the client viewing frustum 310 may be further dependent on the client FOV value from operation 226.
  • As further shown in FIG. 3, another view client, also receiving the broadcast view state information from the view server, generates a third client viewing frustum 320 having a view orientation offset 351 from the server viewing frustum 300.
  • the client viewing frustum 320 is offset to be adjacent and to the left of the server viewing frustum 300 to render the graphics object 173 (generated by the server as part of the world space and communicated to the client) that lies beyond the server viewing frustum 300.
  • the viewing frustums 300-320 each have a FOV less than 50°, for example approximately 25°.
  • the view orientation offset may be set to less than twice the FOV to provide a continuous FOV across the view server and view client display devices while accommodating physical spacing between adjacent display devices (caused by display device bezels, etc.).
  • the view orientation offset may be set to approximately 40° for a 25° FOV in each display.
  • the view orientation offset may be greater than or equal to twice the display FOV such that adjacent viewing frustums are spaced apart and the FOV across the view server and view client displays is discontinuous. Objects in world space may then be “hidden” from view in the region between display devices.
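  • This continuity rule can be restated in a few lines (a direct restatement of the text above, with no additional patent material):

```python
# Rule from the text: an offset below twice the per-display FOV keeps the
# combined FOV continuous across displays; an offset at or above twice the
# FOV spaces adjacent frustums apart, leaving a discontinuity between them.
def combined_fov_is_continuous(view_orientation_offset_deg, fov_deg):
    return view_orientation_offset_deg < 2.0 * fov_deg

print(combined_fov_is_continuous(40.0, 25.0))  # True: continuous
print(combined_fov_is_continuous(50.0, 25.0))  # False: objects may be "hidden"
```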
  • the view server and view client(s) each render viewing frustums of a common world space to form a mesh of view spaces of the same world scene.
  • multiple displays are coordinated with rendering performed by separate computing platforms.
  • the computing power of each platform may thereby be lessened.
  • any application title authored for stand-alone execution on a single computing platform having a single display, such as the view server console 101, may be readily scaled to support a plurality of display devices with little application-specific coding.
  • a supplier of the computing platform may provide a common library of functions providing multiple-display graphics rendering support which may then be incorporated by any developer of applications for the computing platform.
  • Titles may then merely be authored to have a greater or lesser number of graphics objects as a function of the FOV provided by the multiple display graphics rendering system.
  • an application title executing in a single platform or "stand-alone" mode may generate graphics objects with a bias for the 60° FOV from the camera.
  • the same application title executing in a multiple display system would increase the graphics object generation bias to encompass a greater FOV based on the cumulative FOV available as determined through the number of consoles on the network and FOV configuration settings for each console.
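  • As rough illustrative arithmetic (assumed, not from the patent), the cumulative FOV available to a title grows with the number of networked consoles and their per-console FOV settings:

```python
# Rough illustration: estimate the cumulative FOV from the number of consoles
# on the network and the per-console FOV configuration settings.
def cumulative_fov_deg(per_console_fov_deg, console_count):
    return per_console_fov_deg * console_count

print(cumulative_fov_deg(25.0, 1))  # 25.0: a single narrow display
print(cumulative_fov_deg(25.0, 3))  # 75.0: widen the object-generation bias
```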
  • server viewing frustum 300 may be rendered to give depth to a scene whereby object 171 may be made to appear smaller than object 172 based on the relative position of the objects in the server viewing frustum 300 , as known in the art. Additional matrix operations may then be performed to generate a server screen space with a frame of reference in which coordinates are related directly to screen X-Y locations in a frame buffer, to be displayed on a server display device at operation 255 .
  • the server viewing frustum 300 may be rendered into a server screen space including the graphics object 171 and a portion of the graphics object 172 within the server viewing frustum 300 , as depicted in FIG. 1 .
  • similar rendering activities may be performed by the view client(s) to display client screen spaces at substantially the same time as the server screen space is rendered so that the combined view across the displays of the system is most nearly in lock-step.
  • the client viewing frustum 310 may be rendered into a client screen space including a portion of the graphics object 172 within the client viewing frustum 310 , as determined by the FOV and orientation offset 350 at substantially the same time the viewing frustum 300 is rendered into a server screen space including the portion of the graphics object 172 within the server viewing frustum, as further depicted in FIG. 1 .
  • any changes to the input matrix received from a controller coupled to the view server trigger an update in the server view space which is then transmitted at operation 240.
  • a view client re-renders a corresponding client view space to reflect the input matrix change.
  • only the view server includes a view control input device, such as controller 125 .
  • a view client computing platform assuming a role of rendering a secondary view of the world scene as controlled through the view server need not include any controller. Because the view client systems are in a passive rendering mode, in certain embodiments intra-system network communication may be substantially unidirectional with the view server performing broadcasts to downstream view clients and view clients also not communicating to each other. Network bandwidth may therefore be substantially dedicated to the view server to allow high internal state refresh rates across all platforms in a system.
  • FIG. 4 illustrates a plan view of a multiple display graphics rendering system 400 , in accordance with an embodiment of the present invention.
  • the system 400 includes all of the components of system 100 with the view client display device 112 rotated about the Y-axis of FIG. 1 and four view client display devices 414, 415, 416 and 417 added to nearly surround a user chair 450.
  • Each of the view client displays is coupled to a view client console, 404 , 405 , 406 and 407 in substantially the same manner as described for system 100 .
  • each of the displays has a physical orientation with respect to the view server display device 111 .
  • view client display device 112 is positioned relative to the view server display device 111 by a first physical orientation offset 422 and view client display device 416 is positioned relative to the view server display device 111 by a second physical orientation offset 426.
  • each view client display device is positioned relative to the view server display device 111 by a fixed increment of approximately 40 to 50 degrees, such that the first physical orientation offset 422 is approximately 40 to 50 degrees and the second physical orientation offset 426 is approximately 80 to 100 degrees.
  • each computing platform may render a view space modifying a server's view state with a view orientation offset value based on the physical orientation angle to collectively provide a nearly 360° FOV of a same world scene. Any number of display devices may be combined to provide up to a 360° FOV in any dimension.
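  • As illustrative arithmetic only: displays placed at a fixed 45° increment around the user would tile a full 360° with eight displays:

```python
# Illustrative arithmetic: orientation offsets of displays placed at a fixed
# angular increment around the user, here 45 degrees, tile 360 degrees.
increment_deg = 45.0
offsets = [i * increment_deg for i in range(int(360 / increment_deg))]
print(offsets)       # [0.0, 45.0, 90.0, ..., 315.0]
print(len(offsets))  # 8 displays provide a nearly 360 degree FOV
```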
  • A multiple display rendering system, such as system 400, may further be communicatively coupled to one or more other multiple display rendering systems.
  • FIG. 5 shows a block diagram of two multiple display graphics rendering systems communicatively coupled together.
  • The multiple display graphics rendering system 515, including a view server console 501 coupled over a LAN 510 to view client consoles 502, with each console further coupled to one of the display devices 503, is coupled to another system 615 similarly configured with a view server console 601 coupled over a LAN 610 to view client consoles 602, each further coupled to one of the display devices 603.
  • the view server console 601 is further coupled to view server console 501 via a WAN 550 to allow communication between the server consoles.
  • an inter-system client-server relationship is established between the view server consoles 501 and 601 separate from the intra-system view client-server relationship established within systems 515 and 615 .
  • the inter-system client-server relationship may be established using conventional one-to-one or one-to-many communication techniques whereby the only computing platform of a multiple display rendering system (e.g., system 515 or system 615 ) that is visible or active on the inter-system network is the computing platform acting as the system view server (e.g., server consoles 501 and 601 ).
  • conventional online gaming techniques may be employed whereby object position and animation information may be periodically communicated, using UDP transport for example, between the view server consoles 501 and 601 to allow a player using the multiple display graphics rendering system 515 to play the same game as a player using the system 615.
  • the inter-system refresh rates between the system view server consoles 501 and 601 may be relatively lower than the intra-system refresh rates within each system 515 and 615 .
  • all consoles (e.g., view server consoles 501 and 601 and respective view client consoles 502 and 602) execute the same application to generate and render graphics for a same world space.
  • the display devices 503 display graphics rendered by the view server console 501 and view client consoles 502 from a first server camera at a first world position (e.g., first player position in a world space) with an orientation offset in the camera views displayed by each display device 503 .
  • the display devices 603 display graphics rendered by the view server console 601 and view client consoles 602 from a second server camera at a second world position (e.g., second player position in the world space) with an orientation offset in the camera views displayed by each display device 603 .
  • FIG. 6 illustrates hardware and user interfaces that may be used to generate and render an application camera view as a component of a multiple display graphics rendering system, in accordance with one embodiment of the present invention.
  • FIG. 6 schematically illustrates the overall system architecture of the Sony® Playstation 3® entertainment device, a console that may be compatible for implementing a multiple display graphics rendering system.
  • a platform unit 1400 is provided, with various peripheral devices connectable to the platform unit 1400 .
  • the platform unit 1400 comprises: a Cell processor 1428; a Rambus® dynamic random access memory (XDRAM) unit 1426; a Reality Synthesizer graphics unit 1430 with a dedicated video random access memory (VRAM) unit 1432; and an I/O bridge 1434.
  • the platform unit 1400 also comprises a Blu Ray® Disk BD-ROM® optical disk reader 1440 for reading from a disk 1440 A and a removable slot-in hard disk drive (HDD) 1436 , accessible through the I/O bridge 1434 .
  • the platform unit 1400 also comprises a memory card reader 1438 for reading compact flash memory cards, Memory Stick® memory cards and the like, which is similarly accessible through the I/O bridge 1434 .
  • the I/O bridge 1434 also connects to multiple Universal Serial Bus (USB) 2.0 ports 1424; a gigabit Ethernet port 1422; an IEEE 802.11b/g wireless network (Wi-Fi) port 1420; and a Bluetooth® wireless link port 1418 capable of supporting up to seven Bluetooth connections.
  • the I/O bridge 1434 handles all wireless, USB and Ethernet data, including data from one or more game controllers 1402.
  • the I/O bridge 1434 receives data from the game controller 1402 via a Bluetooth link and directs it to the Cell processor 1428 , which updates the current state of the game accordingly.
  • the wireless, USB and Ethernet ports also provide connectivity for other peripheral devices in addition to game controller 1402 , such as: a remote control 1404 ; a keyboard 1406 ; a mouse 1408 ; a portable entertainment device 1410 such as a Sony Playstation Portable® entertainment device; a video camera such as an EyeToy® video camera 1412 ; a microphone headset 1414 ; and a microphone 1415 .
  • peripheral devices may therefore in principle be connected to the platform unit 1400 wirelessly; for example the portable entertainment device 1410 may communicate via a Wi-Fi ad-hoc connection, while the microphone headset 1414 may communicate via a Bluetooth link.
  • The Playstation 3 device is also potentially compatible with other peripheral devices, such as digital video recorders (DVRs), set-top boxes, digital cameras, portable media players, Voice over IP telephones, mobile telephones, printers and scanners.
  • a legacy memory card reader 1416 may be connected to the system unit via a USB port 1424 , enabling the reading of memory cards 1448 of the kind used by the Playstation® or Playstation 2® devices.
  • the game controller 1402 is operable to communicate wirelessly with the platform unit 1400 via the Bluetooth link, or to be connected to a USB port, thereby also providing power by which to charge the battery of the game controller 1402 .
  • Game controller 1402 can also include memory, a processor, a memory card reader, permanent memory such as flash memory, light emitters such as LEDs or infrared lights, microphone and speaker for ultrasound communications, an acoustic chamber, a digital camera, an internal clock, a recognizable shape such as a spherical section facing the game console, and wireless communications using protocols such as Bluetooth®, WiFi™, etc.
  • Game controller 1402 is a controller designed to be used with two hands. In addition to one or more analog joysticks and conventional control buttons, the game controller is susceptible to three-dimensional location determination. Consequently gestures and movements by the user of the game controller may be translated as inputs to a game in addition to or instead of conventional button or joystick commands.
  • other wirelessly enabled peripheral devices such as the Playstation™ Portable device may be used as a controller.
  • in the case of the Playstation™ Portable device, additional game or control information (for example, control instructions or number of lives) may be displayed on the screen of the device.
  • Other alternative or supplementary control devices may also be used, such as a dance mat (not shown), a light gun (not shown), a steering wheel and pedals (not shown) or the like.
  • the remote control 1404 is also operable to communicate wirelessly with the platform unit 1400 via a Bluetooth link.
  • the remote control 1404 comprises controls suitable for the operation of the Blu Ray™ Disk BD-ROM reader 1440 and for the navigation of disk content.
  • the Blu Ray™ Disk BD-ROM reader 1440 is operable to read CD-ROMs compatible with the Playstation and PlayStation 2 devices, in addition to conventional pre-recorded and recordable CDs, and so-called Super Audio CDs.
  • the reader 1440 is also operable to read DVD-ROMs compatible with the Playstation 2 and PlayStation 3 devices, in addition to conventional pre-recorded and recordable DVDs.
  • the reader 1440 is further operable to read BD-ROMs compatible with the Playstation 3 device, as well as conventional pre-recorded and recordable Blu-Ray Disks.
  • the platform unit 1400 is operable to supply audio and video, either generated or decoded by the Playstation 3 device via the Reality Synthesizer graphics unit 1430 , through audio and video connectors to a display and sound output device such as a monitor or television set having a display and one or more loudspeakers.
  • the audio connectors 1450 may include conventional analogue and digital outputs while the video connectors 1452 may variously include component video, S-video, composite video and one or more High Definition Multimedia Interface (HDMI) outputs. Consequently, video output may be in formats such as PAL or NTSC, or in 720p, 1080i or 1080p high definition.
  • Audio processing (generation, decoding and so on) is performed by the Cell processor 1428 .
  • the Playstation 3 device's operating system supports Dolby® 5.1 surround sound, Dolby® Theatre Surround (DTS), and the decoding of 7.1 surround sound from Blu-Ray® disks.
  • the video camera 1412 comprises a single charge coupled device (CCD), an LED indicator, and hardware-based real-time data compression and encoding apparatus so that compressed video data may be transmitted in an appropriate format such as an intra-image based MPEG (Moving Picture Experts Group) standard for decoding by the platform unit 1400.
  • the camera LED indicator is arranged to illuminate in response to appropriate control data from the platform unit 1400 , for example to signify adverse lighting conditions.
  • Embodiments of the video camera 1412 may variously connect to the platform unit 1400 via a USB, Bluetooth or Wi-Fi communication port.
  • Embodiments of the video camera may include one or more associated microphones and may also be capable of transmitting audio data.
  • the CCD may have a resolution suitable for high-definition video capture.
  • images captured by the video camera may for example be incorporated within a game or interpreted as game control inputs.
  • the camera is an infrared camera suitable for detecting infrared light.
  • In general, for successful data communication to occur with a peripheral device via one of the communication ports of the platform unit 1400, an appropriate piece of software such as a device driver should be provided.
  • Device driver technology is well-known and will not be described in detail here, except to say that those skilled in the art are aware that a device driver or similar software interface may be required in the present embodiment described.
  • FIG. 7 illustrates additional hardware that may be used to process instructions, in accordance with one embodiment of the present invention.
  • Cell processor 1428 of FIG. 6 as further illustrated in FIG. 7 has an architecture comprising four basic components: external input and output structures comprising a memory controller 1560 and a dual bus interface controller 1570 A, B; a main processor referred to as the Power Processing Element 1550 ; eight co-processors referred to as Synergistic Processing Elements (SPEs) 1510 A-H; and a circular data bus connecting the above components referred to as the Element Interconnect Bus 1580 .
  • the total floating point performance of the Cell processor is 218 GFLOPS, compared with the 6.2 GFLOPs of the Playstation 2 device's Emotion Engine.
  • the Power Processing Element (PPE) 1550 is based upon a two-way simultaneous multithreading Power 1470 compliant PowerPC core (PPU) 1555 running with an internal clock of 3.2 GHz. It comprises a 512 kB level 2 (L2) cache 1552 and a 32 kB level 1 (L1) cache 1551 .
  • the PPE 1550 is capable of eight single precision operations per clock cycle, translating to 25.6 GFLOPs at 3.2 GHz.
  • the primary role of the PPE 1550 is to act as a controller for the Synergistic Processing Elements 1510 A-H, which handle most of the computational workload. In operation the PPE 1550 maintains a job queue, scheduling jobs for the Synergistic Processing Elements 1510 A-H and monitoring their progress. Consequently each Synergistic Processing Element 1510 A-H runs a kernel whose role is to fetch a job, execute it, and synchronize with the PPE 1550.
  • Each Synergistic Processing Element (SPE) 1510 A-H comprises a respective Synergistic Processing Unit (SPU) 1520 A-H, and a respective Memory Flow Controller (MFC) 1540 A-H comprising in turn a respective Dynamic Memory Access Controller (DMAC) 1542 A-H, a respective Memory Management Unit (MMU) 1544 A-H and a bus interface (not shown).
  • Each SPU 1520 A-H is a RISC processor clocked at 3.2 GHz and comprising 256 kB local RAM 1530 A-H, expandable in principle to 4 GB.
  • Each SPE gives a theoretical 25.6 GFLOPS of single precision performance.
  • An SPU can operate on 4 single precision floating point numbers, 4 32-bit integers, 8 16-bit integers, or 16 8-bit integers in a single clock cycle. In the same clock cycle it can also perform a memory operation.
  • the SPU 1520 A-H does not directly access the system memory XDRAM 1426 ; the 64-bit addresses formed by the SPU 1520 A-H are passed to the MFC 1540 A-H which instructs its DMA controller 1542 A-H to access memory via the Element Interconnect Bus 1580 and the memory controller 1560 .
  • the Element Interconnect Bus (EIB) 1580 is a logically circular communication bus internal to the Cell processor 1428 which connects the above processor elements, namely the PPE 1550 , the memory controller 1560 , the dual bus interface 1570 A, B and the 8 SPEs 1510 A-H, totaling 12 participants. Participants can simultaneously read and write to the bus at a rate of 8 bytes per clock cycle. As noted previously, each SPE 1510 A-H comprises a DMAC 1542 A-H for scheduling longer read or write sequences.
  • the EIB comprises four channels, two each in clockwise and anti-clockwise directions. Consequently for twelve participants, the longest step-wise data-flow between any two participants is six steps in the appropriate direction.
  • the theoretical peak instantaneous EIB bandwidth for 12 slots is therefore 96 B per clock, in the event of full utilization through arbitration between participants. This equates to a theoretical peak bandwidth of 307.2 GB/s (gigabytes per second) at a clock rate of 3.2 GHz.
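  • The stated peak figures can be reproduced directly (an arithmetic check only):

```python
# Arithmetic check of the stated EIB peak: 12 participants each moving
# 8 bytes per clock at 3.2 GHz.
participants, bytes_per_clock, clock_hz = 12, 8, 3.2e9
print(participants * bytes_per_clock)                    # 96 bytes per clock
print(participants * bytes_per_clock * clock_hz / 1e9)   # 307.2 GB/s
```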
  • the memory controller 1560 comprises an XDRAM interface 1562 , developed by Rambus Incorporated.
  • the memory controller interfaces with the Rambus XDRAM 1426 with a theoretical peak bandwidth of 25.6 GB/s.
  • the dual bus interface 1570 A,B comprises a Rambus FlexIO® system interface 1572 A, B.
  • the interface is organized into 12 channels each being 8 bits wide, with five paths being inbound and seven outbound.

Abstract

To render graphics on multiple display devices, multiple computing platforms are networked and each computing platform separately executes an application to render graphics for a display device. A client computing platform adds an orientation offset to view state information received from a server computing platform to coordinate the graphics rendered by the server and client into a representation of the same world scene.

Description

    RELATED APPLICATIONS
  • This application is related to co-pending Attorney Docket No. 8720P006, entitled, “Networked Computer Graphics Rendering System with Multiple Displays for Displaying Multiple Viewing Frustums”, filed on Jun. 26, 2009 and Attorney Docket No. 8720P007, entitled, “Configuration of Display and Audio Parameters for Computer Graphics Rendering System Having Multiple Displays”, filed on Jun. 26, 2009.
  • FIELD OF THE INVENTION
  • The present invention relates generally to computer graphics rendering systems, and more particularly to graphics rendering systems with multiple displays.
  • DESCRIPTION OF THE RELATED ART
  • Computer graphics applications continue to improve the realism of rendered scenes to increase the entertainment or simulation value of computer game or CAD applications. An effect known as "immersion" can contribute to the realism of rendered scenes. Immersion refers to moving the perspective of the graphics application user from that of an outsider looking into a scene to the perspective of being part of the scene. Immersion, however, is difficult to achieve in a resource-efficient manner within the hardware and processing limitations of the most popular computing platforms. For example, many first and third person computer games render a wide field of view (FOV) of about 60° for display on a display device. While increasing the FOV reduces a feeling of "tunnel vision," the wide FOV yields a "fish-eyed" view, sacrifices detail, and fails to achieve the immersion effect.
  • A graphics application may also require a platform with high computing power, for example with one or more multi-core central processing units (CPUs) and a plurality of graphics processing units (GPUs) downstream of the CPU. For such a graphics platform, the CPU typically generates a single graphical realm model which is then processed with each GPU performing rendering passes to separately drive one of a plurality of display devices connected to the platform. This method, however, is extremely hardware intensive, so the graphics platform market is small and, as such, few application titles are authored to take advantage of such a platform's capabilities.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Embodiments of the invention are particularly pointed out and distinctly claimed in the concluding portion of the specification. Embodiments of the invention, however, both as to organization and method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings in which:
  • FIG. 1 illustrates a schematic diagram of a multiple display device graphics rendering system, in accordance with an embodiment of the present invention;
  • FIG. 2 illustrates a flow diagram of an exemplary method of rendering graphics with a plurality of computing platforms and display devices, in accordance with an embodiment of the present invention;
  • FIG. 3 illustrates a plurality of camera viewing frustums each generated by a computing platform and each angularly offset for display by multiple display devices, in accordance with one embodiment of the present invention;
  • FIG. 4 illustrates a plan view of a multiple display graphics rendering system, in accordance with an embodiment of the present invention;
  • FIG. 5 shows a schematic diagram of a multi-system configuration, according to one embodiment of the present invention;
  • FIG. 6 illustrates hardware and user interfaces that may be used to generate and render an application camera view, in accordance with one embodiment of the present invention; and
  • FIG. 7 illustrates additional hardware that may be used to process instructions, in accordance with one embodiment of the present invention.
  • For clarity of illustration, elements illustrated in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements. Further, where considered appropriate, reference numerals have been repeated among the figures to indicate corresponding or analogous elements.
  • DETAILED DESCRIPTION
  • Described herein is a system having multiple displays for rendering and displaying graphics and a method for rendering and displaying graphics on the system. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the invention. However, it will be understood by those skilled in the art that other embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components and circuits have not been described in detail so as not to obscure the present invention.
  • Some portions of the detailed description which follows are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of actions or operations leading to a desired result. These include physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
• Unless specifically stated otherwise, or as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing”, “computing”, “converting”, “reconciling”, “determining” or the like, refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
• The present invention also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. In one embodiment, the apparatus for performing the operations herein includes a game console (e.g., a Sony Playstation®, a Nintendo Wii®, a Microsoft Xbox®, etc.). A computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks (e.g., compact disc read only memory (CD-ROMs), digital video discs (DVDs), Blu-Ray Discs™, etc.), and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions.
• The terms “coupled” and “connected,” along with their derivatives, may be used herein to describe structural relationships between components of the apparatus for performing the operations herein. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, “connected” may be used to indicate that two or more elements are in direct physical or electrical contact with each other. “Coupled” may be used to indicate that two or more elements are in either direct or indirect (with other intervening elements between them) physical or electrical contact with each other, and/or that the two or more elements co-operate or interact with each other (e.g., as in a cause-and-effect relationship).
• In an embodiment, a multiple display graphics rendering system includes a plurality of computing platforms with each computing platform to generate and render graphics for display on a display device of the system. FIG. 1 illustrates a perspective view of a multiple display graphics rendering system 100, in accordance with one embodiment of the present invention, where each computing platform is a game console. In alternative embodiments, however, the depicted game consoles may each be replaced by any conventional computer having a programmable processor. As depicted, the system 100 includes three game consoles, a view server console 101 and view client consoles 102 and 103, although any number of consoles may be included in a system.
• In an embodiment, each computing platform in a multiple display graphics rendering system is connected to a single display device. For such embodiments, a one-to-one ratio of computing platforms to display devices is maintained regardless of the number of computing platforms included in a system. Each display device in the system is to display graphics generated and rendered by the computing platform to which it is coupled. In an alternative embodiment, however, a computing platform may be connected to more than one display device, whereby the computing platform would generate and render graphics for the two or more connected display devices. In the particular embodiment depicted in FIG. 1, the view server console 101 is coupled to a view server display device 111 by video data I/O link 151. The view server display device 111 is therefore to display graphical output, such as graphics objects 171 and 172, generated by the view server console 101. The view server display device 111 may be of any display technology known in the art, such as, but not limited to, an LCD, a plasma display, a projection display, or a cathode ray tube (CRT). Similarly, the video data I/O link 151 may utilize any technology known in the art for transmitting video, such as, but not limited to, component video, S-video, composite video, or a High Definition Multimedia Interface (HDMI). The view client consoles 102 and 103 are each similarly coupled through a video data I/O link 152 or 153 to a view client display device, 112 or 113, respectively. The view client display device 112 displays graphical output, such as graphics object 172, generated by the view client console 102 while the view client display device 113 displays graphical output, such as graphics object 173, generated by the view client console 103.
• In an embodiment, the display devices in a multiple display graphics rendering system have a known physical position with respect to each other. As illustrated in FIG. 1, each of the view client display devices 112-113 is disposed on a table 109 with the display devices having a linear position and an orientation (i.e. angular position) relative to a reference coordinate system of the view server display device 111. In the exemplary embodiment depicted, view client display device 113 has an angular position rotated by θ about the Y-Axis of the reference coordinate system of the view server display device 111 so that the vectors N normal to the screens of the display devices 111 and 113 converge at a same reference center, C, with an angular position offset of approximately 45°. View client display device 112 also has a linear position, along the X-Axis of the reference coordinate system of the view server display device 111. Because the view client display device 112 is not also rotated about the Y-Axis, the vector N normal to its screen does not converge at the reference center C. Nonetheless, a ray from the reference center C may be made to the centerline of the view client display device 112 to define the position of the view client display device 112 relative to the view server display device 111 in terms of a physical angular position offset (i.e. viewing angle offset). In further embodiments, display devices may also be rotated about one or more of the X-Axis and Z-Axis of the reference coordinate system. For example, a view client display disposed above the view server display device 111 may be tilted down with an angular position rotated about the X-Axis of the reference coordinate system.
• In an embodiment, each of the computing platforms in a multiple display graphics rendering system is coupled together over a network of communication links. The network may be any network known in the art capable of providing sufficiently high bandwidth to achieve good performance of the graphics rendering methods described elsewhere herein, which update a view state between about 20 and 100 times per second, for example. In the exemplary embodiment depicted in FIG. 1, the network is a local area network (LAN) 110. LAN 110 may be any conventional LAN common to the art, such as, but not limited to, one of the Ethernet family of frame-based short haul computer networking technologies.
• In one embodiment, the only computing platforms which are physical nodes on the LAN 110 are those actively generating and rendering one of the multiple displays of the system 100. In one such embodiment, the view server and the view clients all share a same local high speed network switch. Such an embodiment is therefore exclusive of wide area networks (WAN) which include many physical nodes merely forwarding and not processing data packets specific to the multiple display graphics rendering system. Generally, most contemporary WANs offer insufficient bandwidth (allowing perhaps fewer than 10 view state updates per second) to achieve good performance of the graphics rendering methods described herein.
• In an embodiment, at least one of the computing platforms in a multiple display graphics rendering system is coupled to a controller designed to be used by a system user to control the graphics displayed as a view on at least one of the multiple display devices. In a particular embodiment, only one computing platform in a multiple display graphics rendering system is coupled to a controller and that controller controls the graphics displayed as separate views on all of the display devices in the graphics rendering system. For example, as depicted in FIG. 1, view server console 101 is coupled to the controller 125 through the server controller data I/O link 126. The controller 125 may be any pointing device, such as, but not limited to, a conventional game controller having a joystick, and the server controller data I/O link 126 may be any conventional wired or wireless link.
• FIG. 2 is a flow diagram illustrating an exemplary method 200 of rendering graphics with a plurality of computing platforms and display devices, such as provided in the multiple display graphics rendering system 100. Method 200 begins with each of a first, second, and nth computing platform connected by a communication link each launching an application configured to establish an intra-system server-client relationship. As used herein, “intra-system” refers to a view client-server relationship which spans only the computing platforms making up a multiple display graphics rendering system.
• At operation 205, a first application is launched on the first computing platform. Generally, the first application launched may be any type of graphics intensive computer application known in the art, such as, but not limited to, computer game applications, CAD applications, video editing applications, and the like. In a particular embodiment, the application launched is a first or third person 3D computer game application, such as, but not limited to, the third person 3D game SOCOM™ published by Sony Computer Entertainment, Inc. At operation 206, a second application is launched on the second computing platform. The second application launched may be of any type as that described for the first application. In a particular embodiment, the second application launched on the second computing platform is the same application as the first application launched on the first computing platform so that both platforms are separately executing a same application (e.g., SOCOM™).
• With the first and second applications executing on the first and second computing platforms, at operation 210, the first computing platform is nominated as a view server and the second platform becomes a view client. A similar nomination of one view server may be made in systems having three or more computing platforms, whereby all remaining platforms become client platforms. The nomination of the first computing platform may be made by a configuration routine (performed either automatically or manually with user input) that nominates based on one or more of a comparison of processing capability of the first and second computing platforms, a physical relationship between display devices coupled to the computing platforms, or a comparison of display device characteristics, as sketched below. For example, a configuration routine may determine which system display device has a larger display area or higher display resolution and nominate the computing platform coupled to that display device to be the view server. As another example, a configuration setting may be set to indicate which display a user desires to be a center of a field of view (FOV). As another example, a computing platform having a higher computing power, such as a Sony® Playstation 3® computing platform, may be nominated a view server over a Sony® Playstation 2® computing platform of the same system.
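• A minimal C++ sketch of such a nomination follows. It is illustrative only: the PlatformInfo fields, the ordering of criteria (user preference, then display area, then compute power), and the function names are assumptions for the example, not the patent's configuration routine.

    #include <algorithm>
    #include <cstdint>
    #include <string>
    #include <tuple>
    #include <vector>

    // Hypothetical description of one networked console (fields assumed).
    struct PlatformInfo {
        std::string name;
        std::uint64_t flops;          // rough compute capability
        std::uint32_t displayWidth;   // resolution of the attached display
        std::uint32_t displayHeight;
        bool userPreferred;           // user marked this display as the FOV center
    };

    // Nominate the view server: prefer an explicit user choice, then the
    // largest display area, then raw compute power; all other platforms
    // become view clients.
    const PlatformInfo& nominateViewServer(const std::vector<PlatformInfo>& nodes) {
        auto key = [](const PlatformInfo& p) {
            return std::make_tuple(p.userPreferred,
                                   std::uint64_t(p.displayWidth) * p.displayHeight,
                                   p.flops);
        };
        return *std::max_element(nodes.begin(), nodes.end(),
            [&](const PlatformInfo& a, const PlatformInfo& b) {
                return key(a) < key(b);
            });
    }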
  • For the exemplary method 200, at operation 215, the view server sets a view angular position offset (i.e. view orientation offset) to a default of zero. With a view angular position offset of zero, a camera view rendered by the view server defaults to a camera view rendered by the application when run on a computing platform in absence of an intra-system view client-server relationship (i.e., a single computing platform). At operation 220, however, the view client sets a view orientation offset to be non-zero. In one such embodiment, a configuration setting associated with the view orientation offset is accessed from a storage location on either the view client or the view server. In an embodiment, the view orientation offset is view client-specific and is predetermined based on the physical orientation offset between the display device coupled to the view server and the display device coupled to a particular view client. For example, referring back to FIG. 1, the view orientation offset for the view client console 103 is set to be approximately equal to the physical orientation offset of θ between the view client display device 113 and the view server display device 111. The view orientation offset for the view client console 102 is similarly set to be approximately equal to the physical orientation offset between the view client display device 112 and the view server display device 111. In an embodiment, the view orientation offset is predetermined based on a calibration routine run on either or both the view server and view client to deduce the physical relative orientation between display devices.
• Method 200 proceeds to operation 225 where a FOV value for the server is reduced to less than 50° in response to being networked with another platform driving another display device. The greater FOV provided across the plurality of display devices may thereby enable a reduction in the fish-eye effect on the view server display while the secondary (and tertiary, etc.) view client display devices placed proximate to the view server display device combine to provide a user with an immersive experience. Similarly, at operation 226, a FOV value of the view client may also be reduced to less than 50° and communicated to the view server. In a particular embodiment, each of the view server and view client has a FOV value set to between 20° and 50°, with an exemplary embodiment having both the view server and the view client FOV value set to approximately 25°.
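• As a small illustration, the FOV setting might be carried in a structure like the following; the field names and the 60° stand-alone default are assumptions for the sketch, while the 25° networked value mirrors the exemplary embodiment above.

    // Illustrative only: per-platform FOV settings in degrees.
    struct FovConfig {
        float standaloneFovDeg = 60.0f; // single-console default camera FOV (assumed)
        float networkedFovDeg  = 25.0f; // per-display FOV once networked (20-50 range)
    };

    // Each platform narrows its FOV when it joins a multi-display session.
    inline float effectiveFovDeg(const FovConfig& cfg, bool networked) {
        return networked ? cfg.networkedFovDeg : cfg.standaloneFovDeg;
    }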
• On the view server side, the view server executes a geometry pipeline in operations 230, 235 and 250 to display a server camera view at operation 255 on a view server display device. On the view client side, each view client executes a geometry pipeline in operations 245, 260 and 265 to display a client camera view at operation 270 on a view client display device. In an embodiment, execution of these geometry pipelines by the view server and view client(s) is coordinated based on internal view state information transmitted by the view server and on one or more of the view orientation offset and the FOV value to collectively render and display graphics spanning a plurality of display devices at substantially the same time.
• During the execution of the first application on the view server, input vertex data associated with input received from a view control input device, such as the controller 125, is converted into model space using any technique conventional in the art. At operation 230, the view server generates a world space from a model space. Generally, a “space” is a coordinate system for positioning objects relative to a frame of reference. In a left-handed system, the Z-axis (depth-related axis) extends away from the user into the scene, consistent with the reference coordinate system depicted in FIG. 1. A model space is a frame of reference that defines a modeled object's vertices relative to an origin local to that model. Each object rendered in a world scene, such as graphics objects 171, 172 and 173, has a corresponding model with each model including a plurality of vertices in model space. Thus, if the model pertained to graphics object 171, the model space may define vertices relative to an origin located at the center of the human form. To generate a world space, a global origin is defined as a common point of reference for determining different locations in the scene, and all modeled object vertices are defined relative to that global origin.
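• The model-to-world step can be pictured with a conventional column-major 4×4 transform. The following is a generic sketch of the operation described above, not code from the patent; the helper names are invented.

    #include <array>

    struct Vec3 { float x, y, z; };
    struct Mat4 { std::array<float, 16> m; }; // column-major 4x4

    // Apply a transform to a point (w = 1), e.g. model space -> world space.
    Vec3 transformPoint(const Mat4& t, const Vec3& v) {
        const auto& m = t.m;
        return { m[0]*v.x + m[4]*v.y + m[8]*v.z  + m[12],
                 m[1]*v.x + m[5]*v.y + m[9]*v.z  + m[13],
                 m[2]*v.x + m[6]*v.y + m[10]*v.z + m[14] };
    }

    // A world matrix that places a model's local origin at worldPos relative
    // to the global origin; a real title would compose rotation and scale too.
    Mat4 makeWorldMatrix(const Vec3& worldPos) {
        return {{ 1, 0, 0, 0,
                  0, 1, 0, 0,
                  0, 0, 1, 0,
                  worldPos.x, worldPos.y, worldPos.z, 1 }};
    }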
  • At operation 235, the world space generated at operation 230 is transformed into a server view space or “server viewing frustum.” FIG. 3 depicts an exemplary server viewing frustum 300 corresponding to the system 100 depicted in FIG. 1. As illustrated, the server viewing frustum 300 has a frame of reference in which the viewer is at the origin, looking in the direction of the positive Z-axis into the conical viewing volume clipped by a front plane 301 and the back plane 302. The outer boundaries of the server viewing frustum 300 converge at the server camera 305. Accordingly, the server viewing frustum 300 assumes a truncated pyramidal shape. The server camera 305 defines the vantage point in world space from which a viewer observes the world scene. Accordingly, the world space coordinates are relocated and oriented to a world space position vantage point to provide the server viewing frustum 300 at the server camera 305.
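• A conventional left-handed “look-at” construction illustrates how world space is re-expressed in the frame of the server camera 305. This is standard graphics math offered as a sketch, with the Vec3/Mat4 helpers assumed as in the previous example.

    #include <array>
    #include <cmath>

    struct Vec3 { float x, y, z; };
    struct Mat4 { std::array<float, 16> m; }; // column-major 4x4

    static Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
    static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3 cross(Vec3 a, Vec3 b) {
        return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
    }
    static Vec3 normalize(Vec3 v) {
        float len = std::sqrt(dot(v, v));
        return {v.x/len, v.y/len, v.z/len};
    }

    // World -> view transform for a camera at 'eye' looking toward 'target',
    // with +Z extending into the scene (left-handed, as described above).
    Mat4 lookAtLH(Vec3 eye, Vec3 target, Vec3 up) {
        Vec3 zaxis = normalize(sub(target, eye)); // forward
        Vec3 xaxis = normalize(cross(up, zaxis)); // right
        Vec3 yaxis = cross(zaxis, xaxis);         // true up
        return {{ xaxis.x, yaxis.x, zaxis.x, 0,
                  xaxis.y, yaxis.y, zaxis.y, 0,
                  xaxis.z, yaxis.z, zaxis.z, 0,
                  -dot(xaxis, eye), -dot(yaxis, eye), -dot(zaxis, eye), 1 }};
    }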
• At operation 240, the view server scans through its internal state of all world-space objects and transmits the state of each object in a manner that allows all of the view clients to accurately represent the same world scene. For example, if the view server is executing a game application, internal game state information may be broadcast to system view clients. The transport mechanism may be the unreliable-delivery User Datagram Protocol (UDP) or the Transmission Control Protocol (TCP), depending on the choice of the title. With UDP transport, additional state information is preserved by the view server console so that if an update packet is lost, the information can be retransmitted with more up-to-date values. While TCP can also be used, additional latency variation may be expected. TCP, however, is reliable, so that no application interaction is required to confirm data delivery. On a local network, such as LAN 110, it is unlikely that latency will be an issue, especially when the view server/client consoles share a local high speed network switch.
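• A bare-bones transport sketch follows, assuming POSIX sockets and an invented ObjectState wire format; the real packet layout is title-specific and not specified by the patent.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <cstdint>
    #include <vector>

    // Invented wire format for one world-space object.
    struct ObjectState {
        std::uint32_t objectId;
        float position[3];
        float orientation[4]; // quaternion
    };

    int makeBroadcastSocket() {
        int sock = socket(AF_INET, SOCK_DGRAM, 0); // UDP
        int on = 1;
        setsockopt(sock, SOL_SOCKET, SO_BROADCAST, &on, sizeof on);
        return sock;
    }

    // Send the state of every world-space object to the view clients; a real
    // title would serialize fields explicitly rather than send raw structs.
    void broadcastWorldState(int sock, const sockaddr_in& clients,
                             const std::vector<ObjectState>& objects) {
        for (const ObjectState& obj : objects)
            sendto(sock, &obj, sizeof obj, 0,
                   reinterpret_cast<const sockaddr*>(&clients), sizeof clients);
    }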
• In an embodiment, a view state indicative of a world space position and world space orientation of the server camera 305 is transmitted over the communication link to the view client(s). In other words, the server camera view matrix is communicated to the view clients. In one such embodiment, the view server multicasts game view state information between about 20 and 100 times/second. Object animation information may further be communicated as part of the internal state data transmitted by the view server. Additional information may also be sent by the view server to synchronize locally-generated objects (e.g., objects to be generated by the view client consoles) such as particle effects. For a conventional online-game configuration, such locally-generated objects may be only roughly approximated as it is unnecessary for a remote player to see such effects completely synchronized. However, in embodiments of the present invention, a close synchronization of locally-generated objects is achieved based on view state information sent by the view server to advantageously provide the most desirable view to a system user.
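• The periodic multicast can be pictured as a fixed-rate loop; 60 updates per second falls inside the 20-100 range above, and sendViewState() is a hypothetical stand-in for the transport shown earlier.

    #include <chrono>
    #include <thread>

    void viewStateLoop(const bool& running) {
        using clock = std::chrono::steady_clock;
        const auto period = std::chrono::microseconds(1000000 / 60); // ~60 Hz
        auto next = clock::now();
        while (running) {
            // sendViewState(); // camera view matrix, object and animation state
            next += period;
            std::this_thread::sleep_until(next);
        }
    }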
  • At operation 245, a view client receives the transmitted state information and generates a substantially same world space as that generated by the view server. In other words, the view client regenerates the world space to render a secondary view of the same world scene, replicating graphics objects generated by the view server in response to each view state update received. In an embodiment, this secondary view has a vantage point at the server camera 305. As such, the server and client share a common camera as established by the view server and communicated to the view client. The view client may then employ the view orientation offset determined at operation 220 to angularly offset or rotate the client viewing frustum from the world space orientation so that the client viewing frustum does not overlap the server viewing frustum. In this manner, each client-specific view orientation is applied to the received server state information to render a client view dependent on the server view.
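• The offset itself is a single yaw applied to the shared camera. The sketch below (column-major math as before, names invented) pre-multiplies the server view matrix so the client frustum is rotated by the configured view orientation offset about the camera's up axis.

    #include <array>
    #include <cmath>

    struct Mat4 { std::array<float, 16> m; }; // column-major 4x4

    Mat4 mul(const Mat4& a, const Mat4& b) {  // r = a * b
        Mat4 r{};
        for (int c = 0; c < 4; ++c)
            for (int row = 0; row < 4; ++row)
                for (int k = 0; k < 4; ++k)
                    r.m[c*4 + row] += a.m[k*4 + row] * b.m[c*4 + k];
        return r;
    }

    Mat4 rotationY(float radians) {
        float c = std::cos(radians), s = std::sin(radians);
        return {{ c,0,-s,0,  0,1,0,0,  s,0,c,0,  0,0,0,1 }};
    }

    // Rotating the camera by +theta about its up axis is equivalent to
    // rotating the already-transformed view space by -theta.
    Mat4 clientViewMatrix(const Mat4& serverView, float offsetRadians) {
        return mul(rotationY(-offsetRadians), serverView);
    }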
  • For example, as depicted in FIG. 3, a client viewing frustum 310 is generated by a view client based on received server view state information and the view orientation offset specific to the client to have a view orientation offset 350 from the Z-AXIS central to the server viewing frustum 300 (as defined per the world space orientation). The client viewing frustum 310 is offset to be adjacent and to the right of the server viewing frustum 300. The graphics object 172 is replicated by the view client and a portion of the graphics object 172 outside of the server viewing frustum 300 is rendered. The portion of the graphics object 172 within the client viewing frustum 310 may be further dependent on the client FOV value from operation 226. As further shown in FIG. 3, another view client, also receiving the broadcast view state information from the view server, generates a third client viewing frustum 320 having a view orientation offset 351 from the server viewing frustum 300. The client viewing frustum 320 is offset to be adjacent and to the left of the server viewing frustum 300 to render the graphics object 173 (generated by the server as part of the world space and communicated to the client) but beyond the server viewing frustum 300.
• In one embodiment, the viewing frustums 300-320 each have a FOV less than 50°, for example approximately 25°. The view orientation offset may be set to less than twice the FOV to provide a continuous FOV across the view server and view client display devices to accommodate physical spacing between adjacent display devices (caused by display device bezel, etc.). For example, where the physical orientation offset between view server display device 111 and view client display device 112 is between 40 and 50°, the view orientation offset may be set to approximately 40° for a 25° FOV in each display. In other embodiments, the view orientation offset may be greater than or equal to twice the display FOV such that adjacent viewing frustums are spaced apart and the FOV across the view server and view client displays is discontinuous. Objects in world space may then be “hidden” from view in the region between display devices.
• With the view orientation offset appropriately predetermined, the view server and view client(s) each render viewing frustums of a common world space to form a mesh of view spaces of the same world scene. In this manner, multiple displays are coordinated with rendering performed by separate computing platforms. The computing power of each platform may thereby be lessened. Furthermore, any application title authored for a stand-alone execution on a single computing platform having a single display, such as view server console 101, may be readily scaled to support a plurality of display devices with little application-specific coding. As such, a supplier of the computing platform may provide a common library of functions providing multiple-display graphics rendering support which may then be incorporated by any developer of applications for the computing platform. Titles may then merely be authored to have a greater or lesser number of graphics objects as a function of the FOV provided by the multiple display graphics rendering system. For example, an application title executing in a single platform or “stand-alone” mode may generate graphics objects with a bias for the 60° FOV from the camera. The same application title executing in a multiple display system would increase the graphics object generation bias to encompass a greater FOV based on the cumulative FOV available as determined through the number of consoles on the network and FOV configuration settings for each console.
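• For the cumulative-FOV determination, one frustum span plus one angular offset per additional display is a reasonable reading of the geometry above; the function below is a sketch under that assumption.

    #include <algorithm>

    // Cumulative horizontal FOV across the system, e.g. three displays with a
    // 25-degree FOV at 40-degree offsets span 25 + 2*40 = 105 degrees.
    inline float cumulativeFovDeg(int consoleCount, float perDisplayFovDeg,
                                  float viewOrientationOffsetDeg) {
        float fov = perDisplayFovDeg + (consoleCount - 1) * viewOrientationOffsetDeg;
        return std::min(fov, 360.0f);
    }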
• After generation of the view spaces corresponding to each display device, additional matrix manipulations may be performed by the view server and view client(s) to render views on their respective display device. For example, server viewing frustum 300 may be rendered to give depth to a scene whereby object 171 may be made to appear smaller than object 172 based on the relative position of the objects in the server viewing frustum 300, as known in the art. Additional matrix operations may then be performed to generate a server screen space with a frame of reference in which coordinates are related directly to screen X-Y locations in a frame buffer, to be displayed on a server display device at operation 255. For example, the server viewing frustum 300 may be rendered into a server screen space including the graphics object 171 and a portion of the graphics object 172 within the server viewing frustum 300, as depicted in FIG. 1. At operation 270, similar rendering activities may be performed by the view client(s) to display client screen spaces at substantially the same time as the server screen space is rendered so that a combined view across displays of the system is most nearly in lock-step. For example, the client viewing frustum 310 may be rendered into a client screen space including a portion of the graphics object 172 within the client viewing frustum 310, as determined by the FOV and orientation offset 350, at substantially the same time the viewing frustum 300 is rendered into a server screen space including the portion of the graphics object 172 within the server viewing frustum, as further depicted in FIG. 1.
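• The final view-space-to-screen-space step reduces to a perspective divide and a viewport mapping; the following is textbook math offered as a sketch (left-handed view space, horizontal FOV), not the patent's implementation.

    #include <cmath>

    struct ScreenPoint { float x, y; };

    // Project a view-space point (vz > 0, +Z into the scene) into pixel
    // coordinates of a width-by-height frame buffer.
    ScreenPoint toScreen(float vx, float vy, float vz,
                         float fovDeg, float width, float height) {
        float f = 1.0f / std::tan(fovDeg * 3.14159265f / 360.0f); // cot(fov/2)
        float ndcX = (vx * f) / vz;                    // [-1, 1] left to right
        float ndcY = (vy * f * (width / height)) / vz; // [-1, 1] bottom to top
        return { (ndcX * 0.5f + 0.5f) * width,
                 (1.0f - (ndcY * 0.5f + 0.5f)) * height }; // flip Y for raster
    }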
• At operation 260, any change to the input matrix received from a controller coupled to the view server triggers an update in the server view space, which is then transmitted at operation 240. In response, a view client re-renders a corresponding client view space to reflect the input matrix change. As such, in one embodiment, only the view server includes a view control input device, such as controller 125. A view client computing platform assuming a role of rendering a secondary view of the world scene as controlled through the view server need not include any controller. Because the view client systems are in a passive rendering mode, in certain embodiments intra-system network communication may be substantially unidirectional, with the view server performing broadcasts to downstream view clients and view clients also not communicating with each other. Network bandwidth may therefore be substantially dedicated to the view server to allow high internal state refresh rates across all platforms in a system.
• FIG. 4 illustrates a plan view of a multiple display graphics rendering system 400, in accordance with an embodiment of the present invention. As illustrated, the system 400 includes all of the components of system 100 with the view client display device 112 rotated about the Y-axis of FIG. 1 and four view client display devices 414, 415, 416 and 417 added to nearly surround a user chair 450. Each of the view client displays is coupled to a view client console, 404, 405, 406 and 407, in substantially the same manner as described for system 100. As shown, each of the displays has a physical orientation with respect to the view server display device 111. For example, view client display device 112 is positioned relative to the view server display device 111 by a first physical orientation offset 422 and view client display device 416 is positioned relative to the view server display device 111 by a second physical orientation offset 426. In a particular embodiment, each view client display device is positioned relative to the view server display device 111 by a fixed increment of approximately 40 to 50 degrees such that the first physical orientation offset 422 is approximately 40 to 50 degrees and the second physical orientation offset 426 is approximately 80 to 100 degrees. Upon performing a graphics rendering method, such as method 200, each computing platform may render a view space modifying a server's view state with a view orientation offset value based on the physical orientation angle to collectively provide a nearly 360° FOV of a same world scene. Any number of display devices may be combined to provide up to a 360° FOV in any dimension.
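• A worked sketch of the offsets for such an arrangement: with seven displays at a 45-degree increment (inside the approximately 40-50 degree range), the view orientation offsets alternate left and right of the view server display. The loop below is illustrative only.

    #include <cstdio>

    int main() {
        const float incrementDeg = 45.0f;
        const int consoles = 7; // e.g. displays 111, 112, 113 and 414-417
        for (int i = 0; i < consoles; ++i) {
            // Console 0 is the view server; clients fan out as +45, -45, +90, ...
            int step = (i + 1) / 2;
            float sign = (i % 2 == 1) ? 1.0f : -1.0f;
            float offsetDeg = (i == 0) ? 0.0f : sign * step * incrementDeg;
            std::printf("console %d: view orientation offset %+.0f deg\n",
                        i, offsetDeg);
        }
        return 0; // offsets span -135 to +135 deg, nearly surrounding the user
    }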
• In an embodiment, a multiple display rendering system, such as system 400, may be networked with another multiple display rendering system. FIG. 5 shows a block diagram of two multiple display graphics rendering systems communicatively coupled together. In the depicted embodiment, the multiple display graphics rendering system 515, including a view server console 501 coupled over a LAN 510 to view client consoles 502 with each console further coupled to one of the display devices 503, is coupled to another system 615 similarly configured with a view server console 601 coupled over a LAN 610 to view client consoles 602 with each console further coupled to one of the display devices 603. The view server console 601 is further coupled to view server console 501 via a WAN 550 to allow communication between the server consoles.
• In one such embodiment, an inter-system client-server relationship is established between the view server consoles 501 and 601 separate from the intra-system view client-server relationship established within systems 515 and 615. In an implementation, there need not be any inter-system interaction between the view client consoles 502 and the view client consoles 602. As such, the inter-system client-server relationship may be established using conventional one-to-one or one-to-many communication techniques whereby the only computing platform of a multiple display rendering system (e.g., system 515 or system 615) that is visible or active on the inter-system network is the computing platform acting as the system view server (e.g., server consoles 501 and 601). For example, conventional online gaming techniques may be employed whereby object position and animation information may be periodically communicated, using UDP transport for example, between the view server consoles 501 and 601 to allow a player using the multiple display graphics rendering system 515 to play the same game as a player using the system 615. The inter-system refresh rates between the system view server consoles 501 and 601 may be relatively lower than the intra-system refresh rates within each system 515 and 615.
  • In an embodiment of networked multiple display rendering systems, all consoles (e.g., view server consoles 501, 601 and respective view client consoles 502 and 602) of the network execute the same application to generate and render graphics for a same world space. The display devices 503 display graphics rendered by the view server console 501 and view client consoles 502 from a first server camera at a first world position (e.g., first player position in a world space) with an orientation offset in the camera views displayed by each display device 503. The display devices 603 display graphics rendered by the view server console 601 and view client consoles 602 from a second server camera at a second world position (e.g., second player position in the world space) with an orientation offset in the camera views displayed by each display device 603.
• FIG. 6 illustrates hardware and user interfaces that may be used to generate and render an application camera view as a component of a multiple display graphics rendering system, in accordance with one embodiment of the present invention. FIG. 6 schematically illustrates the overall system architecture of the Sony® Playstation 3® entertainment device, a console that may be compatible with implementing a multiple display graphics rendering system. A platform unit 1400 is provided, with various peripheral devices connectable to the platform unit 1400. The platform unit 1400 comprises: a Cell processor 1428; a Rambus® dynamic random access memory (XDRAM) unit 1426; a Reality Synthesizer graphics unit 1430 with a dedicated video random access memory (VRAM) unit 1432; and an I/O bridge 1434. The platform unit 1400 also comprises a Blu Ray® Disk BD-ROM® optical disk reader 1440 for reading from a disk 1440A and a removable slot-in hard disk drive (HDD) 1436, accessible through the I/O bridge 1434. Optionally the platform unit 1400 also comprises a memory card reader 1438 for reading compact flash memory cards, Memory Stick® memory cards and the like, which is similarly accessible through the I/O bridge 1434.
• The I/O bridge 1434 also connects to multiple Universal Serial Bus (USB) 2.0 ports 1424; a gigabit Ethernet port 1422; an IEEE 802.11b/g wireless network (Wi-Fi) port 1420; and a Bluetooth® wireless link port 1418 capable of supporting up to seven Bluetooth connections.
• In operation, the I/O bridge 1434 handles all wireless, USB and Ethernet data, including data from one or more game controllers 1402. For example, when a user is playing a game, the I/O bridge 1434 receives data from the game controller 1402 via a Bluetooth link and directs it to the Cell processor 1428, which updates the current state of the game accordingly.
  • The wireless, USB and Ethernet ports also provide connectivity for other peripheral devices in addition to game controller 1402, such as: a remote control 1404; a keyboard 1406; a mouse 1408; a portable entertainment device 1410 such as a Sony Playstation Portable® entertainment device; a video camera such as an EyeToy® video camera 1412; a microphone headset 1414; and a microphone 1415. Such peripheral devices may therefore in principle be connected to the platform unit 1400 wirelessly; for example the portable entertainment device 1410 may communicate via a Wi-Fi ad-hoc connection, while the microphone headset 1414 may communicate via a Bluetooth link.
  • The provision of these interfaces means that the Playstation 3 device is also potentially compatible with other peripheral devices such as digital video recorders (DVRs), set-top boxes, digital cameras, portable media players, Voice over IP telephones, mobile telephones, printers and scanners.
  • In addition, a legacy memory card reader 1416 may be connected to the system unit via a USB port 1424, enabling the reading of memory cards 1448 of the kind used by the Playstation® or Playstation 2® devices.
  • The game controller 1402 is operable to communicate wirelessly with the platform unit 1400 via the Bluetooth link, or to be connected to a USB port, thereby also providing power by which to charge the battery of the game controller 1402. Game controller 1402 can also include memory, a processor, a memory card reader, permanent memory such as flash memory, light emitters such as LEDs or infrared lights, microphone and speaker for ultrasound communications, an acoustic chamber, a digital camera, an internal clock, a recognizable shape such as a spherical section facing the game console, and wireless communications using protocols such as Bluetooth®, WiFi™, etc.
  • Game controller 1402 is a controller designed to be used with two hands. In addition to one or more analog joysticks and conventional control buttons, the game controller is susceptible to three-dimensional location determination. Consequently gestures and movements by the user of the game controller may be translated as inputs to a game in addition to or instead of conventional button or joystick commands. Optionally, other wirelessly enabled peripheral devices such as the Playstation™ Portable device may be used as a controller. In the case of the Playstation™ Portable device, additional game or control information (for example, control instructions or number of lives) may be provided on the screen of the device. Other alternative or supplementary control devices may also be used, such as a dance mat (not shown), a light gun (not shown), a steering wheel and pedals (not shown) or the like.
  • The remote control 1404 is also operable to communicate wirelessly with the platform unit 1400 via a Bluetooth link. The remote control 1404 comprises controls suitable for the operation of the Blu Ray™ Disk BD-ROM reader 1440 and for the navigation of disk content.
  • The Blu Ray™ Disk BD-ROM reader 1440 is operable to read CD-ROMs compatible with the Playstation and PlayStation 2 devices, in addition to conventional pre-recorded and recordable CDs, and so-called Super Audio CDs. The reader 1440 is also operable to read DVD-ROMs compatible with the Playstation 2 and PlayStation 3 devices, in addition to conventional pre-recorded and recordable DVDs. The reader 1440 is further operable to read BD-ROMs compatible with the Playstation 3 device, as well as conventional pre-recorded and recordable Blu-Ray Disks.
  • The platform unit 1400 is operable to supply audio and video, either generated or decoded by the Playstation 3 device via the Reality Synthesizer graphics unit 1430, through audio and video connectors to a display and sound output device such as a monitor or television set having a display and one or more loudspeakers. The audio connectors 1450 may include conventional analogue and digital outputs while the video connectors 1452 may variously include component video, S-video, composite video and one or more High Definition Multimedia Interface (HDMI) outputs. Consequently, video output may be in formats such as PAL or NTSC, or in 720p, 1080i or 1080p high definition.
  • Audio processing (generation, decoding and so on) is performed by the Cell processor 1428. The Playstation 3 device's operating system supports Dolby® 5.1 surround sound, Dolby® Theatre Surround (DTS), and the decoding of 7.1 surround sound from Blu-Ray® disks.
• In the present embodiment, the video camera 1412 comprises a single charge coupled device (CCD), an LED indicator, and hardware-based real-time data compression and encoding apparatus so that compressed video data may be transmitted in an appropriate format such as an intra-image based MPEG (Moving Picture Experts Group) standard for decoding by the platform unit 1400. The camera LED indicator is arranged to illuminate in response to appropriate control data from the platform unit 1400, for example to signify adverse lighting conditions. Embodiments of the video camera 1412 may variously connect to the platform unit 1400 via a USB, Bluetooth or Wi-Fi communication port. Embodiments of the video camera may include one or more associated microphones and may also be capable of transmitting audio data. In embodiments of the video camera, the CCD may have a resolution suitable for high-definition video capture. In use, images captured by the video camera may for example be incorporated within a game or interpreted as game control inputs. In another embodiment the camera is an infrared camera suitable for detecting infrared light.
  • In general, for successful data communication to occur with a peripheral device such as a video camera or remote control via one of the communication ports of the platform unit 1400, an appropriate piece of software such as a device driver should be provided. Device driver technology is well-known and will not be described in detail here, except to say that those skilled in the art are aware that a device driver or similar software interface may be required in the present embodiment described.
  • FIG. 7 illustrates additional hardware that may be used to process instructions, in accordance with one embodiment of the present invention. Cell processor 1428 of FIG. 6 as further illustrated in FIG. 7 has an architecture comprising four basic components: external input and output structures comprising a memory controller 1560 and a dual bus interface controller 1570A, B; a main processor referred to as the Power Processing Element 1550; eight co-processors referred to as Synergistic Processing Elements (SPEs) 1510A-H; and a circular data bus connecting the above components referred to as the Element Interconnect Bus 1580. The total floating point performance of the Cell processor is 218 GFLOPS, compared with the 6.2 GFLOPs of the Playstation 2 device's Emotion Engine.
• The Power Processing Element (PPE) 1550 is based upon a two-way simultaneous multithreading Power 1470 compliant PowerPC core (PPU) 1555 running with an internal clock of 3.2 GHz. It comprises a 512 kB level 2 (L2) cache 1552 and a 32 kB level 1 (L1) cache 1551. The PPE 1550 is capable of eight single precision operations per clock cycle, translating to 25.6 GFLOPs at 3.2 GHz. The primary role of the PPE 1550 is to act as a controller for the Synergistic Processing Elements 1510A-H, which handle most of the computational workload. In operation the PPE 1550 maintains a job queue, scheduling jobs for the Synergistic Processing Elements 1510A-H and monitoring their progress. Consequently each Synergistic Processing Element 1510A-H runs a kernel whose role is to fetch a job, execute it, and synchronize with the PPE 1550.
• Each Synergistic Processing Element (SPE) 1510A-H comprises a respective Synergistic Processing Unit (SPU) 1520A-H, and a respective Memory Flow Controller (MFC) 1540A-H comprising in turn a respective Direct Memory Access Controller (DMAC) 1542A-H, a respective Memory Management Unit (MMU) 1544A-H and a bus interface (not shown). Each SPU 1520A-H is a RISC processor clocked at 3.2 GHz and comprising 256 kB local RAM 1530A-H, expandable in principle to 4 GB. Each SPE gives a theoretical 25.6 GFLOPS of single precision performance. An SPU can operate on 4 single precision floating point numbers, 4 32-bit numbers, 8 16-bit integers, or 16 8-bit integers in a single clock cycle. In the same clock cycle it can also perform a memory operation. The SPU 1520A-H does not directly access the system memory XDRAM 1426; the 64-bit addresses formed by the SPU 1520A-H are passed to the MFC 1540A-H which instructs its DMA controller 1542A-H to access memory via the Element Interconnect Bus 1580 and the memory controller 1560.
  • The Element Interconnect Bus (EIB) 1580 is a logically circular communication bus internal to the Cell processor 1428 which connects the above processor elements, namely the PPE 1550, the memory controller 1560, the dual bus interface 1570A, B and the 8 SPEs 1510A-H, totaling 12 participants. Participants can simultaneously read and write to the bus at a rate of 8 bytes per clock cycle. As noted previously, each SPE 1510A-H comprises a DMAC 1542A-H for scheduling longer read or write sequences. The EIB comprises four channels, two each in clockwise and anti-clockwise directions. Consequently for twelve participants, the longest step-wise data-flow between any two participants is six steps in the appropriate direction. The theoretical peak instantaneous EIB bandwidth for 12 slots is therefore 96 B per clock, in the event of full utilization through arbitration between participants. This equates to a theoretical peak bandwidth of 307.2 GB/s (gigabytes per second) at a clock rate of 3.2 GHz.
  • The memory controller 1560 comprises an XDRAM interface 1562, developed by Rambus Incorporated. The memory controller interfaces with the Rambus XDRAM 1426 with a theoretical peak bandwidth of 25.6 GB/s.
  • The dual bus interface 1570A,B comprises a Rambus FlexIO® system interface 1572A, B. The interface is organized into 12 channels each being 8 bits wide, with five paths being inbound and seven outbound.
  • It is to be understood that the above description is intended to be illustrative, and not restrictive. For example, while the flow diagrams in the figures show a particular order of operations performed by certain embodiments of the invention, it should be understood that such order is not required (e.g., alternative embodiments may perform the operations in a different order, combine certain operations, overlap certain operations, etc.). Furthermore, many other embodiments will be apparent to those of skill in the art upon reading and understanding the above description. Although the present invention has been described with reference to specific exemplary embodiments, it will be recognized that the invention is not limited to the embodiments described, but can be practiced with modification and alteration within the spirit and scope of the appended claims. The scope of the invention should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.

Claims (20)

1. A graphics rendering system, comprising:
a client coupled to a communication link, the client including a client display device and a client computing platform to execute a client application, the client computing platform to receive via the communication link periodic view state information from a server computing platform, the view state information including a server camera position corresponding to an origin of a server view space generated by the server computing platform, and the client computing platform to generate a client view space for display on the client display device based on the received view state information, the generated client view space also having an origin at the server camera position.
2. The system of claim 1, wherein the server includes a server display device and wherein the server computing platform is to execute a server application to generate the server view space for display on the server display device.
3. The system of claim 1, wherein the client computing platform is to generate the client view space with an orientation offset from the server view space.
4. The system of claim 2, wherein the client display device has a physical orientation relative to the server display device, and wherein the orientation offset between the server view space and the client view space is determined based on the client display device's relative physical orientation.
5. The system of claim 4, wherein the client computing platform locally stores data pertaining to the orientation offset.
6. The system of claim 2, wherein the server and client applications are a same game or CAD application.
7. The system of claim 1, wherein the communication link is a local area network, wherein the server computing platform broadcasts the periodic server view state information via the communication link using the User Datagram Protocol (UDP), and wherein the server and client computing platforms are game consoles.
8. The system of claim 1, wherein only the server of the server and client computing platforms is coupled to a controller, which controls both the server view space and the client view space.
9. A method of rendering a graphical display, comprising:
receiving view state information over a local area network (LAN);
generating, with a client application executing on a client computing platform, a client view space based on the received view state information, the client view space having an origin at a server camera position provided in the view state information; and
displaying, on a client display device, graphics rendered based on the generated client view space.
10. The method of claim 9, further comprising:
generating, with a server application executing on a server computing platform, a server view space having an origin at the server camera position;
displaying, on a server display device coupled to the server computing platform, graphics rendered based on the server view space; and
transmitting view state information periodically over the LAN, the view state information including a server camera view matrix.
11. The method of claim 10, wherein the client view space is generated with an orientation offset from the server view space.
12. The method of claim 11, wherein the client display device has a physical orientation relative to the server display device, and wherein the orientation offset between the server view space and the client view space is determined based on the client display device's relative physical orientation.
13. The method of claim 12, further comprising:
executing a calibration routine on both the server computing platform and the client computing platform to determine the orientation offset based on the relative physical orientation.
14. The method of claim 10, wherein the server and client applications are the same game or CAD application.
15. The method of claim 10, wherein periodically transmitting view state information over a local area network (LAN) further comprises broadcasting the view state information to all clients connected to the LAN.
16. The method of claim 9, wherein the view state information further includes object position information and animation information, and wherein generating a client view space based on the view state information further comprises generating an object in the client view space based on the object position information and animating an object in the client view space based on the animation information.
17. A computer-readable medium having stored thereon a set of instructions which when executed cause a processing system to perform the method comprising:
receiving view state information over a local area network (LAN);
generating a client view space based on the received view state information, the client view space having an origin at a server camera position provided in the received view state information; and
displaying on a client display device graphics rendered based on the generated client view space.
18. The computer-readable medium of claim 17, wherein the client view space is generated with an orientation offset from a server view space generated by a server which transmitted the received view state information.
19. The computer-readable medium of claim 18, wherein the client display device has a physical orientation relative to a server display device, and wherein the orientation offset between the server view space and the client view space is determined based on the client display device's relative physical orientation.
20. The computer-readable medium of claim 19, which when executed cause a processing system to perform the method further comprising:
executing a calibration routine to determine the orientation offset based on the client display device's relative physical orientation.
US12/492,883 2009-06-26 2009-06-26 Networked Computer Graphics Rendering System with Multiple Displays Abandoned US20100328354A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US12/492,883 US20100328354A1 (en) 2009-06-26 2009-06-26 Networked Computer Graphics Rendering System with Multiple Displays
PCT/US2010/039371 WO2010151511A1 (en) 2009-06-26 2010-06-21 Networked computer graphics rendering system with multiple displays
JP2012517631A JP2012531668A (en) 2009-06-26 2010-06-21 Networked computer graphics rendering system having multiple display devices
EP10792556A EP2446431A1 (en) 2009-06-26 2010-06-21 Networked computer graphics rendering system with multiple displays

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/492,883 US20100328354A1 (en) 2009-06-26 2009-06-26 Networked Computer Graphics Rendering System with Multiple Displays

Publications (1)

Publication Number Publication Date
US20100328354A1 true US20100328354A1 (en) 2010-12-30

Family

ID=43380216

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/492,883 Abandoned US20100328354A1 (en) 2009-06-26 2009-06-26 Networked Computer Graphics Rendering System with Multiple Displays

Country Status (1)

Country Link
US (1) US20100328354A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120007875A1 (en) * 2010-07-12 2012-01-12 International Business Machines Corporation Multiple Monitor Video Control
US20130141308A1 (en) * 2011-12-01 2013-06-06 Institute For Information Industry Electronic device and multi-panel interface displaying method
CN104157245A (en) * 2014-08-26 2014-11-19 广东威创视讯科技股份有限公司 Wireless LED spliced screen and control method and system
CN105139770A (en) * 2015-09-29 2015-12-09 深圳市汉丰光电有限公司 Rapidly spliced LED display system and display method thereof
US20180268614A1 (en) * 2017-03-16 2018-09-20 General Electric Company Systems and methods for aligning pmi object on a model
US20180322683A1 (en) * 2017-05-05 2018-11-08 Nvidia Corporation Method and apparatus for rendering perspective-correct images for a tilted multi-display environment
WO2019129682A1 (en) * 2017-12-28 2019-07-04 Arcelik Anonim Sirketi A method and a system for displaying virtual reality images on a screen

Citations (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4787051A (en) * 1986-05-16 1988-11-22 Tektronix, Inc. Inertial mouse system
US4853568A (en) * 1987-02-09 1989-08-01 Mabushi Motor Co., Ltd Miniature motor
US5128671A (en) * 1990-04-12 1992-07-07 Ltv Aerospace And Defense Company Control device having multiple degrees of freedom
US5275565A (en) * 1991-05-23 1994-01-04 Atari Games Corporation Modular display simulator and method
US5528265A (en) * 1994-07-18 1996-06-18 Harrison; Simon J. Orientation-operated cursor control device
US5805117A (en) * 1994-05-12 1998-09-08 Samsung Electronics Co., Ltd. Large area tiled modular display system
US5923307A (en) * 1997-01-27 1999-07-13 Microsoft Corporation Logical monitor configuration in a multiple monitor environment
US5926153A (en) * 1995-01-30 1999-07-20 Hitachi, Ltd. Multi-display apparatus
US6157368A (en) * 1994-09-28 2000-12-05 Faeger; Jan G. Control equipment with a movable control member
US6375572B1 (en) * 1999-10-04 2002-04-23 Nintendo Co., Ltd. Portable game apparatus with acceleration sensor and information storage medium storing a game progam
US20020154145A1 (en) * 2001-02-27 2002-10-24 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Arrangement and method for spatial visualization
US20020167459A1 (en) * 2001-05-11 2002-11-14 Xerox Corporation Methods of using mixed resolution displays
US20040085256A1 (en) * 2002-10-30 2004-05-06 The University Of Chicago Methods and measurement engine for aligning multi-projector display systems
US20040125044A1 (en) * 2002-09-05 2004-07-01 Akira Suzuki Display system, display control apparatus, display apparatus, display method and user interface device
US6804406B1 (en) * 2000-08-30 2004-10-12 Honeywell International Inc. Electronic calibration for seamless tiled display using optical function generator
US20040212589A1 (en) * 2003-04-24 2004-10-28 Hall Deirdre M. System and method for fusing and displaying multiple degree of freedom positional input data from multiple input sources
US20050168399A1 (en) * 2003-12-19 2005-08-04 Palmquist Robert D. Display of visual data as a function of position of display device
US20050206857A1 (en) * 2004-03-22 2005-09-22 Seiko Epson Corporation Image correction method for multi-projection system
US20060001593A1 (en) * 2004-07-02 2006-01-05 Microsoft Corporation System and method for determining display differences between monitors on multi-monitor computer systems
US20060033712A1 (en) * 2004-08-13 2006-02-16 Microsoft Corporation Displaying visually correct pointer movements on a multi-monitor display system
US20060268013A1 (en) * 2005-05-24 2006-11-30 Miles Mark W Immersive environments with multiple points of view
US20070263079A1 (en) * 2006-04-20 2007-11-15 Graham Philip R System and method for providing location specific sound in a telepresence system
US20070285342A1 (en) * 2006-03-27 2007-12-13 National Inst Of Adv Industrial Science And Tech. Image display apparatus
US20080117290A1 (en) * 2006-10-18 2008-05-22 Mgc Works, Inc. Apparatus, system and method for generating stereoscopic images and correcting for vertical parallax
US7532196B2 (en) * 2003-10-30 2009-05-12 Microsoft Corporation Distributed sensing techniques for mobile devices
US20090327418A1 (en) * 2008-06-27 2009-12-31 Microsoft Corporation Participant positioning in multimedia conferencing
US20100053164A1 (en) * 2008-09-02 2010-03-04 Samsung Electronics Co., Ltd Spatially correlated rendering of three-dimensional content on display components having arbitrary positions
US7942530B2 (en) * 2006-10-31 2011-05-17 The Regents Of The University Of California Apparatus and method for self-calibrating multi-projector displays via plug and play projectors
US8159416B1 (en) * 2007-08-06 2012-04-17 Rockwell Collins, Inc. Synthetic vision dynamic field of view

Patent Citations (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4787051A (en) * 1986-05-16 1988-11-22 Tektronix, Inc. Inertial mouse system
US4853568A (en) * 1987-02-09 1989-08-01 Mabuchi Motor Co., Ltd. Miniature motor
US5128671A (en) * 1990-04-12 1992-07-07 Ltv Aerospace And Defense Company Control device having multiple degrees of freedom
US5275565A (en) * 1991-05-23 1994-01-04 Atari Games Corporation Modular display simulator and method
US5805117A (en) * 1994-05-12 1998-09-08 Samsung Electronics Co., Ltd. Large area tiled modular display system
US5528265A (en) * 1994-07-18 1996-06-18 Harrison; Simon J. Orientation-operated cursor control device
US6157368A (en) * 1994-09-28 2000-12-05 Faeger; Jan G. Control equipment with a movable control member
US5926153A (en) * 1995-01-30 1999-07-20 Hitachi, Ltd. Multi-display apparatus
US5923307A (en) * 1997-01-27 1999-07-13 Microsoft Corporation Logical monitor configuration in a multiple monitor environment
US6375572B1 (en) * 1999-10-04 2002-04-23 Nintendo Co., Ltd. Portable game apparatus with acceleration sensor and information storage medium storing a game program
US6804406B1 (en) * 2000-08-30 2004-10-12 Honeywell International Inc. Electronic calibration for seamless tiled display using optical function generator
US20020154145A1 (en) * 2001-02-27 2002-10-24 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Arrangement and method for spatial visualization
US20020167459A1 (en) * 2001-05-11 2002-11-14 Xerox Corporation Methods of using mixed resolution displays
US20040125044A1 (en) * 2002-09-05 2004-07-01 Akira Suzuki Display system, display control apparatus, display apparatus, display method and user interface device
US20040085256A1 (en) * 2002-10-30 2004-05-06 The University Of Chicago Methods and measurement engine for aligning multi-projector display systems
US20040212589A1 (en) * 2003-04-24 2004-10-28 Hall Deirdre M. System and method for fusing and displaying multiple degree of freedom positional input data from multiple input sources
US7532196B2 (en) * 2003-10-30 2009-05-12 Microsoft Corporation Distributed sensing techniques for mobile devices
US20050168399A1 (en) * 2003-12-19 2005-08-04 Palmquist Robert D. Display of visual data as a function of position of display device
US20050206857A1 (en) * 2004-03-22 2005-09-22 Seiko Epson Corporation Image correction method for multi-projection system
US20060001593A1 (en) * 2004-07-02 2006-01-05 Microsoft Corporation System and method for determining display differences between monitors on multi-monitor computer systems
US20060033712A1 (en) * 2004-08-13 2006-02-16 Microsoft Corporation Displaying visually correct pointer movements on a multi-monitor display system
US20060268013A1 (en) * 2005-05-24 2006-11-30 Miles Mark W Immersive environments with multiple points of view
US20070285342A1 (en) * 2006-03-27 2007-12-13 National Institute of Advanced Industrial Science and Technology Image display apparatus
US20070263079A1 (en) * 2006-04-20 2007-11-15 Graham Philip R System and method for providing location specific sound in a telepresence system
US20080117290A1 (en) * 2006-10-18 2008-05-22 Mgc Works, Inc. Apparatus, system and method for generating stereoscopic images and correcting for vertical parallax
US7942530B2 (en) * 2006-10-31 2011-05-17 The Regents Of The University Of California Apparatus and method for self-calibrating multi-projector displays via plug and play projectors
US8159416B1 (en) * 2007-08-06 2012-04-17 Rockwell Collins, Inc. Synthetic vision dynamic field of view
US20090327418A1 (en) * 2008-06-27 2009-12-31 Microsoft Corporation Participant positioning in multimedia conferencing
US20100053164A1 (en) * 2008-09-02 2010-03-04 Samsung Electronics Co., Ltd. Spatially correlated rendering of three-dimensional content on display components having arbitrary positions

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120007875A1 (en) * 2010-07-12 2012-01-12 International Business Machines Corporation Multiple Monitor Video Control
US20130141308A1 (en) * 2011-12-01 2013-06-06 Institute For Information Industry Electronic device and multi-panel interface displaying method
CN104157245A (en) * 2014-08-26 2014-11-19 广东威创视讯科技股份有限公司 Wireless LED spliced screen and control method and system
CN105139770A (en) * 2015-09-29 2015-12-09 深圳市汉丰光电有限公司 Rapidly spliced LED display system and display method thereof
US20180268614A1 (en) * 2017-03-16 2018-09-20 General Electric Company Systems and methods for aligning PMI object on a model
US20180322683A1 (en) * 2017-05-05 2018-11-08 Nvidia Corporation Method and apparatus for rendering perspective-correct images for a tilted multi-display environment
US20180322682A1 (en) * 2017-05-05 2018-11-08 Nvidia Corporation Method and apparatus for rendering perspective-correct images for a tilted multi-display environment
US10503457B2 (en) * 2017-05-05 2019-12-10 Nvidia Corporation Method and apparatus for rendering perspective-correct images for a tilted multi-display environment
US10503456B2 (en) * 2017-05-05 2019-12-10 Nvidia Corporation Method and apparatus for rendering perspective-correct images for a tilted multi-display environment
WO2019129682A1 (en) * 2017-12-28 2019-07-04 Arcelik Anonim Sirketi A method and a system for displaying virtual reality images on a screen

Similar Documents

Publication Title
US11413525B2 (en) Device for interfacing with a computing program using a projected pattern
US8269691B2 (en) Networked computer graphics rendering system with multiple displays for displaying multiple viewing frustums
US10076703B2 (en) Systems and methods for determining functionality of a display device based on position, orientation or motion
US9310882B2 (en) Book object for augmented reality
US8882597B2 (en) Hybrid separable motion controller
US8953029B2 (en) Portable device interaction via motion sensitive controller
US8888593B2 (en) Directional input for a video game
US8142288B2 (en) Base station movement detection and compensation
US8393964B2 (en) Base station for position location
US20100328447A1 (en) Configuration of display and audio parameters for computer graphics rendering system having multiple displays
US20100328354A1 (en) Networked Computer Graphics Rendering System with Multiple Displays
US8311384B2 (en) Image processing method, apparatus and system
JP7411794B2 (en) Systems and methods for adjusting one or more parameters of a GPU
US20090247249A1 (en) Data processing
US8360856B2 (en) Entertainment apparatus and method
JP2023507817A (en) Method and system for maintaining a smooth frame rate while transmitting streaming video content
WO2010151511A1 (en) Networked computer graphics rendering system with multiple displays

Legal Events

Date Code Title Description
AS Assignment

Owner name: SONY COMPUTER ENTERTAINMENT INC., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:WATSON, BRIAN;REEL/FRAME:023038/0522

Effective date: 20090623

AS Assignment

Owner name: SONY NETWORK ENTERTAINMENT PLATFORM INC., JAPAN

Free format text: CHANGE OF NAME;ASSIGNOR:SONY COMPUTER ENTERTAINMENT INC.;REEL/FRAME:027446/0001

Effective date: 20100401

AS Assignment

Owner name: SONY COMPUTER ENTERTAINMENT INC., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SONY NETWORK ENTERTAINMENT PLATFORM INC.;REEL/FRAME:027557/0001

Effective date: 20100401

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: SONY INTERACTIVE ENTERTAINMENT INC., JAPAN

Free format text: CHANGE OF NAME;ASSIGNOR:SONY COMPUTER ENTERTAINMENT INC.;REEL/FRAME:039239/0343

Effective date: 20160401