WO2002067529A2 - System and method for accessing a storage area network as network attached storage - Google Patents

System and method for accessing a storage area network as network attached storage

Info

Publication number
WO2002067529A2
Authority
WO
WIPO (PCT)
Prior art keywords
server
nas
san
storage
data communication
Prior art date
Application number
PCT/US2001/005385
Other languages
French (fr)
Other versions
WO2002067529A3 (en)
Inventor
Michael Padovano
Original Assignee
Storageapps Inc.
Priority date
Filing date
Publication date
Application filed by Storageapps Inc. filed Critical Storageapps Inc.
Priority to AU2001241588A priority Critical patent/AU2001241588A1/en
Priority to EP01912847A priority patent/EP1382176A2/en
Publication of WO2002067529A2 publication Critical patent/WO2002067529A2/en
Publication of WO2002067529A3 publication Critical patent/WO2002067529A3/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2002Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where interconnections or communication control functionality are redundant
    • G06F11/2007Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where interconnections or communication control functionality are redundant using redundant communication media
    • G06F11/201Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where interconnections or communication control functionality are redundant using redundant communication media between storage system components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/202Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
    • G06F11/2023Failover techniques
    • G06F11/2033Failover techniques switching over of hardware resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/202Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
    • G06F11/2038Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant with a single idle spare processing component
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1001Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1097Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/30Definitions, standards or architectural aspects of layered protocol stacks
    • H04L69/32Architecture of open systems interconnection [OSI] 7-layer type protocol stacks, e.g. the interfaces between the data link level and the physical level
    • H04L69/322Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions
    • H04L69/329Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions in the application layer [OSI layer 7]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2002Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where interconnections or communication control functionality are redundant
    • G06F11/2007Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where interconnections or communication control functionality are redundant using redundant communication media
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/202Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
    • G06F11/2046Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant where the redundant components share persistent storage
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2053Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F11/2089Redundant storage control functionality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1001Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1036Load balancing of requests to servers for services different from user content provisioning, e.g. load balancing across domain name servers

Definitions

  • the invention relates generally to the field of storage area networks, and more particularly to providing access to a storage area network as network attached storage.
  • Network attached storage is a term used to refer to storage elements or devices that connect to a network and provide file access services to computer systems.
  • NAS devices attach directly to networks, such as local area networks, using traditional protocols such as Ethernet and TCP/IP, and serve files to any host or client connected to the network.
  • a NAS device typically consists of an engine, which implements the file access services, and one or more storage devices, on which data is stored.
  • a computer host system that accesses NAS devices uses a file system device driver to access the stored data.
  • the file system device driver typically uses file access protocols such as Network File System (NFS) or Common Internet File System (CIFS).
  • NAS devices interpret these commands and perform the internal file and device input/output (I/O) operations necessary to execute them.
  • Because NAS devices independently attach to the network, the management of these devices generally occurs on a device-by-device basis. For instance, each NAS device must be individually configured to attach to the network. Furthermore, the copying of a NAS device for purposes of creating a back-up must be configured individually.
  • Storage area networks are dedicated networks that connect one or more hosts or servers to storage devices and subsystems.
  • SANs may utilize a storage appliance to provide for management of the SAN.
  • a storage appliance may be used to create and manage back-up copies of the data stored in the storage devices of the SAN by creating point-in-time copies of the data, or by actively mirroring the data. It would be desirable to provide these and other SAN-type storage management functions for storage devices attached to a network, such as a local area network.
  • the present invention is directed to a system and method for interfacing a storage area network (SAN) with a first data communication network.
  • One or more hosts coupled to the first data communication network can access data stored in one or more of a plurality of storage devices in the SAN.
  • the one or more hosts access one or more of the plurality of storage devices as network attached storage (NAS).
  • a SAN server is coupled to a SAN.
  • a NAS server is coupled to the SAN server through a second data communication network.
  • the NAS server is coupled to the first data communication network.
  • a portion of at least one of the plurality of storage devices is allocated from the SAN server to the NAS server. The allocated portion is configured as NAS storage in the NAS server.
  • the configured portion is exported from the NAS server to be accessible to the one or more hosts coupled to the first data communication network.
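  • By way of illustration only, the allocate/configure/export sequence described above can be sketched in a few lines of Python. The class and method names below (SanServer, NasServer, allocate_to_nas, and so on) are hypothetical and are not taken from the patent.

```python
# Illustrative sketch of the allocate -> configure -> export sequence described
# above. All class and method names are hypothetical.

class SanServer:
    def __init__(self, storage_devices):
        self.storage_devices = storage_devices        # e.g. {"disk_array_1": 500} (GB free)

    def allocate_to_nas(self, device, gigabytes):
        """Allocate a portion of one SAN storage device to the NAS server."""
        if self.storage_devices.get(device, 0) < gigabytes:
            raise ValueError("not enough free space on " + device)
        self.storage_devices[device] -= gigabytes
        return {"device": device, "size_gb": gigabytes}   # the allocated portion


class NasServer:
    def __init__(self):
        self.exports = {}                                  # share name -> allocated portion

    def configure_as_nas(self, portion, share_name):
        """Configure the allocated portion as NAS storage (e.g. build a file system)."""
        self.exports[share_name] = portion

    def export(self, share_name, hosts):
        """Make the configured share accessible to hosts on the first network."""
        return {host: share_name for host in hosts}


san = SanServer({"disk_array_1": 500})
nas = NasServer()
portion = san.allocate_to_nas("disk_array_1", 100)    # step 1: allocate
nas.configure_as_nas(portion, "/shares/projects")     # step 2: configure as NAS storage
print(nas.export("/shares/projects", ["host_202", "host_204", "host_206"]))  # step 3: export
```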
  • a storage management directive is received from a graphical user interface.
  • a message corresponding to the received storage management directive is sent to a NAS server.
  • a response corresponding to the sent message is received from the NAS server.
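  • The directive/message/response exchange described above is sketched below, assuming a simple dictionary-based message format; the message fields and the FakeNasServer stand-in are illustrative only.

```python
# Hypothetical sketch of the directive/message/response exchange described above.
# Message names and fields are illustrative only.

def handle_gui_directive(directive, nas_server):
    """Receive a storage management directive from the GUI, forward a
    corresponding message to the NAS server, and return its response."""
    message = {"type": directive["action"], "args": directive.get("args", {})}
    response = nas_server.process(message)
    return response

class FakeNasServer:
    def process(self, message):
        # A real NAS server would act on the message (allocate, deallocate, ...);
        # here we simply acknowledge it.
        return {"status": "ok", "echo": message["type"]}

print(handle_gui_directive({"action": "create_share", "args": {"name": "/shares/a"}},
                           FakeNasServer()))
```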
  • a SAN server includes a first interface and a second interface.
  • the first interface is configured to be coupled to the SAN.
  • the second interface is coupled to a first data communication network.
  • a NAS server includes a third interface and a fourth interface.
  • the third interface is configured to be coupled to a second data communication network.
  • the fourth interface is coupled to the first data communication network.
  • the SAN server allocates a first portion of the plurality of storage devices in the SAN to be accessible through the second interface to at least one first host coupled to the first data communication network.
  • the SAN server allocates a second portion of the plurality of storage devices in the SAN to the NAS server.
  • the NAS server configures access to the second portion of the plurality of storage devices to at least one second host coupled to the second data communication network.
  • a storage appliance for accessing a plurality of storage devices in a storage area network (SAN) as network attached storage (NAS) in a data communication network.
  • a first SAN server is configured to be coupled to the plurality of storage devices in the SAN via a first data communication network.
  • the first SAN server is configured to be coupled to a second data communication network.
  • a second SAN server is configured to be coupled to the plurality of storage devices in the
  • the second SAN server is configured to be coupled to a fourth data communication network.
  • a first NAS server is configured to be coupled to a fifth data communication network.
  • the first NAS server is coupled to the second and the fourth data communication networks.
  • a second NAS server is configured to be coupled to the fifth data communication network.
  • the second NAS server is coupled to the second and the fourth data communication networks.
  • the first SAN server allocates a first portion of the plurality of storage devices in the SAN to be accessible to at least one first host coupled to the second data communication network.
  • the first SAN server allocates a second portion of the plurality of storage devices in the SAN to the first NAS server.
  • the first NAS server configures access to the second portion of the plurality of storage devices to at least one second host coupled to the fifth data communication network.
  • the second NAS server assumes the configuring of access to the second portion of the plurality of storage devices by the first NAS server during failure of the first NAS server.
  • the second SAN server assumes allocation of the second portion of the plurality of storage devices by the first SAN server during failure of the first SAN server.
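  • A minimal sketch of the failover behavior described above follows; the health flag, class names, and share names are hypothetical, and a real implementation would detect failure (for example through heartbeats) rather than through a manually set flag.

```python
# Illustrative failover sketch, assuming a simple "alive" health flag.
# Class and method names are hypothetical.

class NasNode:
    def __init__(self, name, exports):
        self.name = name
        self.exports = set(exports)   # shares this node currently serves
        self.alive = True

def fail_over(primary, peer):
    """If the primary NAS server has failed, its peer assumes the exports."""
    if not primary.alive:
        peer.exports |= primary.exports
        primary.exports.clear()

nas_a = NasNode("nas_304a", {"/shares/projects"})
nas_b = NasNode("nas_304b", set())
nas_a.alive = False            # simulate failure of the primary NAS server
fail_over(nas_a, nas_b)
print(nas_b.exports)           # the peer now serves /shares/projects
```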
  • the present invention provides many advantages. These include:
  • a central administrative graphical user interface (GUI) may be used to control NAS functions; NAS functionality may be added as a new window in an existing administrative GUI, for example.
  • a single appliance is presented that contains data management function and NAS capabilities.
  • the NAS implementation is capable of providing data to a large number of clients. It is capable of providing data to UNIX and Windows hosts.
  • FIG. 1 illustrates a storage appliance coupling computer hosts to storage devices in a storage area network, using a communication protocol such as fibre channel or SCSI, according to an example environment.
  • FIG. 2 illustrates a storage appliance coupling computer hosts to storage devices in a storage area network, as shown in FIG. 1, and further coupling computer hosts in a local area network to the storage area network, according to an exemplary embodiment of the present invention.
  • FIG. 3A illustrates a block diagram of a storage appliance.
  • FIG. 3B illustrates a block diagram of a storage appliance with network attached storage server, according to an exemplary embodiment of the present invention.
  • FIG. 4 illustrates a block diagram of a storage area network (SAN) server, according to an exemplary embodiment of the present invention.
  • FIG. 5 illustrates a block diagram of a network attached storage (NAS) server, according to an exemplary embodiment of the present invention.
  • FIG. 6 illustrates a storage appliance coupling hosts in two network types to storage devices in a storage area network, with redundant connections, according to an exemplary embodiment of the present invention.
  • FIG. 7 illustrates a block diagram of a storage appliance with redundant SAN and NAS servers, according to an exemplary embodiment of the present invention.
  • FIG. 8 illustrates an example data communication network, according to an embodiment of the present invention.
  • FIG. 9 shows a simplified five-layered communication model, based on an Open System Interconnection (OSI) reference model.
  • FIG. 10 shows an example of a computer system for implementing aspects of the present invention.
  • FIG. 11 illustrates the connection of SAN and NAS servers to zoned switches, according to an exemplary embodiment of the present invention.
  • FIG. 12 illustrates an example graphical user interface, according to an exemplary embodiment of the present invention.
  • FIGS. 13A-B show a flowchart providing operational steps of an example embodiment of the present invention.
  • FIGS. 14A-B show a flowchart providing operational steps of an example embodiment of the present invention.
  • FIG. 15 illustrates a block diagram of a NAS server, according to an exemplary embodiment of the present invention.
  • FIGS. 16-29 show flowcharts providing operational steps of exemplary embodiments of the present invention.
  • the present invention is directed toward providing full storage area network (SAN) functionality for Network Attached Storage (NAS) that is attached to, and operating in, a network.
  • the present invention utilizes a SAN.
  • the SAN may be providing storage to hosts that communicate with the SAN according to Small Computer Systems Interface (SCSI), Fibre Channel, and/or other data communication protocols on a first network.
  • a storage appliance couples the SAN to the hosts.
  • the present invention attaches the storage appliance to a second network, such that storage in the SAN may be accessed by hosts in the second network as one or more NAS devices.
  • the second network may be a local area network, wide area network, or other network type.
  • the storage appliance provides data management capabilities for an attached SAN.
  • data management capabilities may include data mirroring, point-in-time imaging (snapshot) of data, storage virtualization, and storage security.
  • a storage appliance managing a SAN may also be referred to as a SAN appliance.
  • these functions are controlled by one or more SAN servers within the SAN appliance.
  • the SAN appliance also provides NAS capabilities, such as access to file systems stored in the SAN over the second network.
  • NAS functionality is provided by one or more NAS servers within the SAN appliance.
  • the NAS servers are attached to the SAN servers.
  • the SAN servers communicate with the NAS servers using a protocol containing commands that the NAS servers understand. For instance, these commands may direct the NAS servers to allocate and deallocate storage from the SAN to and from the second network.
  • the NAS servers appear as separate hosts to the SAN servers.
  • the SAN servers allocate storage to the NAS servers.
  • the NAS servers allocate the storage to the second network.
  • storage may be allocated in the form of logical unit numbers (LUNs) to the NAS servers.
  • the NAS server LUNs are virtualized on the second network, instead of being dedicated to a single host.
  • the SAN appliance can export LUNs to the entire second network.
  • a user, such as a system administrator, can control NAS functions performed by the SAN appliance through an administrative interface, which includes a central graphical user interface (GUI).
  • GUI presents a single management console for controlling multiple NAS servers.
  • the SAN appliance allows storage in a SAN to be accessed as one or more NAS devices.
  • the SAN appliance creates local file systems, and then grants access to those file systems over the second network through standard protocols, such as Network File System (NFS) and Common Internet File System (CIFS) protocols.
  • the present invention unifies local SAN management and provides file systems over the second network.
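  • As a rough, non-authoritative sketch of how a Linux-based NAS server might turn an allocated LUN into an NFS export, the snippet below wraps standard mkfs, mount, and exportfs commands; the device path, mount point, and client list are hypothetical, and a CIFS export would instead add a share to the Samba configuration.

```python
# Rough sketch only: turning an allocated LUN into an NFS export on a
# Linux-based NAS server. Device path, mount point, and client list are
# hypothetical; a CIFS export would add a Samba share instead.
import os
import subprocess

def export_lun_as_nfs(device="/dev/sdb", mount_point="/exports/share1",
                      clients=("192.168.1.0/24",)):
    subprocess.run(["mkfs", "-t", "ext2", device], check=True)   # create a local file system
    os.makedirs(mount_point, exist_ok=True)
    subprocess.run(["mount", device, mount_point], check=True)   # mount it on the NAS server
    with open("/etc/exports", "a") as exports:                   # grant access over NFS
        for client in clients:
            exports.write(f"{mount_point} {client}(rw,sync)\n")
    subprocess.run(["exportfs", "-a"], check=True)                # publish the export table

# export_lun_as_nfs()  # requires root privileges and a spare block device
```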
  • Terminology related to the present invention is described in the following subsection. Next, an example storage area network environment is described, in which the present invention may be applied. Detailed embodiments of the SAN appliance, SAN server, and NAS server of the present invention are presented in the subsequent sections. Sections follow which describe how storage is allocated and de-allocated, and otherwise managed. These sections are followed by sections which describe configuring a NAS server, and handling failure of a NAS server, followed by a summary of NAS protocol messages. Finally, an exemplary computer system in which aspects of the present invention may be implemented is then described.

2.0 Terminology
  • Loop (Arbitrated Loop): A shared Fibre Channel transport supporting up to 126 devices and 1 fabric attachment.
  • Fabric: One or more Fibre Channel switches in a networked topology.
  • HBA: Host bus adapter; an interface between a server or workstation bus and a Fibre Channel network.
  • Hub: In Fibre Channel, a wiring concentrator that collapses a loop topology into a physical star topology.
  • Initiator: On a Fibre Channel network, typically a server or a workstation that initiates transactions to disk or tape targets.
  • JBOD: Just a bunch of disks; typically configured as an Arbitrated Loop segment in a single chassis.
  • LAN: Local area network; a network linking multiple devices in a single geographical location.
  • Logical unit: The entity within a target that executes I/O commands. SCSI I/O commands are sent to a target and executed by a logical unit within that target. A SCSI physical disk typically has a single logical unit. Tape drives and array controllers may incorporate multiple logical units to which I/O commands can be addressed. Each logical unit exported by an array controller corresponds to a virtual disk.
  • LUN: Logical Unit Number; the identifier of a logical unit within a target, such as a SCSI identifier.
  • NAS: Network Attached Storage; storage elements that connect to a network and provide file access services to computer systems. A NAS storage element typically consists of an engine, which implements the file services, and one or more devices, on which data is stored.
  • Point-to-point: A dedicated Fibre Channel connection between two devices.
  • Private loop: A free-standing Arbitrated Loop with no fabric attachment.
  • Topology: The physical or logical arrangement of devices in a networked configuration.
  • TCP/IP: Transmission Control Protocol over Internet Protocol.
  • UDP: User Datagram Protocol; a connectionless protocol that, like TCP, runs on top of IP networks. UDP/IP provides very few error recovery services, offering instead a direct way to send and receive datagrams over an IP network.
  • WAN: Wide area network; a network linking geographically remote sites.
  • a storage area network is a high-speed sub-network of shared storage devices.
  • a SAN operates to provide access to the shared storage devices for all servers on a local area network (LAN), wide area network (WAN), or other network coupled to the SAN.
  • FIG. 8 illustrates an example data communication network 800, according to an embodiment of the present invention.
  • Network 800 includes a variety of devices which support communication between many different entities, including businesses, universities, individuals, government, and financial institutions. As shown in FIG. 8, a communication network, or combination of networks, interconnects the elements of network 800. Network 800 supports many different types of communication links implemented in a variety of architectures.
  • Network 800 may be considered to include an example of a storage area network that is applicable to the present invention.
  • Network 800 comprises a pool of storage devices, including disk arrays 820, 822, 824, 828, 830, and 832.
  • Network 800 provides access to this pool of storage devices to hosts/servers comprised by or coupled to network 800.
  • Network 800 may be configured as point-to-point, arbitrated loop, or fabric topologies, or combinations thereof.
  • Network 800 comprises a switch 812. Switches, such as switch 812, typically filter and forward packets between LAN segments.
  • Switch 812 may be an Ethernet switch, fast-Ethernet switch, or another type of switching device known to persons skilled in the relevant art(s). In other examples, switch 812 may be replaced by a router or a hub.
  • a router generally moves data from one local segment to another, and to the telecommunications carrier, such as AT&T or WorldCom, for remote sites.
  • a hub is a common connection point for devices in a network. Suitable hubs include passive hubs, intelligent hubs, and switching hubs, and other hub types known to persons skilled in the relevant art(s).
  • a personal computer 802 may interface with network 800.
  • a workstation 804 may interface with network 800.
  • a printer 806 may interface with network 800.
  • Network 800 includes one or more hosts and/or servers.
  • network 800 comprises server 814 and server 816.
  • Servers 814 and 816 provide devices 802, 804, 806, 808, and 810 with network resources via switch 812.
  • Servers 814 and 816 are typically computer systems that process end-user requests for data and/or applications.
  • servers 814 and 816 provide redundant services.
  • server 814 and server 816 provide different services and thus share the processing load needed to serve the requirements of devices 802, 804, 806, 808, and 810.
  • one or both of servers 814 and 816 are connected to the Internet, and thus server 814 and/or server 816 may provide Internet access to network 800.
  • servers 814 and 816 may be Windows NT servers or UNIX servers, or other servers known to persons skilled in the relevant art(s).
  • a SAN appliance or device as described elsewhere herein may be inserted into network 800, according to embodiments of the present invention.
  • a SAN appliance 818 may be implemented to provide the required connectivity between the storage device networking (disk arrays 820, 822, 824, 828, 830, and 832) and hosts and servers 814 and 816, and to provide the additional functionality of SAN and NAS management of the present invention described elsewhere herein.
  • the SAN appliance interfaces the storage area network, or SAN, which includes disk arrays 820, 822, 824, 828, 830, and 832, hub 826, and related networking, with servers 814 and 816.
  • Network 800 includes a hub 826.
  • Hub 826 is connected to disk arrays 828, 830, and 832.
  • hub 826 is a fibre channel hub or other device used to allow access to data stored on connected storage devices, such as disk arrays 828, 830, and 832. Further fibre channel hubs may be cascaded with hub 826 to allow for expansion of the SAN, with additional storage devices, servers, and other devices.
  • hub 826 is an arbitrated loop hub.
  • disk arrays 828, 830, and 832 are organized in a ring or loop topology, which is collapsed into a physical star configuration by hub 826.
  • Hub 826 allows the loop to circumvent a disabled or disconnected device while maintaining operation.
  • Network 800 may include one or more switches in addition to switch 812 that interface with storage devices.
  • a fibre channel switch or other high-speed device may be used to allow servers 814 and 816 access to data stored on connected storage devices, such as disk arrays 820, 822, and 824, via appliance 818.
  • Fibre channel switches may be cascaded to allow for the expansion of the SAN, with additional storage devices, servers, and other devices.
  • Disk arrays 820, 822, 824, 828, 830, and 832 are storage devices providing data and application resources to servers 814 and 816 through appliance 818 and hub 826. As shown in FIG. 8, the storage of network 800 is principally accessed by servers 814 and 816 through appliance 818.
  • the storage devices may be fibre channel-ready devices, or SCSI (Small Computer Systems Interface) compatible devices, for example. Fibre channel-to-SCSI bridges may be used to allow SCSI devices to interface with fibre channel hubs and switches, and other fibre channel-ready devices.
  • disk arrays 820, 822, 824, 828, 830, and 832 may instead be alternative types of storage devices, including tape systems, JBODs (Just a Bunch Of Disks), floppy disk drives, optical disk drives, and other related storage drive types.
  • the topology or architecture of network 800 will depend on the requirements of the particular application, and on the advantages offered by the chosen topology.
  • One or more hubs 826, one or more switches, and/or one or more appliances 818 may be interconnected in any number of combinations to increase network capacity.
  • Disk arrays 820, 822, 824, 828, 830, and 832, or fewer or more disk arrays as required, may be coupled to network 800 via these hubs 826, switches, and appliances 818.
  • FIG. 9 shows a simplified five-layered communication model, based on the Open System Interconnection (OSI) reference model. As shown in FIG. 9, this model includes an application layer 908, a transport layer 910, a network layer 920, a data link layer 930, and a physical layer 940. As would be apparent to persons skilled in the relevant art(s), any number of different layers and network protocols may be used as required by a particular application.
  • Application layer 908 provides functionality for the different tools and information services which are used to access information over the communications network.
  • Example tools used to access information over a network include, but are not limited to Telnet log-in service 901, IRC chat 902, Web service 903, and SMTP (Simple Mail Transfer Protocol) electronic mail service 906.
  • Web service 903 allows access to HTTP documents 904, and FTP
  • SSL Secure Socket Layer
  • Transport layer 910 provides transmission control functionality using protocols, such as TCP, UDP, SPX, and others, that add information for acknowledgments that blocks of the file had been received.
  • Network layer 920 provides routing functionality by adding network addressing information using protocols such as IP, IPX, and others, that enable data transfer over the network.
  • Data link layer 930 provides information about the type of media on which the data was originated, such as Ethernet, token ring, or fiber distributed data interface (FDDI), and others.
  • Physical layer 940 provides encoding to place the data on the physical transport, such as twisted pair wire, copper wire, fiber optic cable, coaxial cable, and others.
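  • The layering described above can be illustrated with a small sketch in which each layer wraps the payload handed down from the layer above; the header fields shown are illustrative only.

```python
# Small sketch of five-layer encapsulation: each layer wraps the payload
# handed down from the layer above. Header contents are illustrative only.

def encapsulate(payload):
    app = {"layer": "application", "data": payload}
    transport = {"layer": "transport", "seq": 1, "data": app}                 # e.g. TCP/UDP
    network = {"layer": "network", "dst_ip": "10.0.0.2", "data": transport}   # e.g. IP
    link = {"layer": "data link", "medium": "Ethernet", "data": network}
    physical = {"layer": "physical", "encoding": "copper", "data": link}
    return physical

frame = encapsulate("GET /index.html")
print(frame["data"]["data"]["dst_ip"])   # the network-layer address inside the stack
```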
  • the present invention may be implemented in and operated from a storage appliance or SAN appliance that interfaces between the hosts and the storage subsystems comprising the SAN.
  • the present invention is completely host (operating system) independent and storage system independent.
  • the NAS functionality of the storage appliance according to the present invention does not require special host software. Furthermore, the NAS functionality according to the present invention is not tied to a specific storage vendor and operates with any type of storage, including fibre channel and SCSI.
  • the present invention may be implemented in a storage, or SAN, appliance, such as the SANLink™ appliance, developed by StorageApps Inc., located in Bridgewater, New Jersey.
  • a storage appliance-based or web-based administrative graphical interface may be used to centrally manage the SAN and NAS functionality.
  • a storage appliance, such as the SANLink™, unifies SAN management by providing resource allocation to hosts.
  • the SANLink™, for instance, also provides data management capabilities. These data management capabilities may include the following (an illustrative sketch follows the list below):
  • Storage virtualization/mapping All connected storage in the SAN is provided as a single pool of storage, which may be partitioned and shared among hosts as needed.
  • Data mirroring An exact copy of the data stored in the SAN storage devices is created and maintained in real time. The copy may be kept at a remote location.
  • Point-in-time copying An instantaneous virtual image of the existing storage may be created. The virtual replica can be viewed and manipulated in the same way as the original data.
  • Storage security Access to particular storage devices, or portions thereof, may be restricted. The storage devices or portions may be masked from view of particular hosts or users.
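  • The following toy sketch illustrates the four capabilities listed above (virtualization, mirroring, point-in-time copying, and security masking) in one small class; all names are hypothetical and the logic is deliberately simplified.

```python
# Toy sketch of the four data management capabilities listed above
# (virtualization, mirroring, point-in-time copy, security masking).
# All names are illustrative.
import copy

class StoragePool:
    def __init__(self, devices):
        self.blocks = {d: {} for d in devices}     # virtualized pool: device -> block map
        self.masked = set()                        # (host, device) pairs hidden from view
        self.mirror = None                         # optional remote copy

    def write(self, device, block, data):
        self.blocks[device][block] = data
        if self.mirror is not None:                # data mirroring: keep the copy in step
            self.mirror[device][block] = data

    def snapshot(self):
        """Point-in-time copy: an instantaneous (here, deep-copied) image."""
        return copy.deepcopy(self.blocks)

    def visible_devices(self, host):
        """Storage security: hide masked devices from particular hosts."""
        return [d for d in self.blocks if (host, d) not in self.masked]

pool = StoragePool(["array_820", "array_822"])
pool.mirror = {d: {} for d in pool.blocks}
pool.write("array_820", 0, b"payload")
pool.masked.add(("host_102", "array_822"))
print(pool.visible_devices("host_102"), pool.snapshot()["array_820"][0])
```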
  • a NAS server may be incorporated into a conventional storage appliance, according to the present invention.
  • the SAN servers and/or NAS servers may be located outside of the storage appliance.
  • One or more SAN servers and/or NAS servers may be geographically distant, and coupled to the other SAN/NAS servers via wired or wireless links.
  • FIG. 1 illustrates an example computer environment 100, which may be considered to include a storage area network (SAN).
  • storage appliance 108 couples hosts 102, 104, and 106 to storage devices 110, 112, and 114.
  • Storage devices 110, 112, and 114 are coupled to storage appliance 108 via a first data communication network 118.
  • Storage devices 110, 112, and 114 and first data communication network 118 form the storage portion of computer environment 100, and are referred to collectively as SAN 120 herein.
  • Storage appliance 108 manages SAN 120, allocating storage to hosts 102, 104, and 106.
  • Hosts 102, 104, and 106 may be any type of computer system.
  • FIG. 2 illustrates an example computer environment 200, according to an embodiment of the present invention.
  • In computer environment 200, hosts 202, 204, and 206 access storage in SAN 120 as network attached storage (NAS).
  • a storage appliance 210 couples hosts 102, 104, and 106 to storage devices 110, 112, and 114.
  • Storage devices 110, 112, and 114 are coupled to storage appliance 210 via a first data communication network 118. As described above, storage devices 110, 112, and 114 and first data communication network 118 are referred to collectively as SAN 120.
  • example computer environment 200 shows hosts 202, 204, and 206 coupled to storage appliance 210 via a third data communication network 208.
  • Hosts 202, 204, and 206 include hosts, servers, and other computer system types that may be present in a data communication network.
  • hosts 202, 204, and 206 may be workstations or personal computers, and/or may be servers that manage network resources.
  • one or more of the servers of hosts 202, 204, and 206 may be network servers.
  • Hosts 202, 204, and 206 output requests to storage appliance 210 to write to, or read data from storage devices 110, 112, and 114 in SAN 120.
  • the present invention is applicable to additional or fewer hosts than shown in FIG. 2.
  • Storage appliance 210 receives storage read and write requests from hosts 202, 204, and 206.
  • the storage read and write requests include references to one or more storage locations in storage devices 110, 112, and 114 in SAN 120.
  • Storage appliance 210 parses the storage read and write requests by extracting various parameters that are included in the requests. In an embodiment, storage appliance 210 uses the parsed read and write requests to generate corresponding requests, which it outputs to physical storage/LUNs.
  • Third data communication network 208 typically is an Ethernet, Fast Ethernet, or Gigabit Ethernet network, or other applicable type of communication network otherwise known or described elsewhere herein.
  • the transport protocol for data on third data communication network 208 is typically TCP/IP, but may also be any applicable protocol otherwise known or mentioned herein.
  • SAN 120 receives storage read and write requests from storage appliance 210 via first data communication network 118.
  • First data communication network 118 routes the received physical storage read and write requests to the corresponding storage device(s), which respond by reading or writing data as requested.
  • Storage devices 110, 112, and 114 comprise one or more storage devices that may be individually coupled directly to storage appliance 210, and/or may be interconnected in a storage area network configuration that is coupled to storage appliance 210.
  • storage devices 110, 112, and 114 comprise one or more of a variety of storage devices, including tape systems, JBODs (Just a Bunch Of Disks), floppy disk drives, optical disk drives, disk arrays, and other applicable types of storage devices otherwise known or described elsewhere herein.
  • First data communication network 118 typically includes one or more fibre channel links, SCSI links, and/or other applicable types of communications link otherwise known or described elsewhere herein.
  • SAN 120 may further include switches and hubs, and other devices, used to enhance the connectivity of first data communication network 118.
  • FIG. 6 illustrates example computer environment 200, according to an exemplary embodiment of the present invention.
  • storage appliance 210 couples with hosts 102, 104, 106, 202, 204, and 206, and storage devices 110, 112, and 114, using redundant connections.
  • Storage devices 110, 112, and 114 are coupled to storage appliance 210 through primary first data communication network 118a and redundant first data communication network 118b.
  • Hosts 102, 104, and 106 are coupled to storage appliance 210 through primary second data communication network 116a and redundant second data communication network 116b.
  • Hosts 202, 204, and 206 are coupled to storage appliance 210 through primary third communications link 602 and redundant third communications link 604, which are each coupled to third data communication network 208. Further details of providing NAS functionality according to the configuration shown in FIG. 6 are provided in sections below.
  • Description in these terms is provided for convenience only. It is not intended that the invention be limited to application in these example environments. In fact, after reading the following description, it will become apparent to a person skilled in the relevant art how to implement the invention in alternative environments known now or developed in the future. Further detailed embodiments of the elements of computer environment 200 are discussed below.
  • Structural implementations for the storage appliance of the present invention are described at a high-level and at a more detailed level. These structural implementations are described herein for illustrative purposes, and are not limiting. In particular, the present invention as described herein can be achieved using any number of structural implementations, including hardware, firmware, software, or any combination thereof. For instance, the present invention as described herein may be implemented in a computer system, application-specific box, or other device. Furthermore, the present invention may be implemented in one or more physically separate devices or boxes. In an embodiment, the present invention may be implemented in a SAN appliance, as described above, which provides for an interface between host servers and storage. Such SAN appliances include the SANLink™ appliance.
  • a storage appliance attached to a SAN provides NAS functionality.
  • One or more SAN servers are present in the SAN appliance to provide data management functionality.
  • One or more NAS servers may be installed in the SAN appliance, to provide the NAS functionality.
  • the NAS server may be physically separate from the SAN appliance, and may be connected to the SAN appliance by wired or wireless links.
  • additional components may be present in the SAN appliance, such as fibre channel switches, to provide enhanced connectivity.
  • FIG. 3A shows an example embodiment of a storage appliance 108, which includes a SAN server 302.
  • SAN server 302 is coupled between first data communication network 118 and second data communication network 116.
  • SAN server 302 allocates storage of SAN 120 to hosts 102, 104, and 106, on an individual or group basis, as shown in FIG. 1.
  • SAN server 302 receives read and write storage requests from hosts 102, 104, and 106, and processes and sends these storage requests to the applicable storage device(s) of storage devices 110, 112, and 114 in SAN 120.
  • Storage devices 110, 112, and 114 process the received storage requests, and send resulting data to SAN server 302.
  • SAN server 302 sends the data to the applicable host(s) of hosts 102, 104, and 106.
  • SAN server 302 also performs data management functionality for SAN 120, as described above.
  • FIG. 3B illustrates an example of a storage appliance 210, according to an embodiment of the present invention.
  • Storage appliance 210 includes a SAN server 302 and a NAS server 304.
  • SAN server 302 is coupled between first data communication network 118 and second data communication network 116, similarly to the configuration shown in FIG. 3A.
  • NAS server 304 is coupled between second data communication network 116 and third data communication network 208.
  • SAN server 302 views NAS server 304 as a host.
  • Storage is allocated by SAN server 302 to NAS server 304, in a manner similar to how SAN server 302 would allocate storage to one of hosts 102, 104, and 106.
  • SAN server 302 requires little or no modification to interact with NAS server 304.
  • the storage may be allocated in the form of LUNs, for example.
  • NAS server 304 configures the storage allocated to it by SAN server 302, and exports it to third data communication network 208. In this manner, hosts 202, 204, and 206, shown in FIG. 2, can access the storage in SAN 120.
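  • A brief sketch of the idea that the SAN server treats the NAS server as just another host in its allocation table appears below; the SanAllocator class and host identifiers are hypothetical.

```python
# Illustrative sketch: the SAN server keeps one allocation table and treats the
# NAS server as just another host entry in it. Names are hypothetical.

class SanAllocator:
    def __init__(self):
        self.lun_map = {}                       # host identity -> list of LUN ids
        self.next_lun = 0

    def allocate(self, host_id, count=1):
        """Allocate LUNs to a host; a NAS server is addressed the same way."""
        luns = list(range(self.next_lun, self.next_lun + count))
        self.next_lun += count
        self.lun_map.setdefault(host_id, []).extend(luns)
        return luns

san = SanAllocator()
san.allocate("host_102")                              # an ordinary host on the second network
nas_luns = san.allocate("nas_server_304", count=2)    # the NAS server, seen as a host
print(san.lun_map, nas_luns)
```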
  • FIG. 14A shows a flowchart 1400 providing operational steps of an example embodiment of the present invention.
  • FIG. 14B provides additional steps for flowchart 1400.
  • FIGS. 14A-B show a process for interfacing a SAN with a first data communication network.
  • One or more hosts coupled to the first data communication network can access data stored in one or more of a plurality of storage devices in the SAN.
  • the one or more hosts access one or more of the plurality of storage devices as a NAS device.
  • the steps of FIGS. 14A-B may be implemented in hardware, firmware, software, or a combination thereof.
  • the steps of FIGS. 14A-B do not necessarily have to occur in the order shown, as will be apparent to persons skilled in the relevant art(s) based on the teachings herein. Other structural embodiments will be apparent to persons skilled in the relevant art(s) based on the discussion contained herein. These steps are described in detail below.
  • a SAN server is coupled to a SAN.
  • SAN server 302 is coupled to SAN 120.
  • a NAS server is coupled to the SAN server through a second data communication network.
  • NAS server 304 is coupled to SAN server 302 via second data communication network 116.
  • the NAS server is coupled to the first data communication network.
  • NAS server 304 is coupled to third data communication network 208.
  • a portion of at least one of the plurality of storage devices is allocated from the SAN server to the NAS server.
  • a portion or all of storage devices 110, 112, and 114 are allocated to NAS server 304 by SAN server 302.
  • the NAS server is viewed from the SAN server as a host attached to the second data communication network.
  • SAN server 302 does not require additional configuration in order to be able to allocate NAS storage.
  • the portion of at least one of the plurality of storage devices is allocated from the SAN server to the NAS server in the same manner as the portion would be allocated from the SAN server to a host attached to the second data communication network.
  • the allocated portion is configured as NAS storage in the NAS server.
  • NAS server 304 configures the allocated portion as NAS storage. Configuration of storage as NAS storage is described in further detail below.
  • the configured portion is exported from the NAS server to be accessible to the one or more hosts coupled to the first data communication network.
  • NAS server 304 exports the configured portion of storage on third data communication network 208, to be available to one or more of hosts 202, 204, and 206.
  • the SAN server is configured to allocate storage from the SAN to at least one host attached to the second data communication network.
  • SAN server 302 is configured to allocate storage from SAN 120 to one or more of hosts 102, 104, and 106 on second data communication network 116.
  • the SAN server is coupled to the second data communication network.
  • FIG. 14B provides additional exemplary steps for flowchart 1400 of FIG. 14A:
  • an administrative interface is coupled to the SAN server.
  • an administrative interface is coupled to SAN server 302.
  • the administrative interface allows for user control of storage allocation by the present invention.
  • the administrative interface may include a graphical user interface.
  • the administrative interface is described in more detail below.
  • the administrative interface is coupled directly to NAS server 304.
  • a storage allocation directive is received from the administrative interface by the SAN server.
  • a user graphically or textually inputs a command to effect NAS or SAN storage management, according to the present invention.
  • redundant NAS servers may be used in a single SAN appliance, and/or each NAS server may itself provide redundant features.
  • each NAS server 304 includes two Host Bus Adaptors (HBAs) that interface with second data communication network 116, for redundancy and fail-over capabilities. Examples of these capabilities are further described in the sections below.
  • FIG. 7 illustrates a block diagram of storage appliance 210, according to an exemplary embodiment of the present invention.
  • Storage appliance 210 includes a primary SAN server 302a, a redundant SAN server 302b, a primary NAS server 304a, a redundant NAS server 304b, and switches 702a, 702b, 704a, and 704b.
  • Switches 702a, 702b, 704a, and 704b are optional, as required by the particular application. Switches 702a, 702b, 704a, and 704b are preferably fibre channel switches, used for high data rate communication with hosts and storage, as described above.
  • Storage appliance 210 of FIG. 7 is applicable to computer system environment 200 shown in FIG. 6, for example.
  • Switch 704a is coupled to primary first data communication network 118a.
  • Primary SAN server 302a and redundant SAN server 302b are coupled to switch 704a.
  • Switch 704a couples SAN servers 302a and 302b to primary first data communication network 118a, as a primary mode of access to storage devices 110, 112, and 114.
  • Switch 704b is coupled to redundant first data communication network 118b.
  • Switch 704b couples SAN servers 302a and 302b to redundant first data communication network 118b, so that they can redundantly access storage devices 110, 112, and 114.
  • Primary SAN server 302a is coupled to redundant SAN server 302b by a communication link.
  • Primary SAN server 302a and redundant SAN server 302b each include two interfaces, such as two HBAs, that allow each of them to be coupled with both of switches 704a and 704b. Additional SAN servers and switches may be coupled in parallel with SAN servers 302a and 302b and switches 704a and 704b, as represented by signals 708a and 708b. In further embodiments, additional switches and SAN servers in storage appliance 210 may be coupled to further redundant networks, or to networks coupled to further storage devices.
  • Switch 702a is coupled to primary SAN server 302a.
  • Switch 702a allows for communication between primary SAN server 302a and primary and redundant NAS servers 304a and 304b, and between SAN server 302a and hosts attached to primary second data communication network 116a.
  • Switch 702b is coupled to redundant SAN server 302b.
  • Primary NAS server 304a and redundant NAS server 304b are coupled to redundant second data communication network 116b by switch 702b.
  • Switch 702b allows for communication between redundant SAN server 302b and primary and redundant NAS servers 304a and 304b, and between SAN server 302b and hosts attached to redundant second data communication network 116b.
  • Primary NAS server 304a and redundant NAS server 304b each include two interfaces, such as two HBAs, that allow each of them to be coupled with both of primary and redundant second data communication network 116a and 116b.
  • Additional NAS servers and switches may be coupled in parallel with NAS servers 304a and 304b and switches 702a and 702b.
  • additional switches and NAS servers in storage appliance 210 may be coupled to further redundant networks, or to networks coupled to additional hosts.
  • Primary NAS server 304a is coupled to primary third communications link 602.
  • Redundant NAS server 304b is coupled to redundant third communications link 604.
  • additional switches and NAS servers in storage appliance 210 may be coupled to third data communication network 208 through links.
  • NAS servers 304a and 304b are considered to be peer NAS servers for each other.
  • SAN servers 302a and 302b are considered to be peer SAN servers for each other.
  • primary NAS server 304a operates to supply NAS functionality to storage appliance 210, as described for NAS server 304 above.
  • redundant NAS server 304b operates as a back-up for primary NAS server 304a, and takes over some or all of the NAS functionality of primary NAS server 304a when NAS server 304a fails.
  • SAN servers 302a and 302b may have a similar relationship.
  • primary and redundant NAS servers 304a and 304b share the NAS functionality for storage appliance 210.
  • each of NAS servers 304a and 304b may configure and export some amount of storage to third data communication network 208.
  • NAS servers 304a and 304b may operate as back-up NAS servers for each other.
  • SAN servers 302a and 302b may have a similar relationship. Further details on the operation of the elements of storage appliance 210 shown in FIG. 7 are provided in sections below.
  • FIG. 11 illustrates a pair of NAS servers 304a and 304b coupled to a pair of SAN servers 302a and 302b via a pair of fibre channel switches 702a and 702b, according to an embodiment of the present invention.
  • the fibre channel switches 702a and 702b may be zoned to allow NAS servers 304a and 304b to reserve a target on the SAN servers 302a and 302b.
  • FIG. 11 shows the zoning of fibre channel switches 702a and 702b. The first three ports of each fibre channel switch 702a and 702b are zoned into a NAS zone 704a and 704b. This provides NAS servers 304a and 304b uncontested access to target 0 of SAN servers 302a and 302b. Further zoning arrangements for switches are within the scope and spirit of the present invention. Furthermore, the present invention is applicable to additional redundant configurations.
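  • The port zoning described above can be sketched as follows, assuming an eight-port switch and a NAS zone made of the first three ports; the zone names and the access check are illustrative only.

```python
# Sketch of the port zoning described above: the first three ports of each
# fibre channel switch are grouped into a NAS zone, giving the NAS servers
# uncontested access to target 0 of the SAN servers. Zone names are illustrative.

def build_zones(switch_ports=8, nas_ports=(0, 1, 2)):
    return {
        "nas_zone": set(nas_ports),                                  # NAS servers + SAN target 0
        "default_zone": set(range(switch_ports)) - set(nas_ports),   # everything else
    }

def can_communicate(zones, port_a, port_b):
    """Two ports can talk only if they share at least one zone."""
    return any(port_a in members and port_b in members for members in zones.values())

zones = build_zones()
print(can_communicate(zones, 0, 2))   # True: both ports are in the NAS zone
print(can_communicate(zones, 1, 5))   # False: port 5 is outside the NAS zone
```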
  • SAN server Exemplary implementations for a SAN server are described in more detail as follows. These structural implementations are described herein for illustrative purposes, and are not limiting. In particular, the present invention as described herein can be achieved using any number of structural and operational implementations, including hardware, firmware, software, or any combination thereof.
  • a SAN server as described herein may be implemented in a computer system, application-specific box, or other device.
  • the SAN server may be implemented in a physically separate device or box from the SAN appliance.
  • the SAN server is implemented in a SAN appliance, as described above.
  • FIG. 4 illustrates an exemplary block diagram of a SAN server 302, according to an embodiment of the present invention.
  • SAN server 302 comprises a first network interface 406, a second network interface 402, a SAN storage manager 404, a SAN server interface 408, and an operating system 410.
  • FIG. 4 also shows an administrative interface 412, that is coupled to SAN server 302 through GUI communication link 426.
  • Administrative interface 412 may be coupled to SAN server 302 if SAN server 302 is a primary SAN server for a SAN appliance. Administrative interface 412 is more fully described below.
  • First network interface 406, second network interface 402, and SAN server interface 408 each include one or more host bus adaptors (HBA), network interface cards (NICs), and/or other adaptors/ports that interface the internal architecture of SAN server 302 with first data communication network 118.
  • First and second network interfaces 406 and 402, and SAN server interface 408, may each support fibre channel, SCSI, Ethernet, TCP/IP, and further data communication mediums and protocols on first data communication network 118, second data communication network 116, and SAN server communication link
  • Operating system 410 provides a platform on top of which application programs executing in SAN server 302 can run.
  • Operating system 410 may be a customized operating system, or may be any available operating system, such as Linux, UNIX, DOS, OS/2, or Windows NT.
  • SAN storage manager 404 provides data management functionality for SAN server 302.
  • SAN storage manager 404 includes one or more modules that are directed towards controlling aspects of data management for the SAN.
  • SAN storage manager 404 includes a storage allocator module 416, a storage mapper module 418, a data mirror module 420, a snapshot module 422, and a storage security module 424.
  • Storage allocator module 416 controls the allocation and deallocation of storage in SAN 120, shown in FIG. 2, to hosts 102, 104, and 106, and to NAS server 304. Further details about the operation of storage allocator module 416 are described below.
  • Storage mapper module 418 controls the mapping of logical storage addresses received from hosts 102, 104, and 106, and from NAS server 304, to actual physical storage addresses for data stored in the storage devices of SAN 120.
  • Data mirror module 420 controls the mirroring of data stored in SAN 120 with a remote SAN, when such data mirroring is desired. For instance, data mirror module 420 may communicate with a data mirror module located in a remote SAN server, via SAN server interface 408.
  • the remote SAN server is typically located in a remote SAN appliance, and manages data stored in the remote SAN.
  • Data mirror module 420 interacts with the remote data mirror module to mirror data back and forth between the local and remote SANs.
  • Snapshot module 422 controls single point-in-time copying of data in one or more storage devices of SAN 120 to another location, when a snapshot of data is desired.
  • Storage security module 424 controls the masking of storage devices, or portions of storage devices in SAN 120, from particular hosts and users.
  • Second network interface 402 receives read and write storage requests from hosts, and from NAS server 304, via second data communication network 116. Second network interface 402 also sends responses to the read and write storage requests to the hosts and NAS server 304 from SAN server 302.
  • SAN storage manager 404 receives the read and write storage requests from second network interface 402, and processes them accordingly. For instance, SAN storage manager 404 may map the received storage request from a logical storage address to one or more physical storage addresses. SAN storage manager 404 outputs the physical storage address(es) to first network interface 406.
  • First network interface 406 receives a physical read/write storage request from SAN storage manager 404, and transmits it on first data communication network 118. In this manner, first network interface 406 issues the received read/write storage request to the actual storage device or devices comprising the determined physical storage address(es) in SAN 120. First network interface 406 also receives responses to the read/write storage requests from the storage devices in SAN 120. The responses may include data stored in the storage devices of SAN 120 that is being sent to a requesting host, and/or may include an indication of whether the read/write storage request was successful. First network interface 406 outputs the responses to SAN storage manager 404.
  • SAN storage manager 404 receives the responses from first network interface 406, and processes them accordingly. For example, SAN storage manager 404 may output data received in the response to second network interface 402. Second network interface 402 outputs the response to second data communication network 116 to be received by the requesting host, or by NAS server 304.
  • SAN storage manager 404 also communicates with NAS server 304 through second network interface 402, to allocate and deallocate storage from SAN 120 to NAS server 304.
  • SAN storage manager 404 may send allocation and deallocation directives, and status directives, to NAS server 304.
  • Second network interface 402 may receive responses from NAS server 304, and send these to SAN storage manager 404.
  • storage allocator module 416 controls this NAS-related functionality. Further details of storage allocation and deallocation are provided in sections below.
  • NAS server. Structural implementations for the NAS server of the present invention are described as follows. These structural implementations are described herein for illustrative purposes, and are not limiting. In particular, the present invention as described herein can be achieved using any number of structural implementations, including hardware, firmware, software, or any combination thereof.
  • a NAS server as described herein may be implemented in a computer system, application-specific box, or other device.
  • the NAS server may be implemented in a physically separate device or box from the SAN appliance and SAN server(s).
  • the NAS server is implemented in a SAN appliance, as described above.
  • FIG. 5 illustrates a block diagram of NAS server 304, according to an exemplary embodiment of the present invention.
  • NAS server 304 includes a first network interface 508, a second network interface 502, a NAS file manager 512, and an operating system 510.
  • First network interface 508 includes one or more host bus adaptors (HBA), network interface cards (NICs), and/or other adaptors/ports that interface the internal architecture of NAS server 304 with second data communication network 116.
  • First network interface 508 may support fibre channel, SCSI, and further data communication mediums and protocols on second data communication network 116.
  • Second network interface 502 includes one or more host bus adaptors (HBAs), network interface cards (NICs), and/or other adaptors/ports that interface the internal architecture of NAS server 304 with third data communication network 208.
  • Second network interface 502 may support Ethernet, Fast Ethernet, Gigabit Ethernet, TCP/IP, and further data communication mediums and protocols on third data communication network 208.
  • Operating system 510 provides a platform on top of which application programs executing in NAS server 304 can run.
  • Operating system 510 may be a customized operating system, and may be any commercially available operating system, such as Linux, UNIX, DOS, OS/2, and Windows NT.
  • operating system 510 includes Linux OS version 2.2.18 or greater.
  • Operating system 510 includes a kernel. Linux provides an advantage over the file system limitations of NT, and allows access to kernel source code.
  • NAS file manager 512 provides file management functionality for NAS server 304.
  • NAS file manager 512 includes one or more modules that are directed towards keeping a record of exported files, and configuring and managing the files exported to third data communication network 208.
  • NAS file manager 512 includes a NFS protocol module 504, a CIFS protocol module 506, and a storage configuration module 514.
  • Storage configuration module 514 configures storage allocated by SAN server 302 to NAS server 304, to be made available to hosts on third data communication network 208. Further description of storage configuration module 514 is provided in sections below.
  • NFS protocol module 504 allows NAS server 304 to use Network File System (NFS) protocol to make file systems available to UNIX hosts on third data communication network 208.
  • CIFS protocol module 506 allows NAS server 304 to use Common Internet File System (CIFS) protocol to make file systems available to Windows clients.
  • NAS server 304 may include a product called Samba, which implements CIFS.
  • Second network interface 502 receives read and write storage requests from hosts attached to third data communication network 208. The requests relate to storage exported to third data communication network 208 by NAS server 304. Second network interface 502 also sends responses to the read and write storage requests to the hosts.
  • NAS file manager 512 receives the read and write storage requests from second network interface 502, and processes them accordingly. For instance, NAS file manager 512 may determine whether the received storage request relates to storage exported by NAS server 304. NAS file manager 512 outputs the storage request to first network interface 508.
  • First network interface 508 receives a physical read/write request from NAS file manager 512.
  • first network interface 508 issues the received read/write storage request to the SAN server 302.
  • First network interface 508 also receives responses to the read/write storage requests from SAN server 302 on second data communication network 116.
  • the responses may include data stored in the storage devices of SAN 120.
  • First network interface 508 outputs the responses to NAS file manager 512.
  • NAS file manager 512 receives the responses from first network interface 508, and processes them accordingly. For example, NAS file manager 512 may output data received in the response to second network interface 502 through one or both of NFS protocol module 504 and CIFS protocol module 506.
  • NFS protocol module 504 formats the response per NFS protocol.
  • CIFS protocol module 506 formats the response per CIFS protocol.
  • Second network interface 502 outputs the formatted response on third data communication network 208 to be received by the requesting host.
  • First network interface 508 also receives storage allocation and deallocation directives, and status directives from SAN server 302, and sends them to NAS file manager 512. Note that in embodiments, second network interface 502 may also or alternatively receive these storage allocation and deallocation directives. Responses to these received storage allocation and deallocation directives, and status directives are generated by NAS file manager 512.
  • NAS file manager 512 sends these responses to first network interface 508, which outputs the responses onto second data communication network 116 for SAN server 302.
  • the present invention includes an administrative interface to allow a user to configure aspects of the operation of the invention.
  • the administrative interface includes a graphical user interface to provide a convenient location for a user to provide input.
  • an administrative interface 412 is coupled to SAN storage manager 404 of SAN server 302.
  • the administrative interface couples to the primary SAN server in the storage appliance 210.
  • the primary SAN server forwards directives to NAS servers.
  • the directives are forwarded to the NAS servers via second data communication network 116 and/or third data communication network 208, and may include a common or custom SAN-to-NAS protocol.
  • An exemplary NAS protocol is described below. Many features of the present invention are described below in relation to the administrative interface. However, in alternative embodiments, an administrative interface is not required and therefore is not present. In an embodiment, an existing administrative interface that accommodates SAN servers may not require any modification to handle allocation of storage to NAS servers, because the SAN servers view the NAS servers as separate hosts.
  • in an embodiment, an existing administrative interface may be enhanced to allow integration of NAS functionality.
  • the administrative interface may be configured to show the NAS servers as themselves, rather than as hosts.
  • the administrative interface may allow the storage appliance administrator to allocate a storage portion, such as a LUN, to a NAS server, as a NAS LUN. Any LUN mapping may be done automatically.
  • the NAS servers create a file system for that LUN and export that file system to the network. This process is described in further detail below.
  • In order to represent the NAS servers as NAS servers in the administrative interface, the administrative interface must be able to differentiate NAS servers from hosts.
  • the NAS servers issue a special registration command via second communications interface 116 to the SAN servers, to identify themselves. For example, in a SAN appliance that includes two NAS servers, the first NAS server may identify itself to the SAN servers as "NASServerNASOne", while the second NAS server may identify itself as
  • "NASServerNASTwo". These names are special identifiers used by the SAN servers when allocating storage to the NAS servers.
  • FIG. 12 illustrates a graphical user interface (GUI) 1200 for administrative interface 412 of FIG. 4, according to an exemplary embodiment of the present invention.
  • GUI 1200 in FIG. 12 displays panels related to management of NAS storage. This is because a NAS button 1214 has been selected in a GUI mode select panel 1216 of GUI 1200. Additional features related to SAN management may be displayed by selecting other buttons in GUI mode select panel 1216.
  • two NAS servers, labeled NAS1 and NAS2, are available for storage allocation.
  • GUI 1200 further includes a first panel 1202, a second panel 1204, a third panel 1206, a fourth panel 1208, and a fifth panel 1210. Each of these panels is more fully described in the following text and sections. In alternative embodiments, fewer or more panels may be displayed in GUI 1200, and fewer or more features may be displayed in each panel, as required by the particular application.
  • Panel 1202 displays available storage units, in the form of LUNs, for example.
  • six LUNs are available for allocation: LUN0, LUN1, LUN2, LUN3, LUN4, and LUN5.
  • the storage units, or LUNs, displayed in panel 1202 may be virtual storage units, or actual physical storage units. Any of the storage units displayed in panel 1202 may be allocated as network attached storage via one or more NAS servers.
  • a LUN may be allocated to NAS using panel 1202, by selecting the box in the NAS1 column, or the box in the NAS2 column next to the LUN to be allocated, and then pressing the box labeled "Assign." If the box in the NAS1 column was checked, the first NAS server will be instructed to create a file system on the LUN. If the box in the NAS2 column was checked, the second NAS server will be instructed to create a file system on the LUN. For example, the instructed NAS server will create a file system named /exportxxxx, where xxxx is a four-digit representation of the LUN number, such as 0001 for LUN1. After the NAS server creates the file system, it exports the file system via NFS and/or CIFS. For example, the file system is exported by NFS protocol module 504 and/or CIFS protocol module 506. Hence, the file system will be available to one or more hosts and users on third data communication network 208.
  • a LUN may be deallocated by clearing the box next to the LUN in the NAS1 or NAS2 column of panel 1202, and pressing the box labeled "Assign." That instructs the NAS server to relinquish the file system created for the deallocated LUN, making the file system inaccessible on third data communication network 208.
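  • As an illustration of the naming convention described above, the following Python sketch derives the default export directory name from a LUN number; the function name is hypothetical and is provided for example purposes only.

        def default_export_name(lun_number):
            # Default NAS directory name: /exportxxxx, where xxxx is a four-digit
            # representation of the LUN number (e.g., LUN 1 -> /export0001).
            return "/export%04d" % lun_number

        print(default_export_name(1))    # /export0001
        print(default_export_name(15))   # /export0015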
  • Panel 1204 displays all NAS file systems that are currently exported by NAS servers. There is one file system for each exported NAS LUN. In an embodiment, an administrator may select a file system name in panel 1204, and GUI 1200 will fill the properties of the selected file system into third, fourth, and fifth panels 1206, 1208, and 1210.
  • Panel 1206 displays the size, NAS server, ownership, group, and permission attributes of the file system selected in panel 1204. Panel 1206 allows an administrator to change ownership, group, and permission attributes of the file system selected in panel 1204 by modifying these entries in panel 1206.
  • Panel 1208 displays whether the file system selected in panel 1204 is exported by NFS, whether the NFS file system is read-only, and a list of hosts which may access the file system via NFS.
  • Panel 1208 allows an administrator to change whether the file system is exported by NFS, whether the file system is read-only, and to modify the list of users and hosts which may access the file system via NFS. Once an administrator has made the desired changes, the changes are implemented by selecting "Commit" in panel 1204.
  • Panel 1210 displays whether the file system selected in panel 1204 is exported by CIFS, whether the CIFS file system is read-only, and a list of users and hosts which may access the file system via CIFS.
  • Panel 1210 allows an administrator to change whether the file system is exported by CIFS, whether the file system is read-only, and to modify the list of users and hosts which may access the file system via CIFS. Once an administrator has made the desired changes, the changes are implemented by selecting "Commit" in panel 1204.
  • GUI 1200 has numerous advantages. First, an administrator can allocate a NAS LUN with a single click of a mouse button. Second, GUI 1200 hides that the storage appliance has separate NAS servers (the NAS servers do not appear as hosts, but rather as network interfaces). Third, the administrator can easily provide or eliminate access via NFS and CIFS, restrict permissions, and limit access over the network for a file system. Further advantages are apparent from the teachings herein.
  • FIG. 13 A shows a flowchart 1300 providing operational steps of an example embodiment of the present invention.
  • FIG. 13B provides additional steps for flowchart 1300.
  • FIGS. 13A-B show a process for managing the allocation of storage from a storage area network (SAN) as network attached storage (NAS) to a data communication network.
  • the steps of FIGS. 13A-B may be implemented in hardware, firmware, software, or a combination thereof.
  • the steps of FIGS. 13A-B do not necessarily have to occur in the order shown, as will be apparent to persons skilled in the relevant art(s) based on the teachings herein.
  • Other structural embodiments will be apparent to persons skilled in the relevant art(s) based on the discussion contained herein. These steps are described in detail below.
  • a storage management directive is received from a graphical user interface.
  • directives are received from a GUI such as GUI 1200, over GUI communication link 426.
  • the storage management directive may be received by SAN server 302.
  • Example storage management directives that may be received from GUI 1200 are described below.
  • storage allocator module 416 in SAN storage manager 404 of SAN server 302 processes received storage management directives.
  • a message corresponding to the received storage management directive is sent to a NAS server.
  • the NAS server is NAS server 304.
  • Example messages that may be received by the NAS server are described below.
  • storage allocator module 416 in SAN storage manager 404 of SAN server 302 generates messages corresponding to received storage management directives to be sent to NAS server 304.
  • a response corresponding to the sent message is received from the NAS server.
  • SAN server 302 may receive the response from NAS server 304.
  • Example responses are described below.
  • storage configuration module 514 receives messages from SAN server 302, processes them, and generates the response for SAN server 302.
  • the SAN server sends the response received from NAS server 304 to GUI 1200.
  • GUI 1200 can then display the received response information.
  • storage allocator module 416 receives the response from NAS server 304, processes the response, and sends the response to GUI 1200.
  • FIG. 13B provides additional exemplary steps for flowchart 1300 of FIG. 13A:
  • a command line interface may be provided at the graphical user interface.
  • GUI 1200 may include a command line interface where a user may input textual storage management instructions.
  • An example command line interface is described below.
  • In step 1310, a user is allowed to input the storage directive as a CLI command into the CLI.
  • GUI 1200 may use an existing storage appliance management communication facility.
  • GUI 1200 sends management directives to SAN server 302, via GUI communication link 426, shown in FIG. 4.
  • the management directives may be network messages that start with the letter "c", immediately followed by an integer, and then followed by parameters.
  • when the SAN servers receive a management directive, they perform actions defined by that directive.
  • GUI 1200 may send a c35 storage allocation management directive to the SAN server 302, via GUI communication link 426.
  • the c35 management directive includes three parameters: the LUN number, a flag specifying whether to enable or disable the LUN, and a NAS server number.
  • GUI 1200 sends the following management directive to SAN server 302: c35 15 1 2
  • the directive instructs SAN server 302 to allocate LUN 15 to NAS server NAS2.
  • the directive may be expanded to include further information, such as a directory name to be associated with the LUN. If no directory name is given, a default directory name may be used. For example, the default directory name may be /exportxxxx, where xxxx is the LUN number (the use of this directory name is shown elsewhere herein for example purposes).
  • GUI 1200 sends the following management directive to the SAN server 302:
  • GUI 1200 sends the following management directive to SAN server 302:
  • the directive instructs SAN server 302 to remove LUN 4 from NAS server NAS1, such that LUN4 is no longer allocated to NAS server NAS1.
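  • The c35 directive format described above (LUN number, enable/disable flag, NAS server number) could be parsed as in the following Python sketch; the function and field names are illustrative assumptions, not part of the directive definition.

        def parse_c35(directive):
            # Parse a c35 storage allocation management directive of the form:
            #   c35 <LUN number> <flag: 1=enable, 0=disable> <NAS server number>
            fields = directive.split()
            if len(fields) < 4 or fields[0] != "c35":
                raise ValueError("not a c35 directive: %r" % directive)
            return {
                "lun": int(fields[1]),
                "enable": fields[2] == "1",
                "nas_server": int(fields[3]),
            }

        print(parse_c35("c35 15 1 2"))   # {'lun': 15, 'enable': True, 'nas_server': 2}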
  • when SAN server 302a receives the c35 management directive, SAN server 302a executes the steps shown in flowchart 1600 of FIG. 16, and further described below.
  • the steps below may be executed by SAN storage manager 404 in SAN server 302a.
  • the steps below may be executed by storage allocator module 416 of SAN storage manager 404 to allocate and deallocate storage:
  • In step 1602, the NAS servers are found within one or more host mapping tables.
  • SAN server 302a does this by looking up the special names of the NAS servers registered with SAN server 302a.
  • the names of the NAS servers are registered with SAN servers 302a and 302b when the NAS servers boot up, as described above.
  • for example, the special names of NAS servers 304a and 304b are NASServerNASOne and NASServerNASTwo.
  • In step 1604, the value of the second parameter is determined, indicating whether the LUN is enabled. If the second parameter is a 1, SAN server 302a maps the LUN to NASServerNASOne and NASServerNASTwo. The LUN is mapped to both servers in the event of fail-over, as described in further detail below. If SAN server 302a determines that the second parameter is a 0, indicating that the LUN is not enabled, SAN server 302a removes the LUN from the host map for NAS servers 304a and 304b.
  • In step 1606, a network message is sent to the redundant SAN server, requesting that the redundant SAN server perform steps 1602 and 1604.
  • In step 1608, a network message is sent to NAS servers 304a and 304b, to inform them that the LUN is available.
  • when LUNs are being allocated by SAN server 302 to NAS server 304, SAN server 302 sends NAS servers 304a and 304b a packet containing the following string:
  • the LunNumber parameter is the identifier for the LUN being allocated, and the
  • CreateFsFlag parameter is a "0" or a "1", depending upon whether the NAS server should create a file system on the LUN. Further information, such as a directory name, may also be provided with the LUN:Enable string.
  • for example, a received directive may instruct SAN server 302a to allocate LUN 15 to NAS server 304b. After mapping the LUN to both NAS servers (as described in steps 1 through 4 above), SAN server 302a sends the following string to NAS server 304a:
  • the string informs NAS server 304a that LUN 15 is available, and that NAS server
  • NAS server 304a should configure the LUN into its kernel in case of fail-over. However, NAS server 304a will not create a file system on the LUN, nor export the LUN as a NAS device.
  • SAN server 302a sends the following string to NAS server 304b:
  • the string informs NAS server 304b that LUN 15 is available, and that NAS server 304b should configure the LUN into its kernel, create a file system on the LUN, and export the file system via CIFS and NFS.
  • NAS servers 304a and 304b respond with a packet containing the following string:
  • "NAS:1:0". If unsuccessful, NAS servers 304a and 304b respond with two messages.
  • the first packet contains the following string:
  • the NumBytes parameter is the number of bytes in the second message to follow.
  • the second message contains strings describing why the operation failed.
  • when LUNs are being deallocated by SAN server 302a, SAN server 302a sends each of NAS servers 304a and 304b a packet containing the following string:
  • the string instructs NAS servers 304a and 304b to remove the LUN from their kernels. Also, if either NAS server had exported the LUN, the NAS server un-exports it.
  • the response to the LUN:Disable string is the same as to the LUN:Enable string. That is, if the operation is successful, the NAS servers respond with a packet containing the "NAS:1:0" string. If unsuccessful, the NAS servers respond with the "NAS:0:NumBytes" string, followed by a string that describes the error.
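  • The allocation and deallocation exchange just described may be sketched in Python as follows; the exact layout of the LUN:Enable and LUN:Disable strings is assumed here to be colon-separated, matching the NAS:1:0 and NAS:0:NumBytes response forms, and the helper names are hypothetical.

        def build_lun_enable(lun_number, create_fs_flag):
            # Assumed form of the allocation message sent to each NAS server.
            return "LUN:Enable:%d:%d" % (lun_number, create_fs_flag)

        def build_lun_disable(lun_number):
            # Assumed form of the deallocation message.
            return "LUN:Disable:%d" % lun_number

        def build_response(success, error_text=""):
            # Success is acknowledged with "NAS:1:0"; failure with "NAS:0:NumBytes"
            # followed by a second message that describes the error.
            if success:
                return ["NAS:1:0"]
            return ["NAS:0:%d" % len(error_text), error_text]

        print(build_lun_enable(15, 1))                # LUN:Enable:15:1
        print(build_response(False, "no such LUN"))   # ['NAS:0:11', 'no such LUN']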
  • first network interface 508' may include two HBAs that each interface with second data communication network 116.
  • Each HBA is configured into the NAS server's operating system, creating two new disk devices.
  • the first disk device refers to the LUN on the first HBA
  • the second disk device refers to the same LUN on the second HBA.
  • a NAS server uses a Linux operating system.
  • the Linux operating system uses symbolic names for disk devices, such as /dev/sda, /dev/sdb, and /dev/sdc. Because of this, it is difficult to determine the LUN number from the symbolic name.
  • the NAS server maintains a map of LUN numbers to symbolic names.
  • the map may be maintained via Linux symbolic links.
  • the symbolic links may be kept in a directory named /dev/StorageDir, and contain the HBA number, the controller number, the target number, and the LUN number.
  • NAS server 304a may receive a directive to enable LUN 15.
  • the kernel may assign LUN 15 the device name of /dev/sde.
  • NAS server 304a may create a symbolic link to /dev/sde named /dev/StorageDir/1.0.0.15. Subsequent NAS operations may be performed on /dev/StorageDir/1.0.0.15.
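  • A minimal Python sketch of maintaining this map through symbolic links follows; the helper name is hypothetical, and the example values mirror the /dev/StorageDir/1.0.0.15 case above.

        import os

        def link_lun(device, hba, controller, target, lun, map_dir="/dev/StorageDir"):
            # Record the LUN-to-symbolic-name mapping as a symbolic link named
            # <HBA>.<controller>.<target>.<LUN> that points at the kernel device name.
            link_name = os.path.join(map_dir, "%d.%d.%d.%d" % (hba, controller, target, lun))
            os.makedirs(map_dir, exist_ok=True)
            if os.path.islink(link_name):
                os.remove(link_name)
            os.symlink(device, link_name)
            return link_name

        # For example, LUN 15 assigned the device name /dev/sde on HBA 1:
        # link_lun("/dev/sde", 1, 0, 0, 15)   -> "/dev/StorageDir/1.0.0.15"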
  • NAS servers 304a and 304b may execute the steps shown in flowchart 1700 of FIG. 17, and further described below, for configuring and exporting allocated storage.
  • the steps below may be executed by NAS file manager 512 in each NAS server.
  • steps 1702-1718, and 1722-1724 may be executed by storage configuration module 514.
  • the steps below are adaptable to one or more NAS servers:
  • In step 1702, the LUN on the first HBA is configured into the operating system (for example, into operating system 510).
  • In step 1704, the LUN on the second HBA is configured into the operating system (for example, into operating system 510).
  • In step 1706, a symbolic link is created in /dev/StorageDir, linking the LUNs with the Linux symbolic device names.
  • In step 1708, a directory is created, named /exportxxxx, where xxxx is a 4-digit representation of the LUN number (as mentioned elsewhere herein, alternative directory names may be specified).
  • In step 1710, the value of CreateFsFlag is determined. If CreateFsFlag is 0, then processing is complete. In this case, the NAS:1:0 string is sent to the SAN server. If CreateFsFlag is 1, processing continues to step 1712.
  • In step 1712, the IP address upon which the request arrived is determined.
  • the IP address is important to determine, because the NAS server may be in a recovery mode, and may have several IP addresses.
  • the recovery mode is described in a section below.
  • In step 1714, the LUN number is inserted into a file, for example, named /usr/StorageFile/etc/Ipaddress.NASvolumes, where Ipaddress is the IP address upon which the request arrived. This file is important when fail-over occurs.
  • In step 1716, a file system is created on the LUN.
  • In step 1718, the file system is mounted on /exportxxxx.
  • In step 1720, the file system is exported via NFS and CIFS.
  • for example, NFS protocol module 504 exports the file system via NFS, and CIFS protocol module 506 exports the file system via CIFS.
  • In step 1722, files storing NFS and CIFS exported file systems are updated.
  • a file named /usr/StorageFile/etc/Ipaddress.NFSexports is updated. This file contains a list of all file systems exported via NFS, along with their attributes.
  • a file named /usr/StorageFile/etc/Ipaddress.CIFSexports is updated. This file contains a list of all file systems exported via CIFS.
  • In step 1724, a response is sent to the SAN server.
  • the NAS:1:0 string is sent to the SAN server if the previous steps were successful. Otherwise, the NAS:0:NumBytes string, followed by the error strings, is sent to the SAN server.
  • NAS server 304a may have been instructed to export the LUN, while NAS server 304b was not. Per the steps above, NAS server 304a will have updated the following files: Ipaddress.NASvolumes, Ipaddress.NFSexports, and Ipaddress.CIFSexports. NAS servers 304a and 304b both would have the LUN configured in their operating system (with symbolic links in /dev/StorageDir). One or more of these files may be used for fail-over and recovery, described in further detail in a section below.
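  • A minimal sketch of the configure-and-export sequence of flowchart 1700 follows; the shell utilities invoked (mke2fs, mount, exportfs), the record formats written to the tracking files, and the helper name are assumptions for illustration, and CIFS export mechanics are omitted.

        import os
        import subprocess

        def enable_nas_lun(device, lun, create_fs, ipaddress):
            # Sketch of steps 1708-1724: create /exportxxxx, optionally create,
            # mount, and export a file system on the LUN, and record the exports.
            fs_dir = "/export%04d" % lun
            os.makedirs(fs_dir, exist_ok=True)
            if not create_fs:
                return "NAS:1:0"   # LUN is configured for fail-over only; nothing exported
            subprocess.run(["mke2fs", device], check=True)              # create a file system
            subprocess.run(["mount", device, fs_dir], check=True)       # mount it on /exportxxxx
            subprocess.run(["exportfs", "*:%s" % fs_dir], check=True)   # NFS export (assumed tool)
            # Record the LUN and its exports for listing and fail-over.
            with open("/usr/StorageFile/etc/%s.NASvolumes" % ipaddress, "a") as f:
                f.write("%d\n" % lun)
            with open("/usr/StorageFile/etc/%s.NFSexports" % ipaddress, "a") as f:
                f.write("%s rw\n" % fs_dir)
            with open("/usr/StorageFile/etc/%s.CIFSexports" % ipaddress, "a") as f:
                f.write("%s rw\n" % fs_dir)
            return "NAS:1:0"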
  • when a NAS server receives the LUN:Disable string, the steps shown in flowchart 1800 of FIG. 18, and further described below, are executed by the NAS server, for deconfiguring and unexporting the deallocated storage.
  • the steps below may be executed by NAS file manager 512 in the NAS server.
  • steps 1802-1808 may be executed by storage configuration module 514.
  • the steps below are adaptable to one or more NAS servers:
  • In step 1802, the IP address upon which the request arrived is determined.
  • In step 1804, whether the corresponding file system is exported is determined. If the NAS server exports the file system, it unexports it from NFS, unexports it from CIFS, unmounts the file system, and removes the information from Ipaddress.NASvolumes, Ipaddress.NFSexports, and Ipaddress.CIFSexports.
  • In step 1806, the LUN is removed from the kernel configuration.
  • In step 1808, the symbolic links for the LUN in /dev/StorageDir are removed.
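  • A corresponding sketch of the deallocation path of flowchart 1800 follows; as above, the unexport and unmount commands, the tracking-file formats, and the helper name are assumptions for illustration.

        import os
        import subprocess

        def disable_nas_lun(lun, ipaddress):
            # Sketch of steps 1804-1808: unexport and unmount the file system if it
            # was exported, drop it from the tracking files, and remove the links.
            fs_dir = "/export%04d" % lun
            if os.path.ismount(fs_dir):
                subprocess.run(["exportfs", "-u", "*:%s" % fs_dir], check=False)  # NFS unexport
                subprocess.run(["umount", fs_dir], check=False)
            for suffix in ("NASvolumes", "NFSexports", "CIFSexports"):
                path = "/usr/StorageFile/etc/%s.%s" % (ipaddress, suffix)
                if os.path.exists(path):
                    kept = [line for line in open(path)
                            if fs_dir not in line and line.strip() != str(lun)]
                    with open(path, "w") as f:
                        f.writelines(kept)
            if os.path.isdir("/dev/StorageDir"):            # step 1808: remove symbolic links
                for name in os.listdir("/dev/StorageDir"):
                    if name.endswith(".%d" % lun):          # links are named HBA.controller.target.LUN
                        os.remove(os.path.join("/dev/StorageDir", name))
            return "NAS:1:0"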
  • the administrator selects a single box in panel 1202 of GUI 1200 to have a LUN become a NAS LUN.
  • the processing described above occurs within the NAS servers, and is not visible to, and does not require interaction with, the administrator.
  • GUI 1200 may display the resultant file system in panel 1204. GUI 1200 also may show the size of the file system, the NAS server that owns the file system, the protocols upon which the file system is exported, and various security attributes in panel 1206. To obtain that information, GUI 1200 sends a c36 list file systems management directive to SAN server 302a.
  • the c36 management directive includes one parameter that specifies the type of information being requested.
  • the parameter may be one of the following keywords: PERM, NFS, or CIFS. If the parameter used is PERM, the SAN server returns a string including the number of NAS file systems, followed by a space character, followed by a list of strings that correspond to all NAS file systems.
  • the strings may be of the following form:
  • the /exportxxxx string is the file system name (where xxxx corresponds to the LUN number).
  • the servernum parameter is either 1 or 2, which corresponds to the NAS server (NAS server 304a or NAS server 304b) to which the file system is allocated.
  • the size parameter is the size of the file system.
  • the owner parameter is the username that owns the file system.
  • the group parameter is the group name of the file system.
  • the perm parameter is a string that lists the permissions on the file system.
  • to obtain this information, GUI 1200 would issue the c36 management directive with the PERM parameter.
  • This information is used by GUI 1200 to build the list of file systems in panel 1204.
  • panel 1208 displays whether the file system is exported via NFS, and, if so, displays a host-restriction list and whether the file system is read-only.
  • panel 1210 displays whether the file system is exported via CIFS, and attributes of that protocol as well. Note that in an embodiment, when a LUN is initially allocated, it is automatically exported via NFS and CIFS by the NAS server. However, as further described below, the administrator can choose to restrict the protocols under which the file system is exported.
  • GUI 1200 may send the c36 directive to SAN server 302a using one of the keywords NFS and CIFS as the parameter. If the parameter is NFS, the SAN server returns a string containing the number of
  • NFS file systems, followed by a space character, followed by a list of strings that correspond to all NFS file systems.
  • the strings may be of the following form:
  • the servernum parameter is the same as was returned with a c36 directive using the PERM parameter.
  • the flag parameter is the string "rw" or "ro" (for read-write or read-only).
  • the hostlist parameter is a comma-separated list of hosts (or IP addresses) that have access to the file system via NFS.
  • if the parameter is CIFS, the directive returns a string containing the number of CIFS file systems, followed by a space character, followed by a list of strings that correspond to all CIFS file systems.
  • the strings may be of the following form:
  • the userlist parameter is a comma-separated list of user names that can access the file system via CIFS.
  • GUI 1200 can fill in panels 1206, 1208, and 1210.
  • SAN server 302a When SAN server 302a receives the c36 management directive, it creates a NAS protocol message and forwards it to NAS servers 304a and 304b.
  • the message may contain a string of the following form:
  • the type parameter is one of the keywords PERM, NFS, and CIFS.
  • NAS servers 304a and 304b may execute the steps shown in flowchart 1900 of FIG. 19, and further described below.
  • the steps below may be executed by NAS file manager 512 in each NAS server.
  • steps 1902-1908 may be executed by storage configuration module 514:
  • In step 1902, the IP address upon which the request arrived is determined.
  • In step 1904, the value of the type parameter is determined. If the type parameter is PERM, the file /usr/StorageFile/etc/Ipaddress.NASvolumes is opened to obtain the list of NAS LUNs, and the required information is returned to the SAN server.
  • In step 1906, if the type parameter is NFS, the file /usr/StorageFile/etc/Ipaddress.NFSexports is opened to obtain the list of file systems exported via NFS, and the required information is returned.
  • In step 1908, if the type parameter is CIFS, the file /usr/StorageFile/etc/Ipaddress.CIFSexports is opened to obtain the list of file systems exported via CIFS, and the required information is returned.
  • if these steps are successful, the NAS servers respond with a packet containing the "NAS:1:0" string. If the steps are unsuccessful, the NAS servers respond with the "NAS:0:NumBytes" string, followed by a string that describes the error.
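  • The per-type lookup of flowchart 1900 might be sketched as follows; the tracking files are assumed here to hold one entry per line, which is an illustrative simplification, and the function name is hypothetical.

        import os

        def list_nas_info(query_type, ipaddress):
            # Sketch of steps 1904-1908: choose the tracking file that matches the
            # requested type (PERM, NFS, or CIFS) and return its entries.
            files = {
                "PERM": "/usr/StorageFile/etc/%s.NASvolumes" % ipaddress,
                "NFS":  "/usr/StorageFile/etc/%s.NFSexports" % ipaddress,
                "CIFS": "/usr/StorageFile/etc/%s.CIFSexports" % ipaddress,
            }
            path = files.get(query_type)
            if path is None:
                return None           # unknown type; the caller returns an error response
            if not os.path.exists(path):
                return []
            with open(path) as f:
                return [line.strip() for line in f if line.strip()]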
  • a NAS server makes a NAS LUN available via NFS and CIFS when the corresponding file system is created and exported.
  • an administrator may unselect the NFS box in panel 1208, or the CIFS box in panel 1210, for a file system selected in panel 1204. This causes the file system to be unexported, making it unavailable via the unselected protocol. Further details regarding the unexporting of a file system are provided in the following section.
  • GUI 1200 may show the resulting file system in panel 1204. If the administrator selects the file system in panel 1204, panels 1208 and 1210 indicate that the file system is exported via NFS and/or CIFS. An administrator may choose to deny access to the file system via NFS or CIFS by unselecting the NFS box in panel 1208, or the CIFS box in panel 1210, respectively. When the administrator denies access to NFS or CIFS in this manner, GUI
  • the c38 unexport file system management directive includes tliree parameters: the file system name, the protocol from which to unexport the file system, and the number of the NAS server that owns the file system.
  • the administrator may allocate LUN 15 to NAS server 304a, creating file system /exportOOl 5.
  • the administrator may want to deny access via CIFS to file system /exportOOl 5.
  • the administrator may select /exportOOl 5 in panel 1204, unselect the CIFS box in panel 1210, and press "Commit" in panel
  • GUI 1200 sends the following management directive to SAN server 302a:
  • GUI 1200 may send the following management directive to SAN server 302a:
  • the directive instructs SAN server 302a to deny NFS access for file system /export0015.
  • when SAN server 302a receives the c38 management directive, SAN server 302a creates a NAS protocol message, and forwards the message to the NAS server (NAS server 304a or 304b) that is specified by the third parameter.
  • the message may contain a string of the following forms:
  • NAS servers 304a and 304b may execute the steps shown in flowchart 2000 of FIG. 20, and further described below.
  • the steps below may be executed by NAS file manager 512 in each NAS server.
  • steps 2002-2008 may be executed by storage configuration module 514:
  • In step 2002, the IP address upon which the request arrived is determined.
  • In step 2004, whether the NAS server has been allocated the LUN associated with the file system is determined. If the NAS server has not been allocated the LUN, an error string is returned, and the process ends.
  • In step 2006, if the message specifies CIFS, the related file system information is removed from the system file that lists all CIFS exported file systems (referred to as the CIFS configuration file elsewhere herein), and the file system is removed from /usr/StorageFile/etc/Ipaddress.CIFSexports.
  • In step 2008, if the message specifies NFS, the related file system information is removed from the system file that lists all NFS exported file systems (referred to as the NFS configuration file elsewhere herein), and the file system is removed from /usr/StorageFile/etc/Ipaddress.NFSexports. If these steps are successful, the NAS servers respond with a packet containing the "NAS:1:0" string. If the steps are unsuccessful, the NAS servers respond with the "NAS:0:NumBytes" string, followed by a string that describes the error.
  • GUI 1200 allows the administrator to export a previously unexported file system, and to change attributes of an exported file system (such as access lists and read-only access).
  • an administrator can view attributes of a NAS file system displayed in GUI 1200. If a file system is unexported, the administrator can choose to export the file system by selecting the NFS box in panel 1208, and/or the CIFS box in panel 1210, and pressing "Commit" in panel 1204. An administrator may also change access lists, or make the file system read-only, for these protocols through panels 1208 and 1210. When directed to export a file system, GUI 1200 sends a c37 management directive to SAN server 302a.
  • the c37 export file system management directive includes five parameters: the file system name, the protocol in which to export the file system, the number of the NAS server that was allocated the file system, a flag that specifies read-only or read-write, and a comma-separated access list.
  • the administrator may assign LUN 17 to NAS server 304b.
  • that file system is made available over CIFS and NFS by default.
  • the administrator may want to change attributes of the file system, such that NFS access is read-only, and that access to the file system is restricted only to hosts named client1, client2, and client3. Accordingly, the administrator may select /export0017 in panel 1204, modify the respective attributes in panel 1208, and press "Commit" in panel 1204.
  • GUI 1200 sends a resulting management directive to SAN server 302a:
  • the directive instructs SAN server 302a to re-export the file system /export0017 via NFS as read-only, and to restrict access only to hosts named client1, client2, and client3.
  • the administrator may want to set CIFS access to file system /export0017 to be read-write, and restrict access only to users named betty, fred, and wilma. Accordingly, the administrator may select /export0017 in panel 1204, modify the respective attributes in panel 1210, and press "Commit" in panel 1204.
  • GUI 1200 sends a resulting management directive to SAN server 302a:
  • the directive instructs SAN server 302a to export file system /export0017 via CIFS as read-write, and to restrict access only to users named betty, fred, and wilma.
  • After SAN server 302a receives the c37 management directive, it creates a NAS protocol message and forwards the message to the NAS server (NAS server 304a or 304b) that was allocated the file system.
  • Two messages may be required for exporting a file system.
  • the first message contains a string of the following forms:
  • the fileSystemName parameter is the name of the file system whose attributes are being modified.
  • the rwFlag parameter includes the string "ro" or "rw".
  • the NumBytes parameter is the number of bytes in an access list (including commas). If NumBytes is greater than 0, the SAN server sends a second message containing the comma-separated access list.
  • NAS servers 304a and 304b may execute the steps shown in flowchart 2100 of FIG. 21, and further described below.
  • In step 2102, the IP address upon which the request arrived is determined.
  • In step 2104, the NAS server determines whether it owns the file system.
  • In step 2106, if the message specifies CIFS, the relevant information is added or replaced in the CIFS configuration file, and the file /usr/StorageFile/etc/Ipaddress.CIFSexports is updated.
  • In step 2108, if the message specifies NFS, the relevant information is added or replaced in the NFS configuration file, and the file /usr/StorageFile/etc/Ipaddress.NFSexports is updated.
  • if these steps are successful, the NAS servers respond with a packet containing the "NAS:1:0" string. If the steps are unsuccessful, the NAS servers respond with the "NAS:0:NumBytes" string, followed by a string that describes the error.
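  • The attribute update of flowchart 2100 might be sketched as follows; the record format written to the tracking file and the example values are assumptions, and updating the underlying NFS/CIFS configuration files themselves is omitted.

        def update_export_record(protocol, filesystem, rw_flag, access_list, ipaddress):
            # Sketch of steps 2106-2108: add or replace the entry for this file system
            # in the per-protocol tracking file (Ipaddress.NFSexports or .CIFSexports).
            suffix = "NFSexports" if protocol == "NFS" else "CIFSexports"
            path = "/usr/StorageFile/etc/%s.%s" % (ipaddress, suffix)
            record = "%s %s %s\n" % (filesystem, rw_flag, ",".join(access_list))
            try:
                with open(path) as f:
                    lines = [line for line in f if not line.startswith(filesystem + " ")]
            except FileNotFoundError:
                lines = []
            lines.append(record)
            with open(path, "w") as f:
                f.writelines(lines)

        # For example, restricting NFS access to /export0017 as read-only
        # (the IP address shown is illustrative):
        # update_export_record("NFS", "/export0017", "ro",
        #                      ["client1", "client2", "client3"], "10.0.0.5")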
  • the administrator can modify the file system's owner, group, and permissions attributes. This may be accomplished by selecting the file system from the list in panel 1204, modifying the relevant attributes in panel 1206, and pressing "Commit" in panel 1204.
  • GUI 1200 When directed to change attributes of a file system, GUI 1200 sends a set permissions management directive to SAN server 302a.
  • the c39 management directive includes five parameters: the file system name, the number of the NAS server that has been allocated the file system, the new owner of the file system, the new group name of the file system, and the new permissions of the file system.
  • GUI 1200 sends a resulting management directive to SAN server 302a:
  • the directive instructs SAN server 302a to change the owner to fred, the group to research, and the permissions to rwxr-w — , for file system /export0017.
  • when SAN server 302a receives the c39 management directive, it creates a NAS protocol message and forwards the message to both of NAS servers 304a and 304b. Sending the message to both NAS servers keeps the permissions consistent during fail-over, which is further described below.
  • the message may contain a string of the following form:
  • When a NAS server receives the string, it changes the owner, group, and world permissions as specified. If the change of permissions is successful, the NAS server responds with the NAS:1:0 string. If the change of permissions is unsuccessful, the NAS servers respond with two messages. The first packet contains the NAS:0:NumBytes string, where NumBytes is the number of bytes in the second message. The second message contains a description of the error.
  • GUI 1200 may provide a command-line interface (CLI) to the NAS functionality to receive CLI commands.
  • CLI commands which correspond to functionality described above are presented below.
  • the administrator may input the following CLI commands to allocate and de-allocate a NAS LUN: makeNASMap and unMakeNASMap.
  • the commands are followed by two parameters.
  • the NASServerNumber parameter is 1 or 2, depending upon which NAS server is being allocated or deallocated the LUN (i.e., NAS servers 304a and 304b).
  • the SANNASLUN parameter is the LUN to be allocated or deallocated:
  • the administrator may input the following CLI commands to export or unexport a file system: makeExport (followed by five parameters) and unMakeExport (followed by four parameters).
  • the protocolFlag parameter is 0 for CIFS protocol, and is 1 for NFS protocol.
  • the NASserverNumber parameter is equal to 1 or 2 (for NAS server 304a or 304b, respectively).
  • the rwFlag parameter is equal to 0 for read-only, and equal to 1 for read- write.
  • the List parameter is a comma-separated list of hosts or users:
  • CLI commands may also be used to retrieve NAS file system listings.
  • CLI commands that may be used are listed as follows, followed by their description:
  • This CLI command lists LUNs assigned to NAS. For example, this is determined by identifying the LUNs assigned to NASServerNASOne and NASServerNASTwo (i.e., NAS servers 304a and 304b).
  • This CLI command lists all NAS file systems and their properties.
  • This CLI command lists all NAS file systems exported via CIFS, along with their properties.
  • This CLI command refreshes all NAS related configurations and deletes all uncommitted NAS changes.
  • a management directive may be provided that obtains NAS statistics.
  • the management directive allows monitoring of the NAS functionality, and alerts a user upon error.
  • monitoring users or applications may send a c40 management directive to a SAN server.
  • the c40 obtain statistics management directive is followed by no parameters.
  • the directive is sent to the SAN server, and the SAN server returns a series of strings that show network statistics, remote procedure call statistics, file system statistics, and error conditions.
  • a string returned by the SAN server that contains network statistics may have the following form: NASServer:NET:OutputPackets:Collisions:InputPackets:InputErrors
  • the NASServer parameter is the number 1 or 2, which corresponds to the first and second NAS servers (i.e., NAS servers 304a and 304b). Each NAS server will be represented by one instance of that line.
  • the value of the OutputPackets parameter is the number of network packets sent out by the NAS server.
  • the value of the Collisions parameter is the number of network collisions that have occurred. Those two values may be used to determine a collision rate (the collision rate is Collisions divided by OutputPackets). If the collision rate is greater than 0.05, the user's "wire" is considered "hot." This means that there likely are too many machines coupled to the user's "wire" or network, causing network collisions and greatly reducing performance. If that happens, the entity monitoring the network statistics may recommend that the user install bridges or switches into the network.
  • the value of the InputPackets parameter is the number of packets received by the NAS server.
  • the value of the InputErrors parameter is the number of bad packets received. Input errors may be caused by electrical problems on the network, or by receiving bad checksums. If the entity monitoring the network statistics sees the InputErrors rising, the user may have a client machine coupled to the network with a faulty network interface card, or the client may have damaged cables.
  • if the number of InputPackets on NAS server 304a is significantly higher or lower than that of NAS server 304b, the user may consider reassigning NAS LUNs across the NAS servers to help balance the load.
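  • The collision-rate check described above may be sketched in Python as follows, using the colon-separated form of the network statistics string and the 0.05 threshold given above; the function name and example counts are illustrative.

        def check_network_stats(stat_line):
            # Parse "NASServer:NET:OutputPackets:Collisions:InputPackets:InputErrors"
            # and flag a "hot" wire when the collision rate exceeds 0.05.
            server, tag, out_pkts, collisions, in_pkts, in_errs = stat_line.split(":")
            assert tag == "NET"
            out_pkts, collisions = int(out_pkts), int(collisions)
            rate = collisions / out_pkts if out_pkts else 0.0
            return {
                "nas_server": int(server),
                "collision_rate": rate,
                "hot_wire": rate > 0.05,
                "input_packets": int(in_pkts),
                "input_errors": int(in_errs),
            }

        print(check_network_stats("1:NET:20000:1500:18000:3"))
        # collision rate 0.075, so hot_wire is True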
  • a second returned string type contains remote procedure call (RPC) statistics, and may have the following form: NASServer:RPC:TotalCalls:MalformedCalls
  • As in the prior string, the value of NASServer is 1 or 2, which corresponds to the first or second NAS server (i.e., NAS servers 304a and 304b). Each NAS server will be represented by one instance of the RPC statistics line.
  • the value of the TotalCalls parameter is the number of RPC calls received.
  • the value of the MalformedCalls parameter is the number of RPC calls that had errors.
  • a malformed call is one that was damaged by the network (but still passed the checksum). If the entity monitoring the RPC statistics sees a large number of malformed calls, the user may have a network that is jittery.
  • if the value of the TotalCalls parameter for one NAS server is significantly higher than for the other, the user is providing more NFS traffic to one of the servers. In that case, the user should think about reassigning NAS LUNs across the NAS servers to help balance the load.
  • a third returned string type contains file system information, and may have the following form:
  • Each NAS Server may provide one of these strings for each file system it owns.
  • the /exportxxxx parameter is the name of the file system (where xxxx is a four-digit representation of the LUN number).
  • the value of the TotalSize parameter is the size of the file system in kilobytes, for example. The value of the
  • AvailableSize parameter is the amount of free space (in kilobytes, for example). If the amount of free space becomes too small, the entity monitoring the file system information should inform the user.
  • a fourth returned string type contains error information, and may have the following form:
  • the Severity parameter is a number (with 1 being the most critical).
  • the Message parameter describes an error that occurred. For example, if NAS server 304b went down and NAS server 304a took over, the following error message may be returned:
  • the output indicates that the network is OK. However, NAS server 304a took over for NAS server 304b at a time of 15:03 on August 23, as indicated by a timestamp in the output above. The output also indicates that NAS server 304b came back up at a time of 18:22. Further description of fail-over and recovery is provided in a section below. (Note that in embodiments, the timestamp may further indicate the particular time zone.)
  • when a SAN server receives the c40 management directive, it creates a NAS protocol message and forwards it to the NAS servers.
  • the message may contain a string of the following form:
  • each of NAS servers 304a and 304b may execute the steps shown in flowchart 2200 of FIG. 22, and further described below.
  • the steps below may be executed by NAS file manager 512 in each NAS server.
  • steps 2202-2210 may be executed by storage configuration module 514: In step 2202, the IP address upon which the request arrived is determined.
  • In step 2204, the network statistics are obtained.
  • In step 2206, the RPC statistics are obtained.
  • In step 2208, items listed in /usr/StorageFile/etc/Ipaddress.NASvolumes are analyzed, and file system information about each item is returned.
  • In step 2210, error messages are retrieved from the error logs, and the error logs are renamed to a name ending with ".old". In this manner, a subsequent call will not return the same errors.
  • the NAS servers each respond with two messages.
  • the first message contains the following string:
  • the value of the NumBytes parameter is the number of bytes in the information that follows.
  • the second message is the information collected in the above steps, such as shown in the example output above. If unsuccessful, the unsuccessful NAS server responds with the "NAS:0:NumBytes" string, followed by a string that describes the error.
9.0 Providing High Availability According to The Present Invention
  • a goal of the NAS implementation of the present invention is to provide high-availability.
  • the following sections present the NAS high-availability features by describing NAS configuration, boot-up, and fail-over.
  • the NAS configuration is accomplished by the SAN servers.
  • a command is issued on the SAN server, informing it of the initial IP addresses of the NAS servers.
  • the SAN server can communicate with the NAS servers. Further NAS configuration may be accomplished from the SAN server.
  • a NAS Configuration Sheet may be supplied to each user configuring the system.
  • the user fills out the NAS Configuration sheet, and a configuring entity may run three commands (described in the following sub-section) to configure the NAS servers.
  • the commands send a c41 management directive to the SAN server.
  • the c41 management directive takes on several instances, each configuring a different part of a NAS server.
  • the first instance configures the NAS server addresses, as described in the following section. Further instances of the c41 management directive may be used to configure NFS and CIFS on a NAS server.
9.1.1 Configuring NAS Server Addresses
  • because each NAS server provides redundancy, each includes two Internet protocol (IP) addresses.
  • the first IP address is a "boot up" IP address, and the second is a public IP address.
  • Two IP addresses are necessary for fail-over, as further described in a section below.
  • the entity performing the configuration may obtain the completed NAS Configuration Sheet, and use this information to perform the configuration.
  • a CLI or GUI sends the c41 management directive to the SAN server.
  • the c41 configure NAS server management directive may use the following parameters:
  • the hostname of the first NAS server 5. The public IP address of the first NAS server;
  • the SAN server may execute the steps shown in flowchart 2300 of FIG. 23, and further described below. In particular, the steps below may be executed by SAN storage manager 404 in the SAN server.
  • In step 2302, a network message is sent to the first NAS server (for example, using the NAS Protocol), including the information listed above.
  • the SAN server uses the configured IP address to communicate with the NAS server.
  • the SAN server informs the NAS server that it is NAS server 1.
  • In step 2304, a network message is sent to the second NAS server, including the information listed above.
  • the SAN server uses the configured IP address to communicate with the NAS server.
  • the SAN server informs the NAS server that it is NAS server 2.
  • the SAN server configuration is updated with the public address of NAS server 1 and NAS server 2.
  • future communication with the NAS servers occurs via the public IP address.
  • the network message of steps 2302 and 2304 above may include a string of the following form:
  • the ServNum parameter is the NAS Server number.
  • the SAN server places a 1 in that field when it sends the message to the first NAS server, and places a 2 in that field when it sends the message to the second NAS server.
  • the IP1 and IP2 parameters are the addresses of the SAN servers, IP3 and IP4 are the public and boot-up addresses of the first NAS server, and IP5 and IP6 are the public and boot-up addresses of the second NAS server.
  • NM, BC, and GW are the Netmask, Broadcast address, and Gateway of the network.
  • when the NAS servers receive the network message, they may execute the steps shown in flowchart 2400 of FIG. 24, and further described below.
  • the steps below may be executed by NAS file manager 512 in each NAS server.
  • steps 2402-2406 may be executed by storage configuration module 514:
  • In step 2402, a file named /usr/StorageFile/etc/NASconfig is created, and the following information is placed in the file:

        IP1 IP2
        1: IP3 IP4
        2: IP5 IP6
  • the first line contains the addresses of both SAN servers
  • the second line contains the public and boot-up addresses of the first NAS server
  • the third line contains the public and boot-up addresses of the second NAS server. Both NAS servers will use that file to figure out their NAS server number.
  • In step 2404, the value of the ServNum parameter is determined. If the ServNum parameter is 1, the NAS server modifies its Linux configuration files to assign itself the hostname of HostName1, and the IP address of IP4 (i.e., its boot-up IP address). If the ServNum parameter is 2, the NAS server modifies its Linux configuration to assign itself the hostname of HostName2, and the IP address of IP6.
  • In step 2406, the NAS server is rebooted.
  • After rebooting, each NAS server has been assigned and configured with the desired hostname and boot-up IP address. Also, the SAN servers have stored the public address of each NAS server. After boot-up, the NAS servers may assign themselves their public IP address, which is described in the following section.
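  • Given the /usr/StorageFile/etc/NASconfig layout shown above, the lookup a NAS server performs to determine its server number and public address might be sketched as follows; the parsing details and the example addresses are assumptions for illustration.

        def read_nas_config(text, my_bootup_ip):
            # The first line holds the SAN server addresses; each following line holds
            # "<server number>: <public IP> <boot-up IP>" for one NAS server.
            lines = text.strip().splitlines()
            san_servers = lines[0].split()
            me, peer = None, None
            for line in lines[1:]:
                number, addrs = line.split(":", 1)
                public_ip, bootup_ip = addrs.split()
                entry = {"number": int(number), "public_ip": public_ip, "bootup_ip": bootup_ip}
                if bootup_ip == my_bootup_ip:
                    me = entry
                else:
                    peer = entry
            return san_servers, me, peer

        config = "10.0.0.1 10.0.0.2\n1: 10.0.1.10 10.0.2.10\n2: 10.0.1.11 10.0.2.11\n"
        print(read_nas_config(config, "10.0.2.10"))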
  • each NAS Server may execute the steps shown in flowchart 2500 of FIG. 25, and further described below.
  • the steps below may be executed by NAS file manager 512 in each NAS server.
  • steps 2502-2526 may be executed by storage configuration module 514:
  • In step 2502, the file /usr/StorageFile/etc/NASconfig is searched for the line that contains its boot-up IP address. From that line, the NAS server determines its NAS server number and its public IP address.
  • In step 2504, the file /usr/StorageFile/etc/NASconfig is searched for the other NAS server's public IP address. This may be accomplished by searching the file for the other NAS server number.
  • In step 2506, whether the NAS server is attached to the network is verified. This may be verified by attempting to communicate with the SAN servers, for example (the IP addresses of the SAN servers are stored in /usr/StorageFile/etc/NASconfig). If the NAS server cannot communicate with the SAN servers, the NAS server may go into a loop, where it sleeps for 10 seconds, for example, and then retries this step.
  • In step 2508, whether the NAS server's public IP address is in use is determined. This may be determined by attempting to send a network message to its public IP address, for example. If its public address is in use, then fail-over has occurred. In this case, the NAS server sends a message to the peer NAS server, informing the peer NAS server that it has come back up. The peer NAS server relinquishes control of the assumed public IP address, and relinquishes control of the file systems it assumed control over during fail-over.
  • step 2510 the boot-up IP address of the NAS server is changed to its public IP address.
  • a Gratuitous ARP request is issued, which allows clients to update their IP-to-Ethernet mapping information.
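  • The connectivity and address checks of steps 2506-2512 might look like the sketch below; the TCP probe port, the /24 prefix, and the ip(8) and arping(8) invocations are assumptions made for illustration.

      import socket
      import subprocess
      import time

      def wait_for_network(san_server_ips, port=8173, retry_seconds=10):
          # Step 2506: verify network attachment by reaching a SAN server;
          # sleep and retry if neither SAN server can be contacted.
          while True:
              for ip in san_server_ips:
                  try:
                      with socket.create_connection((ip, port), timeout=5):
                          return
                  except OSError:
                      continue
              time.sleep(retry_seconds)

      def public_ip_in_use(public_ip, port=8173):
          # Step 2508: if something answers on the public address, the peer
          # has assumed it during fail-over and must be told we are back.
          try:
              with socket.create_connection((public_ip, port), timeout=5):
                  return True
          except OSError:
              return False

      def assume_public_ip(bootup_ip, public_ip, interface="eth0"):
          # Step 2510: replace the boot-up address with the public address.
          subprocess.run(["ip", "addr", "del", f"{bootup_ip}/24",
                          "dev", interface], check=False)
          subprocess.run(["ip", "addr", "add", f"{public_ip}/24",
                          "dev", interface], check=True)
          # Issue a gratuitous ARP so clients refresh their IP-to-Ethernet
          # mappings.
          subprocess.run(["arping", "-U", "-c", "3", "-I", interface,
                          public_ip], check=False)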
  • the directory /dev/StorageDir is examined for LUNs. To avoid constantly reassigning Linux symbolic device names, the NAS server does not query for all LUNs on startup. Instead, the NAS server examines the directory /dev/StorageDir. For each symbolic link in that directory, the NAS server adds the LUN into its operating system and re-creates the symbolic link. The links in /dev/StorageDir are more fully described above.
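  • The directory scan just described might be sketched as follows; the mechanism for adding a LUN back into the operating system is driver-specific, so it is only stubbed here.

      import os

      STORAGE_DIR = "/dev/StorageDir"

      def readd_luns():
          # For each symbolic link in /dev/StorageDir, add the LUN back into
          # the operating system and re-create the link, so that Linux
          # symbolic device names are not constantly reassigned at boot.
          for name in os.listdir(STORAGE_DIR):
              link_path = os.path.join(STORAGE_DIR, name)
              if not os.path.islink(link_path):
                  continue
              device = os.readlink(link_path)  # the underlying block device
              add_lun_to_os(device)
              os.remove(link_path)
              os.symlink(device, link_path)    # re-create the symbolic link

      def add_lun_to_os(device):
          # Placeholder: in practice this would trigger a SCSI bus rescan or
          # a similar driver-level operation for the LUN backing 'device'.
          pass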
  • the NAS server name is registered with the SAN servers as NASServerNASOne or NASServerNASTwo, depending on its server number.
  • step 2518 the file /usr/StorageFile/etc/PublicIPAddress.NASvolumes is searched for file systems. For each file system listed in the file, the NAS server checks the file system and mounts it.
  • step 2520 each of the entries in the file /usr/StorageFile/etc/PublicIPAddress.NFSexports is made available by NFS.
  • step 2522 each of the entries in the file /usr/StorageFile/etc/PublicIPAddress.CIFSexports is made available by CIFS.
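  • Steps 2518-2522 might be sketched as below, assuming the PublicIPAddress.NASvolumes file holds "device mountpoint" pairs and the export files hold one path per line; the fsck/mount/exportfs invocations are illustrative, and the CIFS side is only indicated in a comment.

      import subprocess

      def mount_and_export(public_ip):
          base = f"/usr/StorageFile/etc/{public_ip}"

          # Step 2518: check and mount each file system listed in the
          # .NASvolumes file.
          with open(base + ".NASvolumes") as f:
              for line in f:
                  if not line.strip():
                      continue
                  device, mountpoint = line.split()[:2]
                  subprocess.run(["fsck", "-y", device], check=False)
                  subprocess.run(["mount", device, mountpoint], check=True)

          # Step 2520: make each entry in the .NFSexports file available by NFS.
          with open(base + ".NFSexports") as f:
              for line in f:
                  path = line.strip()
                  if path:
                      subprocess.run(["exportfs", "-o", "rw", f"*:{path}"],
                                     check=False)

          # Step 2522: the .CIFSexports entries would similarly be added to
          # the Samba configuration and the CIFS service reloaded (not shown).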
  • step 2524 a NAS server process that implements the NAS protocol is started.
  • step 2526 the NAS server sleeps for a period of time, such as 5 minutes, and a heartbeat process is started to monitor the public IP address of the peer NAS server.
  • the NAS server waits for a period of time because the peer NAS server may also be booting up.
  • after the NAS server performs the above steps, it has determined its public IP address, the process that implements the NAS protocol is running, and a heartbeat process that monitors the peer NAS server has been started.
  • the following sub-section describes what happens when the peer NAS server crashes.
  • after a NAS server boots, it starts a heartbeat process that monitors its peer. For example, the NAS server may send a heartbeat pulse to the peer NAS server, and receive a heartbeat pulse sent by the peer NAS server. If the NAS server determines, by monitoring the peer NAS server's heartbeat pulse, that it cannot communicate with the peer NAS server, fail-over occurs.
  • the NAS server takes on the public IP address of the peer NAS server, takes on the hostname of the peer NAS server (as an alias to its own, for example), and exports all of the file systems that were exported by the peer NAS server.
  • FIG. 15 illustrates a NAS server 304 that includes a heartbeat process module 1502.
  • Embodiments for the heartbeat process module 1502 of the NAS server of the present invention are described as follows. These implementations are described herein for illustrative purposes, and are not limiting. In particular, the heartbeat process module as described herein can be achieved using any number of structural implementations, including hardware, firmware, software, or any combination thereof. The present invention is applicable to further ways of determining network failures, through the use of heartbeat signals and other means. Example implementations for determining network failures are described in pending U.S. Patent Application entitled "Internet Protocol Data Mirroring," Serial No. 09/664,499, Attorney Docket Number 1942.0040000.
  • Heartbeat process module 1502 generates a heartbeat process. Under the control of the heartbeat process executing on the NAS server, the NAS server connects to the NAS Protocol server on the public IP address of the peer NAS server (for example, the connection may be made every 10 seconds). After the connection is made, the NAS server sends a message containing the following string:
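  • The exact message text is not reproduced above; the sketch below sends an "AYT:"-prefixed packet (the name this message is given later in this section), padded to the 256-byte packet size and directed at TCP port 8173 per the NAS protocol description below. The connection interval and the failure callback are illustrative.

      import socket
      import time

      NAS_PROTOCOL_PORT = 8173   # see the NAS protocol description below
      HEARTBEAT_INTERVAL = 10    # seconds between connections

      def heartbeat_loop(peer_public_ip, on_peer_failure):
          """Periodically probe the peer's NAS Protocol server."""
          while True:
              try:
                  with socket.create_connection(
                          (peer_public_ip, NAS_PROTOCOL_PORT), timeout=5) as s:
                      # The "AYT:" (are-you-there) message, padded to the
                      # protocol's 256-byte packet size (format assumed).
                      s.sendall(b"AYT:".ljust(256, b"\0"))
                      s.recv(256)  # first packet of the peer's response
              except OSError:
                  # The peer's NAS Protocol server cannot be reached; hand
                  # off to the failure handling described below.
                  on_peer_failure()
                  return
              time.sleep(HEARTBEAT_INTERVAL)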
  • the peer NAS server may execute the steps shown in flowchart 2600 of FIG. 26, and further described below:
  • step 2602 the IP address upon which the request arrived is determined.
  • step 2604 the following files are checked for any modifications since the last "AYT:" message was received:
  • step 2606 if it was determined in step 2604 that any of those files have been modified, the modified file is sent in a response to the message.
  • step 2608 if any of the NFS locking status files have been modified since the last "AYT:" message, the modified NFS locking status file(s) are sent in a response to the message.
  • the response to the "AYT:" message may include several messages.
  • a first message contains a string that may have the following form:
  • the NumFiles parameter indicates the number of files that the peer NAS server is sending in the response to the NAS server. If the value of the NumFiles parameter is not zero, the peer NAS server sends two messages for each file found modified in the steps above. A first of the two messages may contain the following string:
  • the Filename parameter is the name of the file being sent in the response.
  • the fileSize parameter is the number of bytes in the file. A second of the two messages may contain the contents of the file.
  • if an error occurs, the peer NAS server responds with the "NAS:0:NumBytes" string, followed by a string that describes the error.
  • the parameter NumBytes is the length of the string that follows.
  • the NAS server may receive files containing the peer NAS server's NAS volumes, CIFS exports, and NFS exports.
  • NAS server 304a may have a public IP address of 192.11.109.8, and NAS server 304b may have the public IP address of 192.11.109.9. Accordingly, NAS server 304a would store the following files containing its information. These files are typically populated when the NAS server receives NAS Protocol messages:
  • NAS server 304a would also store the following files containing the corresponding information of the peer NAS server, NAS server 304b:
  • NAS server 304a sends an "AYT:" message to peer NAS server 304b, and peer NAS server 304b responds with updates to the above described NAS files. If the NAS server cannot connect to the peer NAS server, the peer may be down, and fail-over may be necessary. If the NAS server cannot connect to the public IP address of the peer NAS server, it first checks to see if it can send a "ping" to the public IP address of the peer. If so, the NAS server may assume the NAS Protocol server on the peer NAS server has exited. The NAS server may accordingly record an error message. The error message may be displayed the next time a user sends the c40 directive to the NAS server, for example.
  • the NAS server may attempt to contact each SAN server. If the NAS server cannot contact either SAN server, the NAS server may assume that something is wrong with its network interface card. In that event, the NAS server may sleep for some interval of time, such as 10 seconds, and then attempt to contact the NAS and SAN servers again. By sleeping for a period of time, fail-over due to temporary network outages may be avoided. After the second attempt, if the NAS server cannot contact its peer NAS server and the SAN servers, the NAS server may shut down NAS services. Specifically, the NAS server may execute the steps shown in flowchart 2700 of FIG. 27, and further described below:
  • step 2702 export of file systems by NFS and CIFS is stopped.
  • step 2704 all NAS file systems are unmounted.
  • step 2706 the NAS server public IP address is shut down, and the boot-up IP address is re-assumed.
  • step 2708 all LUNs are removed from the operating system.
  • step 2710 the boot-up process described in the previous section is executed, and further operations described in the previous section regarding NAS server boot-up may be performed.
  • if the NAS server is unable to connect to the public IP address of the peer NAS server, the NAS server may assume the peer NAS server is down. In that event, the NAS server may sleep for a period of time (for example, 10 seconds). Sleeping for a period of time may aid in preventing fail-over from occurring during temporary network outages. After sleeping, the NAS server may re-attempt connecting to the peer NAS server. If, after the second attempt, the NAS server still cannot connect to the peer NAS server, the NAS server may perform NAS fail-over.
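  • The decision logic described above might be organized as in the following sketch; the ping(8) invocation, the 10-second retry interval, and the returned action names are illustrative assumptions rather than the patent's literal implementation.

      import subprocess
      import time

      def ping(ip):
          # One ICMP echo request with a short timeout (Linux ping options).
          result = subprocess.run(["ping", "-c", "1", "-W", "2", ip],
                                  capture_output=True)
          return result.returncode == 0

      def decide_on_heartbeat_failure(peer_public_ip, san_server_ips):
          """Return 'log-error', 'fail-over', or 'shutdown'."""
          for attempt in range(2):
              if ping(peer_public_ip):
                  # The peer answers pings, so its NAS Protocol server has
                  # probably exited; just record an error for the operator.
                  return "log-error"
              if any(ping(ip) for ip in san_server_ips):
                  if attempt == 1:
                      # The network is healthy but the peer stays unreachable
                      # after the retry: take over its resources (FIG. 28).
                      return "fail-over"
              else:
                  if attempt == 1:
                      # Neither the peer nor the SAN servers respond: suspect
                      # our own network interface card and stop NAS services
                      # (FIG. 27).
                      return "shutdown"
              time.sleep(10)  # avoid reacting to a transient outage
          return "log-error"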
  • the NAS server may execute the steps shown in flowchart 2800 of FIG. 28, and further described below:
  • step 2802 the public IP address of the peer NAS server is assumed.
  • step 2804 a Gratuitous ARP request is issued, causing clients/hosts to update their IP-to-Ethernet mapping tables.
  • step 2806 a list of the peer NAS server's NAS volumes/file systems is obtained from /usr/StorageFile/etc/Ipaddr.NASvolumes. "Ipaddr" is the public IP address of the peer NAS server.
  • step 2808 the file systems obtained in step 2806 are checked.
  • step 2810 the file systems obtained in step 2806 are mounted.
  • step 2812 the list of NFS exports is obtained from /usr/StorageFile/etc/Ipaddr.NFSexports, and is exported via NFS.
  • step 2814 the NFS lock manager is stopped and re-started, causing clients/hosts to reclaim their locks.
  • step 2816 the list of CIFS exports is obtained from /usr/StorageFile/etc/Ipaddr.CIFSexports, and is exported via CIFS.
  • step 2818 the file /etc/smb.conf is modified to list its peer name as an alias for CIFS access.
  • step 2820 the heartbeat process is stopped. The heartbeat process is resumed when the peer comes back up, as described below.
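  • A sketch of this fail-over sequence follows; the ip(8), arping(8), fsck(8), mount(8), and exportfs(8) commands, the assumed one-entry-per-line layout of the Ipaddr.* files, and the lock manager init script name are illustrative assumptions, and the Samba changes are only indicated in comments.

      import subprocess

      def perform_failover(peer_public_ip, interface="eth0"):
          base = f"/usr/StorageFile/etc/{peer_public_ip}"

          # Steps 2802-2804: assume the peer's public IP address and announce
          # it with a gratuitous ARP.
          subprocess.run(["ip", "addr", "add", f"{peer_public_ip}/24",
                          "dev", interface], check=True)
          subprocess.run(["arping", "-U", "-c", "3", "-I", interface,
                          peer_public_ip], check=False)

          # Steps 2806-2810: check and mount the peer's NAS volumes.
          with open(base + ".NASvolumes") as f:
              for line in f:
                  if line.strip():
                      device, mountpoint = line.split()[:2]
                      subprocess.run(["fsck", "-y", device], check=False)
                      subprocess.run(["mount", device, mountpoint], check=True)

          # Steps 2812-2814: export the peer's NFS file systems, then restart
          # the NFS lock manager so clients reclaim their locks.
          with open(base + ".NFSexports") as f:
              for line in f:
                  path = line.strip()
                  if path:
                      subprocess.run(["exportfs", "-o", "rw", f"*:{path}"],
                                     check=False)
          subprocess.run(["/etc/init.d/nfslock", "restart"], check=False)

          # Steps 2816-2818: the peer's CIFS exports would be added to
          # /etc/smb.conf, with the peer's hostname listed as an alias, and
          # the Samba service reloaded (not shown).

          # Step 2820: the heartbeat process is stopped until the peer returns.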
  • a NAS server resumes its NAS functionality when it comes back up.
  • the NAS server checks to see if its public IP address is in use. For example, the NAS server may determine this by attempting to send a network message to its public IP address. If the NAS server's public address is in use, then fail-over has likely occurred. In that event, the NAS server may notify the peer NAS server that it has recovered. For example, the NAS server may send a message containing the following string to the peer NAS server:
  • the peer NAS server may perform steps to return control of the original storage resources to the NAS server. For example, the peer NAS server may execute the steps shown in flowchart 2900 of FIG. 29, and further described below:
  • step 2902 the public IP address of the NAS server is brought down.
  • step 2904 file systems in Ipaddr.NFSexports are unexported.
  • step 2906 file systems in Ipaddr.CIFSexports are unexported.
  • step 2908 file systems in Ipaddr.NASvolumes are unexported.
  • step 2910 the heartbeat to the NAS server is re-started.
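  • The fail-back handling of flowchart 2900 might be sketched as follows; the commands and file layouts are the same assumptions as in the fail-over sketch above, and "unexporting" the NASvolumes entries is interpreted here as unmounting them.

      import subprocess

      def release_peer_resources(peer_public_ip, interface="eth0"):
          """Steps 2902-2910: return resources to the recovered NAS server."""
          base = f"/usr/StorageFile/etc/{peer_public_ip}"

          # Step 2902: bring down the assumed public IP address.
          subprocess.run(["ip", "addr", "del", f"{peer_public_ip}/24",
                          "dev", interface], check=False)

          # Steps 2904-2906: withdraw the NFS and CIFS exports taken over
          # during fail-over (CIFS removal from smb.conf not shown).
          with open(base + ".NFSexports") as f:
              for line in f:
                  path = line.strip()
                  if path:
                      subprocess.run(["exportfs", "-u", f"*:{path}"],
                                     check=False)

          # Step 2908: release the peer's NAS volumes.
          with open(base + ".NASvolumes") as f:
              for line in f:
                  if line.strip():
                      mountpoint = line.split()[1]
                      subprocess.run(["umount", mountpoint], check=False)

          # Step 2910: re-start the heartbeat to the recovered NAS server.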
  • failure of a NAS server results in the peer NAS server taking over all of the failed NAS server's resources. When the failed NAS server comes back up, it resumes control over its original resources.
  • FIG. 15 illustrates a NAS server 304 that includes a NAS protocol module 1504, according to an embodiment of the present invention.
  • Embodiments for the NAS protocol module 1504 of the present invention are described as follows. These implementations are described herein for illustrative purposes, and are not limiting. In particular, the NAS protocol module as described herein can be achieved using any number of structural implementations, including hardware, firmware, software, or any combination thereof.
  • NAS protocol module 1504 generates a NAS protocol process.
  • the NAS protocol process binds to TCP port number 8173.
  • the NAS protocol may use ASCII strings.
  • the first packet is 256 bytes. If the string in the packet is less than 256 bytes, the string may be NULL-terminated and the receiving process ignores the remainder of the 256 bytes.
  • Messages in the NAS protocol are listed below. Further description of each message is presented elsewhere herein. In an embodiment, for all cases, a failure response consists of two messages.
  • the first message is a 256-byte packet that contains the string "NAS:0:NumBytes", where NumBytes is the number of bytes in the second message.
  • the second message contains a sfring describing the error.
  • the responses listed below are example responses indicating successful completion.
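  • A minimal sketch of this packet framing is shown below; only the TCP port (8173), the 256-byte NULL-padded ASCII packets, and the "NAS:0:NumBytes" failure form are drawn from the text, while the directive contents and helper names are placeholders.

      import socket

      NAS_PROTOCOL_PORT = 8173
      PACKET_SIZE = 256

      def send_packet(sock, text):
          # Each protocol packet is a 256-byte ASCII string; shorter strings
          # are NULL-terminated and the remainder of the packet is ignored.
          sock.sendall(text.encode("ascii").ljust(PACKET_SIZE, b"\0"))

      def recv_packet(sock):
          data = b""
          while len(data) < PACKET_SIZE:
              chunk = sock.recv(PACKET_SIZE - len(data))
              if not chunk:
                  raise ConnectionError("peer closed the connection")
              data += chunk
          return data.split(b"\0", 1)[0].decode("ascii")

      def send_directive(server_ip, directive):
          """Send one NAS protocol directive; return (header, error text)."""
          with socket.create_connection((server_ip, NAS_PROTOCOL_PORT)) as sock:
              send_packet(sock, directive)
              header = recv_packet(sock)
              if header.startswith("NAS:0:"):
                  # Failure: the header carries the byte count of a second
                  # message containing a string that describes the error.
                  num_bytes = int(header.split(":")[2])
                  error = b""
                  while len(error) < num_bytes:
                      chunk = sock.recv(num_bytes - len(error))
                      if not chunk:
                          break
                      error += chunk
                  return header, error.decode("ascii")
              return header, None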
  • an example of a computer system 1040 is shown in FIG. 10.
  • the computer system 1040 represents any single or multi-processor computer. In conjunction, single-threaded and multi-threaded applications can be used. Unified or distributed memory systems can be used.
  • Computer system 1040, or portions thereof, may be used to implement the present invention.
  • each of the SAN servers and NAS servers of the present invention may comprise software running on a computer system such as computer system 1040.
  • elements of the present invention may be implemented in a multi-platform (platform independent) programming language such as JAVA 1.1, programming language/structured query language (PL/SQL), hyper-text mark-up language (HTML), practical extraction report language (PERL), common gateway interface/structured query language (CGI/SQL) or the like.
  • Java™-enabled and JavaScript™-enabled browsers are used, such as Netscape™, HotJava™, and Microsoft™ Explorer™ browsers.
  • Active content Web pages can be used. Such active content Web pages can include JavaTM applets or ActiveXTM controls, or any other active content technology developed now or in the future.
  • Computer system 1040 includes one or more processors, such as processor 1044. Processor 1044 can execute software implementing routines described above, such as shown in flowchart 1200.
  • processor 1044 is connected to a communication infrastructure 1042 (e.g., a communications bus, cross-bar, or network).
  • Various software embodiments are described in terms of this exemplary computer system. After reading this description, it will become apparent to a person skilled in the relevant art how to implement the invention using other computer systems and/or computer architectures.
  • SAN server 302 and/or NAS server 304 may include one or more of processor 1044.
  • Computer system 1040 can include a display interface 1002 that forwards graphics, text, and other data from the communication infrastructure 1042 (or from a frame buffer not shown) for display on the display unit 1030.
  • administrative interface 412 may include a display unit 1030 that displays GUI 1200.
  • the display unit 1030 may be included in the structure of storage appliance 210, or may be separate.
  • GUI communication link 426 may be included in display interface 1002.
  • Display interface 1002 may include a network connection, including a LAN, WAN, or the Internet, such that GUI 1200 may be viewed remotely from SAN server 302.
  • Computer system 1040 also includes a main memory 1046, preferably random access memory (RAM), and can also include a secondary memory 1048.
  • the secondary memory 1048 can include, for example, a hard disk drive 1050 and/or a removable storage drive 1052, representing a floppy disk drive, a magnetic tape drive, an optical disk drive, etc.
  • the removable storage drive 1052 reads from and/or writes to a removable storage unit 1054 in a well known manner.
  • Removable storage unit 1054 represents a floppy disk, magnetic tape, optical disk, etc., which is read by and written to by removable storage drive 1052.
  • the removable storage unit 1054 includes a computer usable storage medium having stored therein computer software and/or data.
  • secondary memory 1048 may include other similar means for allowing computer programs or other instructions to be loaded into computer system 1040.
  • Such means can include, for example, a removable storage unit 1062 and an interface 1060. Examples can include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM, or PROM) and associated socket, and other removable storage units 1062 and interfaces 1060 which allow software and data to be transferred from the removable storage unit 1062 to computer system 1040.
  • Computer system 1040 can also include a communications interface 1064.
  • first network interface 406, second network interface 402, and SAN server interface 408 shown in FIG. 4, and first network interface 508 and second network interface 502 shown in FIG. 5, may include one or more aspects of communications interface 1064.
  • Communications interface 1064 allows software and data to be transferred between computer system 1040 and external devices via communications path 1066.
  • Examples of communications interface 1064 can include a modem, a network interface (such as Ethernet card), a communications port, interfaces described above, etc.
  • Software and data transferred via communications interface 1064 are in the form of signals which can be electronic, electromagnetic, optical or other signals capable of being received by communications interface 1064, via communications path 1066.
  • communications interface 1064 provides a means by which computer system 1040 can interface to a network such as the Internet.
  • the present invention can be implemented using software running (that is, executing) in an environment similar to that described above with respect to FIG. 8.
  • the term "computer program product" is used to generally refer to removable storage unit 1054, a hard disk installed in hard disk drive 1050, or a carrier wave carrying software over communication path 1066.
  • a computer useable medium can include magnetic media, optical media, or other recordable media, or media that transmits a carrier wave or other signal.
  • Computer program products are means for providing software to computer system 1040.
  • computer programs (also called computer control logic) are stored in main memory 1046 and/or secondary memory 1048. Computer programs can also be received via communications interface 1064. Such computer programs, when executed, enable the computer system 1040 to perform the features of the present invention as discussed herein. In particular, the computer programs, when executed, enable the processor 1044 to perform features of the present invention.
  • the present invention can be implemented as control logic in software, firmware, hardware or any combination thereof.
  • the software may be stored in a computer program product and loaded into computer system 1040 using removable storage drive 1052, hard disk drive 1050, or interface 1060.
  • the computer program product may be downloaded to computer system 1040 over communications path 1066.
  • the control logic when executed by the one or more processors 1044, causes the processor(s) 1044 to perform functions of the invention as described herein.
  • the invention is implemented primarily in firmware and/or hardware using, for example, hardware components such as application specific integrated circuits (ASICs).

Abstract

A method, system, and apparatus for accessing a plurality of storage devices in a storage area network (SAN) as network attached storage (NAS) in a data communication network is described. A SAN server includes a first interface and a second interface. The first interface is configured to be coupled to the SAN. The second interface is coupled to a first data communication network. A NAS server includes a third interface and fourth interface. The third interface is configured to be coupled to a second data communication network. The fourth interface is coupled to the first data communication network. The SAN server allocates a first portion of the plurality of storage devices in the SAN to be accessible through the second interface to at least one first host coupled to the first data communication network. The SAN server allocates a second portion of the plurality of storage devices in the SAN to the NAS server. The NAS server configures access to the second portion of the plurality of storage devices to at least one second host coupled to the second data communication network.

Description

System and Method for Accessing a Storage Area Network as Network Attached Storage
Background of the Invention
Field of the Invention
The invention relates generally to the field of storage area networks, and more particularly to providing access to a storage area network as network attached storage.
Related Art
Network attached storage (NAS) is a term used to refer to storage elements or devices that connect to a network and provide file access services to computer systems. NAS devices attach directly to networks, such as local area networks, using traditional protocols such as Ethernet and TCP/IP, and serve files to any host or client connected to the network. A NAS device typically consists of an engine, which implements the file access services, and one or more storage devices, on which data is stored. A computer host system that accesses NAS devices uses a file system device driver to access the stored data. The file system device driver typically uses file access protocols such as Network File System (NFS) or Common Internet File System (CIFS). NAS devices interpret these commands and perform the internal file and device input/output (I/O) operations necessary to execute them.
Because NAS devices independently attach to the network, the management of these devices generally occurs on a device-by-device basis. For instance, each NAS device must be individually configured to attach to the network. Furthermore, the copying of a NAS device for purposes of creating a back-up must be configured individually.
Storage area networks (SANs) are dedicated networks that connect one or more hosts or servers to storage devices and subsystems. SANs may utilize a storage appliance to provide for management of the SAN. For instance, a storage appliance may be used to create and manage back-up copies of the data stored in the storage devices of the SAN by creating point-in-time copies of the data, or by actively mirroring the data. It would be desirable to provide these and other SAN-type storage management functions for storage devices attached to a network, such as a local area network.
Summary of the Invention
The present invention is directed to a system and method for interfacing a storage area network (SAN) with a first data communication network. One or more hosts coupled to the first data communication network can access data stored in one or more of a plurality of storage devices in the SAN. The one or more hosts access one or more of the plurality of storage devices as network attached storage (NAS). A SAN server is coupled to a SAN. A NAS server is coupled to the SAN server through a second data communication network. The NAS server is coupled to the first data communication network. A portion of at least one of the plurality of storage devices is allocated from the SAN server to the NAS server. The allocated portion is configured as NAS storage in the NAS server. The configured portion is exported from the NAS server to be accessible to the one or more hosts coupled to the first data communication network. In a further aspect of the present invention, a system and method for managing the allocation of storage from a storage area network (SAN) as network attached storage (NAS) to a data communication network, is described. A storage management directive is received from a graphical user interface. A message corresponding to the received storage management directive is sent to a NAS server. A response corresponding to the sent message is received from the NAS server.
In still a further aspect of the present invention, an apparatus for accessing a plurality of storage devices in a storage area network (SAN) as network attached storage (NAS) in a data communication network is described. A SAN server includes a first interface and a second interface. The first interface is configured to be coupled to the SAN. The second interface is coupled to a first data communication network. A NAS server includes a third interface and a fourth interface. The third interface is configured to be coupled to a second data communication network. The fourth interface is coupled to the first data communication network. The SAN server allocates a first portion of the plurality of storage devices in the SAN to be accessible through the second interface to at least one first host coupled to the first data communication network. The SAN server allocates a second portion of the plurality of storage devices in the SAN to the NAS server. The NAS server configures access to the second portion of the plurality of storage devices to at least one second host coupled to the second data communication network.
In still a further aspect of the present invention, a storage appliance for accessing a plurality of storage devices in a storage area network (SAN) as network attached storage (NAS) in a data communication network is described. A first SAN server is configured to be coupled to the plurality of storage devices in the SAN via a first data communication network. The first SAN server is configured to be coupled to a second data communication network. A second SAN server is configured to be coupled to the plurality of storage devices in the
SAN via a third data communication network. The second SAN server is configured to be coupled to a fourth data communication network. A first NAS server is configured to be coupled to a fifth data communication network. The first NAS server is coupled to the second and the fourth data communication networks. A second NAS server is configured to be coupled to the fifth data communication network. The second NAS server is coupled to the second and the fourth data communication networks. The first SAN server allocates a first portion of the plurality of storage devices in the SAN to be accessible to at least one first host coupled to the second data communication network. The first SAN server allocates a second portion of the plurality of storage devices in the SAN to the first NAS server. The first NAS server configures access to the second portion of the plurality of storage devices to at least one second host coupled to the fifth data communication network. The second NAS server assumes the configuring of access to the second portion of the plurality of storage devices by the first NAS server during failure of the first NAS server. The second SAN server assumes allocation of the second portion of the plurality of storage devices by the first SAN server during failure of the first SAN server.
The present invention provides many advantages. These include:
1. Ease of use. A graphical user interface (GUI) of the present invention provides a convenient administrative interface. A system administrator may allocate a NAS file system with a single press of a mouse button.
2. Seamless integration into an existing SAN appliance administrative interface. The NAS functionality may be added as a new window in an existing administrative GUI, for example.
3. Maintaining the benefits of a single appliance. In a preferred embodiment, a single appliance is presented that contains data management functions and NAS capabilities.
4. Full use of existing SAN appliance data management functions. The features of a SAN appliance (such as data mirroring, virtualization of storage, and instant snapshot copying of storage) are made available to NAS file systems.
5. High-availability. With multiple NAS server capability, the implementation is resistant to single points of failure.
6. High capacity. The NAS implementation is capable of providing data to a large number of clients. It is capable of providing data to UNIX and Windows hosts.
7. Minimal impact on an existing SAN appliance architecture. The NAS implementation has a minimal impact on any existing SAN appliance software code. Addition of the NAS servers does not appreciably degrade the performance of existing features of the SAN appliance.
Further aspects of the present invention, and further features and benefits thereof, are described below. The accompanying drawings, which are incorporated herein and form a part of the specification, illustrate the present invention and, together with the description, further serve to explain the principles of the invention and to enable a person skilled in the pertinent art to make and use the invention.
Brief Description of the Figures
In the drawings, like reference numbers indicate identical or functionally similar elements. Additionally, the left-most digit(s) of a reference number identifies the drawing in which the reference number first appears.
FIG. 1 illustrates a storage appliance coupling computer hosts to storage devices in a storage area network, using a communication protocol such as fibre channel or SCSI, according to an example environment.
FIG.2 illustrates a storage appliance coupling computer hosts to storage devices in a storage area network, as shown in FIG. 1, and further coupling computer hosts in a local area network to the storage area network, according to an exemplary embodiment of the present invention.
FIG. 3 A illustrates a block diagram of a storage appliance.
FIG. 3B illustrates a block diagram of a storage appliance with network attached storage server, according to an exemplary embodiment of the present invention.
FIG.4 illustrates a block diagram of a storage area network (SAN) server, according to an exemplary embodiment of the present invention.
FIG. 5 illustrates a block diagram of a network attached storage (NAS) server, according to an exemplary embodiment of the present invention.
FIG. 6 illustrates a storage appliance coupling hosts in two network types to storage devices in a storage area network, with redundant connections, according to an exemplary embodiment of the present invention.
FIG. 7 illustrates a block diagram of a storage appliance with redundant SAN and NAS servers, according to an exemplary embodiment of the present invention.
FIG. 8 illustrates an example data communication network, according to an embodiment of the present invention.
FIG. 9 shows a simplified five-layered communication model, based on an Open System Interconnection (OSI) reference model.
FIG. 10 shows an example of a computer system for implementing aspects of the present invention.
FIG. 11 illustrates the connection of SAN and NAS servers to zoned switches, according to an exemplary embodiment of the present invention.
FIG. 12 illustrates an example graphical user interface, according to an exemplary embodiment of the present invention.
FIGS. 13 A-B show a flowchart providing operational steps of an example embodiment of the present invention.
FIGS . 14 A-B show a flowchart providing operational steps of an example embodiment of the present invention.
FIG. 15 illustrates a block diagram of a NAS server, according to an exemplary embodiment of the present invention.
FIGS. 16-29 show flowcharts providing operational steps of exemplary embodiments of the present invention.
The present invention will now be described with reference to the accompanying drawings.
Detailed Description of the Preferred Embodiments
Table of Contents
1.0 Overview
2.0 Terminology
3.0 Example Storage Area Network Environment
3.1 Example Storage Appliance
4.0 Network Attached Storage Embodiments of the Present Invention
5.0 Storage Appliance Embodiments According to the Present Invention
5.1 Example SAN Server Embodiments According to the Present Invention
5.2 Example NAS Server Embodiments According to the Present Invention
5.3 Administrative Interface of the Present Invention
6.0 Allocation and Deallocation of NAS Storage
6.1 Example Protocol for NAS LUN Allocation and De-allocation
6.2 NAS Server Configuration of Allocated Storage
6.3 Listing Volumes
6.4 Unexporting File Systems
6.5 Exporting File Systems
6.6 Setting Permissions
7.0 GUI Command Line Interface
8.0 Obtaining Statistics
9.0 Providing High Availability According to The Present Invention
9.1 NAS Server Configuration
9.1.1 Configuring NAS Server Addresses
9.2 NAS Server Boot-up
9.3 NAS Server Failure and Recovery
10.0 Example NAS Protocol Directives
11.0 Example Computer System
12.0 Conclusion
1.0 Overview
The present invention is directed toward providing full storage area network (SAN) functionality for Network Attached Storage (NAS) that is attached to, and operating in, a network. The present invention utilizes a SAN. The SAN may be providing storage to hosts that communicate with the SAN according to Small Computer Systems Interface (SCSI), Fibre Channel, and/or other data communication protocols on a first network. A storage appliance couples the SAN to the hosts. The present invention attaches the storage appliance to a second network, such that storage in the SAN may be accessed by hosts in the second network as one or more NAS devices. The second network may be a local area network, wide area network, or other network type.
According to an aspect of the present invention, the storage appliance provides data management capabilities for an attached SAN. Such data management capabilities may include data mirroring, point-in-time imaging (snapshot) of data, storage virtualization, and storage security. A storage appliance managing a SAN may also be referred to as a SAN appliance. Typically, these functions are controlled by one or more SAN servers within the SAN appliance. According to the present invention, the SAN appliance also provides NAS capabilities, such as access to file systems stored in the SAN over the second network.
According to an aspect of the present invention, NAS functionality is provided by one or more NAS servers within the SAN appliance. The NAS servers are attached to the SAN servers. The SAN servers communicate with the NAS servers using a protocol containing commands that the NAS servers understand. For instance, these commands may direct the NAS servers to allocate and deallocate storage from the SAN to and from the second network.
In a preferred embodiment, the NAS servers appear as separate hosts to the SAN servers. The SAN servers allocate storage to the NAS servers. In turn, the NAS servers allocate the storage to the second network. For instance, storage may be allocated in the form of logical unit numbers (LUNs) to the NAS servers. According to the present invention, the NAS server LUNs are virtualized on the second network, instead of being dedicated to a single host. Thus, the SAN appliance can export LUNs to the entire second network. According to an embodiment of the present invention, a user, such as a system administrator, can control NAS functions performed by the SAN appliance through an administrative interface, which includes a central graphical user interface (GUI). A GUI presents a single management console for controlling multiple NAS servers. In accordance with the present invention, the SAN appliance allows storage in a SAN to be accessed as one or more NAS devices. The SAN appliance creates local file systems, and then grants access to those file systems over the second network through standard protocols, such as Network File System (NFS) and Common Internet File System (CIFS) protocols. As such, the present invention unifies local SAN management and provides file systems over the second network.
Terminology related to the present invention is described in the following subsection. Next, an example storage area network environment is described, in which the present invention may be applied. Detailed embodiments of the SAN appliance, SAN server, and NAS server of the present invention are presented in the subsequent sections. Sections follow which describe how storage is allocated and de-allocated, and otherwise managed. These sections are followed by sections which describe configuring a NAS server, and handling failure of a NAS server, followed by a summary of NAS protocol messages. Finally, an exemplary computer system in which aspects of the present invention may be implemented is then described.
2.0 Terminology
To more clearly delineate the present invention, an effort is made throughout the specification to adhere to the following term definitions as consistently as possible.
Arbitrated Loop: A shared 100MBps Fibre Channel transport supporting up to 126 devices and 1 fabric attachment.
Fabric: One or more Fibre Channel switches in a networked topology.
HBA: Host bus adapter; an interface between a server or workstation bus and a Fibre Channel network.
Hub: In Fibre Channel, a wiring concentrator that collapses a loop topology into a physical star topology.
Initiator: On a Fibre Channel network, typically a server or a workstation that initiates transactions to disk or tape targets.
JBOD: Just a bunch of disks; typically configured as an Arbitrated Loop segment in a single chassis.
LAN: Local area network; a network linking multiple devices in a single geographical location.
Logical Unit: The entity within a target that executes I/O commands. For example, SCSI I/O commands are sent to a target and executed by a logical unit within that target. A SCSI physical disk typically has a single logical unit. Tape drives and array controllers may incorporate multiple logical units to which I/O commands can be addressed. Typically, each logical unit exported by an array controller corresponds to a virtual disk.
LUN: Logical Unit Number; the identifier of a logical unit within a target, such as a SCSI identifier.
NAS: Network Attached Storage; storage elements that connect to a network and provide file access services to computer systems. A NAS storage element typically consists of an engine, which implements the file services, and one or more devices, on which data is stored.
Point-to-point: A dedicated Fibre Channel connection between two devices.
Private loop: A free-standing Arbitrated Loop with no fabric attachment.
Public loop: An Arbitrated Loop attached to a fabric switch.
RAID: Redundant Array of Independent Disks.
SCSI: Small Computer Systems Interface; both a protocol for transmitting large blocks of data and a parallel bus architecture.
SCSI-3: A SCSI standard that defines transmission of SCSI protocol over serial links.
Storage: Any device used to store data; typically, magnetic disk media or tape.
Switch: A device providing full bandwidth per port and high-speed routing of data via link-level addressing.
Target: Typically a disk array or a tape subsystem on a Fibre Channel network.
TCP: Transmission Control Protocol; TCP enables two hosts to establish a connection and exchange streams of data; TCP guarantees delivery of data and also guarantees that packets will be delivered in the same order in which they were sent.
Topology: The physical or logical arrangement of devices in a networked configuration.
UDP: User Datagram Protocol; a connectionless protocol that, like TCP, runs on top of IP networks. Unlike TCP/IP, UDP provides very few error recovery services, offering instead a direct way to send and receive datagrams over an IP network.
WAN: Wide area network; a network linking geographically remote sites.
3.0 Example Storage Area Network Environment
In a preferred embodiment, the present invention is applicable to storage area networks. As discussed above, a storage area network (SAN) is a high-speed sub-network of shared storage devices. A SAN operates to provide access to the shared storage devices for all servers on a local area network (LAN), wide area network (WAN), or other network coupled to the SAN.
It is noted that SAN attached storage (S AS) elements can connect directly to the SAN, and provide file, database, block, or other types of data access services. SAS elements that provide such file access services are commonly called Network Attached Storage, or NAS devices. NAS devices can be coupled to the SAN, either directly or through their own network configuration. NAS devices can be coupled outside of a SAN, to a LAN, for example. A SAN configuration potentially provides an entire pool of available storage to each network server, eliminating the conventional dedicated connection between server and disk. Furthermore, because a server's mass data storage requirements are fulfilled by the SAN, the server's processing power is largely conserved for the handling of applications rather than the handling of data requests. FIG. 8 illustrates an example data communication network 800, according to an embodiment of the present invention. Network 800 includes a variety of devices which support communication between many different entities, including businesses, universities, individuals, government, and financial institutions. As shown in FIG. 8, a communication network, or combination of networks, interconnects the elements of network 800. Network 800 supports many different types of communication links implemented in a variety of architectures.
Network 800 may be considered to include an example of a storage area network that is applicable to the present invention. Network 800 comprises a pool of storage devices, including disk arrays 820, 822, 824, 828, 830, and 832. Network 800 provides access to this pool of storage devices to hosts/servers comprised by or coupled to network 800. Network 800 may be configured as point-to-point, arbitrated loop, or fabric topologies, or combinations thereof. Network 800 comprises a switch 812. Switches, such as switch 812, typically filter and forward packets between LAN segments. Switch 812 may be an Ethernet switch, fast-Ethernet switch, or another type of switching device known to persons skilled in the relevant art(s). In other examples, switch 812 may be replaced by a router or a hub. A router generally moves data from one local segment to another, and to the telecommunications carrier, such as AT & T or WorldCom, for remote sites. A hub is a common connection point for devices in a network. Suitable hubs include passive hubs, intelligent hubs, and switching hubs, and other hub types known to persons skilled in the relevant art(s).
Various types of terminal equipment and devices may interface with network 800. For example, a personal computer 802, a workstation 804, a printer
806, a laptop mobile device 808, and a handheld mobile device 810 interface with network 800 via switch 812. Further types of terminal equipment and devices that may interface with network 800 may include local area network (LAN) connections (e.g., other switches, routers, or hubs), personal computers with modems, content servers of multi-media, audio, video, and other information, pocket organizers, Personal Data Assistants (PDAs), cellular phones, Wireless Application Protocol (WAP) phones, and set-top boxes. These and additional types of terminal equipment and devices, and ways to interface them with network 800, will be known by persons skilled in the relevant art(s). Network 800 includes one or more hosts and/or servers. For example, network 800 comprises server 814 and server 816. Servers 814 and 816 provide devices 802, 804, 806, 808, and 810 with network resources via switch 812. Servers 814 and 816 are typically computer systems that process end-user requests for data and/or applications. In one example configuration, servers 814 and 816 provide redundant services. In another example configuration, server 814 and server 816 provide different services and thus share the processing load needed to serve the requirements of devices 802, 804, 806, 808, and 810. In further example configurations, one or both of servers 814 and 816 are connected to the Internet, and thus server 814 and/or server 816 may provide Internet access to network 800. One or both of servers 814 and 816 may be Windows NT servers or UNIX servers, or other servers known to persons skilled in the relevant art(s).
A SAN appliance or device as described elsewhere herein may be inserted into network 800, according to embodiments of the present invention. For example, a SAN appliance 818 may be implemented to provide the required connectivity between the storage device networking (disk arrays 820, 822, 824, 828, 830, and 832) and hosts and servers 814 and 816, and to provide the additional functionality of SAN and NAS management of the present invention described elsewhere herein. Hence, the SAN appliance interfaces the storage area network, or SAN, which includes disk arrays 820, 822, 824, 828, 830, and 832, hub 826, and related networking, with servers 814 and 816.
Network 800 includes a hub 826. Hub 826 is connected to disk arrays 828, 830, and 832. Preferably, hub 826 is a fibre channel hub or other device used to allow access to data stored on connected storage devices, such as disk arrays 828, 830, and 832. Further fibre channel hubs may be cascaded with hub
826 to allow for expansion of the SAN, with additional storage devices, servers, and other devices. In an example configuration for network 800, hub 826 is an arbitrated loop hub. In such an example, disk arrays 828, 830, and 832 are organized in a ring or loop topology, which is collapsed into a physical star configuration by hub 826. Hub 826 allows the loop to circumvent a disabled or disconnected device while maintaining operation.
Network 800 may include one or more switches in addition to switch 812 that interface with storage devices. For example, a fibre channel switch or other high-speed device may be used to allow servers 814 and 816 access to data stored on connected storage devices, such as disk arrays 820, 822, and 824, via appliance 818. Fibre channel switches may be cascaded to allow for the expansion of the SAN, with additional storage devices, servers, and other devices.
Disk arrays 820, 822, 824, 828, 830, and 832 are storage devices providing data and application resources to servers 814 and 816 through appliance 818 and hub 826. As shown in FIG. 8, the storage of network 800 is principally accessed by servers 814 and 816 through appliance 818. The storage devices may be fibre channel-ready devices, or SCSI (Small Computer Systems Interface) compatible devices, for example. Fibre channel-to-SCSI bridges may be used to allow SCSI devices to interface with fibre channel hubs and switches, and other fibre channel-ready devices. One or more of disk arrays 820, 822, 824, 828, 830, and 832 may instead be alternative types of storage devices, including tape systems, JBODs (Just a Bunch Of Disks), floppy disk drives, optical disk drives, and other related storage drive types. The topology or architecture of network 800 will depend on the requirements of the particular application, and on the advantages offered by the chosen topology. One or more hubs 826, one or more switches, and/or one or more appliances 818 may be interconnected in any number of combinations to increase network capacity. Disk arrays 820, 822, 824, 828, 830, and 832, or fewer or more disk arrays as required, may be coupled to network 800 via these hubs 826, switches, and appliances 818.
Communication over a communication network, such as shown in network 800 of FIG. 8, is carried out through different layers. FIG. 9 shows a simplified five-layered communication model, based on Open System Interconnection (OSI) reference model. As shown in FIG.9, this model includes an application layer 908, a transport layer 910, a network layer 920, a data link layer 930, and a physical layer 940. As would be apparent to persons skilled in the relevant art(s), any number of different layers and network protocols may be used as required by a particular application. Application layer 908 provides functionality for the different tools and information services which are used to access information over the communications network. Example tools used to access information over a network include, but are not limited to Telnet log-in service 901, IRC chat 902, Web service 903, and SMTP (Simple Mail Transfer Protocol) electronic mail service 906. Web service 903 allows access to HTTP documents 904, and FTP
(File Transfer Protocol) and Gopher files 905. Secure Socket Layer (SSL) is an optional protocol used to encrypt communications between a Web browser and Web server.
Transport layer 910 provides transmission control functionality using protocols, such as TCP, UDP, SPX, and others, that add information for acknowledgments that blocks of the file had been received.
Network layer 920 provides routing functionality by adding network addressing information using protocols such as IP, IPX, and others, that enable data transfer over the network. Data link layer 930 provides information about the type of media on which the data was originated, such as Ethernet, token ring, or fiber distributed data interface (FDDI), and others.
Physical layer 940 provides encoding to place the data on the physical transport, such as twisted pair wire, copper wire, fiber optic cable, coaxial cable, and others.
Description of this example environment in these terms is provided for convenience only. It is not intended that the invention be limited to application in this example environment. In fact, after reading the description herein, it will become apparent to persons skilled in the relevant art(s) how to implement the invention in alternative environments. Further details on designing, configuring, and operating storage area networks are provided in Tom Clark, "Designing Storage Area Networks: A Practical Reference for Implementing Fibre Channel SANs" (1999).
3.1 Example Storage Appliance
The present invention may be implemented in and operated from a storage appliance or SAN appliance that interfaces between the hosts and the storage subsystems comprising the SAN. The present invention is completely host (operating system) independent and storage system independent. The NAS functionality of the storage appliance according to the present invention does not require special host software. Furthermore, the NAS functionality according to
the present invention is not tied to a specific storage vendor and operates with any type of storage, including fibre channel and SCSI.
The present invention may be implemented in a storage, or SAN, appliance, such as the SANLink™ appliance, developed by StorageApps Inc., located in Bridgewater, New Jersey. In embodiments, a storage appliance-based or web-based administrative graphical interface may be used to centrally manage the SAN and NAS functionality.
A storage appliance, such as the SANLink™, unifies SAN management by providing resource allocation to hosts. In the case of the SANLink™, for instance, it also provides data management capabilities. These data management capabilities may include:
1. Storage virtualization/mapping. All connected storage in the SAN is provided as a single pool of storage, which may be partitioned and shared among hosts as needed. 2. Data mirroring. An exact copy of the data stored in the SAN storage devices is created and maintained in real time. The copy may be kept at a remote location. 3. Point-in-time copying (Snapshot). An instantaneous virtual image of the existing storage may be created. The virtual replica can be viewed and manipulated in the same way as the original data.
4. Storage security. Access to particular storage devices, or portions thereof, may be restricted. The storage devices or portions may be masked from view of particular hosts or users.
Further data management capabilities may also be provided by a storage appliance. According to the present invention, these SAN data management capabilities are now applicable to NAS storage. In embodiments described below, a NAS server may be incorporated into a conventional storage appliance, according to the present invention. In alternative embodiments, the SAN servers and/or NAS servers may be located outside of the storage appliance. One or more SAN servers and/or NAS servers may be geographically distant, and coupled to the other SAN/NAS servers via wired or wireless links.
4.0 Network Attached Storage Embodiments of the Present Invention
FIG. 1 illustrates an example computer environment 100, which may be considered to include a storage area network (SAN). In FIG. 1 , storage appliance 108 couples hosts 102, 104, and 106 to storage devices 110, 112, and 114. Storage devices 110, 112, and 114 are coupled to storage appliance 108 via a first data communication network 118. Storage devices 110, 112, and 114 and first data communication network 118 form the storage portion of computer environment 100, and are referred to collectively as SAN 120 herein. Storage appliance 108 manages SAN 120, allocating storage to hosts 102, 104, and 106. Hosts 102, 104, and 106 may be any type of computer system. Hosts 102, 104, and 106 are coupled to storage appliance 108 via a second data communication network 116. First and second data communication networks 118 and 116 typically transport data using a data storage communication protocol such as fibre channel or SCSI. FIG.2 illustrates an example computer environment 200, according to an embodiment of the present invention. In computer environment 200, hosts 202,
204, and 206 attached to a network 208 may access the storage devices in SAN
120 as if they are one or more network attached storage (NAS) devices attached
directly to network 208.
In FIG.2, similarly to FIG. 1, a storage appliance 210 couples hosts 102, 104, and 106 to storage devices 110, 112, and 114. Storage devices 110, 112, and 114 are coupled to storage appliance 210 via a first data communication network 118. As described above, storage devices 110, 112, and 114 and first data
communication network 118 are referred to collectively as SAN 120. Hosts 102,
104, and 106 are coupled to storage appliance 210 via second data communication network 116. Furthermore, example computer environment 200 shows hosts 202, 204, and 206 coupled to storage appliance 210 via a third data communication network 208.
Hosts 202, 204, and 206 include hosts, servers, and other computer system types that may be present in a data communication network. For instance, one or more of hosts 202, 204, and 206 may be workstations or personal computers, and/or may be servers that manage network resources. For instance, one or more of the servers of hosts 202, 204, and 206 may be network servers,
application servers, database servers, or other types of server. Hosts 202, 204, and 206 output requests to storage appliance 210 to write to, or read data from storage devices 110, 112, and 114 in SAN 120. The present invention is applicable to additional or fewer hosts than shown in FIG. 2.
Storage appliance 210 receives storage read and write requests from hosts
202, 204, and 206 via third data communication network 208. The storage read and write requests include references to one or more storage locations in storage devices 110, 112, and 114 in SAN 120. Storage appliance 210 parses the storage read and write requests by extracting various parameters that are included in the requests. In an embodiment, storage appliance 210 uses the parsed read and write
request to determine physical storage locations corresponding to the target locations in a logical data space of SAN 120. The structure and operation of storage appliance 210 is further described below. Storage appliance 210 outputs read and write requests to physical storage/LUNs.
Third data communication network 208 typically is an Ethernet, Fast Ethernet, or Gigabit Ethernet network, or other applicable type of communication network otherwise known or described elsewhere herein. The transport protocol for data on third data communication network 208 is typically TCP/IP, but may also be any applicable protocol otherwise known or mentioned herein.
SAN 120 receives storage read and write requests from storage appliance 210 via first data communication network 118. First data communication network 118 routes the received physical storage read and write requests to the corresponding storage device(s), which respond by reading or writing data as requested. Storage devices 110, 112, and 114 comprise one or more storage devices that may be individually coupled directly to storage appliance 210, and/or may be interconnected in a storage area network configuration that is coupled to storage appliance 210. For example, storage devices 110, 112, and 114 comprise one or more of a variety of storage devices, including tape systems, JBODs (Just a Bunch Of Disks), floppy disk drives, optical disk drives, disk arrays, and other applicable types of storage devices otherwise known or described elsewhere herein.
First data communication network 118 typically includes one or more fibre channel links, SCSI links, and/or other applicable types of communications link otherwise known or described elsewhere herein. SAN 120 may further include switches and hubs, and other devices, used to enhance the connectivity of first data communication network 118.
Redundant configurations may be used to increase system reliability. FIG. 6 illustrates example computer environment 200, according to an exemplary embodiment of the present invention. In FIG. 6, storage appliance 210 couples with hosts 102, 104, 106, 202, 204, and 206, and storage devices 110, 112, and 114, using redundant connections. Storage devices 110, 112, and 114 are coupled to storage appliance 210 through primary first data communication network 118a and redundant first data communication network 118b. Hosts 102, 104, and 106 are coupled to storage appliance 210 through primary second data communication network 116a and redundant second data communication network 116b. Hosts 202, 204, and 206 are coupled to storage appliance 210 through primary third communications link 602a and redundant third communications link 604, which are each coupled to third data communication network 208. Further details of providing NAS functionality according to the configuration shown in FIG. 6 are provided in sections below. Description in these terms is provided for convenience only. It is not intended that the invention be limited to application in these example environments. In fact, after reading the following description, it will become apparent to a person skilled in the relevant art how to implement the invention in alternative environments known now or developed in the future. Further detailed embodiments of the elements of computer environment 200 are discussed below.
5.0 Storage Appliance Embodiments According to the Present Invention
Structural implementations for the storage appliance of the present invention are described at a high-level and at a more detailed level. These structural implementations are described herein for illustrative purposes, and are not limiting. In particular, the present invention as described herein can be achieved using any number of structural implementations, including hardware, firmware, software, or any combination thereof. For instance, the present invention as described herein may be implemented in a computer system, application-specific box, or other device. Furthermore, the present invention may be implemented in one or more physically separate devices or boxes. In an embodiment, the present invention may be implemented in a SAN appliance, as described above, which provides for an interface between host servers and storage. Such SAN appliances include the SANLink™ appliance. According to the present invention, a storage appliance attached to a SAN provides NAS functionality. One or more SAN servers are present in the SAN appliance to provide data management functionality. One or more NAS servers, as further described below, may be installed in the SAN appliance, to provide the NAS functionality. In alternative embodiments, the NAS server may be physically separate from the SAN appliance, and may be connected to the SAN appliance by wired or wireless links. Furthermore, additional components may be present in the SAN appliance, such as fibre channel switches, to provide enhanced connectivity. FIG. 3A shows an example embodiment of a storage appliance 108, which includes a SAN server 302. SAN server 302 is coupled between first data communication network 118 and second data communication network 116. SAN server 302 allocates storage of SAN 120 to hosts 102, 104, and 106, on an individual or group basis, as shown in FIG. 1. SAN server 302 receives read and write storage requests from hosts 102, 104, and 106, and processes and sends these storage requests to the applicable storage device(s) of storage devices 110, 112, and 114 in SAN 120. Storage devices 110, 112, and 114 process the received storage requests, and send resulting data to SAN server 302. SAN server 302 sends the data to the applicable host(s) of hosts 102, 104, and 106. SAN server 302 also performs data management functionality for SAN 120, as described above.
Note that storage appliance 108 may include more than one SAN server 302. Additional SAN servers 302 may be provided for reasons of redundancy, greater bandwidth and I/O capability, and for additional reasons. FIG. 3B illustrates an example of a storage appliance 210, according to an embodiment of the present invention. Storage appliance 210 includes a SAN server 302 and a NAS server 304. SAN server 302 is coupled between first data communication network 118 and second data communication network 116, similarly to the configuration shown in FIG. 3A. NAS server 304 is coupled between second data communication network 116 and third data communication network 208.
In an embodiment, SAN server 302 views NAS server 304 as a host.
Storage is allocated by SAN server 302 to NAS server 304, in a manner similar to how SAN server 302 would allocate storage to one of hosts 102, 104, and 106.
In other words, SAN server 302 requires little or no modification to interact with
NAS server 304. The storage may be allocated in the form of LUNs, for example.
NAS server 304 configures the storage allocated to it by SAN server 302, and exports it to third data communication network 208. In this manner, hosts 202, 204, and 206, shown in FIG. 2, can access the storage in SAN 120.
FIG. 14A shows a flowchart 1400 providing operational steps of an example embodiment of the present invention. FIG. 14B provides additional steps for flowchart 1400. FIGS. 14A-B show a process for interfacing a SAN with a first data communication network. One or more hosts coupled to the first data communication network can access data stored in one or more of a plurality of storage devices in the SAN. The one or more hosts access one or more of the plurality of storage devices as a NAS device. The steps of FIGS. 14A-B may be implemented in hardware, firmware, software, or a combination thereof. Furthermore, the steps of FIGS. 14A-B do not necessarily have to occur in the order shown, as will be apparent to persons skilled in the relevant art(s) based on the teachings herein. Other structural embodiments will be apparent to persons skilled in the relevant art(s) based on the discussion contained herein. These steps are described in detail below.
The process begins with step 1402. In step 1402, a SAN server is coupled to a SAN. For example, SAN server 302 is coupled to SAN 120.
In step 1404, a NAS server is coupled to the SAN server through a second data communication network. For example, NAS server 304 is coupled to SAN server 302 via second data communication network 116. In step 1406, the NAS server is coupled to the first data communication network. For example, NAS server 304 is coupled to third data communication network 208.
In step 1408, a portion of at least one of the plurality of storage devices is allocated from the SAN server to the NAS server. For example, a portion or all of storage devices 110, 112, and 114 are allocated to NAS server 304 by SAN server 302. In an embodiment, the NAS server is viewed from the SAN server as a host attached to the second data communication network. In this manner, SAN server 302 does not require additional configuration in order to be able to allocate NAS storage. For example, the portion of at least one of the plurality of storage devices is allocated from the SAN server to the NAS server in the same manner as the portion would be allocated from the SAN server to a host attached to the second data communication network.
In step 1410, the allocated portion is configured as NAS storage in the NAS server. For example, NAS server 304 configures the allocated portion as
NAS storage. Configuration of storage as NAS storage is described in further detail below.
In step 1412, the configured portion is exported from the NAS server to be accessible to the one or more hosts coupled to the first data communication network. For example, NAS server 304 exports the configured portion of storage on third data communication network 208, to be available to one or more of hosts
202, 204, and 206.
In an embodiment, the SAN server is configured to allocate storage from the SAN to at least one host attached to the second data communication network. For example, SAN server 302 is configured to allocate storage from SAN 120 to one or more of hosts 102, 104, and 106 on second data communication network
116. The SAN server is coupled to the second data communication network.
FIG. 14B provides additional exemplary steps for flowchart 1400 of FIG. 14A: In step 1414, an administrative interface is coupled to the SAN server. For example, in an embodiment, an administrative interface is coupled to SAN server 302. The administrative interface allows for user control of storage allocation by the present invention. The administrative interface may include a graphical user interface. The administrative interface is described in more detail below. In an alternative embodiment, the administrative interface is coupled directly to NAS server 304.
In step 1416, a storage allocation directive is received from the administrative interface by the SAN server. For example, a user graphically or textually inputs a command to effect NAS or SAN storage management, according to the present invention.
In embodiments, redundant NAS servers may be used in a single SAN appliance, and/or each NAS server may itself provide redundant features. For example, in an embodiment, each NAS server 304 includes two Host Bus Adaptors (HBAs) that interface with second data communication network 116, for redundancy and fail-over capabilities. Examples of these capabilities are further described in the sections below.
FIG. 7 illustrates a block diagram of storage appliance 210, according to an exemplary embodiment of the present invention. Storage appliance 210 includes a primary SAN server 302a, a redundant SAN server 302b, a primary
NAS server 304a, a redundant NAS server 304b, and switches 702a, 702b, 704a, and 704b. Switches 702a, 702b, 704a, and 704b are optional, as required by the particular application. Switches 702a, 702b, 704a, and 704b are preferably fibre channel switches, used for high data rate communication with hosts and storage, as described above. Storage appliance 210 of FIG. 7 is applicable to computer system environment 200 shown in FIG. 6, for example.
Switch 704a is coupled to primary first data communication network 118a. Primary SAN server 302a and redundant SAN server 302b are coupled to switch 704a. Switch 704a couples SAN servers 302a and 302b to primary first data communication network 118a, as a primary mode of access to storage devices 110, 112, and 114.
Switch 704b is coupled to redundant first data communication network
118b. Primary SAN server 302a and redundant SAN server 302b are coupled to switch 704b. Switch 704b couples SAN servers 302a and 302b to redundant first data communication network 118b, so that they can redundantly access storage devices 110, 112, and 114.
Primary SAN server 302a is coupled to redundant SAN server 302b by
SAN server communication link 414. Primary SAN server 302a and redundant SAN server 302b each include two interfaces, such as two HBAs, that allow each of them to be coupled with both of switches 704a and 704b. Additional SAN servers and switches may be coupled in parallel with SAN servers 302a and 302b and switches 704a and 704b, as represented by signals 708a and 708b. In further embodiments, additional switches and SAN servers in storage appliance 210 may be coupled to further redundant networks, or to networks coupled to further storage devices.
Switch 702a is coupled to primary SAN server 302a. Primary NAS server
304a, redundant NAS server 304b, and switch 702a are coupled to primary second data communication network 116a. Switch 702a allows for communication between primary SAN server 302a and primary and redundant
NAS servers 304a and 304b, and between SAN server 302a and hosts attached to primary second data communication network 116a.
Switch 702b is coupled to redundant SAN server 302b. Primary NAS server 304a, redundant NAS server 304b, and switch 702b are coupled to redundant second data communication network 116b by switch 702b. Switch
702b allows for communication between redundant SAN server 302b and primary and redundant NAS servers 304a and 304b, and between SAN server 302b and hosts attached to redundant second data communication network 116b.
Primary NAS server 304a and redundant NAS server 304b each include two interfaces, such as two HBAs, that allow each of them to be coupled with both of primary and redundant second data communication network 116a and 116b. Additional NAS servers and switches may be coupled in parallel with NAS servers 304a and 304b and switches 702a and 702b. In further embodiments, additional switches and NAS servers in storage appliance 210 may be coupled to further redundant networks, or to networks coupled to additional hosts.
Primary NAS server 304a is coupled to primary third communications link 602. Redundant NAS server 304b is coupled to redundant third communications link 604. In further embodiments, additional switches and NAS servers in storage appliance 210 may be coupled to third data communication network 208 through links.
NAS servers 304a and 304b are considered to be peer NAS servers for each other. SAN servers 302a and 302b are considered to be peer SAN servers for each other. In an embodiment, primary NAS server 304a operates to supply NAS functionality to storage appliance 210, as described for NAS server 304 above. In this configuration, redundant NAS server 304b operates as a back-up for primary NAS server 304a, and takes over some or all of the NAS functionality of primary NAS server 304a when NAS server 304a fails. SAN servers 302a and 302b may have a similar relationship. In an alternative embodiment, primary and redundant NAS servers 304a and 304b share the NAS functionality for storage appliance 210. For instance, each of NAS servers 304a and 304b may configure and export some amount of storage to third data communication network 208. Furthermore, NAS servers 304a and 304b may operate as back-up NAS servers for each other. SAN servers 302a and 302b may have a similar relationship. Further detail on the operation of the elements of storage appliance 210 shown in FIG. 7 is provided in sections below.
FIG. 11 illustrates a pair of NAS servers 304a and 304b coupled to a pair of SAN servers 302a and 302b via a pair of fibre channel switches 702a and 702b, according to an embodiment of the present invention. In order to enhance performance, the fibre channel switches 702a and 702b may be zoned to allow NAS servers 304a and 304b to reserve a target on the SAN servers 302a and 302b. FIG. 11 shows the zoning of fibre channel switches 702a and 702b. The first three ports of each fibre channel switch 702a and 702b are zoned into NAS zones 704a and 704b. This provides NAS servers 304a and 304b uncontested access to target 0 of SAN servers 302a and 302b. Further zoning arrangements for switches are within the scope and spirit of the present invention. Furthermore, the present invention is applicable to additional redundant configurations.
5.1 Example SAN Server Embodiments According to the Present Invention
Exemplary implementations for a SAN server are described in more detail as follows. These structural implementations are described herein for illustrative purposes, and are not limiting. In particular, the present invention as described herein can be achieved using any number of structural and operational implementations, including hardware, firmware, software, or any combination thereof. For instance, a SAN server as described herein may be implemented in a computer system, application-specific box, or other device. Furthermore, the SAN server may be implemented in a physically separate device or box from the
SAN appliance and NAS server(s). In a preferred embodiment, the SAN server is implemented in a SAN appliance, as described above.
FIG. 4 illustrates an exemplary block diagram of a SAN server 302, according to an embodiment of the present invention. SAN server 302 comprises a first network interface 406, a second network interface 402, a SAN storage manager 404, a SAN server interface 408, and an operating system 410. FIG. 4 also shows an administrative interface 412, which is coupled to SAN server 302 through GUI communication link 426. Administrative interface 412 may be coupled to SAN server 302 if SAN server 302 is a primary SAN server for a SAN appliance. Administrative interface 412 is more fully described below.
First network interface 406, second network interface 402, and SAN server interface 408 each include one or more host bus adaptors (HBA), network interface cards (NICs), and/or other adaptors/ports that interface the internal architecture of SAN server 302 with external data communication networks and links. First and second network interfaces 406 and 402, and SAN server interface 408, may each support fibre channel, SCSI, Ethernet, TCP/IP, and further data communication mediums and protocols on first data communication network 118, second data communication network 116, and SAN server communication link
414, respectively.
Operating system 410 provides a platform on top of which application programs executing in SAN server 302 can run. Operating system 410 may be a customized operating system, and may be any available operating system, such as Linux, UNIX, DOS, OS/2, and Windows NT.
SAN storage manager 404 provides data management functionality for SAN server 302. SAN storage manager 404 includes one or more modules that are directed towards controlling aspects of data management for the SAN. In the embodiment shown in FIG. 4, SAN storage manager 404 includes a storage allocator module 416, a storage mapper module 418, a data mirror module 420, a snapshot module 422, and a storage security module 424.
Storage allocator module 416 controls the allocation and deallocation of storage in SAN 120, shown in FIG. 2, to hosts 102, 104, and 106, and to NAS server 304. Further details about the operation of storage allocator module 416 are described below.
Storage mapper module 418 controls the mapping of logical storage addresses received from hosts 102, 104, and 106, and from NAS server 304, to actual physical storage addresses for data stored in the storage devices of SAN 120. Data mirror module 420 controls the mirroring of data stored in SAN 120 with a remote SAN, when such data mirroring is desired. For instance, data mirror module 420 may communicate with a data mirror module located in a remote SAN server, via SAN server interface 408. The remote SAN server is typically located in a remote SAN appliance, and manages data stored in the remote SAN. Data mirror module 420 interacts with the remote data mirror module to mirror data back and forth between the local and remote SANs.
Snapshot module 422 controls single point-in-time copying of data in one or more storage devices of SAN 120 to another location, when a snapshot of data is desired.
Storage security module 424 controls the masking of storage devices, or portions of storage devices in SAN 120, from particular hosts and users.
Second network interface 402 receives read and write storage requests from hosts, and from NAS server 304, via second data communication network 116. Second network interface 402 also sends responses to the read and write storage requests to the hosts and NAS server 304 from SAN server 302.
SAN storage manager 404 receives the read and write storage requests from second network interface 402, and processes them accordingly. For instance, SAN storage manager 404 may map the received storage request from a logical storage address to one or more physical storage addresses. The SAN storage manager 404 outputs the physical storage address(es) to first network interface 406.
First network interface 406 receives a physical read/write storage request from SAN storage manager 404, and transmits it on first data communication network 118. In this manner, first network interface 406 issues the received read/write storage request to the actual storage device or devices comprising the determined physical storage address(es) in SAN 120. First network interface 406 also receives responses to the read/write storage requests from the storage devices in SAN 120. The responses may include data stored in the storage devices of SAN 120 that is being sent to a requesting host, and/or may include an indication of whether the read/write storage request was successful. First network interface 406 outputs the responses to SAN storage manager 404.
SAN storage manager 404 receives the responses from first network interface 406, and processes them accordingly. For example, SAN storage manager 404 may output data received in the response to second network interface 402. Second network interface 402 outputs the response on second data communication network 116 to be received by the requesting host, or by NAS server 304.
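The logical-to-physical translation performed by SAN storage manager 404 and storage mapper module 418 can be illustrated with a brief Python sketch. The fragment below is a hypothetical illustration only; the extent table contents, function name, and device labels are assumptions for illustration and are not taken from the described embodiment.

EXTENT_MAP = [
    # (logical start, length, physical device, physical start) -- example data only
    (0, 100000, "storage_device_110", 50000),
    (100000, 200000, "storage_device_112", 0),
    (300000, 100000, "storage_device_114", 250000),
]

def map_logical_to_physical(logical_block):
    # Return the (physical device, physical block) pair for a logical block
    # address in the SAN's logical data space.
    for start, length, device, phys_start in EXTENT_MAP:
        if start <= logical_block < start + length:
            return device, phys_start + (logical_block - start)
    raise ValueError("logical block %d is not allocated" % logical_block)

print(map_logical_to_physical(150000))  # ('storage_device_112', 50000)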
SAN storage manager 404 also communicates with NAS server 304 through second network interface 402, to allocate and deallocate storage from
NAS server 304. SAN storage manager 404 may send allocation and deallocation directives, and status directives, to NAS server 304. Network interface 402 may receive responses from NAS server 304, and send these to SAN storage manager 404. In an embodiment, storage allocator module 416 controls this NAS related functionality. Further details of storage allocation and deallocation are provided in sections below.
5.2 Example NAS Server Embodiments According to the Present Invention
Structural implementations for the NAS server of the present invention are described as follows. These structural implementations are described herein for illustrative purposes, and are not limiting. In particular, the present invention as described herein can be achieved using any number of structural implementations, including hardware, firmware, software, or any combination thereof. For instance, a NAS server as described herein may be implemented in a computer system, application-specific box, or other device. Furthermore, the NAS server may be implemented in a physically separate device or box from the SAN appliance and SAN server(s). In a preferred embodiment, the NAS server is implemented in a SAN appliance, as described above.
FIG. 5 illustrates a block diagram of NAS server 304, according to an exemplary embodiment of the present invention. NAS server 304 includes a first network interface 508, a second network interface 502, a NAS file manager 512, and an operating system 510.
First network interface 508 includes one or more host bus adaptors (HBA), network interface cards (NICs), and/or other adaptors/ports that interface the internal architecture of NAS server 304 with second data communication network 116. First network interface 508 may support fibre channel, SCSI, and further data communication mediums and protocols on second data communication network 116. Second network interface 502 includes one or more host bus adaptors
(HBA), network interface cards (NICs), and/or other adaptors/ports that interface the internal architecture of NAS server 304 with third data communication network 208. Second network interface 502 may support Ethernet, Fast Ethernet, Gigabit Ethernet, TCP/IP, and further data communication mediums and protocols on third data communication network 208.
Operating system 510 provides a platform on top of which application programs executing in NAS server 304 can run. Operating system 510 may be a customized operating system, and may be any commercially available operating system, such as Linux, UNIX, DOS, OS/2, and Windows NT. In a preferred embodiment, operating system 510 includes Linux OS version 2.2.18 or greater.
Operating system 510 includes a kernel. Linux provides an advantage over the file system limitations of NT, and allows access to kernel source code.
NAS file manager 512 provides file management functionality for NAS server 304. NAS file manager 512 includes one or more modules that are directed towards keeping a record of exported files, and configuring and managing the files exported to third data communication network 208. In the embodiment shown in FIG. 5, NAS file manager 512 includes a NFS protocol module 504, a CIFS protocol module 506, and a storage configuration module 514. Storage configuration module 514 configures storage allocated by SAN server 302 to NAS server 304, to be made available to hosts on third data communication network 208. Further description of storage configuration module 514 is provided in sections below.
NFS protocol module 504 allows NAS server 304 to use Network File System (NFS) protocol to make file systems available to UNIX hosts on third data communication network 208. CIFS protocol module 506 allows NAS server 304 to use Common Internet File System (CIFS) protocol to make file systems available to Windows clients. For instance, NAS server 304 may include a product called Samba, which implements CIFS. Second network interface 502 receives read and write storage requests from hosts attached to third data communication network 208. The requests relate to storage exported to third data communication network 208 by NAS server 304. Second network interface 502 also sends responses to the read and write storage requests to the hosts. NAS file manager 512 receives the read and write storage requests from second network interface 502, and processes them accordingly. For instance, NAS file manager 512 may determine whether the received storage request relates to storage exported by NAS server 304. NAS file manager 512 outputs the storage request to first network interface 508. First network interface 508 receives a physical read/write request from
NAS file manager 512, and transmits it on second data communication network 116. In this manner, first network interface 508 issues the received read/write storage request to the SAN server 302. First network interface 508 also receives responses to the read/write storage requests from SAN server 302 on second data communication network 116. The responses may include data stored in the storage devices of SAN 120. First network interface 508 outputs the responses to NAS file manager 512.
NAS file manager 512 receives the responses from first network interface 508, and processes them accordingly. For example, NAS file manager 512 may output data received in the response to second network interface 502 through one or both of NFS protocol module 504 and CIFS protocol module 506. NFS protocol module 504 formats the response per NFS protocol. CIFS protocol module 506 formats the response per CIFS protocol. Second network interface 502 outputs the formatted response on third data communication network 208 to be received by the requesting host. First network interface 508 also receives storage allocation and deallocation directives, and status directives from SAN server 302, and sends them to NAS file manager 512. Note that in embodiments, second network interface 502 may also or alternatively receive these storage allocation and deallocation directives. Responses to these received storage allocation and deallocation directives, and status directives are generated by NAS file manager 512. NAS file manager 512 sends these responses to first network interface 508, which outputs the responses onto second communication network 116 for SAN server 302.
5.3 Administrative Interface of the Present Invention
In a preferred embodiment, the present invention includes an administrative interface to allow a user to configure aspects of the operation of the invention. The administrative interface includes a graphical user interface to provide a convenient location for a user to provide input. As shown in FIG. 4, an administrative interface 412 is coupled to SAN storage manager 404 of SAN server 302. When present, the administrative interface couples to the primary SAN server in the storage appliance 210. The primary SAN server forwards directives to NAS servers. The directives are forwarded to the NAS servers via second data communication network 116 and/or third data communication network 208, and may use a common or custom SAN-to-NAS protocol. An exemplary NAS protocol is described below. Many features of the present invention are described below in relation to the administrative interface. However, in alternative embodiments, an administrative interface is not required and therefore is not present. In an embodiment, an existing administrative interface that accommodates
SAN servers may not require any modification to handle allocation of storage to NAS servers. However, although the SAN servers view the NAS servers as separate hosts, in an embodiment, an existing administrative interface may be enhanced to allow integration of NAS functionality. For example, in an embodiment, the administrative interface may be configured to show the NAS servers as themselves, rather than as hosts. For instance, the administrative interface may allow the storage appliance administrator to allocate a storage portion, such as a LUN, to a NAS server, as a NAS LUN. Any LUN mapping may be done automatically. Once the administrator chooses a LUN to be a NAS LUN, the NAS servers create a file system for that LUN and export that file system to the network. This process is described in further detail below.
In order to represent the NAS servers as NAS servers in the administrative interface, the administrative interface must be able to differentiate NAS servers from hosts. In an embodiment, the NAS servers issue a special registration command via second data communication network 116 to the SAN servers, to identify themselves. For example, in a SAN appliance that includes two NAS servers, the first NAS server may identify itself to the SAN servers as "NASServerNASOne", while the second NAS server may identify itself as
"NASServerNASTwo." These names are special identifiers used by the SAN servers when allocating storage to the NAS servers.
FIG. 12 illustrates a graphical user interface (GUI) 1200 for administrative interface 412 of FIG. 4, according to an exemplary embodiment of the present invention. In particular, GUI 1200 in FIG. 12 displays panels related to management of NAS storage. This is because a NAS button 1214 has been selected in a GUI mode select panel 1216 of GUI 1200. Additional features related to SAN management may be displayed by selecting other buttons in GUI mode select panel 1216. In the embodiment of GUI 1200 shown in FIG. 12, two NAS servers, labeled NAS1 and NAS2, are available for storage allocation. GUI 1200 further includes a first panel 1202, a second panel 1204, a third panel 1206, a fourth panel 1208, and a fifth panel 1210. Each of these panels is more fully described in the following text and sections. In alternative embodiments, fewer or more panels may be displayed in GUI 1200, and fewer or more features may be displayed in each panel, as required by the particular application.
Panel 1202 displays available storage units, in the form of LUNs, for example. In the example panel 1202 of FIG. 12, six LUNs are available for allocation: LUN0, LUN1, LUN2, LUN3, LUN4, and LUN5. The storage units, or LUNs, displayed in panel 1202 may be virtual storage units, or actual physical storage units. Any of the storage units displayed in panel 1202 may be allocated as network attached storage via one or more NAS servers.
A LUN may be allocated to NAS using panel 1202, by selecting the box in the NAS1 column, or the box in the NAS2 column next to the LUN to be allocated, and then pressing the box labeled "Assign." If the box in the NAS1 column was checked, the first NAS server will be instructed to create a file system on the LUN. If the box in the NAS2 column was checked, the second NAS server will be instructed to create a file system on the LUN. For example, the instructed NAS server will create a file system named /exportxxxx, where xxxx is a four-digit representation of the LUN number, such as 0001 for LUN1. After the NAS server creates the file system, it exports the file system via NFS and/or CIFS. For example, the file system is exported by NFS protocol module 504 and/or CIFS protocol module 506. Hence, the file system will be available to one or more hosts and users on third data communication network 208.
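The /exportxxxx naming rule lends itself to a short Python illustration. The helper below is an assumption about how the four-digit file system name might be derived from a LUN number; nothing beyond the zero-padded format is specified by the description.

def export_name(lun_number):
    # Four-digit, zero-padded representation of the LUN number,
    # e.g. LUN 1 -> "/export0001", LUN 15 -> "/export0015".
    return "/export%04d" % lun_number

assert export_name(1) == "/export0001"
assert export_name(15) == "/export0015"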
A LUN may be deallocated by clearing the box next to the LUN in the NAS1 or NAS2 column of panel 1202, and pressing the box labeled "Assign." That instructs the NAS server to relinquish the file system created for the deallocated LUN, making the file system inaccessible on third data communication network 208.
Panel 1204 displays all NAS file systems that are currently exported by NAS servers. There is one file system for each exported NAS LUN. In an embodiment, an administrator may select a file system name in panel 1204, and GUI 1200 will fill the properties of the selected file system into third, fourth, and fifth panels 1206, 1208, and 1210. Panel 1206 displays the size, NAS server, ownership, group, and permission attributes of the file system selected in panel 1204. Panel 1206 allows an administrator to change ownership, group, and permission attributes of the file system selected in panel 1204 by modifying these entries in panel 1206. Panel 1208 displays whether the file system selected in panel 1204 is exported by NFS, whether the NFS file system is read-only, and a list of hosts which may access the file system via NFS. Additional access rights, such as root access, may be selectable for file systems. Panel 1208 allows an administrator to change whether the file system is exported by NFS, whether the file system is read-only, and to modify the list of users and hosts which may access the file system via NFS. Once an administrator has made the desired changes, the changes are implemented by selecting "Commit" in panel 1204.
Panel 1210 displays whether the file system selected in panel 1204 is exported by CIFS, whether the CIFS file system is read-only, and a list of users and hosts which may access the file system via CIFS. Panel 1210 allows an administrator to change whether the file system is exported by CIFS, whether the file system is read-only, and to modify the list of users and hosts which may access the file system via CIFS. Once an administrator has made the desired changes, the changes are implemented by selecting "Commit" in panel 1204. GUI 1200 has numerous advantages. First, an administrator can allocate a NAS LUN with a single click of a mouse button. Second, GUI 1200 hides that the storage appliance has separate NAS servers (the NAS servers do not appear as hosts, but rather as network interfaces). Third, the administrator can easily provide or eliminate access via NFS and CIFS, restrict permissions, and limit access over the network for a file system. Further advantages are apparent from the teachings herein.
FIG. 13A shows a flowchart 1300 providing operational steps of an example embodiment of the present invention. FIG. 13B provides additional steps for flowchart 1300. FIGS. 13A-B show a process for managing the allocation of storage from a storage area network (SAN) as network attached storage (NAS) to a data communication network. The steps of FIGS. 13A-B may be implemented in hardware, firmware, software, or a combination thereof. Furthermore, the steps of FIGS. 13A-B do not necessarily have to occur in the order shown, as will be apparent to persons skilled in the relevant art(s) based on the teachings herein. Other structural embodiments will be apparent to persons skilled in the relevant art(s) based on the discussion contained herein. These steps are described in detail below.
The process begins with step 1302. In step 1302, a storage management directive is received from a graphical user interface. For example, directives are received from a GUI such as GUI 1200, over GUI communication link 426. For example, the storage management directive may be received by SAN server 302. Example storage management directives that may be received from GUI 1200 are described below. In an embodiment, storage allocator module 416 in SAN storage manager 404 of SAN server 302 processes received storage management directives.
In step 1304, a message corresponding to the received storage management directive is sent to a NAS server. For example, the NAS server is NAS server 304. Example messages that may be received by the NAS server are described below. In an embodiment, storage allocator module 416 in SAN storage manager 404 of SAN server 302 generates messages corresponding to received storage management directives to be sent to NAS server 304.
In step 1306, a response corresponding to the sent message is received from the NAS server. For example, SAN server 302 may receive the response from NAS server 304. Example responses are described below. In an embodiment, storage configuration module 514 receives messages from SAN server 302, processes them, and generates the response for SAN server 302.
In an embodiment, the SAN server sends the response received from NAS server 304 to GUI 1200. GUI 1200 can then display the received response information. In an embodiment, storage allocator module 416 receives the response from NAS server 304, processes the response, and sends the response to GUI 1200.
FIG. 13B provides additional exemplary steps for flowchart 1300 of FIG. 13A: In step 1308, a command line interface (CLI) may be provided at the graphical user interface. For example, GUI 1200 may include a command line interface where a user may input textual storage management instructions. An example command line interface is described below.
In step 1310, a user is allowed to input the storage directive as a CLI command into the CLI.
In an embodiment, communication between GUI 1200 and the SAN servers may use an existing storage appliance management communication facility. GUI 1200 sends management directives to SAN server 302, via GUI communication link 426, shown in FIG. 4. For example, the management directives may be network messages that start with the letter "c", immediately followed by an integer, and then followed by parameters. When the SAN servers receive a management directive, they perform actions defined by that directive.
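As an illustration of this directive format, the following Python sketch composes and decomposes such messages. The helper names are hypothetical; only the leading letter "c", the integer directive number, and the space-separated parameters are taken from the description.

def build_directive(number, *params):
    # e.g. build_directive(35, 15, 1, 2) -> "c35 15 1 2"
    return "c%d %s" % (number, " ".join(str(p) for p in params))

def parse_directive(text):
    # e.g. "c35 15 1 2" -> (35, ["15", "1", "2"])
    fields = text.split()
    if not fields or not fields[0].startswith("c"):
        raise ValueError("not a management directive: %r" % text)
    return int(fields[0][1:]), fields[1:]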
The following sections explain the management directives that GUI 1200 sends to the SAN servers when the administrator performs NAS administrative functions. The NAS protocol messages that the SAN servers send to the NAS servers are also described. A complete listing of example NAS Protocol directives is given in a section below. The directives/messages presented herein are provided for purposes of illustration, and are not intended to limit the invention. Alternate directives/messages, differing slightly or substantially from those described herein, will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Furthermore, the description herein often refers to storage portions as LUNs, for purposes of illustration. The present invention as described herein, however, is applicable to allocating storage portions of any size or type.
6.0 Allocation and Deallocation of NAS Storage
This section, and those that follow, provide exemplary operational steps for embodiments of the present invention. The embodiments presented herein are provided for purposes of illustration, and are not intended to limit the invention. The steps provided below may be implemented in hardware, firmware, software, or a combination thereof. For instance, steps provided in this and following sections, may be implemented by SAN server 302 and/or NAS server 304. Furthermore, the steps of the various embodiments below do not necessarily have to occur in the order shown, as will be apparent to persons skilled in the relevant art(s) based on the teachings herein. Other structural embodiments will be apparent to persons skilled in the relevant art(s) based on the discussions contained herein.
Description of the allocation and deallocation of NAS storage through input applied to GUI 1200 is provided in this section, and following sub-sections, and elsewhere herein. After reading the description herein, it will become apparent to a person skilled in the relevant art how to implement NAS storage allocation and deallocation using any number of processes and structures, in accordance with the present invention.
When an administrator selects the box in the NAS 1 column or the NAS2 column next to a LUN in panel 1202 of FIG. 12, and presses "Assign," the LUN is allocated to the NAS server. For example, GUI 1200 may send a c35 storage allocation management directive to the SAN server 302, via GUI communication link 426. The c35 management directive includes three parameters: the LUN number, a flag specifying whether to enable or disable the LUN, and a NAS server number.
For example, if the administrator selects the box in the NAS2 column next to LUN number 15 and presses "Assign," GUI 1200 sends the following management directive to SAN server 302:
c35 15 1 2
The directive instructs SAN server 302 to allocate LUN 15 to NAS server NAS2. In embodiments, the directive may be expanded to include further information, such as a directory name to be associated with the LUN. If no directory name is given, a default directory name may be used. For example, the default directory name may be /exportxxxx, where xxxx is the LUN number (the use of this directory name is shown elsewhere herein for example purposes).
Similarly, if the administrator selects the box in the NAS1 column next to LUN number 4 and presses "Assign," GUI 1200 sends the following management directive to SAN server 302:
c35 4 1 1
The directive instructs SAN server 302 to allocate LUN 4 to NAS server NAS1. If the administrator subsequently clears the box in the NAS1 column next to LUN 4, and presses "Assign," GUI 1200 sends the following management directive to SAN server 302:
c35 4 0 1
The directive instructs SAN server 302 to remove LUN 4 from NAS server NAS1, such that LUN 4 is no longer allocated to NAS server NAS1.
In an embodiment, when SAN server 302a receives the c35 management directive, SAN server 302a executes the steps shown in flowchart 1600 of FIG. 16, and further described below. For example, the steps below may be executed by SAN storage manager 404 in SAN server 302a. In particular, the steps below may be executed by storage allocator module 416 of SAN storage manager 404 to allocate and deallocate storage:
In step 1602, the NAS servers are found within one or more host mapping tables. SAN server 302a does this by looking up the special names of the NAS servers registered with SAN server 302a. The names of the NAS servers are registered with SAN servers 302a and 302b when the NAS servers boot up, as described above. For example, the registered names of NAS servers 304a and
304b are NASServerNASOne and NASServerNASTwo.
In step 1604, the value of the second parameter is determined, indicating whether the LUN is enabled. If the second parameter is a 1, SAN server 302a maps the LUN to NASServerNASOne and NASServerNASTwo. The LUN is mapped to both servers in the event of fail-over, as described in further detail below. If SAN server 302a determines that the second parameter is a 0, indicating that the LUN is not enabled, SAN server 302a removes the LUN from the host map for NAS servers 304a and 304b.
In step 1606, a network message is sent to the redundant SAN server, requesting that the redundant SAN server perform steps 1602 and 1604.
In step 1608, a network message is sent to NAS servers 304a and 304b, to inform them that the LUN is available.
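The four steps of flowchart 1600 can be condensed into the following Python sketch. The data structures and function signature are assumptions made for illustration; only the sequence of actions (locate the registered NAS names, map or unmap the LUN for both NAS servers, ask the redundant SAN server to repeat the steps, and notify the NAS servers) comes from the description above.

NAS_NAMES = ["NASServerNASOne", "NASServerNASTwo"]

def handle_c35(lun, enable, nas_number, host_map, peer_san_server, nas_links):
    # host_map: dict of host name -> set of LUNs; peer_san_server and the
    # entries of nas_links are assumed to expose a send() method.
    # Step 1602: locate the NAS servers in the host mapping tables.
    nas_entries = [host_map[name] for name in NAS_NAMES]
    # Step 1604: map the LUN to both NAS servers when enabling (so fail-over
    # remains possible), or remove it from both when disabling.
    for entry in nas_entries:
        if enable:
            entry.add(lun)
        else:
            entry.discard(lun)
    # Step 1606: ask the redundant SAN server to repeat steps 1602 and 1604.
    peer_san_server.send("c35 %d %d %d" % (lun, int(enable), nas_number))
    # Step 1608: tell the NAS servers that the LUN is (or is no longer) available.
    for index, link in enumerate(nas_links, start=1):
        if enable:
            create_fs = 1 if index == nas_number else 0
            link.send("LUN:Enable:%d:%d" % (lun, create_fs))
        else:
            link.send("LUN:Disable:%d" % lun)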
The following sub-sections describe an example NAS Protocol for LUN allocation and de-allocation, and the actions taken by the NAS servers.
6.1 Example Protocol For NAS LUN Allocation And De-allocation
When LUNs are being allocated by SAN server 302 to NAS server 304, SAN server 302 sends NAS servers 304a and 304b a packet containing the following string:
LUN:Enable:LunNumber:CreateFsFlag
The LunNumber parameter is the identifier for the LUN being allocated, and the
CreateFsFlag parameter is a "0" or a "1", depending upon whether the NAS server should create a file system on the LUN. Further information, such as a directory name, may also be provided with the LUN:Enable string.
For example, suppose SAN server 302a received the following management directive:
c35 15 1 2
That instructs SAN server 302a to allocate LUN 15 to NAS server 304b. After mapping the LUN to both NAS servers (as described in steps 1602 through 1606 above), SAN server 302a sends the following string to NAS server 304a:
LUN:Enable:15:0
The string informs NAS server 304a that LUN 15 is available, and that NAS server
304a should configure the LUN into its kernel in case of fail-over. However, NAS server 304a will not create a file system on the LUN, nor export the LUN as a NAS device.
SAN server 302a sends the following string to NAS server 304b:
LUN:Enable:15:1
The string informs NAS server 304b that LUN 15 is available, and that NAS server 304b should configure the LUN into its kernel, create a file system on the LUN, and export the file system via CIFS and NFS.
If successful, NAS servers 304a and 304b respond with a packet containing the following string:
NAS:1:0
If unsuccessful, NAS servers 304a and 304b respond with two messages. The first packet contains the following string:
NAS:0:NumBytes
The NumBytes parameter is the number of bytes in the second message to follow. The second message contains strings describing why the operation failed.
When LUNs are being deallocated by SAN server 302a, SAN server 302a sends each of NAS servers 304a and 304b a packet containing the following string:
LUN:Disable:LunNumber
The string instructs NAS servers 304a and 304b to remove the LUN from their kernels. Also, if either NAS server had exported the LUN, the NAS server un-exports it.
The response to the LUN:Disable string is the same as to the LUN:Enable string. That is, if the operation is successful, the NAS servers respond with a packet containing the "NAS:1:0" string. If unsuccessful, the NAS servers respond with the "NAS:0:NumBytes" string, followed by a string that describes the error.
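One hypothetical way a NAS server might parse these packets and build the corresponding responses is sketched below in Python. The dispatch code and callback parameters are assumptions; only the LUN:Enable, LUN:Disable, NAS:1:0, and NAS:0:NumBytes string formats come from the protocol described above.

def handle_lun_packet(packet, enable_lun, disable_lun):
    # enable_lun and disable_lun are callbacks supplied by the NAS server;
    # returns the list of response packets to send back to the SAN server.
    try:
        fields = packet.split(":")
        if fields[:2] == ["LUN", "Enable"]:
            enable_lun(lun=int(fields[2]), create_fs=(fields[3] == "1"))
        elif fields[:2] == ["LUN", "Disable"]:
            disable_lun(lun=int(fields[2]))
        else:
            raise ValueError("unknown packet: %r" % packet)
    except Exception as err:
        error_text = str(err)
        # Failure: the first packet carries the byte count of the error text.
        return ["NAS:0:%d" % len(error_text), error_text]
    # Success: a single packet containing "NAS:1:0".
    return ["NAS:1:0"]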
6.2 NAS Server Configuration of Allocated Storage
When a NAS server receives a LUN:Enable string, it configures that LUN from each interface with second data communication network 116. For instance, first network interface 508 may include two HBAs that each interface with second data communication network 116. Each HBA is configured into the NAS server's operating system, creating two new disk devices. The first disk device refers to the LUN on the first HBA, and the second disk device refers to the same LUN on the second HBA. In an embodiment, a NAS server uses a Linux operating system. The Linux operating system uses symbolic names for disk devices, such as /dev/sda, /dev/sdb, and /dev/sdc. Because of this, it is difficult to determine the LUN number from the symbolic name. To overcome this difficulty, the NAS server maintains a map of LUN numbers to symbolic names. For example, the map may be maintained via Linux symbolic links. The symbolic links may be kept in a directory named /dev/StorageDir, and contain the HBA number, the controller number, the target number, and the LUN number.
For example, NAS server 304a may receive a directive to enable LUN 15. After NAS server 304a configured LUN 15 on HBA 1 into the kernel of operating system 510 of NAS server 304a, the kernel may assign LUN 15 the device name of /dev/sde. To maintain the mapping of LUN 15 on HBA 1 to device /dev/sde, NAS server 304a may create a symbolic link to /dev/sde named /dev/StorageDir/1.0.0.15. Subsequent NAS operations may be performed on /dev/StorageDir/1.0.0.15.
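The LUN-number-to-device-name map can be sketched in Python as follows. The use of os.symlink and the directory-creation step are assumptions about how the described Linux symbolic links might be maintained; the device name in the example is the hypothetical /dev/sde from the paragraph above.

import os

STORAGE_DIR = "/dev/StorageDir"

def record_lun_mapping(hba, controller, target, lun, kernel_device):
    # e.g. record_lun_mapping(1, 0, 0, 15, "/dev/sde") creates the link
    # /dev/StorageDir/1.0.0.15 -> /dev/sde, so that later NAS operations can
    # locate the device for LUN 15 on HBA 1 without parsing symbolic names.
    os.makedirs(STORAGE_DIR, exist_ok=True)
    link_name = os.path.join(STORAGE_DIR, "%d.%d.%d.%d" % (hba, controller, target, lun))
    if os.path.islink(link_name):
        os.remove(link_name)
    os.symlink(kernel_device, link_name)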
In an embodiment, when the NAS servers receive the LUN:Enable string, NAS servers 304a and 304b may execute the steps shown in flowchart 1700 of FIG. 17, and further described below, for configuring and exporting allocated storage. In particular, the steps below may be executed by NAS file manager 512 in each NAS server. For example, steps 1702-1718, and 1722-1724 may be executed by storage configuration module 514. Furthermore, the steps below are adaptable to one or more NAS servers:
In step 1702, the LUN is configured on the first HBA into the operating system. For example, the LUN is configured on the first HBA into operating system 510.
In step 1704, the LUN is configured on the second HBA into the operating system. For example, the LUN is configured on the second HBA into operating system 510.
In step 1706, a symbolic link is created in /dev/StorageDir, linking the LUNs with the Linux symbolic device names. In step 1708, a directory is created, named /exportxxxx, where xxxx is a 4-digit representation of the LUN number (as mentioned elsewhere herein, alternative directory names may be specified).
In step 1710, the value of CreateFSFlag is determined. If CreateFSFlag is 0, then processing is complete. In this case, the NAS:1:0 string is sent to the SAN server. If CreateFSFlag is 1, the processing continues to step 1712.
In step 1712, the IP address upon which the request arrived is determined. The IP address is important to determine, because the NAS server may be in a recovery mode, and may have several IP addresses. The recovery mode is described in a section below.
In step 1714, the LUN number is inserted into a file, for example, named /usr/StorageFile/etc/Ipaddress.NASvolumes, where Ipaddress is the IP address upon which the request arrived. This file is important when fail-over occurs.
In step 1716, a file system is created on the LUN. In step 1718, the file system is mounted on /exportxxxx.
In step 1720, the file system is exported via NFS and CIFS. For example, NFS protocol module 504 exports the file via NFS, and CIFS protocol module 506 exports the file via CIFS.
In step 1722, files storing NFS and CIFS exported file systems are updated. A file named /usr/StorageFile/etc/Ipaddress.NFSexports is updated. This file contains a list of all file systems exported via NFS, along with their attributes. A file named /usr/StorageFile/etc/Ipaddress.CIFSexports is updated. This file contains a list of all file systems exported via CIFS.
In step 1724, a response is sent to the SAN server. The NAS:1:0 string is sent to the SAN server if the previous steps were successful. Otherwise, the
NAS:0:NumBytes string, followed by the error strings, are sent to the SAN server.
Note that when all processing is complete, one of the NAS servers will have been instructed to export the LUN, while the other will not. For example,
NAS server 304a may have been instructed to export the LUN, while NAS server 304b was not. Per the steps above, NAS server 304a will have updated the following files: Ipaddress.NASvolumes, Ipaddress.NFSexports, and Ipaddress.CIFSexports. NAS servers 304a and 304b both would have the LUN configured in their operating system (with symbolic links in /dev/StorageDir). One or more of these files may be used for fail-over and recovery, described in further detail in a section below.
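As a condensed illustration of steps 1708 through 1724, the Python sketch below strings the operations together on the exporting NAS server. The shell commands, the file system type (none is specified by the description, so the mkfs default is assumed), and the placeholder export helpers are assumptions; only the directory names, record file names, and response strings come from the description.

import subprocess

def export_via_nfs(mount_point):
    pass  # placeholder for NFS protocol module 504; mechanism not specified here

def export_via_cifs(mount_point):
    pass  # placeholder for CIFS protocol module 506 (e.g. Samba)

def configure_and_export(lun, create_fs, ip_address, device_path):
    mount_point = "/export%04d" % lun
    subprocess.run(["mkdir", "-p", mount_point], check=True)             # step 1708
    if not create_fs:                                                    # step 1710
        return "NAS:1:0"
    with open("/usr/StorageFile/etc/%s.NASvolumes" % ip_address, "a") as f:
        f.write("%d\n" % lun)                                            # step 1714
    subprocess.run(["mkfs", device_path], check=True)                    # step 1716 (fs type assumed)
    subprocess.run(["mount", device_path, mount_point], check=True)      # step 1718
    export_via_nfs(mount_point)                                          # step 1720
    export_via_cifs(mount_point)
    for proto in ("NFS", "CIFS"):                                        # step 1722
        with open("/usr/StorageFile/etc/%s.%sexports" % (ip_address, proto), "a") as f:
            f.write("%s\n" % mount_point)
    return "NAS:1:0"                                                     # step 1724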
In an embodiment, when a NAS server receives the LUN:Disable string, the steps shown in flowchart 1800 of FIG. 18, and further described below, are executed by the NAS server, for deconfiguring and unexporting unallocated storage. In particular, the steps below may be executed by NAS file manager 512 in the NAS server. For example, steps 1802-1808 may be executed by storage configuration module 514. Furthermore, the steps below are adaptable to one or more NAS servers:
In step 1802, the IP address upon which the request arrived is determined.
In step 1804, whether the corresponding file system is exported is determined. If the NAS server exports the file system, it unexports it from NFS, unexports it from CIFS, unmounts the file system, and removes the information from Ipaddress.NASvolumes, Ipaddress.NFSexports, and Ipaddress.CIFSexports.
In step 1806, the LUN is removed from the kernel configuration.
In step 1808, the symbolic links for the LUN in /dev/StorageDir are removed.
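A corresponding Python sketch for the LUN:Disable path follows. The unmount command and the link-removal loop are assumptions about how steps 1804 through 1808 might be carried out; removal of the LUN from the kernel configuration (step 1806) and the updates to the Ipaddress.* record files are elided.

import os
import subprocess

def deconfigure_lun(lun, ip_address):
    mount_point = "/export%04d" % lun
    # Step 1804: if this server exported the file system, unexport it from
    # NFS and CIFS (elided) and unmount it; record-file updates are elided.
    subprocess.run(["umount", mount_point], check=False)
    # Step 1808: remove the /dev/StorageDir symbolic links for this LUN.
    for name in os.listdir("/dev/StorageDir"):
        if name.split(".")[-1] == str(lun):
            os.remove(os.path.join("/dev/StorageDir", name))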
To summarize, the administrator selects a single box in panel 1202 of GUI 1200 to have a LUN become a NAS LUN. However, the processing described above occurs within the NAS servers, and is not visible to, and does not require interaction with, the administrator.
6.3 Listing Volumes
After an administrator assigns a NAS LUN, GUI 1200 may display the resultant file system in panel 1204. GUI 1200 also may show the size of the file system, the NAS server that owns the file system, the protocols upon which the file system is exported, and various security attributes in panel 1206. To obtain that information, GUI 1200 sends a c36 list file systems management directive to SAN server 302a. The c36 management directive includes one parameter that specifies the type of information being requested. The parameter may be one of the following keywords: PERM, NFS, or CIFS. If the parameter used is PERM, the SAN server returns a string including the number of NAS file systems, followed by a space character, followed by a list of strings that correspond to all NAS file systems. The strings may be of the following form:
/exportxxxx servernum size owner group perm
The /exportxxxx string is the file system name (where xxxx corresponds to the
LUN number). The servernum parameter is either 1 or 2, which corresponds to the NAS server (NAS server 304a or NAS server 304b) to which the file system is allocated. The size parameter is the size of the file system. The owner parameter is the username that owns the file system. The group parameter is the group name of the file system. The perm parameter is a string that lists the permissions on the file system.
For example, suppose an administrator assigned LUN 1 and LUN 3 to NAS server 304a, and LUN 2 and LUN 4 to NAS server 304b. To determine the file systems assigned to NAS, and hence populate the list of file systems in panel 1204, GUI 1200 would issue the following management directive:
c36 PERM
The directive instructs SAN server 302a to return the following strings:
4
/export0001 1 67G root root rwxrwxrwx
/export0002 2 70G root root rwxrwxrwx
/export0003 1 63G root root rwxrwxrwx
/export0004 2 10G root root rwxrwxrwx
This information is used by GUI 1200 to build the list of file systems in panel 1204.
When a user clicks on a file system in panel 1204, additional information is displayed in panels 1208 and 1210. For example, panel 1208 displays whether the file system is exported via NFS, and, if so, displays a host-restriction list and whether the file system is read-only. Furthermore, panel 1210 displays whether the file system is exported via CIFS, and attributes of that protocol as well. Note that in an embodiment, when a LUN is initially allocated, it is automatically exported via NFS and CIFS by the NAS server. However, as further described below, the administrator can choose to restrict the protocols under which the file system is exported.
To obtain protocol information, GUI 1200 may send the c36 directive to SAN server 302a using one of the keywords NFS and CIFS as the parameter. If the parameter is NFS, the SAN server returns a string containing the number of
NFS file systems, followed by a space character, followed by a list of strings that correspond to all NFS file systems. The strings may be of the following form:
/exportxxxx servernum flag hostlist
The servernum parameter is the same as was returned with a c36 directive using the PERM parameter. The flag parameter is the string "rw" or "ro" (for read- write or read-only). The hostlist parameter is a comma-separated list of hosts (or IP addresses) that have access to the file system via NFS.
If the parameter used is CIFS, the directive returns a string containing the number of CIFS file systems, followed by a space character, followed by a list of strings that correspond to all CIFS file systems. The strings may be of the following form:
/exportxxxx servernum flag userlist
The userlist parameter is a comma-separated list of user names that can access the file system via CIFS. Hence, through the use of the c36 directive with the PERM, NFS, and
CIFS parameters, GUI 1200 can fill in panels 1206, 1208, and 1210.
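To make the reply format concrete, the following Python sketch parses a PERM-style c36 response into records, using the example strings shown above as input. The record type, parsing helper, and line-based framing of the reply are assumptions; the field order matches the /exportxxxx servernum size owner group perm form described above.

from collections import namedtuple

PermRecord = namedtuple("PermRecord", "filesystem servernum size owner group perm")

def parse_perm_response(text):
    # A PERM response is a count followed by one record per NAS file system.
    lines = text.strip().splitlines()
    count = int(lines[0])
    return [PermRecord(*line.split()) for line in lines[1:count + 1]]

example = """4
/export0001 1 67G root root rwxrwxrwx
/export0002 2 70G root root rwxrwxrwx
/export0003 1 63G root root rwxrwxrwx
/export0004 2 10G root root rwxrwxrwx"""

for record in parse_perm_response(example):
    print(record.filesystem, "owned by NAS server", record.servernum)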
When SAN server 302a receives the c36 management directive, it creates a NAS protocol message and forwards it to NAS servers 304a and 304b. The message may contain a string of the following form:
LUN:ListVols:type
Where the type parameter is one of the keywords PERM, NFS, and CIFS.
In an embodiment, when the NAS servers receive the LUN:ListVols:type,
NAS servers 304a and 304b may execute the steps shown in flowchart 1900 of
FIG. 19, and further described below. In particular, the steps below may be executed by NAS file manager 512 in each NAS server. For example, steps 1902-
1908 may be executed by storage configuration module 514:
In step 1902, the IP address upon which the request arrived is determined.
In step 1904, the value of the type parameter is determined. If the type parameter is PERM, the file /usr/StorageFile/etc/Ipaddress.NASvolumes is opened to obtain the list of NAS LUNs, and the required information is returned to the SAN server. In step 1906, if the type parameter is NFS, the file /usr/StorageFile/etc/Ipaddress.NFSexports is opened to obtain the list of file systems exported via NFS, and the required information is returned.
In step 1908, if the type parameter is CIFS, the file /usr/StorageFile/etc/Ipaddress.CIFSexports is opened to obtain the list of file systems exported via CIFS, and the required information is returned.
If the above steps are successful, the NAS servers respond with a packet containing the "NAS:1:0" string. If the steps are unsuccessful, the NAS servers respond with the "NAS:0:NumBytes" string, followed by a string that describes the error.
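A minimal sketch of the per-request logic of steps 1902-1908, written here in Python for illustration. It assumes the per-address files hold one entry per line; the helper name, the return convention, and the error handling are placeholders rather than the actual NAS file manager implementation, and the framing of the reply over the NAS protocol is omitted.

def handle_list_vols(request_ip, vol_type):
    """Return (ok, payload) for a LUN:ListVols request, per steps 1902-1908.

    request_ip is the IP address the request arrived on (step 1902);
    vol_type is one of "PERM", "NFS", "CIFS" (steps 1904-1908).
    """
    paths = {
        "PERM": f"/usr/StorageFile/etc/{request_ip}.NASvolumes",
        "NFS":  f"/usr/StorageFile/etc/{request_ip}.NFSexports",
        "CIFS": f"/usr/StorageFile/etc/{request_ip}.CIFSexports",
    }
    path = paths.get(vol_type)
    if path is None:
        return False, f"unknown type {vol_type}"
    try:
        with open(path) as f:
            entries = [line.rstrip("\n") for line in f if line.strip()]
    except OSError as err:
        return False, str(err)
    # The payload is the entry count followed by one line per entry.
    return True, "\n".join([str(len(entries))] + entries)

ok, body = handle_list_vols("192.11.109.8", "PERM")
print(ok, body)   # framing of the reply packet over the NAS protocol is omitted here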
As described above, a NAS server makes a NAS LUN available via NFS and CIFS when the corresponding file system is created and exported. However, an administrator may unselect the NFS box in panel 1208, or the CIFS box in panel 1210, for a file system selected in panel 1204. This causes the file system to be unexported, making it unavailable via the unselected protocol. Further details regarding the unexporting of a file system are provided in the following section.
6.4 Unexporting File Systems
After an administrator assigns a NAS LUN, GUI 1200 may show the resulting file system in panel 1204. If the administrator selects the file system in panel 1204, panels 1208 and 1210 indicate that the file system is exported via NFS and/or CIFS. An administrator may choose to deny access to the file system via NFS or CIFS by unselecting the NFS box in panel 1208, or the CIFS box in panel 1210, respectively. When the administrator denies access to NFS or CIFS in this manner, GUI
1200 sends a c38 management directive to SAN server 302a. The c38 unexport file system management directive includes three parameters: the file system name, the protocol from which to unexport the file system, and the number of the NAS server that owns the file system.
For example, the administrator may allocate LUN 15 to NAS server 304a, creating file system /export0015. The administrator may want to deny access via CIFS to file system /export0015. The administrator may select /export0015 in panel 1204, unselect the CIFS box in panel 1210, and press "Commit" in panel
1204. GUI 1200 sends the following management directive to SAN server 302a:
c38 /export0015 CIFS 1
The directive instructs SAN server 302a to deny CIFS access for file system /export0015. Similarly, GUI 1200 may send the following management directive to SAN server 302a:
c38 /export0015 NFS 1
The directive instructs SAN server 302a to deny NFS access for file system /export0015. When SAN server 302a receives the c38 management directive, SAN server 302a creates a NAS protocol message, and forwards the message to the NAS server (NAS server 304a or 304b) that is specified by the third parameter. The message may contain a string of the following forms:
NFS:Unexport:fileSystemName

Or

CIFS:Unexport:fileSystemName
In an embodiment, when the NAS servers receive one of these strings, NAS servers 304a and 304b may execute the steps shown in flowchart 2000 of FIG. 20, and further described below. In particular, the steps below may be executed by NAS file manager 512 in each NAS server. For example, steps 2002- 2008 may be executed by storage configuration module 514:
In step 2002, the IP address upon which the request arrived is determined.
In step 2004, whether the NAS server has been allocated the LUN associated with the file system is determined. If the NAS server has not been allocated the LUN, an error string is returned, and the process ends.
In step 2006, if the message specifies CIFS, the related file system information is removed from the system file that lists all CIFS exported file systems (referred to as the CIFS configuration file elsewhere herein), and the file system is removed from /usr/StorageFile/etc/Ipaddress.CIFSexports.
In step 2008, if the message specifies NFS, the related file system information is removed from the system file that lists all NFS exported file systems (referred to as the NFS configuration file elsewhere herein), and the file system is removed from /usr/StorageFile/etc/Ipaddress.NFSexports. If these steps are successful, the NAS servers respond with a packet containing the "NAS:1:0" string. If the steps are unsuccessful, the NAS servers respond with the "NAS:0:NumBytes" string, followed by a string that describes the error.
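The following Python sketch shows one way steps 2002-2008 could be realized. The path /etc/exports is assumed for the NFS configuration file; the CIFS configuration file path is a placeholder, and real CIFS share definitions would need more than a one-line removal. The helper and parameter names are illustrative only.

def remove_matching_lines(path, filesystem):
    """Drop lines that reference 'filesystem' from a plain-text list file (sketch only)."""
    try:
        with open(path) as f:
            kept = [line for line in f if filesystem not in line]
        with open(path, "w") as f:
            f.writelines(kept)
    except OSError:
        pass  # a missing list file is treated as "nothing to unexport" in this sketch

def handle_unexport(request_ip, protocol, filesystem, owned_filesystems,
                    nfs_config="/etc/exports",         # assumed NFS configuration file
                    cifs_config="/etc/cifs.exports"):  # placeholder for the CIFS configuration file
    """Steps 2002-2008: remove 'filesystem' from the chosen protocol's export lists."""
    # Step 2002 is implicit here: request_ip is the address the request arrived on.
    if filesystem not in owned_filesystems:            # step 2004: LUN not allocated to this server
        return False, f"{filesystem} is not owned by this NAS server"
    if protocol == "CIFS":                             # step 2006
        remove_matching_lines(cifs_config, filesystem)
        remove_matching_lines(f"/usr/StorageFile/etc/{request_ip}.CIFSexports", filesystem)
    elif protocol == "NFS":                            # step 2008
        remove_matching_lines(nfs_config, filesystem)
        remove_matching_lines(f"/usr/StorageFile/etc/{request_ip}.NFSexports", filesystem)
    else:
        return False, f"unknown protocol {protocol}"
    return True, ""

print(handle_unexport("192.11.109.8", "CIFS", "/export0015", ["/export0015"]))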
In addition to allowing a file system to be unexported, GUI 1200 allows the administrator to export a previously unexported file system, and to change attributes of an exported file system (such as access lists and read-only access).
6.5 Exporting File Systems
As described above, an administrator can view attributes of a NAS file system displayed in GUI 1200. If a file system is unexported, the administrator can choose to export the file system by selecting the NFS box in panel 1208, and/or the CIFS box in panel 1210, and pressing "Commit" in panel 1204. An administrator may also change access lists, or make the file system read-only, for these protocols through panels 1208 and 1210. When directed to export a file system, GUI 1200 sends a c37 management directive to SAN server 302a. The c37 export file system management directive includes five parameters: the file system name, the protocol in which to export the file system, the number of the NAS server that was allocated the file system, a flag that specifies read-only or read-write, and a comma-separated access list.
For example, the administrator may assign LUN 17 to NAS server 304b. In an embodiment, that file system is made available over CIFS and NFS by default. The administrator may want to change attributes of the file system, such that NFS access is read-only, and that access to the file system is restricted only to hosts named client1, client2, and client3. Accordingly, the administrator may select /export0017 in panel 1204, modify the respective attributes in panel 1208, and press "Commit" in panel 1204. GUI 1200 sends a resulting management directive to SAN server 302a:
c37 /export0017 NFS 2 ro client1,client2,client3
The directive instructs SAN server 302a to re-export the file system /export0017 via NFS as read-only, and to restrict access only to hosts named client1, client2, and client3.
Similarly, the administrator may want to set CIFS access to file system
/export0017 to be read-write, and restrict access only to users named betty, fred, and wilma. Accordingly, the administrator may select /export0017 in panel 1204, modify the respective attributes in panel 1210, and press "Commit" in panel 1204.
GUI 1200 sends a resulting management directive to SAN server 302a:
c37 /export0017 CIFS 2 rw betty,fred,wilma
The directive instructs SAN server 302a to export file system /export0017 via
CIFS as read-write, and to restrict access to the file system to users betty, fred, and wilma. After SAN server 302a receives the c37 management directive, it creates a NAS protocol message and forwards the message to the NAS server (NAS server
304a or 304b) specified by the third parameter. Two messages may be required for exporting a file system. For example, the first message contains a string of the following forms:
NFS:Export:fileSystemName:rwFlag:NumBytes

Or

CIFS:Export:fileSystemName:rwFlag:NumBytes
The fileSystemName parameter is the name of the file system whose attributes are being modified. The rwFlag parameter includes the string "ro" or "rw". The
NumBytes parameter is the number of bytes in an access list (including commas). If NumBytes is greater than 0, the SAN server sends a second message containing the comma-separated access list.
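For illustration, a short Python sketch of how a SAN server might translate a c37 directive into the one- or two-message sequence just described. The function name is hypothetical; only the message contents follow the format above.

def build_export_messages(protocol, filesystem, rw_flag, access_list):
    """Build the NAS protocol messages for a c37 export directive.

    protocol is "NFS" or "CIFS", rw_flag is "ro" or "rw", and access_list is a
    comma-separated list of hosts (NFS) or users (CIFS).
    """
    num_bytes = len(access_list)
    first = f"{protocol}:Export:{filesystem}:{rw_flag}:{num_bytes}"
    messages = [first]
    if num_bytes > 0:
        messages.append(access_list)   # second message carries the access list itself
    return messages

# c37 /export0017 NFS 2 ro client1,client2,client3 would yield:
print(build_export_messages("NFS", "/export0017", "ro", "client1,client2,client3"))
# ['NFS:Export:/export0017:ro:23', 'client1,client2,client3']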
In an embodiment, when the NAS servers receive one of these messages, NAS servers 304a and 304b may execute the steps shown in flowchart 2100 of
FIG. 21, and further described below. In particular, the steps below may be executed by NAS file manager 512 in each NAS server. For example, steps 2102-
2108 may be executed by storage configuration module 514:
In step 2102, the IP address upon which the request arrived is determined. In step 2104, the NAS server determines whether it owns the file system.
In step 2106, if the message specifies CIFS, the relevant information is added or replaced in the CIFS configuration file, and the file /usr/StorageFile/etc/Ipaddress.CIFSexports is updated.
In step 2108, if the message specifies NFS, the relevant information is added or replaced in the NFS configuration file, and the file
/usr/StorageFile/etc/Ipaddress.NFSexports is updated.
If the steps are successful, the NAS servers respond with a packet containing the "NAS:1:0" string. If the steps are unsuccessful, the NAS servers respond with the "NAS:0:NumBytes" string, followed by a string that describes the error.
6.6 Setting Permissions
After an administrator assigns a NAS LUN, the administrator can modify the file system's owner, group, and permissions attributes. This may be accomplished by selecting the file system from the list in panel 1204, modifying the relevant attributes in panel 1206, and pressing "Commit" in panel 1204.
When directed to change attributes of a file system, GUI 1200 sends a set permissions management directive to SAN server 302a. The c39 management directive includes five parameters: the file system name, the number of the NAS server that has been allocated the file system, the new owner of the file system, the new group name of the file system, and the new permissions of the file system.
For example, suppose the administrator assigns LUN 17 to NAS server 304b. After file system /export0017 is created, the administrator may reassign the file system to a user "fred" by inputting the user name into the corresponding text box in panel 1206. The administrator may reassign the file system to a group "research" by inputting the group name into the corresponding text box in panel 1206. The administrator may change permissions for the file system in panel 1206. For example, the administrator may unselect the group write "w" box, and the world read "r", write "w", and execute "x" boxes in panel 1206. GUI 1200 sends a resulting management directive to SAN server 302a:
c39 /export0017 2 fred research rwxr-x---
The directive instructs SAN server 302a to change the owner to fred, the group to research, and the permissions to rwxr-x---, for file system /export0017. After SAN server 302a receives the c39 management directive, it creates a NAS protocol message and forwards the message to both of NAS servers 304a and 304b. Sending the message to both NAS servers keeps the permissions consistent during fail-over, which is further described below. The message may contain a string of the following form:
LUN:Setperm:fileSystemName:owner:group:permissions
When a NAS server receives the string, it changes the owner, group, and world permissions as specified. If the change of permissions is successful, the NAS server responds with the NAS:1:0 string. If the change of permissions is unsuccessful, the NAS servers respond with two messages. The first packet contains the NAS:0:NumBytes string, where NumBytes is the number of bytes in the second message. The second message contains a description of the error.
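On the NAS server side, applying a Setperm message amounts to changing the owner, group, and mode of the file system's root directory. A hedged Python sketch follows, assuming a Linux NAS server on which the file system named /exportxxxx is mounted at that same path; the conversion of a permission string such as rwxr-x--- to a numeric mode is shown explicitly, and the function names are illustrative only.

import grp
import os
import pwd

def perm_string_to_mode(perm):
    """Convert a string like 'rwxr-x---' to a numeric mode such as 0o750."""
    assert len(perm) == 9
    mode = 0
    bits = [0o400, 0o200, 0o100, 0o40, 0o20, 0o10, 0o4, 0o2, 0o1]
    for ch, bit in zip(perm, bits):
        if ch != "-":
            mode |= bit
    return mode

def handle_setperm(filesystem, owner, group, perm):
    """Apply LUN:Setperm:fileSystemName:owner:group:permissions (sketch only)."""
    uid = pwd.getpwnam(owner).pw_uid       # e.g. owner = "fred"
    gid = grp.getgrnam(group).gr_gid       # e.g. group = "research"
    os.chown(filesystem, uid, gid)         # e.g. filesystem = "/export0017"
    os.chmod(filesystem, perm_string_to_mode(perm))

print(oct(perm_string_to_mode("rwxr-x---")))   # 0o750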
7.0 GUI Command Line Interface
GUI 1200 may provide a command-line interface (CLI) to the NAS functionality to receive CLI commands. CLI commands which correspond to functionality described above are presented below. The administrator may input the following CLI commands to allocate and de-allocate a NAS LUN: makeNASMap and unMakeNASMap. The commands are followed by two parameters. The NASServerNumber parameter is 1 or 2, depending upon which NAS server is being allocated or deallocated the LUN (i.e., NAS server 304a or 304b). The SANNASLUN parameter is the LUN to be allocated or deallocated:
makeNASMap SANNASLUN NASServerNumber

unMakeNASMap SANNASLUN NASServerNumber
The administrator may input the following CLI commands to export or unexport a file system: makeExport (followed by five parameters) and unMakeExport (followed by four parameters). The protocolFlag parameter is 0 for CIFS protocol, and is 1 for NFS protocol. The NASServerNumber parameter is equal to 1 or 2 (for NAS server 304a or 304b, respectively). The rwFlag parameter is equal to 0 for read-only, and equal to 1 for read-write. The List parameter is a comma-separated list of hosts or users:
makeExport SANNASLUN protocolFlag NASServerNumber rwFlag List

unMakeExport SANNASLUN protocolFlag NASServerNumber
The administrator issues the following CLI command to set the permissions on a file system: ChangeExport (followed by five parameters):
ChangeExport SANNASLUN NASServerNumber Owner Group Permissions
CLI commands may also be used to retrieve NAS file system listings. For example, CLI commands that may be used are listed as follows, followed by their description:
* nasMaps
This CLI command lists LUNs assigned to NAS. For example, this may be determined by identifying the LUNs assigned to NASServerNASOne and NASServerNASTwo (i.e., NAS servers 304a and 304b).
* exports
This CLI command lists all NAS file systems and their properties.
* nfsExports

This CLI command lists all NAS file systems exported via NFS, along with their properties.
* cifsExports
This CLI command lists all NAS file systems exported via CIFS, along with their properties.
* refreshNASMaps
This CLI command refreshes all NAS related configurations and deletes all uncommitted NAS changes.
Additional CLI commands, or modifications to the CLI commands presented above, applicable to GUI 1200 of the present invention would be recognized by persons skilled in the relevant art(s) from the teachings herein.
8.0 Obtaining Statistics
Although not shown in GUI 1200, a management directive may be provided that obtains NAS statistics. The management directive allows monitoring of the NAS functionality, and alerts a user upon error.
To obtain statistics and error information, monitoring users or applications may send a c40 management directive to a SAN server. The c40 obtain statistics management directive takes no parameters. The directive is sent to the SAN server, and the SAN server returns a series of strings that show network statistics, remote procedure call statistics, file system statistics, and error conditions.
A string returned by the SAN server that contains network statistics may have the following form:

NASServer:NET:OutputPackets:Collisions:InputPackets:InputErrors
The NASServer parameter is the number 1 or 2, which corresponds to the first and second NAS servers (i.e., NAS servers 304a and 304b). Each NAS server will be represented by one instance of that line. The value of the OutputPackets parameter is the number of network packets sent out by the NAS server. The value of the Collisions parameter is the number of network collisions that have occurred. Those two values may be used to determine a collision rate (the collision rate is Collisions divided by OutputPackets). If the collision rate is greater than 0.05, the user's "wire" is considered "hot." This means that there likely are too many machines coupled to the user's "wire" or network, causing network collisions and greatly reducing performance. If that happens, the entity monitoring the network statistics may recommend that the user install bridges or switches into the network.
The value of the InputPackets parameter is the number of packets received by the NAS server. The value of the InputErrors parameter is the number of bad packets received. Input errors may be caused by electrical problems on the network, or by receiving bad checksums. If the entity monitoring the network statistics sees the InputErrors rising, the user may have a client machine coupled to the network with a faulty network interface card, or the client may have damaged cables.
If the value of InputPackets on NAS server 304a is significantly higher or lower than that of NAS server 304b, the user may consider reassigning NAS LUNs across the NAS servers to help balance the load.
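For example, a monitoring script could compute the collision rate and flag a "hot" wire directly from a NET line of the c40 output. The Python sketch below assumes the line format described above; the 0.05 threshold comes from the text, and the returned field names are illustrative.

def check_network_stats(stat_line, threshold=0.05):
    """Evaluate one 'NASServer:NET:OutputPackets:Collisions:InputPackets:InputErrors' line."""
    server, tag, out_pkts, collisions, in_pkts, in_errs = stat_line.split(":")
    assert tag == "NET"
    out_pkts, collisions = int(out_pkts), int(collisions)
    rate = collisions / out_pkts if out_pkts else 0.0
    return {
        "nas_server": int(server),
        "collision_rate": rate,
        "wire_hot": rate > threshold,      # too many machines on the wire
        "input_packets": int(in_pkts),     # compare across NAS servers for load balancing
        "input_errors": int(in_errs),      # rising values suggest a bad NIC or cabling
    }

print(check_network_stats("1:NET:7513:0:59146:0"))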
Another returned string type contains remote procedure call (RPC) statistics, and may have the following form:
NASServer:RPC:TotalCalls:MalformedCalls

As in the prior string, the value of NASServer is 1 or 2, which corresponds to the first or second NAS server (i.e., NAS server 304a and 304b). Each NAS server will be represented by one instance of the RPC statistics line.
The value of the TotalCalls parameter is the number of RPC calls received. The value of the MalformedCalls parameter is the number of RPC calls that had errors. A malformed call is one that was damaged by the network (but still passed the checksum). If the entity monitoring the RPC statistics sees a large number of malformed calls, the user may have a network that is jittery.
If the value of the TotalCalls parameter on NAS server 304a is significantly higher or lower than that of NAS server 304b, the user is providing more NFS traffic to one of the servers. The user should think about reassigning NAS LUNs across the NAS servers to help balance the load.
A third returned string type contains file system information, and may have the following form:
NASServer:FS:/exportxxxx:TotalSize:AvailableSize
Each NAS Server may provide one of these strings for each file system it owns.
The /exportxxxx parameter is the name of the file system (where xxxx is a four-digit representation of the LUN number). The value of the TotalSize parameter is the size of the file system in kilobytes, for example. The value of the
AvailableSize parameter is the amount of free space (in kilobytes, for example). If the amount of free space becomes too small, the entity monitoring the file system information should inform the user.
A fourth returned string type contains error information, and may have the following form:
NASServer:ERR:Severity:TimeOfError:Message

Each NAS Server may provide one or more of these strings. The Severity parameter is a number (with 1 being the most critical). The Message parameter describes an error that occurred. For example, if NAS server 304b went down and NAS server 304a took over, the following error message may be returned:
1:ERR:1: 08.23-15.03 : TAKEOVER -- 192.168.30.31 is taking over
192.168.30.32
To summarize, an example output from a c40 call is shown below. Note that any number of the above described strings may appear in the output of a c40 call:
1:NET:7513:0:59146:0
1:RPC:430:0
1:FS:/export0005:256667:209459
1:ERR:1: 08.23-15.03 : TAKEOVER - 192.168.30.31 is taking over 192.168.30.32
1:ERR:1: 08.23-18.22 : RELINQUISH - Relinquishing control of 192.168.30.32
2:NET:6772:0:55656:0
2:RPC:453:0
2:FS:/export0001:3434322:67772
2:FS:/export0002:256667:20945
The output indicates that the network is OK. However, NAS server 304a took over for NAS server 304b at a time of 15:03 on August 23, as indicated by a timestamp in the output above. The output also indicates that NAS server 304b came back up at a time of 18:22. Further description of fail-over and recovery is provided in a section below. (Note that in embodiments, the timestamp may further indicate the particular time zone.) When a SAN server receives the c40 management directive, it creates a NAS protocol message and forwards it to the NAS servers. The message may contain a string of the following form:
LUN:GetStats
In an embodiment, when the NAS servers receive one of these messages, each of NAS servers 304a and 304b may execute the steps shown in flowchart 2200 of FIG. 22, and further described below. In particular, the steps below may be executed by NAS file manager 512 in each NAS server. For example, steps 2202-2210 may be executed by storage configuration module 514: In step 2202, the IP address upon which the request arrived is determined.
In step 2204, the network statistics are obtained. In step 2206, the RPC statistics are obtained.
In step 2208, items listed in /usr/StorageFile/etc/Ipaddress.NASvolumes are analyzed, and file system information about each item is returned. In step 2210, error messages are retrieved from the error logs, and the error logs are moved to having a name ending with ".old". In this manner, a subsequent call will not return the same errors.
If the above steps are successful, the NAS servers each respond with two messages. The first message contains the following string:

NAS:1:NumBytes
The value of the NumBytes parameter is the number of bytes in the information that follows. The second message is the information collected in the above steps, such as shown in the example output above. If unsuccessful, the unsuccessful NAS server responds with the "NAS:0:NumBytes" string, followed by a string that describes the error.

9.0 Providing High Availability According to The Present Invention
As described above, a goal of the NAS implementation of the present invention is to provide high-availability. The following sections present the NAS high-availability features by describing NAS configuration, boot-up, and fail-over.
9.1 NAS Server Configuration
To improve ease of installation, the NAS configuration is accomplished by the SAN servers. According to the present invention, a command is issued on the SAN server, informing it of the initial IP addresses of the NAS servers. Once the SAN servers have this information, the SAN server can communicate with the NAS servers. Further NAS configuration may be accomplished from the SAN server.
In an embodiment, to aid in the configuration of the storage appliance, a NAS Configuration Sheet may be supplied to each user configuring the system. The user fills out the NAS Configuration sheet, and a configuring entity may run three commands (described in the following sub-section) to configure the NAS servers. The commands send a c41 management directive to the SAN server.
The c41 management directive takes on several instances, each configuring a different part of a NAS server. The first instance configures the NAS server addresses, as described in the following section. Further instances of the c41 management directive may be used to configure NFS and CIFS on a NAS server.

9.1.1 Configuring NAS Server Addresses
In the embodiment shown in FIG. 7, because each NAS server provides redundancy, each includes two Internet protocol (IP) addresses. The first IP address is a "boot up" IP address, and the second is a public IP address. Two IP addresses are necessary for fail-over, as further described in a section below.
To configure NAS addresses, the entity performing the configuration may obtain the completed NAS Configuration Sheet, and use this information to perform the configuration. To perform the configuration, a CLI or GUI sends the c41 management directive to the SAN server. The c41 configure NAS server management directive may use the following parameters:
1. The keyword addr;
2. The IP address of the primary SAN server;
3. The IP address of the redundant SAN server;
4. The hostname of the first NAS server;
5. The public IP address of the first NAS server;
6. The boot up IP address of the first NAS server;
7. The hostname of the second NAS server;
8. The public IP address of the second NAS server;
9. The boot up IP address of the second NAS server;
10. The IP netmask;
11. The IP broadcast address; and
12. The default gateway address.
In an embodiment, when a SAN server receives the c41 management directive with a first parameter of addr, the SAN server may execute the steps shown in flowchart 2300 of FIG. 23, and further described below. In particular, the steps below may be executed by SAN storage manager 404 in the SAN server
302: In step 2302, a network message is sent to the first NAS server (for example, using the NAS Protocol) including the information listed above. The SAN server uses the configured IP address to communicate with the NAS server. The SAN server informs the NAS server that it is NAS server 1. In step 2304, a network message is sent to the second NAS server, including the information listed above. The SAN server uses the configured IP address to communicate with the NAS server. The SAN server informs the NAS server that it is NAS server 2.
In step 2306, the SAN server configuration is updated with the public address of NAS server 1 and NAS server 2. In an embodiment, future communication with the NAS servers occurs via the public IP address.
The network message of steps 2302 and 2304 above may include a string of the following form:
CONF:ADDR:ServNum:IP1:IP2:HostName1:IP3:IP4:HostName2:IP5:IP6:NM:BC:GW
The ServNum parameter is the NAS Server number. The SAN server places a 1 in that field when it sends the message to the first NAS server, and places a 2 in that field when it sends the message to the second NAS server. The IP1 and IP2 parameters are the addresses of the SAN servers, IP3 and IP4 are the public and boot-up addresses of the first NAS server, and IP5 and IP6 are the public and boot-up addresses of the second NAS server. NM, BC, and GW are the Netmask, Broadcast address, and Gateway of the network.
In an embodiment, when the NAS servers receive the network message, they may execute the steps shown in flowchart 2400 of FIG. 24, and further described below. In particular, the steps below may be executed by NAS file manager 512 in each NAS server. For example, steps 2402-2406 may be executed by storage configuration module 514: In step 2402, a file named /usr/StorageFile/etc/NASconfig is created, and the following information is placed in the file:
SanAppliance: IP1 IP2
1: IP3 IP4
2: IP5 IP6
The first line contains the addresses of both SAN servers, the second line contains the public and boot-up addresses of the first NAS server, and the third line contains the public and boot-up addresses of the second NAS server. Both NAS servers will use that file to figure out their NAS server number. In step 2404, the value of the ServNum parameter is determined. If the
ServNum parameter is 1, the NAS server modifies its Linux configuration files to assign itself the hostname of HostName1, and the IP address of IP4 (i.e., its boot-up IP address). If the ServNum parameter is 2, the NAS server modifies its Linux configuration to assign itself the hostname of HostName2, and the IP address of IP6.
In step 2406, the NAS server is rebooted.
After rebooting, each NAS server has been assigned and configured with the desired hostname and boot-up IP address. Also, the SAN servers have stored the public address of each NAS server. After boot-up, the NAS servers may assign themselves their public IP address, which is described in the following section.
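A hedged Python sketch of steps 2402-2404 follows: it parses the CONF:ADDR string and writes the NASconfig file in the three-line layout shown above, then selects this server's hostname and boot-up address. The actual Linux network reconfiguration and the reboot of step 2406 are indicated only by a comment, and the concrete addresses in the usage example are invented for illustration.

def handle_conf_addr(message, config_path="/usr/StorageFile/etc/NASconfig"):
    """Parse CONF:ADDR:ServNum:IP1:IP2:HostName1:IP3:IP4:HostName2:IP5:IP6:NM:BC:GW."""
    fields = message.split(":")
    assert fields[0] == "CONF" and fields[1] == "ADDR"
    (servnum, ip1, ip2, host1, ip3, ip4,
     host2, ip5, ip6, netmask, bcast, gw) = fields[2:14]

    # Step 2402: record the SAN server addresses and both NAS servers' addresses.
    with open(config_path, "w") as f:
        f.write(f"SanAppliance: {ip1} {ip2}\n")
        f.write(f"1: {ip3} {ip4}\n")
        f.write(f"2: {ip5} {ip6}\n")

    # Step 2404: pick this server's hostname and boot-up address from ServNum.
    hostname, boot_ip = (host1, ip4) if servnum == "1" else (host2, ip6)
    # ...update the Linux hostname/interface configuration here, then reboot (step 2406).
    return hostname, boot_ip

print(handle_conf_addr(
    "CONF:ADDR:1:10.0.0.1:10.0.0.2:nas1:192.11.109.8:192.11.109.18:"
    "nas2:192.11.109.9:192.11.109.19:255.255.255.0:192.11.109.255:192.11.109.1",
    config_path="NASconfig.example"))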
9.2 NAS Server Boot-up
Following boot-up after step 2406 of the prior section, each NAS Server may execute the steps shown in flowchart 2500 of FIG. 25, and further described below. In particular, the steps below may be executed by NAS file manager 512 in each NAS server. For example, steps 2502-2526 may be executed by storage configuration module 514: In step 2502, the file /usr/StorageFile/etc/NASconfig is searched for the line that contains its boot-up IP address. From that line, the NAS server determines its NAS server number and its public IP address.
In step 2504, the file /usr/StorageFile/etc/NASconfig is searched for the other NAS server's public IP address. This may be accomplished by searching the file for the other NAS server number.
In step 2506, whether the NAS server is attached to the network is verified. This may be verified by attempting to communicate with the SAN servers, for example (the IP addresses of the SAN servers are stored in /usr/StorageFile/etc/NASconfig). If the NAS server cannot communicate with the
SAN servers, it assumes that its network interface card has a problem. In this situation, the NAS server may go into a loop, where it sleeps for 10 seconds, for example, and then retries step 2506.
In step 2508, whether the NAS server's public IP address is in use is determined. This may be determined by attempting to send a network message to its public IP address, for example. If its public address is in use, then fail-over has occurred. In this case, the NAS server sends a message to the peer NAS server, informing the peer NAS server that it has come back up. The peer NAS server relinquishes control of the assumed public IP address, and relinquishes control of the file systems it assumed control over during fail-over.
In step 2510, the boot-up IP address of the NAS server is changed to its public IP address.
In step 2512, a Gratuitous ARP request is issued, which allows clients to update their IP-to-Ethernet mapping information. In step 2514, the directory /dev/StorageDir is examined for LUNs. To avoid constantly reassigning Linux symbolic device names, the NAS server does not query for all LUNs on startup. Instead, the NAS server examines the directory /dev/StorageDir. For each symbolic link in that directory, the NAS server adds the LUN into its operating system and re-creates the symbolic link. The links in /dev/StorageDir are more fully described above. In step 2516, the NAS server name is registered with the SAN servers as NASServerNASOne or NASServerNASTwo, depending on its server number.
In step 2518, the file /usr/StorageFile/etc/PublicIPAddress.NASvolumes is searched for file systems. For each file system listed in the directory, the NAS server checks the file system and mounts it.
In step 2520, each of the entries in the file /usr/StorageFile/etc/PublicIPAddress.NFSexports is made available by NFS.
In step 2522, each of the entries in the file /usr/StorageFile/etc/PublicIPAddress.CIFSexports is made available by CIFS. In step 2524, a NAS server process that implements the NAS protocol is started.
In step 2526, the NAS server sleeps for a period of time, such as 5 minutes, and a heartbeat process is started to monitor the public IP address of the peer NAS server. The NAS server waits for a period of time because the peer NAS server may also be booting up.
Hence, after the NAS server performs the above steps, it has determined its public IP address, the process that implements the NAS protocol is running, and a heartbeat process that monitors the peer NAS server has been started. The following sub-section describes what happens when the peer NAS server crashes.
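Steps 2502 and 2504 can be illustrated with a short Python sketch that reads the NASconfig file written during configuration. The three-line file layout follows the earlier description; the function name and return convention are illustrative only.

def read_nas_config(boot_ip, config_path="/usr/StorageFile/etc/NASconfig"):
    """Determine this server's number and public IP, and the peer's public IP (steps 2502-2504)."""
    entries = {}   # maps NAS server number -> (public_ip, bootup_ip)
    with open(config_path) as f:
        for line in f:
            key, _, rest = line.partition(":")
            if key in ("1", "2"):
                public_ip, bootup_ip = rest.split()
                entries[int(key)] = (public_ip, bootup_ip)
    # Step 2502: find the line containing our boot-up address.
    my_num = next(n for n, (_, b) in entries.items() if b == boot_ip)
    my_public = entries[my_num][0]
    # Step 2504: the other entry gives the peer NAS server's public address.
    peer_public = entries[3 - my_num][0]
    return my_num, my_public, peer_public

# For a NASconfig whose lines are "1: 192.11.109.8 192.11.109.18" and
# "2: 192.11.109.9 192.11.109.19", read_nas_config("192.11.109.18", path)
# would return (1, "192.11.109.8", "192.11.109.9").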
9.3 NAS Server Failure And Recovery
As described above, after a NAS server boots, it starts a heartbeat process that monitors its peer. For example, the NAS server may send a heartbeat pulse to the peer NAS server, and receive a heartbeat pulse sent by the peer NAS server. If the NAS server determines that it cannot communicate with the peer NAS server, by monitoring the peer NAS server's heartbeat pulse, fail-over occurs.
When fail-over occurs, the NAS server takes on the public IP address of the peer NAS server, takes on the hostname of the peer NAS server (as an alias to its own, for example), and exports all of the file systems that were exported by the peer NAS server.
FIG. 15 illustrates a NAS server 304 that includes a heartbeat process module 1502. Embodiments for the heartbeat process module 1502 of the NAS server of the present invention are described as follows. These implementations are described herein for illustrative purposes, and are not limiting. In particular, the heartbeat process module as described herein can be achieved using any number of structural implementations, including hardware, firmware, software, or any combination thereof. The present invention is applicable to further ways of determining network failures, through the use of heartbeat signals and other means. Example implementations for determining network failures are described in pending U.S. Patent Application entitled "Internet Protocol Data Mirroring," Serial No. 09/664,499, Attorney Docket Number 1942.0040000.
Heartbeat process module 1502 generates a heartbeat process. Under the control of the heartbeat process executing on the NAS server, the NAS server connects to the NAS Protocol server on the public IP address of the peer NAS server (for example, the connection may be made every 10 seconds). After the connection is made, the NAS server sends a message containing the following string:
AYT:
After the peer NAS server receives the message, the peer NAS server may execute the steps shown in flowchart 2600 of FIG. 26, and further described below:
In step 2602, the IP address upon which the request arrived is determined.
In step 2604, the following files are checked for any modifications since the last "AYT:" message was received:
/usr/StorageFile/etc/Ipaddress.NASvolumes
/usr/StorageFile/etc/Ipaddress.CIFSexports
/usr/StorageFile/etc/Ipaddress.NFSexports
In step 2606, if it was determined in step 2604 that any of those files have been modified, the modified file is sent in a response to the message. In step 2608, if any of the NFS locking status files have been modified since the last "AYT:" message, the modified NFS locking status file(s) are sent in a response to the message.
The response to the "AYT:" message may include several messages. A first message contains a string that may have the following form:
NAS:1:NumFiles
The NumFiles parameter indicates the number of files that the peer NAS server is sending in the response to the NAS server. If the value of the NumFiles parameter is not zero, the peer NAS server sends two messages for each file found modified in the steps above. A first of the two messages may contain the following string:
Filename:fileSize
The Filename parameter is the name of the file being sent in the response. The fileSize parameter is the number of bytes in the file. A second of the two messages may contain the contents of the file.
If an error occurs, the peer NAS server responds with the "NAS:0:NumBytes" string, followed by a string that describes the error. The NumBytes parameter is the length of the string that follows.
As a result of the above steps, after sending an "AYT:" message to the peer NAS server, the NAS server may receive files containing the peer NAS server's NAS volumes, CIFS exports, and NFS exports. For example, NAS server 304a may have a public IP address of 192.11.109.8, and NAS server 304b may have the public IP address of 192.11.109.9. Accordingly, NAS server 304a would store the following files containing its information. These files are typically populated when the NAS server receives NAS Protocol messages:
/usr/StorageFile/etc/192.11.109.8.NASvolumes
/usr/StorageFile/etc/192.11.109.8.CIFSexports
/usr/StorageFile/etc/192.11.109.8.NFSexports
NAS server 304a would also store the following files containing the corresponding information of the peer NAS server, NAS server 304b:
/usr/StorageFile/etc/192.11.109.9.NASvolumes
/usr/StorageFile/etc/192.11.109.9.CIFSexports
/usr/StorageFile/etc/192.11.109.9.NFSexports
So, at periodic intervals (such as 10 seconds), NAS server 304a sends an "AYT:" message to peer NAS server 304b, and peer NAS server 304b responds with updates to the above described NAS files. If the NAS server cannot connect to the peer NAS server, the peer may be down, and fail-over may be necessary. If the NAS server cannot connect to the public IP address of the peer NAS server, it first checks to see if it can send a "ping" to the public IP address of the peer. If so, the NAS server may assume the NAS Protocol server on the peer NAS server has exited. The NAS server may accordingly record an error message. The error message may be displayed the next time a user sends the c40 directive to the NAS server, for example.
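A schematic Python version of this heartbeat loop is shown below. It sends a fixed-size "AYT:" request over TCP to the peer's public address and treats a connection failure as the trigger for the ping check and, ultimately, fail-over; the port number and the 256-byte, NULL-padded framing come from the NAS protocol description in section 10.0, while the helper names and the failure hooks are placeholders.

import socket
import time

NAS_PROTOCOL_PORT = 8173      # TCP port used by the NAS protocol (see section 10.0)

def send_request(peer_ip, text):
    """Send one 256-byte, NULL-padded NAS protocol request; return True on success."""
    packet = text.encode("ascii").ljust(256, b"\x00")
    try:
        with socket.create_connection((peer_ip, NAS_PROTOCOL_PORT), timeout=5) as s:
            s.sendall(packet)
            s.recv(256)       # first response packet; file transfers are omitted in this sketch
        return True
    except OSError:
        return False

def heartbeat_loop(peer_public_ip, interval=10):
    """Periodically check the peer NAS server; the failure path is only a placeholder."""
    while True:
        if not send_request(peer_public_ip, "AYT:"):
            # Cannot connect: a real implementation would ping the peer, sleep and
            # retry, and only then begin fail-over (flowchart 2800).
            break
        time.sleep(interval)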
If the NAS server is unable to connect to the public IP address of the peer NAS server, and is unable to "ping" the public IP address of the peer NAS server, the NAS server may attempt to contact each SAN server. If the NAS server cannot contact either SAN server, the NAS server may assume that something is wrong with its network interface card. In that event, the NAS server may sleep for some interval of time, such as 10 seconds, and then attempt to contact the NAS and SAN servers again. By sleeping for a period of time, fail-over due to temporary network outages may be avoided. After the second attempt, if the NAS server cannot contact its peer NAS server and the SAN servers, the NAS server may shut down NAS services. Specifically, the NAS server may execute the steps shown in flowchart 2700 of FIG. 27, and further described below:
In step 2702, export of file systems by NFS and CIFS is stopped.
In step 2704, all NAS file systems are unmounted.
In step 2706, the NAS server public IP address is shut down, and the boot-up IP address is re-assumed.
In step 2708, all LUNs are removed from the operating system.
In step 2710, the boot-up process described in the previous section is executed, and further operations described in the previous section regarding NAS server boot-up may be performed. If the NAS server is unable to connect to the public IP address of the peer
NAS server, but can contact a SAN server, it may assume the peer NAS server is down. In that event, the NAS server may sleep for a period of time (for example, 10 seconds). Sleeping for a period of time may aid in preventing fail-over from occurring during temporary network outages. After sleeping, the NAS server may re-attempt connecting to the peer NAS server. If, after the second attempt, the
NAS server is unable to connect with the peer NAS server, the NAS server may perform NAS fail-over. In an embodiment, the NAS server may execute the steps shown in flowchart 2800 of FIG. 28, and further described below:
In step 2802, the public IP address of the peer NAS server is assumed. In step 2804, a Gratuitous ARP request is issued, causing clients/hosts to update their IP-to-Ethernet mapping tables.
In step 2806, a list of the peer NAS server's NAS volumes/file systems is obtained from /usr/StorageFile/etc/Ipaddr.NASvolumes. "Ipaddr" is the public IP address of the peer NAS server. In step 2808, the file systems obtained in step 2806 are checked. In step 2810, the file systems obtained in step 2806 are mounted. In step 2812, the list of NFS exports is obtained from /usr/StorageFile/etc/Ipaddr.NFSexports, and is exported via NFS.
In step 2814, the NFS lock manager is stopped and re-started, causing clients/hosts to reclaim their locks.
In step 2816, the list of CIFS exports is obtained from /usr/StorageFile/etc/Ipaddr.CIFSexports, and is exported via CIFS.
In step 2818, the file /etc/smb.conf is modified to list its peer name as an alias for CIFS access. In step 2820, the heartbeat process is stopped. The heartbeat process is resumed when the peer comes back up, as described below.
When fail-over occurs, clients see a small period of time where the server seems inaccessible. That is because the NAS server must check every file system that was exported by the failed peer, mount it, and export it. However, after the NAS server completes the processing, the client sees the server as accessible again.
The client is not aware that a second server has assumed the identity of the failed server.
As mentioned in the section above relating to NAS server boot-up, a NAS server resumes its NAS functionality when it comes back up. After the NAS server boots, the NAS server checks to see if its public IP address is in use. For example, the NAS server may determine this by attempting to send a network message to its public IP address. If the NAS server's public address is in use, then fail-over has likely occurred. In that event, the NAS server may notify the peer NAS server that it has recovered. For example, the NAS server may send a message containing the following string to the peer NAS server:
I_AM_BACK:
When the peer NAS server receives this message, the peer NAS server may perform steps to return control of the original storage resources to the NAS server. For example, the peer NAS server may execute the steps shown in flowchart 2900 of FIG. 29, and further described below:
In step 2902, the public IP address of the NAS server is brought down.
In step 2904, file systems in Ipaddr.NFSexports are unexported. In step 2906, file systems in Ipaddr.CIFSexports are unexported.
In step 2908, file systems in Ipaddr.NASvolumes are unmounted.
In step 2910, the heartbeat to the NAS server is re-started.
In summary, failure of a NAS server results in the peer NAS server taking over all of the failed NAS server's resources. When the failed NAS server comes back up, it resumes control over its original resources.
10.0 Example NAS Protocol Messages
The NAS Protocol discussed above is a simple protocol that allows a SAN server to communicate with the NAS servers, and allows the NAS servers to communicate between themselves. FIG. 15 illustrates a NAS server 304 that includes a NAS protocol module 1504, according to an embodiment of the present invention. Embodiments for the NAS protocol module 1504 of the present invention are described as follows. These implementations are described herein for illustrative purposes, and are not limiting. In particular, the NAS protocol module as described herein can be achieved using any number of structural implementations, including hardware, firmware, software, or any combination thereof.
NAS protocol module 1504 generates a NAS protocol process. In an embodiment, the NAS protocol process binds to TCP port number 8173. The NAS protocol may use ASCII strings. To simplify session-layer issues, the first packet is 256 bytes. If the string in the packet is less than 256 bytes, the string may be NULL-terminated and the receiving process ignores the remainder of the 256 bytes. Messages in the NAS protocol are listed below. Further description of each message is presented elsewhere herein. In an embodiment, for all cases, a failure response consists of two messages. The first message is a 256-byte packet that contains the string "NAS:0:NumBytes", where NumBytes is the number of bytes in the second message. The second message contains a string describing the error. The responses listed below are example responses indicating successful completion.
Request: LUN:Enable:LunNumber:CreateFsFlag
Response: NAS:1:0

Request: LUN:Disable:LunNumber
Response: NAS:1:0

Request: LUN:ListVols:Type
Response: Message 1: NAS:1:NumBytes
          Message 2: NumBytes of data

Request: LUN:GetStats
Response: Message 1: NAS:1:NumBytes
          Message 2: NumBytes of data

Request: LUN:Setperm:FileSystem:Owner:Group:Permissions
Response: NAS:1:0

Request: NFS:Unexport:FileSystem
Response: NAS:1:0

Request: Message 1: NFS:Export:FileSystem:rwFlag:NumBytes
         Message 2: NumBytes-length string with comma-separated list of hosts
Response: NAS:1:0

Request: CIFS:Unexport:FileSystem
Response: NAS:1:0

Request: Message 1: CIFS:Export:FileSystem:rwFlag:NumBytes
         Message 2: NumBytes-length string with comma-separated list of users
Response: NAS:1:0
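As an illustration of the request framing and the success/failure responses listed above, the following Python sketch reads one 256-byte request, strips the NULL padding, and dispatches on the first two fields of the message. The dispatch table and the stub handlers are placeholders, not the implementation described herein.

def parse_request(packet):
    """Strip NULL padding from a 256-byte NAS protocol packet and split it into fields."""
    text = packet.rstrip(b"\x00").decode("ascii")
    return text.split(":")

def dispatch(packet):
    """Route a request to a handler based on its first two fields (sketch only)."""
    fields = parse_request(packet)
    key = (fields[0], fields[1])          # e.g. ("LUN", "Enable") or ("NFS", "Unexport")
    handlers = {
        ("LUN", "Enable"): lambda args: "NAS:1:0",
        ("LUN", "Disable"): lambda args: "NAS:1:0",
        ("NFS", "Unexport"): lambda args: "NAS:1:0",
        ("CIFS", "Unexport"): lambda args: "NAS:1:0",
        # LUN:ListVols, LUN:GetStats, LUN:Setperm, NFS/CIFS:Export would be added here.
    }
    handler = handlers.get(key)
    if handler is None:
        err = f"unknown request {':'.join(fields)}"
        return f"NAS:0:{len(err)}", err   # failure: NAS:0:NumBytes followed by the error text
    return handler(fields[2:]), None

request = "LUN:Enable:5:1".encode("ascii").ljust(256, b"\x00")
print(dispatch(request))                  # ('NAS:1:0', None)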
Additional NAS protocol messages, or modifications to the NAS protocol messages presented above, would be recognized by persons skilled in the relevant art(s) from the teachings herein, and are within the scope and spirit of the present invention.
11.0 Example Computer System
An example of a computer system 1040 is shown in FIG. 10. The computer system 1040 represents any single or multi-processor computer. In conjunction, single-threaded and multi-threaded applications can be used. Unified or distributed memory systems can be used. Computer system 1040, or portions thereof, may be used to implement the present invention. For example, each of the SAN servers and NAS servers of the present invention may comprise software running on a computer system such as computer system 1040.
In one example, elements of the present invention may be implemented in a multi-platform (platform independent) programming language such as JAVA 1.1, programming language/structured query language (PL/SQL), hyper-text mark-up language (HTML), practical extraction report language (PERL), common gateway interface/structured query language (CGI/SQL) or the like. Java™-enabled and JavaScript™-enabled browsers are used, such as, Netscape™, HotJava™, and Microsoft™ Explorer™ browsers. Active content Web pages can be used. Such active content Web pages can include Java™ applets or ActiveX™ controls, or any other active content technology developed now or in the future.
The present invention, however, is not intended to be limited to Java™, JavaScript™, or their enabled browsers, and can be implemented in any programming language and browser, developed now or in the future, as would be apparent to a person skilled in the art given this description. In another example, the present invention may be implemented using a high-level programming language (e.g., C++) and applications written for the Microsoft Windows™ environment. It will be apparent to persons skilled in the relevant art(s) how to implement the invention in alternative embodiments from the teachings herein. Computer system 1040 includes one or more processors, such as processor
1044. One or more processors 1044 can execute software implementing routines described above, such as shown in flowchart 1200. Each processor 1044 is connected to a communication infrastructure 1042 (e.g., a communications bus, cross-bar, or network). Various software embodiments are described in terms of this exemplary computer system. After reading this description, it will become apparent to a person skilled in the relevant art how to implement the invention using other computer systems and/or computer architectures. For example, SAN server 302 and/or NAS server 304 may include one or more of processor 1044.
Computer system 1040 can include a display interface 1002 that forwards graphics, text, and other data from the communication infrastructure 1042 (or from a frame buffer not shown) for display on the display unit 1030. For example, administrative interface 412 may include a display unit 1030 that displays GUI 1200. The display unit 1030 may be included in the structure of storage appliance 210, or may be separate. For instance, GUI communication link 426 may be included in display interface 1002. Display interface 1002 may include a network connection, including a LAN, WAN, or the Internet, such that GUI 1200 may be viewed remotely from SAN server 302.
Computer system 1040 also includes a main memory 1046, preferably random access memory (RAM), and can also include a secondary memory 1048. The secondary memory 1048 can include, for example, a hard disk drive 1050 and/or a removable storage drive 1052, representing a floppy disk drive, a magnetic tape drive, an optical disk drive, etc. The removable storage drive 1052 reads from and/or writes to a removable storage unit 1054 in a well known manner. Removable storage unit 1054 represents a floppy disk, magnetic tape, optical disk, etc., which is read by and written to by removable storage drive 1052.
As will be appreciated, the removable storage unit 1054 includes a computer usable storage medium having stored therein computer software and/or data.
In alternative embodiments, secondary memory 1048 may include other similar means for allowing computer programs or other instructions to be loaded into computer system 1040. Such means can include, for example, a removable storage unit 1062 and an interface 1060. Examples can include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM, or PROM) and associated socket, and other removable storage units 1062 and interfaces 1060 which allow software and data to be transferred from the removable storage unit 1062 to computer system 1040.
Computer system 1040 can also include a communications interface 1064. For example, first network interface 406, second network interface 402, and SAN server interface 408 shown in FIG. 4, and first network interface 508 and second network interface 502 shown in FIG. 5, may include one or more aspects of communications interface 1064. Communications interface 1064 allows software and data to be transferred between computer system 1040 and external devices via communications path 1066. Examples of communications interface 1064 can include a modem, a network interface (such as Ethernet card), a communications port, interfaces described above, etc. Software and data transferred via communications interface 1064 are in the form of signals which can be electronic, electromagnetic, optical or other signals capable of being received by communications interface 1064, via communications path 1066. Note that communications interface 1064 provides a means by which computer system 1040 can interface to a network such as the Internet.
The present invention can be implemented using software running (that is, executing) in an environment similar to that described above with respect to FIG. 8. In this document, the term "computer program product" is used to generally refer to removable storage unit 1054, a hard disk installed in hard disk drive 1050, or a carrier wave carrying software over a communication path 1066
(wireless link or cable) to communication interface 1064. A computer useable medium can include magnetic media, optical media, or other recordable media, or media that transmits a carrier wave or other signal. These computer program products are means for providing software to computer system 1040. Computer programs (also called computer control logic) are stored in main memory 1046 and/or secondary memory 1048. Computer programs can also be received via communications interface 1064. Such computer programs, when executed, enable the computer system 1040 to perform the features of the present invention as discussed herein. In particular, the computer programs, when executed, enable the processor 1044 to perform features of the present invention.
Accordingly, such computer programs represent controllers of the computer system 1040.
The present invention can be implemented as control logic in software, firmware, hardware or any combination thereof. In an embodiment where the invention is implemented using software, the software may be stored in a computer program product and loaded into computer system 1040 using removable storage drive 1052, hard disk drive 1050, or interface 1060. Alternatively, the computer program product may be downloaded to computer system 1040 over communications path 1066. The control logic (software), when executed by the one or more processors 1044, causes the processor(s) 1044 to perform functions of the invention as described herein.
In another embodiment, the invention is implemented primarily in firmware and/or hardware using, for example, hardware components such as application specific integrated circuits (ASICs). Implementation of a hardware state machine so as to perform the functions described herein will be apparent to persons skilled in the relevant art(s) from the teachings herein.
12.0 Conclusion
While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the invention. Thus, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims

What Is Claimed Is:
1. A method for interfacing a storage area network (SAN) with a first data communication network, wherein one or more hosts coupled to the first data communication network can access data stored in one or more of a plurality of storage devices in the SAN, wherein the one or more hosts access one or more of the plurality of storage devices as network attached storage (NAS), comprising the steps of: coupling a SAN server to a SAN; coupling a NAS server to the SAN server through a second data communication network; coupling the NAS server to the first data communication network; allocating a portion of at least one of the plurality of storage devices from the SAN server to the NAS server; configuring the allocated portion as NAS storage in the NAS server; exporting the configured portion from the NAS server to be accessible to the one or more hosts coupled to the first data communication network.
2. The method of claim 1 , wherein the SAN server is configured to allocate storage from the SAN to at least one host attached to the second data communication network, further comprising the step of: coupling the SAN server to the second data communication network.
3. The method of claim 2, wherein said allocating step comprises the step of: viewing the NAS server from the SAN server as a host attached to the second data communication network.
4. The method of claim 3, wherein said allocating step further comprises the step of: allocating the portion of at least one of the plurality of storage devices from the SAN server to the NAS server in the same manner as the portion would be allocated from the SAN server to a host attached to the second data communication network.
5. The method of claim 1, further comprising the steps of: coupling an administrative interface to the SAN server; and receiving a storage allocation directive from the administrative interface with the SAN server.
6. The method of claim 5 , wherein said allocating step comprises the step of: sending a NAS protocol storage allocation message from the SAN server to the NAS server.
7. The method of claim 6, further comprising the step of: sending a response from the NAS server to the SAN server that indicates whether said configuring step was successful.
8. The method of claim 1, further comprising the steps of: deallocating a second virtual NAS storage device to form a deallocated storage portion; unexporting the virtual NAS storage device; and deconfiguring the deallocated storage portion.
9. The method of claim 8, further comprising the step of: receiving a storage deallocation directive from an administrative interface with the SAN server.
10. The method of claim 9, wherein said deallocating step comprises the step of: sending a NAS protocol storage deallocation message from the SAN server to the NAS server.
11. The method of claim 10, further comprising the step of: sending a response from the NAS server to the SAN server that indicates whether said deconfiguring step was successful.
12. The method of claim 1, further comprising the step of: coupling a second NAS server in parallel with the first NAS server.
13. The method of claim 12, further comprising the steps of: determining the failure of the first NAS server; and performing fail-over of storage resources from the first NAS server to the second NAS server.
14. The method of claim 13 , wherein said determining step comprises the step of: monitoring a heartbeat signal sent from the first NAS server at the second NAS server.
15. The method of claim 13, further comprising the steps of: notifying the second NAS server that the first NAS server has recovered; and returning control of the storage resources to the first NAS server.
16. A method for managing the allocation of storage from a storage area network (SAN) as network attached storage (NAS) to a data communication network, comprising the steps of:
(a) receiving a storage management directive from a graphical user interface; (b) sending a message corresponding to the received storage management directive to a NAS server; and
(c) receiving a response corresponding to the sent message from the NAS server.
17. The method of claim 16, further comprising the steps of:
(d) providing a command line interface (CLI) at the graphical user interface; and
(e) allowing a user to input the storage directive as a CLI command into the CLI.
18. The method of claim 16, wherein said message is a NAS protocol message, wherein step (b) comprises the step of: sending a NAS protocol message corresponding to the received storage directive to a NAS server.
19. The method of claim 16, wherein step (a) comprises the step of: receiving a storage management directive from the graphical user interface that is any one of the following storage management directives: storage allocation, list file systems, export file system, unexport file system, set permissions, obtain statistics, or configure NAS server.
20. An apparatus for accessing a plurality of storage devices in a storage area network (SAN) as network attached storage (NAS) in a data communication network, comprising: a SAN server that includes: a first interface configured to be coupled to the SAN; and a second interface that is coupled to a first data communication network; and a NAS server that includes: a third interface configured to be coupled to a second data communication network; and a fourth interface that is coupled to said first data communication network; wherein said SAN server allocates a first portion of the plurality of storage devices in the SAN to be accessible through said second interface to at least one first host coupled to said first data communication network; wherein said SAN server allocates a second portion of the plurality of storage devices in the SAN to said NAS server; and wherein said NAS server configures access to said second portion of the plurality of storage devices to at least one second host coupled to said second data communication network.
21. The apparatus of claim 20, wherein said first portion of the plurality of storage devices in the SAN includes a first at least one physical storage device, wherein said SAN server further includes: a storage mapper that maps said first at least one physical storage device to at least one first logical storage device that is accessible to said at least one first host; wherein said second portion of the plurality of storage devices in the SAN includes a second at least one physical storage device; and wherein said storage mapper maps said second at least one physical storage device to at least one second logical storage device that is allocated to said NAS server.
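Claim 21 adds a storage mapper inside the SAN server that presents physical SAN devices to block-level hosts, and to the NAS server, as logical storage devices. A minimal sketch of such a mapping table, with the device identifiers assumed for illustration:

```python
class StorageMapper:
    """Maps physical SAN storage devices onto the logical devices exposed to hosts
    and to the NAS server (illustrative only; device identifiers are assumptions)."""

    def __init__(self):
        self._map = {}   # logical device name -> list of physical device names

    def map(self, logical: str, physical: list[str]) -> None:
        self._map[logical] = physical

    def resolve(self, logical: str) -> list[str]:
        """Return the physical devices backing a logical device."""
        return self._map[logical]

mapper = StorageMapper()
mapper.map("lun0", ["disk_array_1:slice_3"])          # first portion, for a block-level host
mapper.map("nas_vol0", ["disk_array_2:slice_1",
                        "disk_array_2:slice_2"])      # second portion, allocated to the NAS server
```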
22. The apparatus of claim 20, wherein each of said first interface, said second interface, and said fourth interface includes a fibre channel or SCSI interface, and wherein said third interface includes an Ethernet adaptor.
23. The apparatus of claim 20, further comprising a storage appliance, wherein said SAN server and said NAS server are included in said storage appliance.
24. The apparatus of claim 20, wherein said NAS server exports at least a portion of said second portion of said plurality of storage devices through said third interface to said second data communication network using network file system (NFS) protocol.
25. The apparatus of claim 20, wherein said NAS server exports at least a portion of said second portion of said plurality of storage devices through said third interface to said second data communication network using common Internet file system (CIFS) protocol.
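Claims 24 and 25 have the NAS server export its allocated storage to the second data communication network over NFS and CIFS respectively. On a Unix-like NAS server this would typically amount to generating an /etc/exports entry and a Samba (smb.conf) share stanza; the paths, host names, and options below are assumptions, not taken from the patent:

```python
def nfs_export_line(path: str, clients: list[str]) -> str:
    """Build an /etc/exports entry exporting `path` read-write to the given hosts (NFS, claim 24)."""
    return f"{path} " + " ".join(f"{c}(rw,sync)" for c in clients)

def cifs_share_stanza(name: str, path: str) -> str:
    """Build a Samba share definition for smb.conf (CIFS, claim 25)."""
    return (f"[{name}]\n"
            f"    path = {path}\n"
            f"    read only = no\n"
            f"    browseable = yes\n")

print(nfs_export_line("/nas/vol0", ["host_c", "host_d"]))
print(cifs_share_stanza("vol0", "/nas/vol0"))
```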
26. The apparatus of claim 20, further comprising an administrative interface coupled to said SAN server.
27. The apparatus of claim 26, wherein said administrative interface includes a graphical user interface.
28. The apparatus of claim 20, further comprising a fibre channel switch coupled between said second interface of said SAN server and said first data communication network.
29. The apparatus of claim 20, further comprising a fibre channel switch coupled between said first interface of said SAN server and said SAN.
30. A storage appliance for accessing a plurality of storage devices in a storage area network (SAN) as network attached storage (NAS) in a data communication network, comprising: a first SAN server configured to be coupled to the plurality of storage devices in the SAN via a first data communication network, wherein said first SAN server is configured to be coupled to a second data communication network; a second SAN server configured to be coupled to the plurality of storage devices in the SAN via a third data communication network, wherein said second
SAN server is configured to be coupled to a fourth data communication network; a first NAS server configured to be coupled to a fifth data communication network, wherein said first NAS server is coupled to said second and said fourth data communication networks; and a second NAS server configured to be coupled to said fifth data communication network, wherein said second NAS server is coupled to said second and said fourth data communication networks; wherein said first SAN server allocates a first portion of the plurality of storage devices in the SAN to be accessible to at least one first host coupled to said second data communication network; wherein said first SAN server allocates a second portion of the plurality of storage devices in the SAN to said first NAS server; wherein said first NAS server configures access to said second portion of the plurality of storage devices to at least one second host coupled to said fifth data communication network; wherein said second NAS server assumes the configuring of access to said second portion of the plurality of storage devices by said first NAS server during failure of said first NAS server; and wherein said second SAN server assumes allocation of said second portion of the plurality of storage devices by said first SAN server during failure of said first SAN server.
31. The apparatus of claim 30, further comprising: a first fibre channel switch coupled between said first SAN server and said second data communication network; and a second fibre channel switch coupled between said second SAN server and said fourth data communication network.
32. The apparatus of claim 30, further comprising: a first fibre channel switch coupled between said first SAN server and said first data communication network; and a second fibre channel switch coupled between said second SAN server and said third data communication network.
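Claims 30-32 pair both tiers inside one storage appliance: each NAS server is reachable over both internal data communication networks, so the surviving NAS server can assume file service when its peer fails, and the surviving SAN server can assume allocation when its peer fails. A compressed sketch of that takeover logic, in which the health checks and role-transfer methods are assumptions rather than anything defined by the patent:

```python
def supervise(pairs):
    """Tiny supervisor loop over (primary, standby) server pairs in the appliance.
    is_alive(), assume_role_of(), is_substituting_for(), and return_role_to()
    are assumed methods, not defined by the patent."""
    for primary, standby in pairs:
        if not primary.is_alive():
            standby.assume_role_of(primary)   # NAS pair: re-export portion_2; SAN pair: re-allocate it
        elif standby.is_substituting_for(primary):
            standby.return_role_to(primary)   # fail back once the primary has recovered

# Example wiring for the appliance of claim 30:
# supervise([(san_server_1, san_server_2), (nas_server_1, nas_server_2)])
```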
33. A system for interfacing a storage area network (SAN) with a first data communication network, wherein one or more hosts coupled to the first data communication network can access data stored in one or more of a plurality of storage devices in the SAN, wherein the one or more hosts access one or more of the plurality of storage devices as network attached storage (NAS), comprising: means for coupling a SAN server to a SAN; means for coupling a NAS server to the SAN server through a second data communication network; means for coupling the NAS server to the first data communication network; means for allocating a portion of at least one of the plurality of storage devices from the SAN server to the NAS server; means for configuring the allocated portion as NAS storage in the NAS server; and means for exporting the configured portion from the NAS server to be accessible to the one or more hosts coupled to the first data communication network.
34. The system of claim 33, wherein the SAN server is configured to allocate storage from the SAN to at least one host attached to the second data communication network, further comprising: means for coupling the SAN server to the second data communication network.
35. The system of claim 34, wherein said means for allocating comprises: means for viewing the NAS server from the SAN server as a host attached to the second data communication network.
36. The system of claim 35, wherein said means for allocating further comprises: means for allocating the portion of at least one of the plurality of storage devices from the SAN server to the NAS server in the same manner as the portion would be allocated from the SAN server to a host attached to the second data communication network.
37. The system of claim 33, further comprising: means for coupling an administrative interface to the SAN server; and means for receiving a storage allocation directive from the administrative interface with the SAN server.
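Claims 33-37 restate the overall flow in means-plus-function form: the SAN server treats the NAS server as just another host on the block-level network, allocates it a portion of the SAN in the same manner as for any other host, and the NAS server then configures that portion as NAS storage and exports it to the file-level network. A compact end-to-end sketch under those assumptions (all object names and methods are illustrative, not the patent's API):

```python
def provision_nas_storage(san_server, nas_server, size_gb, export_name, clients):
    """End-to-end provisioning: allocate on the SAN server, configure and export on the NAS server."""
    # The SAN server views the NAS server as an ordinary host on the block-level
    # network, so allocation uses the same path as for any other host (claims 35-36).
    lun = san_server.allocate(host=nas_server.host_id, size_gb=size_gb)

    # The NAS server lays a file system over the allocated block device ("means for
    # configuring") and exports it to the hosts on the file-level network.
    volume = nas_server.make_filesystem(lun)
    nas_server.export(volume, name=export_name, clients=clients)
    return volume
```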
PCT/US2001/005385 2001-02-20 2001-02-21 System and method for accessing a storage area network as network attached storage WO2002067529A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU2001241588A AU2001241588A1 (en) 2001-02-20 2001-02-21 System and method for accessing a storage area network as network attached storage
EP01912847A EP1382176A2 (en) 2001-02-20 2001-02-21 System and method for accessing a storage area network as network attached storage

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/785,473 US6606690B2 (en) 2001-02-20 2001-02-20 System and method for accessing a storage area network as network attached storage
US09/785,473 2001-02-20

Publications (2)

Publication Number Publication Date
WO2002067529A2 true WO2002067529A2 (en) 2002-08-29
WO2002067529A3 WO2002067529A3 (en) 2003-10-16

Family

ID=25135617

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/005385 WO2002067529A2 (en) 2001-02-20 2001-02-21 System and method for accessing a storage area network as network attached storage

Country Status (4)

Country Link
US (1) US6606690B2 (en)
EP (1) EP1382176A2 (en)
AU (1) AU2001241588A1 (en)
WO (1) WO2002067529A2 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1543424A2 (en) * 2002-08-09 2005-06-22 Network Appliance, Inc. Storage virtualization by layering virtual disk objects on a file system
EP1761853A2 (en) * 2004-06-28 2007-03-14 Emc Corporation Low cost flexible network accessed storage architecture
WO2007103533A1 (en) * 2006-03-08 2007-09-13 Omneon Video Networks Gateway server
WO2008070802A3 (en) * 2006-12-06 2008-10-09 David Flynn Apparatus, system, and method for an in-server storage area network
US20130173732A1 (en) * 2011-04-22 2013-07-04 Kwang Sik Seo Network connection system for sharing data among independent networks
US9407516B2 (en) 2011-01-10 2016-08-02 Storone Ltd. Large scale storage system
US9448900B2 (en) 2012-06-25 2016-09-20 Storone Ltd. System and method for datacenters disaster recovery
US9612851B2 (en) 2013-03-21 2017-04-04 Storone Ltd. Deploying data-path-related plug-ins
US11960412B2 (en) 2022-10-19 2024-04-16 Unification Technologies Llc Systems and methods for identifying storage resources that are not in use

Families Citing this family (325)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6735676B1 (en) * 1996-09-02 2004-05-11 Hitachi, Ltd. Method and system for sharing storing device via mutually different interfaces
US6857076B1 (en) 1999-03-26 2005-02-15 Micron Technology, Inc. Data security for digital data storage
US7096370B1 (en) * 1999-03-26 2006-08-22 Micron Technology, Inc. Data security for digital data storage
US6877044B2 (en) * 2000-02-10 2005-04-05 Vicom Systems, Inc. Distributed storage management platform architecture
US6772270B1 (en) 2000-02-10 2004-08-03 Vicom Systems, Inc. Multi-port fibre channel controller
US7222176B1 (en) * 2000-08-28 2007-05-22 Datacore Software Corporation Apparatus and method for using storage domains for controlling data in storage area networks
US6868417B2 (en) * 2000-12-18 2005-03-15 Spinnaker Networks, Inc. Mechanism for handling file level and block level remote file accesses using the same server
US6968463B2 (en) 2001-01-17 2005-11-22 Hewlett-Packard Development Company, L.P. System for controlling access to resources in a storage area network
US20040233910A1 (en) * 2001-02-23 2004-11-25 Wen-Shyen Chen Storage area network using a data communication protocol
JP2004523837A (en) * 2001-02-28 2004-08-05 クロスローズ・システムズ・インコーポレイテッド Method and system for duplicating a data flow within a SCSI extended copy command
US20020129216A1 (en) * 2001-03-06 2002-09-12 Kevin Collins Apparatus and method for configuring available storage capacity on a network as a logical device
US20020133539A1 (en) * 2001-03-14 2002-09-19 Imation Corp. Dynamic logical storage volumes
AU2002250559A1 (en) * 2001-03-22 2002-10-08 United Video Properties, Inc. Personal video recorder systems and methods
US7526795B2 (en) * 2001-03-27 2009-04-28 Micron Technology, Inc. Data security for digital data storage
US6915524B2 (en) * 2001-04-06 2005-07-05 International Business Machines Corporation Method for controlling multiple storage devices from a single software entity
US6779063B2 (en) * 2001-04-09 2004-08-17 Hitachi, Ltd. Direct access storage system having plural interfaces which permit receipt of block and file I/O requests
US7171453B2 (en) * 2001-04-19 2007-01-30 Hitachi, Ltd. Virtual private volume method and system
JP4484396B2 (en) * 2001-05-18 2010-06-16 株式会社日立製作所 Turbine blade
US6915397B2 (en) * 2001-06-01 2005-07-05 Hewlett-Packard Development Company, L.P. System and method for generating point in time storage copy
US20020188697A1 (en) * 2001-06-08 2002-12-12 O'connor Michael A. A method of allocating storage in a storage area network
EP1407342A2 (en) * 2001-06-14 2004-04-14 Cable & Wireless Internet Services, Inc. Secured shared storage architecture
US6714953B2 (en) * 2001-06-21 2004-03-30 International Business Machines Corporation System and method for managing file export information
US7343410B2 (en) * 2001-06-28 2008-03-11 Finisar Corporation Automated creation of application data paths in storage area networks
EP1435049B1 (en) * 2001-07-09 2013-06-19 Savvis, Inc. Methods and systems for shared storage virtualization
US20030018657A1 (en) * 2001-07-18 2003-01-23 Imation Corp. Backup of data on a network
JP4156817B2 (en) * 2001-07-27 2008-09-24 株式会社日立製作所 Storage system
US7472231B1 (en) * 2001-09-07 2008-12-30 Netapp, Inc. Storage area network data cache
US7185062B2 (en) * 2001-09-28 2007-02-27 Emc Corporation Switch-based storage services
US7243229B2 (en) * 2001-10-02 2007-07-10 Hitachi, Ltd. Exclusive access control apparatus and method
US20030093509A1 (en) * 2001-10-05 2003-05-15 Li Raymond M. Storage area network methods and apparatus with coordinated updating of topology representation
US7287063B2 (en) * 2001-10-05 2007-10-23 International Business Machines Corporation Storage area network methods and apparatus using event notifications with data
US8060587B2 (en) * 2001-10-05 2011-11-15 International Business Machines Corporation Methods and apparatus for launching device specific applications on storage area network components
US7080140B2 (en) * 2001-10-05 2006-07-18 International Business Machines Corporation Storage area network methods and apparatus for validating data from multiple sources
US6931487B2 (en) * 2001-10-22 2005-08-16 Hewlett-Packard Development Company L.P. High performance multi-controller processing
JP2003141054A (en) * 2001-11-07 2003-05-16 Hitachi Ltd Storage management computer
JP2003162439A (en) * 2001-11-22 2003-06-06 Hitachi Ltd Storage system and control method therefor
JP2003208347A (en) * 2002-01-16 2003-07-25 Fujitsu Ltd Access controller, access control program, host device and host control program
US7349992B2 (en) * 2002-01-24 2008-03-25 Emulex Design & Manufacturing Corporation System for communication with a storage area network
US20030154314A1 (en) * 2002-02-08 2003-08-14 I/O Integrity, Inc. Redirecting local disk traffic to network attached storage
JP2003241903A (en) * 2002-02-14 2003-08-29 Hitachi Ltd Storage control device, storage system and control method thereof
US7360122B2 (en) 2002-02-22 2008-04-15 Bea Systems, Inc. Method for initiating a sub-system health check
US7233989B2 (en) * 2002-02-22 2007-06-19 Bea Systems, Inc. Method for automatic monitoring of managed server health
US20030172069A1 (en) * 2002-03-08 2003-09-11 Yasufumi Uchiyama Access management server, disk array system, and access management method thereof
US6954839B2 (en) * 2002-03-13 2005-10-11 Hitachi, Ltd. Computer system
US7313557B1 (en) 2002-03-15 2007-12-25 Network Appliance, Inc. Multi-protocol lock manager
US6993539B2 (en) 2002-03-19 2006-01-31 Network Appliance, Inc. System and method for determining changes in two snapshots and for transmitting changes to destination snapshot
US7886298B2 (en) * 2002-03-26 2011-02-08 Hewlett-Packard Development Company, L.P. Data transfer protocol for data replication between multiple pairs of storage controllers on a san fabric
US6947981B2 (en) * 2002-03-26 2005-09-20 Hewlett-Packard Development Company, L.P. Flexible data replication mechanism
US7032131B2 (en) * 2002-03-26 2006-04-18 Hewlett-Packard Development Company, L.P. System and method for ensuring merge completion in a storage area network
US7007042B2 (en) * 2002-03-28 2006-02-28 Hewlett-Packard Development Company, L.P. System and method for automatic site failover in a storage area network
US8051197B2 (en) * 2002-03-29 2011-11-01 Brocade Communications Systems, Inc. Network congestion management systems and methods
JP2003316671A (en) * 2002-04-19 2003-11-07 Hitachi Ltd Method for displaying configuration of storage network
US7433952B1 (en) * 2002-04-22 2008-10-07 Cisco Technology, Inc. System and method for interconnecting a storage area network
US7165258B1 (en) * 2002-04-22 2007-01-16 Cisco Technology, Inc. SCSI-based storage area network having a SCSI router that routes traffic between SCSI and IP networks
JP3957278B2 (en) * 2002-04-23 2007-08-15 株式会社日立製作所 File transfer method and system
JP2003316616A (en) * 2002-04-24 2003-11-07 Hitachi Ltd Computer system
US7398326B2 (en) * 2002-04-25 2008-07-08 International Business Machines Corporation Methods for management of mixed protocol storage area networks
JP4704659B2 (en) 2002-04-26 2011-06-15 株式会社日立製作所 Storage system control method and storage control device
US8543657B2 (en) * 2002-05-03 2013-09-24 Samsung Electronics Co., Ltd Data communication system and method using a wireless terminal
US6947939B2 (en) * 2002-05-08 2005-09-20 Hitachi, Ltd. System and methods to manage wide storage area network
JP2003330782A (en) 2002-05-10 2003-11-21 Hitachi Ltd Computer system
US6785794B2 (en) * 2002-05-17 2004-08-31 International Business Machines Corporation Differentiated storage resource provisioning
US7383330B2 (en) * 2002-05-24 2008-06-03 Emc Corporation Method for mapping a network fabric
JP2003345631A (en) * 2002-05-28 2003-12-05 Hitachi Ltd Computer system and allocating method for storage area
JP4100968B2 (en) * 2002-06-06 2008-06-11 株式会社日立製作所 Data mapping management device
US7003527B1 (en) * 2002-06-27 2006-02-21 Emc Corporation Methods and apparatus for managing devices within storage area networks
US20040210677A1 (en) * 2002-06-28 2004-10-21 Vinodh Ravindran Apparatus and method for mirroring in a storage processing device
US20040148376A1 (en) * 2002-06-28 2004-07-29 Brocade Communications Systems, Inc. Storage area network processing device
US8200871B2 (en) * 2002-06-28 2012-06-12 Brocade Communications Systems, Inc. Systems and methods for scalable distributed storage processing
US7418702B2 (en) * 2002-08-06 2008-08-26 Sheng (Ted) Tai Tsao Concurrent web based multi-task support for control management system
US7379990B2 (en) * 2002-08-12 2008-05-27 Tsao Sheng Ted Tai Distributed virtual SAN
US7873700B2 (en) * 2002-08-09 2011-01-18 Netapp, Inc. Multi-protocol storage appliance that provides integrated support for file and block access protocols
US7711539B1 (en) * 2002-08-12 2010-05-04 Netapp, Inc. System and method for emulating SCSI reservations using network file access protocols
US7103638B1 (en) * 2002-09-04 2006-09-05 Veritas Operating Corporation Mechanism to re-export NFS client mount points from nodes in a cluster
US7397768B1 (en) 2002-09-11 2008-07-08 Qlogic, Corporation Zone management in a multi-module fibre channel switch
JP2004110367A (en) 2002-09-18 2004-04-08 Hitachi Ltd Storage system control method, storage control device, and storage system
US7475124B2 (en) * 2002-09-25 2009-01-06 Emc Corporation Network block services for client access of network-attached data storage in an IP network
US7340486B1 (en) * 2002-10-10 2008-03-04 Network Appliance, Inc. System and method for file system snapshot of a virtual logical disk
JP2004171206A (en) * 2002-11-19 2004-06-17 Hitachi Ltd Storage system
US7263593B2 (en) 2002-11-25 2007-08-28 Hitachi, Ltd. Virtualization controller and data transfer control method
US7191225B1 (en) * 2002-11-27 2007-03-13 Veritas Operating Corporation Mechanism to provide direct multi-node file system access to files on a single-node storage stack
US7443845B2 (en) * 2002-12-06 2008-10-28 Cisco Technology, Inc. Apparatus and method for a lightweight, reliable, packet-based transport protocol
US7475142B2 (en) * 2002-12-06 2009-01-06 Cisco Technology, Inc. CIFS for scalable NAS architecture
US20040139167A1 (en) * 2002-12-06 2004-07-15 Andiamo Systems Inc., A Delaware Corporation Apparatus and method for a scalable network attach storage system
EP1573962B1 (en) * 2002-12-20 2011-03-16 International Business Machines Corporation Secure system and method for san management in a non-trusted server environment
US7069307B1 (en) 2002-12-20 2006-06-27 Network Appliance, Inc. System and method for inband management of a virtual disk
JP4152755B2 (en) * 2003-01-10 2008-09-17 富士通株式会社 Server device having a function of switching between old and new program modules
JP2004220216A (en) * 2003-01-14 2004-08-05 Hitachi Ltd San/nas integrated storage device
JP2004220450A (en) * 2003-01-16 2004-08-05 Hitachi Ltd Storage device, its introduction method and its introduction program
JP4330889B2 (en) 2003-01-20 2009-09-16 株式会社日立製作所 Method for installing software in storage device control apparatus, control method for storage device control apparatus, and storage device control apparatus
JP2004227098A (en) * 2003-01-20 2004-08-12 Hitachi Ltd Control method of storage device controller and storage device controller
JP4567293B2 (en) * 2003-01-21 2010-10-20 株式会社日立製作所 file server
JP4237515B2 (en) * 2003-02-07 2009-03-11 株式会社日立グローバルストレージテクノロジーズ Network storage virtualization method and network storage system
JP4226350B2 (en) * 2003-02-17 2009-02-18 株式会社日立製作所 Data migration method
JP4651913B2 (en) 2003-02-17 2011-03-16 株式会社日立製作所 Storage system
US20040177198A1 (en) * 2003-02-18 2004-09-09 Hewlett-Packard Development Company, L.P. High speed multiple ported bus interface expander control system
JP2004280283A (en) 2003-03-13 2004-10-07 Hitachi Ltd Distributed file system, distributed file system server, and access method to distributed file system
JP4320195B2 (en) 2003-03-19 2009-08-26 株式会社日立製作所 File storage service system, file management apparatus, file management method, ID designation type NAS server, and file reading method
US7460528B1 (en) 2003-04-15 2008-12-02 Brocade Communications Systems, Inc. Processing data packets at a storage service module of a switch
US7382776B1 (en) 2003-04-15 2008-06-03 Brocade Communication Systems, Inc. Performing block storage virtualization at a switch
US7380163B2 (en) * 2003-04-23 2008-05-27 Dot Hill Systems Corporation Apparatus and method for deterministically performing active-active failover of redundant servers in response to a heartbeat link failure
US7293152B1 (en) * 2003-04-23 2007-11-06 Network Appliance, Inc. Consistent logical naming of initiator groups
US7587422B2 (en) * 2003-04-24 2009-09-08 Neopath Networks, Inc. Transparent file replication using namespace replication
US7346664B2 (en) * 2003-04-24 2008-03-18 Neopath Networks, Inc. Transparent file migration using namespace replication
US7831641B2 (en) * 2003-04-24 2010-11-09 Neopath Networks, Inc. Large file support for a network file server
US7072917B2 (en) * 2003-04-24 2006-07-04 Neopath Networks, Inc. Extended storage capacity for a network file server
US7181439B1 (en) * 2003-04-25 2007-02-20 Network Appliance, Inc. System and method for transparently accessing a virtual disk using a file-based protocol
JP2004348464A (en) * 2003-05-22 2004-12-09 Hitachi Ltd Storage device and communication signal shaping circuit
JP4060235B2 (en) * 2003-05-22 2008-03-12 株式会社日立製作所 Disk array device and disk array device control method
US7093120B2 (en) * 2003-05-29 2006-08-15 International Business Machines Corporation Method, apparatus, and program for performing boot, maintenance, or install operations on a storage area network
US7885256B1 (en) * 2003-05-30 2011-02-08 Symantec Operating Corporation SAN fabric discovery
US7876703B2 (en) * 2003-06-13 2011-01-25 International Business Machines Corporation System and method for enabling connection among devices in a network
JP4433372B2 (en) 2003-06-18 2010-03-17 株式会社日立製作所 Data access system and method
JP2005018193A (en) 2003-06-24 2005-01-20 Hitachi Ltd Interface command control method for disk device, and computer system
US7899885B2 (en) * 2003-06-27 2011-03-01 At&T Intellectual Property I, Lp Business enterprise backup and recovery system and method
US7451208B1 (en) 2003-06-28 2008-11-11 Cisco Technology, Inc. Systems and methods for network address failover
JP2005031929A (en) * 2003-07-11 2005-02-03 Hitachi Ltd Management server for assigning storage area to server, storage device system, and program
US7430175B2 (en) 2003-07-21 2008-09-30 Qlogic, Corporation Method and system for managing traffic in fibre channel systems
US7894348B2 (en) 2003-07-21 2011-02-22 Qlogic, Corporation Method and system for congestion control in a fibre channel switch
US7792115B2 (en) 2003-07-21 2010-09-07 Qlogic, Corporation Method and system for routing and filtering network data packets in fibre channel systems
US7646767B2 (en) 2003-07-21 2010-01-12 Qlogic, Corporation Method and system for programmable data dependant network routing
US7406092B2 (en) 2003-07-21 2008-07-29 Qlogic, Corporation Programmable pseudo virtual lanes for fibre channel systems
US7684401B2 (en) 2003-07-21 2010-03-23 Qlogic, Corporation Method and system for using extended fabric features with fibre channel switch elements
US7421658B1 (en) * 2003-07-30 2008-09-02 Oracle International Corporation Method and system for providing a graphical user interface for a script session
JP4297747B2 (en) * 2003-08-06 2009-07-15 株式会社日立製作所 Storage device
JP2005071196A (en) * 2003-08-27 2005-03-17 Hitachi Ltd Disk array apparatus and control method of its fault information
WO2005029251A2 (en) * 2003-09-15 2005-03-31 Neopath Networks, Inc. Enabling proxy services using referral mechanisms
US7219201B2 (en) * 2003-09-17 2007-05-15 Hitachi, Ltd. Remote storage disk control device and method for controlling the same
JP4598387B2 (en) * 2003-09-17 2010-12-15 株式会社日立製作所 Storage system
JP4307202B2 (en) 2003-09-29 2009-08-05 株式会社日立製作所 Storage system and storage control device
TWI227613B (en) * 2003-09-30 2005-02-01 Icp Electronics Inc Method of storing data access records in network communication device
US8832842B1 (en) * 2003-10-07 2014-09-09 Oracle America, Inc. Storage area network external security device
US20050081099A1 (en) * 2003-10-09 2005-04-14 International Business Machines Corporation Method and apparatus for ensuring valid journaled file system metadata during a backup operation
US20050086427A1 (en) * 2003-10-20 2005-04-21 Robert Fozard Systems and methods for storage filing
JP4257783B2 (en) 2003-10-23 2009-04-22 株式会社日立製作所 Logically partitionable storage device and storage device system
US7603453B1 (en) * 2003-10-24 2009-10-13 Network Appliance, Inc. Creating links between nodes connected to a fibre channel (FC) fabric
US7366866B2 (en) * 2003-10-30 2008-04-29 Hewlett-Packard Development Company, L.P. Block size allocation in copy operations
US7383313B2 (en) * 2003-11-05 2008-06-03 Hitachi, Ltd. Apparatus and method of heartbeat mechanism using remote mirroring link for multiple storage system
US7721062B1 (en) 2003-11-10 2010-05-18 Netapp, Inc. Method for detecting leaked buffer writes across file system consistency points
US7401093B1 (en) 2003-11-10 2008-07-15 Network Appliance, Inc. System and method for managing file data during consistency points
US7783611B1 (en) 2003-11-10 2010-08-24 Netapp, Inc. System and method for managing file metadata during consistency points
JP4426261B2 (en) * 2003-11-25 2010-03-03 株式会社日立製作所 Channel adapter and disk array device
US7669032B2 (en) * 2003-11-26 2010-02-23 Symantec Operating Corporation Host-based virtualization optimizations in storage environments employing off-host storage virtualization
US20050114595A1 (en) * 2003-11-26 2005-05-26 Veritas Operating Corporation System and method for emulating operating system metadata to provide cross-platform access to storage volumes
JP2005165441A (en) * 2003-11-28 2005-06-23 Hitachi Ltd Storage controller and method for controlling storage controller
JP4156499B2 (en) * 2003-11-28 2008-09-24 株式会社日立製作所 Disk array device
US7698289B2 (en) * 2003-12-02 2010-04-13 Netapp, Inc. Storage system architecture for striping data container content across volumes of a cluster
JP4703959B2 (en) 2003-12-03 2011-06-15 株式会社日立製作所 Storage device system and replication creation method thereof
JP2005190036A (en) * 2003-12-25 2005-07-14 Hitachi Ltd Storage controller and control method for storage controller
JP4497918B2 (en) 2003-12-25 2010-07-07 株式会社日立製作所 Storage system
JP4463042B2 (en) * 2003-12-26 2010-05-12 株式会社日立製作所 Storage system having volume dynamic allocation function
US7340639B1 (en) 2004-01-08 2008-03-04 Network Appliance, Inc. System and method for proxying data access commands in a clustered storage system
US8566446B2 (en) * 2004-01-28 2013-10-22 Hewlett-Packard Development Company, L.P. Write operation control in storage networks
JP4477365B2 (en) * 2004-01-29 2010-06-09 株式会社日立製作所 Storage device having a plurality of interfaces and control method of the storage device
US20050198401A1 (en) * 2004-01-29 2005-09-08 Chron Edward G. Efficiently virtualizing multiple network attached stores
JP4227035B2 (en) * 2004-02-03 2009-02-18 株式会社日立製作所 Computer system, management device, storage device, and computer device
JP4634049B2 (en) * 2004-02-04 2011-02-16 株式会社日立製作所 Error notification control in disk array system
JP2005228170A (en) * 2004-02-16 2005-08-25 Hitachi Ltd Storage device system
US7133988B2 (en) * 2004-02-25 2006-11-07 Hitachi, Ltd. Method and apparatus for managing direct I/O to storage systems in virtualization
JP4391265B2 (en) 2004-02-26 2009-12-24 株式会社日立製作所 Storage subsystem and performance tuning method
US20050193105A1 (en) * 2004-02-27 2005-09-01 Basham Robert B. Method and system for processing network discovery data
US7949792B2 (en) * 2004-02-27 2011-05-24 Cisco Technology, Inc. Encoding a TCP offload engine within FCP
US7272654B1 (en) 2004-03-04 2007-09-18 Sandbox Networks, Inc. Virtualizing network-attached-storage (NAS) with a compact table that stores lossy hashes of file names and parent handles rather than full names
US7966293B1 (en) 2004-03-09 2011-06-21 Netapp, Inc. System and method for indexing a backup using persistent consistency point images
US8782654B2 (en) 2004-03-13 2014-07-15 Adaptive Computing Enterprises, Inc. Co-allocating a reservation spanning different compute resources types
US7577688B2 (en) * 2004-03-16 2009-08-18 Onstor, Inc. Systems and methods for transparent movement of file services in a clustered environment
JP2005267008A (en) 2004-03-17 2005-09-29 Hitachi Ltd Method and system for storage management
JP2005301590A (en) 2004-04-09 2005-10-27 Hitachi Ltd Storage system and data copying method
US8230085B2 (en) * 2004-04-12 2012-07-24 Netapp, Inc. System and method for supporting block-based protocols on a virtual storage appliance executing within a physical storage appliance
US8190741B2 (en) * 2004-04-23 2012-05-29 Neopath Networks, Inc. Customizing a namespace in a decentralized storage environment
US7930377B2 (en) * 2004-04-23 2011-04-19 Qlogic, Corporation Method and system for using boot servers in networks
US7720796B2 (en) * 2004-04-23 2010-05-18 Neopath Networks, Inc. Directory and file mirroring for migration, snapshot, and replication
US8195627B2 (en) * 2004-04-23 2012-06-05 Neopath Networks, Inc. Storage policy monitoring for a storage network
US7484058B2 (en) * 2004-04-28 2009-01-27 Emc Corporation Reactive deadlock management in storage area networks
US8996455B2 (en) * 2004-04-30 2015-03-31 Netapp, Inc. System and method for configuring a storage network utilizing a multi-protocol storage appliance
US7409511B2 (en) * 2004-04-30 2008-08-05 Network Appliance, Inc. Cloning technique for efficiently creating a copy of a volume in a storage system
US7409494B2 (en) 2004-04-30 2008-08-05 Network Appliance, Inc. Extension of write anywhere file system layout
US7430571B2 (en) 2004-04-30 2008-09-30 Network Appliance, Inc. Extension of write anywhere file layout write allocation
JP4726432B2 (en) * 2004-05-10 2011-07-20 株式会社日立製作所 Disk array device
US7231503B2 (en) * 2004-05-12 2007-06-12 Hitachi, Ltd. Reconfiguring logical settings in a storage system
US7683904B2 (en) * 2004-05-17 2010-03-23 Pixar Manual component asset change isolation methods and apparatus
US20070266388A1 (en) 2004-06-18 2007-11-15 Cluster Resources, Inc. System and method for providing advanced reservations in a compute environment
US20060023692A1 (en) * 2004-07-19 2006-02-02 Tom Fleissner Exemplary method and apparatus for creating an efficient storage area network
US8176490B1 (en) 2004-08-20 2012-05-08 Adaptive Computing Enterprises, Inc. System and method of interfacing a workload manager and scheduler with an identity manager
JP4646574B2 (en) 2004-08-30 2011-03-09 株式会社日立製作所 Data processing system
JP4826077B2 (en) * 2004-08-31 2011-11-30 株式会社日立製作所 Boot disk management method
US20060064558A1 (en) * 2004-09-20 2006-03-23 Cochran Robert A Internal mirroring operations in storage networks
US7290112B2 (en) * 2004-09-30 2007-10-30 International Business Machines Corporation System and method for virtualization of processor resources
US20060070069A1 (en) * 2004-09-30 2006-03-30 International Business Machines Corporation System and method for sharing resources between real-time and virtualizing operating systems
US8295299B2 (en) 2004-10-01 2012-10-23 Qlogic, Corporation High speed fibre channel switch element
US7676611B2 (en) 2004-10-01 2010-03-09 Qlogic, Corporation Method and system for processing out of orders frames
JP2006127028A (en) 2004-10-27 2006-05-18 Hitachi Ltd Memory system and storage controller
US7305530B2 (en) * 2004-11-02 2007-12-04 Hewlett-Packard Development Company, L.P. Copy operations in storage networks
US7472307B2 (en) * 2004-11-02 2008-12-30 Hewlett-Packard Development Company, L.P. Recovery operations in storage networks
US20060106893A1 (en) * 2004-11-02 2006-05-18 Rodger Daniels Incremental backup operations in storage networks
CA2586763C (en) 2004-11-08 2013-12-17 Cluster Resources, Inc. System and method of providing system jobs within a compute environment
US20060129987A1 (en) * 2004-12-15 2006-06-15 Patten Benhase Linda V Apparatus, system, and method for accessing management data
US7238030B2 (en) * 2004-12-20 2007-07-03 Emc Corporation Multi-function expansion slots for a storage system
JP2006178720A (en) * 2004-12-22 2006-07-06 Hitachi Ltd Storage system
US8180855B2 (en) 2005-01-27 2012-05-15 Netapp, Inc. Coordinated shared storage architecture
US8127088B2 (en) * 2005-01-27 2012-02-28 Hewlett-Packard Development Company, L.P. Intelligent cache management
US8019842B1 (en) 2005-01-27 2011-09-13 Netapp, Inc. System and method for distributing enclosure services data to coordinate shared storage
US7301718B2 (en) * 2005-01-31 2007-11-27 Hewlett-Packard Development Company, L.P. Recording errors in tape drives
US7626218B2 (en) * 2005-02-04 2009-12-01 Raytheon Company Monolithic integrated circuit having enhancement mode/depletion mode field effect transistors and RF/RF/microwave/milli-meter wave milli-meter wave field effect transistors
JP2006227856A (en) * 2005-02-17 2006-08-31 Hitachi Ltd Access controller and interface mounted on the same
US8863143B2 (en) 2006-03-16 2014-10-14 Adaptive Computing Enterprises, Inc. System and method for managing a hybrid compute environment
EP2348409B1 (en) * 2005-03-16 2017-10-04 III Holdings 12, LLC Automatic workload transfer to an on-demand center
US7757056B1 (en) 2005-03-16 2010-07-13 Netapp, Inc. System and method for efficiently calculating storage required to split a clone volume
US9231886B2 (en) 2005-03-16 2016-01-05 Adaptive Computing Enterprises, Inc. Simple integration of an on-demand compute environment
US20060230243A1 (en) * 2005-04-06 2006-10-12 Robert Cochran Cascaded snapshots
EP3203374B1 (en) 2005-04-07 2021-11-24 III Holdings 12, LLC On-demand access to compute resources
US7343468B2 (en) * 2005-04-14 2008-03-11 International Business Machines Corporation Method and apparatus for storage provisioning automation in a data center
US7743210B1 (en) 2005-04-29 2010-06-22 Netapp, Inc. System and method for implementing atomic cross-stripe write operations in a striped volume set
US7698334B2 (en) * 2005-04-29 2010-04-13 Netapp, Inc. System and method for multi-tiered meta-data caching and distribution in a clustered computer environment
US7904649B2 (en) * 2005-04-29 2011-03-08 Netapp, Inc. System and method for restriping data across a plurality of volumes
US7698501B1 (en) 2005-04-29 2010-04-13 Netapp, Inc. System and method for utilizing sparse data containers in a striped volume set
US8073899B2 (en) * 2005-04-29 2011-12-06 Netapp, Inc. System and method for proxying data access commands in a storage system cluster
US7962689B1 (en) 2005-04-29 2011-06-14 Netapp, Inc. System and method for performing transactional processing in a striped volume set
US20060271579A1 (en) * 2005-05-10 2006-11-30 Arun Batish Storage usage analysis
US20060265358A1 (en) * 2005-05-17 2006-11-23 Junichi Hara Method and apparatus for providing information to search engines
US7984258B2 (en) * 2005-06-03 2011-07-19 Seagate Technology Llc Distributed storage system with global sparing
US7644228B2 (en) 2005-06-03 2010-01-05 Seagate Technology Llc Distributed storage system with global replication
WO2007002855A2 (en) * 2005-06-29 2007-01-04 Neopath Networks, Inc. Parallel filesystem traversal for transparent mirroring of directories and files
US7779218B2 (en) * 2005-07-22 2010-08-17 Hewlett-Packard Development Company, L.P. Data synchronization management
US7653682B2 (en) * 2005-07-22 2010-01-26 Netapp, Inc. Client failure fencing mechanism for fencing network file system data in a host-cluster environment
US7206156B2 (en) * 2005-07-27 2007-04-17 Hewlett-Packard Development Company, L.P. Tape drive error management
US7617541B2 (en) * 2005-09-09 2009-11-10 Netapp, Inc. Method and/or system to authorize access to stored data
US20070064477A1 (en) * 2005-09-20 2007-03-22 Battelle Memorial Institute System for remote data sharing
JP2007087039A (en) * 2005-09-21 2007-04-05 Hitachi Ltd Disk array system and control method
US8131689B2 (en) * 2005-09-30 2012-03-06 Panagiotis Tsirigotis Accumulating access frequency and file attributes for supporting policy based storage management
US7325078B2 (en) * 2005-10-06 2008-01-29 Hewlett-Packard Development Company, L.P. Secure data scrubbing
US7721053B2 (en) * 2005-10-24 2010-05-18 Hewlett-Packard Development Company, L.P. Intelligent logical unit provisioning
EP1949214B1 (en) 2005-10-28 2012-12-19 Network Appliance, Inc. System and method for optimizing multi-pathing support in a distributed storage system environment
US8122070B1 (en) * 2005-12-29 2012-02-21 United States Automobile Association (USAA) Document management system user interfaces
JP2007219657A (en) * 2006-02-14 2007-08-30 Hitachi Ltd Storage system and its recovery method
US7548560B1 (en) 2006-02-27 2009-06-16 Qlogic, Corporation Method and system for checking frame-length in fibre channel frames
US7590660B1 (en) 2006-03-21 2009-09-15 Network Appliance, Inc. Method and system for efficient database cloning
US7496551B1 (en) * 2006-03-28 2009-02-24 Emc Corporation Methods and apparatus associated with advisory generation
US7921185B2 (en) * 2006-03-29 2011-04-05 Dell Products L.P. System and method for managing switch and information handling system SAS protocol communication
US7467268B2 (en) 2006-04-14 2008-12-16 Hewlett-Packard Development Company, L.P. Concurrent data restore and background copy operations in storage networks
US7577865B2 (en) * 2006-04-14 2009-08-18 Dell Products L.P. System and method for failure recovery in a shared storage system
US20070299952A1 (en) * 2006-06-23 2007-12-27 Brian Gerard Goodman External network management interface proxy addressing of data storage drives
US7428614B2 (en) * 2006-07-27 2008-09-23 Hitachi, Ltd. Management system for a virtualized storage environment
JP5073259B2 (en) 2006-09-28 2012-11-14 株式会社日立製作所 Virtualization system and area allocation control method
US7925758B1 (en) 2006-11-09 2011-04-12 Symantec Operating Corporation Fibre accelerated pipe data transport
US8301673B2 (en) * 2006-12-29 2012-10-30 Netapp, Inc. System and method for performing distributed consistency verification of a clustered file system
US8489811B1 (en) 2006-12-29 2013-07-16 Netapp, Inc. System and method for addressing data containers using data set identifiers
AU2008205007A1 (en) * 2007-01-05 2008-07-17 Sanpulse Technologies, Inc. Storage optimization method
US7934027B2 (en) * 2007-01-19 2011-04-26 Hewlett-Packard Development Company, L.P. Critical resource management
US20080177907A1 (en) * 2007-01-23 2008-07-24 Paul Boerger Method and system of a peripheral port of a server system
US8868495B2 (en) * 2007-02-21 2014-10-21 Netapp, Inc. System and method for indexing user data on storage systems
US8312046B1 (en) 2007-02-28 2012-11-13 Netapp, Inc. System and method for enabling a data container to appear in a plurality of locations in a super-namespace
US7861031B2 (en) * 2007-03-01 2010-12-28 Hewlett-Packard Development Company, L.P. Access control management
US8024514B2 (en) * 2007-03-01 2011-09-20 Hewlett-Packard Development Company, L.P. Access control management
US8219821B2 (en) 2007-03-27 2012-07-10 Netapp, Inc. System and method for signature based data container recognition
US7694079B2 (en) 2007-04-04 2010-04-06 Hewlett-Packard Development Company, L.P. Tagged sequential read operations
US8607046B1 (en) 2007-04-23 2013-12-10 Netapp, Inc. System and method for signing a message to provide one-time approval to a plurality of parties
US20080270480A1 (en) * 2007-04-26 2008-10-30 Hanes David H Method and system of deleting files from a remote server
US7827350B1 (en) 2007-04-27 2010-11-02 Netapp, Inc. Method and system for promoting a snapshot in a distributed file system
US7882304B2 (en) * 2007-04-27 2011-02-01 Netapp, Inc. System and method for efficient updates of sequential block storage
US8219749B2 (en) * 2007-04-27 2012-07-10 Netapp, Inc. System and method for efficient updates of sequential block storage
US8005993B2 (en) * 2007-04-30 2011-08-23 Hewlett-Packard Development Company, L.P. System and method of a storage expansion unit for a network attached storage device
US7797489B1 (en) 2007-06-01 2010-09-14 Netapp, Inc. System and method for providing space availability notification in a distributed striped volume set
CA2689479A1 (en) * 2007-06-04 2008-12-11 Bce Inc. Methods and systems for validating online transactions using location information
JP4475598B2 (en) * 2007-06-26 2010-06-09 株式会社日立製作所 Storage system and storage system control method
US7689587B1 (en) * 2007-06-28 2010-03-30 Emc Corporation Autorep process to create repository according to seed data and at least one new schema
US9037750B2 (en) * 2007-07-10 2015-05-19 Qualcomm Incorporated Methods and apparatus for data exchange in peer to peer communications
US7730218B2 (en) * 2007-07-31 2010-06-01 Hewlett-Packard Development Company, L.P. Method and system for configuration and management of client access to network-attached-storage
US20090049459A1 (en) * 2007-08-14 2009-02-19 Microsoft Corporation Dynamically converting symbolic links
US8041773B2 (en) 2007-09-24 2011-10-18 The Research Foundation Of State University Of New York Automatic clustering for self-organizing grids
US8601131B1 (en) * 2007-09-28 2013-12-03 Emc Corporation Active element manager
US8972547B2 (en) * 2007-10-18 2015-03-03 International Business Machines Corporation Method and apparatus for dynamically configuring virtual internet protocol addresses
US7996636B1 (en) 2007-11-06 2011-08-09 Netapp, Inc. Uniquely identifying block context signatures in a storage volume hierarchy
DE102007057248A1 (en) * 2007-11-16 2009-05-20 T-Mobile International Ag Connection layer for databases
JP2009129165A (en) * 2007-11-22 2009-06-11 Toshiba Corp Image processing apparatus and method thereof
US9519540B2 (en) 2007-12-06 2016-12-13 Sandisk Technologies Llc Apparatus, system, and method for destaging cached data
US7836226B2 (en) 2007-12-06 2010-11-16 Fusion-Io, Inc. Apparatus, system, and method for coordinating storage requests in a multi-processor/multi-thread environment
JP2009157471A (en) * 2007-12-25 2009-07-16 Hitachi Ltd File sharing system and method of setting file sharing system
US8341251B2 (en) * 2008-01-03 2012-12-25 International Business Machines Corporation Enabling storage area network component migration
US7996607B1 (en) 2008-01-28 2011-08-09 Netapp, Inc. Distributing lookup operations in a striped storage system
US20090193346A1 (en) * 2008-01-30 2009-07-30 International Business Machines Corporation Apparatus and method to improve a graphical user interface
US20090210461A1 (en) * 2008-02-14 2009-08-20 Mcchord Austin Network Attached Storage System and Method
US8239486B2 (en) * 2008-03-19 2012-08-07 Oracle International Corporation Direct network file system
KR101266381B1 (en) * 2008-03-24 2013-05-22 삼성전자주식회사 Image forming system and method for managing of the same
US8103628B2 (en) * 2008-04-09 2012-01-24 Harmonic Inc. Directed placement of data in a redundant data storage system
US8725986B1 (en) 2008-04-18 2014-05-13 Netapp, Inc. System and method for volume block number to disk block number mapping
WO2010014851A2 (en) * 2008-07-30 2010-02-04 Diomede Corporation Systems and methods for power aware data storage
US8624898B1 (en) 2009-03-09 2014-01-07 Pixar Typed dependency graphs
US8117388B2 (en) * 2009-04-30 2012-02-14 Netapp, Inc. Data distribution through capacity leveling in a striped file system
US9003411B2 (en) * 2009-05-13 2015-04-07 Verizon Patent And Licensing Inc. Automated provisioning and configuration of virtual and physical servers
US11720290B2 (en) 2009-10-30 2023-08-08 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US10877695B2 (en) 2009-10-30 2020-12-29 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US9015333B2 (en) 2009-12-18 2015-04-21 Cisco Technology, Inc. Apparatus and methods for handling network file operations over a fibre channel network
US8711864B1 (en) 2010-03-30 2014-04-29 Chengdu Huawei Symantec Technologies Co., Ltd. System and method for supporting fibre channel over ethernet communication
US9442671B1 (en) * 2010-12-23 2016-09-13 Emc Corporation Distributed consumer cloud storage system
WO2012116369A2 (en) 2011-02-25 2012-08-30 Fusion-Io, Inc. Apparatus, system, and method for managing contents of a cache
US8301812B1 (en) * 2011-03-24 2012-10-30 Emc Corporation Techniques for performing host path detection verification
US9251086B2 (en) 2012-01-24 2016-02-02 SanDisk Technologies, Inc. Apparatus, system, and method for managing a cache
US9298715B2 (en) 2012-03-07 2016-03-29 Commvault Systems, Inc. Data storage system utilizing proxy device for storage operations
US9471578B2 (en) 2012-03-07 2016-10-18 Commvault Systems, Inc. Data storage system utilizing proxy device for storage operations
US9122383B2 (en) * 2012-04-16 2015-09-01 Hewlett-Packard Development Company, L.P. Object visualization
US9342537B2 (en) 2012-04-23 2016-05-17 Commvault Systems, Inc. Integrated snapshot interface for a data storage system
FR2991074B1 (en) * 2012-05-25 2014-06-06 Bull Sas METHOD, DEVICE AND COMPUTER PROGRAM FOR DYNAMICALLY CONTROLLING MEMORY ACCESS DISTANCES IN A NUMA-TYPE SYSTEM
US10237127B1 (en) * 2012-09-28 2019-03-19 EMC IP Holding Company LLC Unified initialization utility
US8959388B1 (en) * 2012-12-28 2015-02-17 Emc Corporation Managing TLU recovery using pre-allocated LUN slices
US9886346B2 (en) 2013-01-11 2018-02-06 Commvault Systems, Inc. Single snapshot for multiple agents
US20140280347A1 (en) * 2013-03-14 2014-09-18 Konica Minolta Laboratory U.S.A., Inc. Managing Digital Files with Shared Locks
US10152530B1 (en) 2013-07-24 2018-12-11 Symantec Corporation Determining a recommended control point for a file system
JP6241178B2 (en) * 2013-09-27 2017-12-06 富士通株式会社 Storage control device, storage control method, and storage control program
US9277002B2 (en) 2014-01-09 2016-03-01 International Business Machines Corporation Physical resource management
US9632874B2 (en) 2014-01-24 2017-04-25 Commvault Systems, Inc. Database application backup in single snapshot for multiple applications
US9495251B2 (en) 2014-01-24 2016-11-15 Commvault Systems, Inc. Snapshot readiness checking and reporting
US9639426B2 (en) 2014-01-24 2017-05-02 Commvault Systems, Inc. Single snapshot for multiple applications
US9753812B2 (en) 2014-01-24 2017-09-05 Commvault Systems, Inc. Generating mapping information for single snapshot for multiple applications
US20150213010A1 (en) * 2014-01-30 2015-07-30 Sage Microelectronics Corp. Storage system with distributed data searching
US9774672B2 (en) 2014-09-03 2017-09-26 Commvault Systems, Inc. Consolidated processing of storage-array commands by a snapshot-control media agent
US10042716B2 (en) 2014-09-03 2018-08-07 Commvault Systems, Inc. Consolidated processing of storage-array commands using a forwarder media agent in conjunction with a snapshot-control media agent
US9648105B2 (en) * 2014-11-14 2017-05-09 Commvault Systems, Inc. Unified snapshot storage management, using an enhanced storage manager and enhanced media agents
US9448731B2 (en) 2014-11-14 2016-09-20 Commvault Systems, Inc. Unified snapshot storage management
US10503753B2 (en) 2016-03-10 2019-12-10 Commvault Systems, Inc. Snapshot replication operations based on incremental block change tracking
US10769081B2 (en) * 2016-12-30 2020-09-08 Intel Corporation Computer program product, system, and method to allow a host and a storage device to communicate between different fabrics
US10732885B2 (en) 2018-02-14 2020-08-04 Commvault Systems, Inc. Block-level live browsing and private writable snapshots using an ISCSI server
US11334441B2 (en) * 2019-05-31 2022-05-17 Dell Products L.P. Distribution of snaps for load balancing data node clusters
CN111427721B (en) * 2020-03-05 2023-04-28 杭州宏杉科技股份有限公司 Abnormality recovery method and device
US11947431B1 (en) * 2022-12-07 2024-04-02 Dell Products, L.P. Replication data facility failure detection and failover automation

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999034297A1 (en) * 1997-12-31 1999-07-08 Crossroads Systems, Inc. Storage router and method for providing virtual local storage

Family Cites Families (177)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4404647A (en) 1978-03-16 1983-09-13 International Business Machines Corp. Dynamic array error recovery
GB2023314B (en) 1978-06-15 1982-10-06 Ibm Digital data processing systems
EP0128945B1 (en) 1982-12-09 1991-01-30 Sequoia Systems, Inc. Memory backup system
US4611298A (en) 1983-06-03 1986-09-09 Harding And Harris Behavioral Research, Inc. Information storage and retrieval system and method
JPS60142418A (en) 1983-12-28 1985-07-27 Hitachi Ltd Input/output error recovery system
BR8503913A (en) 1984-08-18 1986-05-27 Fujitsu Ltd ERROR RECOVERY SYSTEM AND PROCESS IN A CHANNEL DATA PROCESSOR HAVING A CONTROL MEMORY DEVICE AND ERROR RECOVERY PROCESS IN A CHANNEL TYPE DATA PROCESSOR
JP2900359B2 (en) 1986-10-30 1999-06-02 株式会社日立製作所 Multiprocessor system
US4942579A (en) 1987-06-02 1990-07-17 Cab-Tek, Inc. High-speed, high-capacity, fault-tolerant error-correcting storage system
US5257367A (en) 1987-06-02 1993-10-26 Cab-Tek, Inc. Data storage system with asynchronous host operating system communication link
US5051887A (en) 1987-08-25 1991-09-24 International Business Machines Corporation Maintaining duplex-paired storage devices during gap processing using of a dual copy function
US5129088A (en) 1987-11-30 1992-07-07 International Business Machines Corporation Data processing method to create virtual disks from non-contiguous groups of logically contiguous addressable blocks of direct access storage device
US5136523A (en) 1988-06-30 1992-08-04 Digital Equipment Corporation System for automatically and transparently mapping rules and objects from a stable storage database management system within a forward chaining or backward chaining inference cycle
US5175849A (en) 1988-07-28 1992-12-29 Amdahl Corporation Capturing data of a database system
US5345587A (en) 1988-09-14 1994-09-06 Digital Equipment Corporation Extensible entity management system including a dispatching kernel and modules which independently interpret and execute commands
US4996687A (en) 1988-10-11 1991-02-26 Honeywell Inc. Fault recovery mechanism, transparent to digital system function
US5167011A (en) 1989-02-15 1992-11-24 W. H. Morris Method for coodinating information storage and retrieval
US5842224A (en) 1989-06-16 1998-11-24 Fenner; Peter R. Method and apparatus for source filtering data packets between networks of differing media
CA2017458C (en) 1989-07-24 2000-10-10 Jonathan R. Engdahl Intelligent network interface circuit
US5212789A (en) 1989-10-12 1993-05-18 Bell Communications Research, Inc. Method and apparatus for updating application databases used in a distributed transaction processing environment
US5276867A (en) 1989-12-19 1994-01-04 Epoch Systems, Inc. Digital data storage system with improved data migration
US5185884A (en) 1990-01-24 1993-02-09 International Business Machines Corporation Computer controlled optimized pairing of disk units
US5138710A (en) 1990-04-25 1992-08-11 Unisys Corporation Apparatus and method for providing recoverability in mass storage data base systems without audit trail mechanisms
EP0455922B1 (en) 1990-05-11 1996-09-11 International Business Machines Corporation Method and apparatus for deriving mirrored unit state when re-initializing a system
JPH0444673A (en) 1990-06-11 1992-02-14 Toshiba Corp Defective information recording system for disk device
US5155845A (en) 1990-06-15 1992-10-13 Storage Technology Corporation Data storage system for providing redundant copies of data on different disk drives
EP0737921B1 (en) 1990-09-17 2000-06-28 Cabletron Systems, Inc. System and method for modelling a computer network
US5390313A (en) 1990-09-24 1995-02-14 Emc Corporation Data storage system with data mirroring and reduced access time data retrieval
US5157663A (en) 1990-09-24 1992-10-20 Novell, Inc. Fault tolerant computer system
US5544347A (en) 1990-09-24 1996-08-06 Emc Corporation Data storage system controlled remote data mirroring with respectively maintained data indices
US5206939A (en) 1990-09-24 1993-04-27 Emc Corporation System and method for disk mapping and data retrieval
US5561815A (en) 1990-10-02 1996-10-01 Hitachi, Ltd. System and method for control of coexisting code and image data in memory
US5212784A (en) 1990-10-22 1993-05-18 Delphi Data, A Division Of Sparks Industries, Inc. Automated concurrent data backup system
US5528759A (en) 1990-10-31 1996-06-18 International Business Machines Corporation Method and apparatus for correlating network management report messages
AU8683991A (en) 1990-11-09 1992-05-14 Array Technology Corporation Logical partitioning of a redundant array storage system
JP2603757B2 (en) 1990-11-30 1997-04-23 富士通株式会社 Method of controlling array disk device
US5235601A (en) 1990-12-21 1993-08-10 Array Technology Corporation On-line restoration of redundancy information in a redundant array system
US5274799A (en) 1991-01-04 1993-12-28 Array Technology Corporation Storage device array architecture with copyback cache
US5317731A (en) 1991-02-25 1994-05-31 International Business Machines Corporation Intelligent page store for concurrent and consistent access to a database by a transaction processor and a query processor
US5367682A (en) 1991-04-29 1994-11-22 Steven Chang Data processing virus protection circuitry including a permanent memory for storing a redundant partition table
US5321813A (en) 1991-05-01 1994-06-14 Teradata Corporation Reconfigurable, fault tolerant, multistage interconnect network and protocol
US5278838A (en) 1991-06-18 1994-01-11 Ibm Corp. Recovery from errors in a redundant array of disk drives
US5559958A (en) 1991-06-24 1996-09-24 Compaq Computer Corporation Graphical user interface for computer management system and an associated management information base
US5347653A (en) 1991-06-28 1994-09-13 Digital Equipment Corporation System for reconstructing prior versions of indexes using records indicating changes between successive versions of the indexes
US5325505A (en) 1991-09-04 1994-06-28 Storage Technology Corporation Intelligent storage manager for data storage apparatus having simulation capability
US5481701A (en) 1991-09-13 1996-01-02 Salient Software, Inc. Method and apparatus for performing direct read of compressed data file
JP2793399B2 (en) 1991-12-09 1998-09-03 日本電気株式会社 Buffer device
JP3160106B2 (en) 1991-12-23 2001-04-23 ヒュンダイ エレクトロニクス アメリカ How to sort disk arrays
US5745789A (en) 1992-01-23 1998-04-28 Hitachi, Ltd. Disc system for holding data in a form of a plurality of data blocks dispersed in a plurality of disc units connected by a common data bus
JPH05224822A (en) 1992-02-12 1993-09-03 Hitachi Ltd Collective storage device
WO1993018456A1 (en) 1992-03-13 1993-09-16 Emc Corporation Multiple controller sharing in a redundant storage array
US5263154A (en) 1992-04-20 1993-11-16 International Business Machines Corporation Method and system for incremental time zero backup copying of data
US5987627A (en) 1992-05-13 1999-11-16 Rawlings, Iii; Joseph H. Methods and apparatus for high-speed mass storage access in a computer system
IL105638A0 (en) 1992-05-13 1993-09-22 Southwest Bell Tech Resources Storage controlling system and method for transferring information
US5596736A (en) 1992-07-22 1997-01-21 Fujitsu Limited Data transfers to a backing store of a dynamically mapped data storage system in which data has nonsequential logical addresses
US5404361A (en) 1992-07-27 1995-04-04 Storage Technology Corporation Method and apparatus for ensuring data integrity in a dynamically mapped data storage subsystem
US5375232A (en) 1992-09-23 1994-12-20 International Business Machines Corporation Method and system for asynchronous pre-staging of backup copies in a data processing storage subsystem
US5497483A (en) 1992-09-23 1996-03-05 International Business Machines Corporation Method and system for track transfer control during concurrent copy operations in a data processing storage subsystem
US5553235A (en) 1992-10-23 1996-09-03 International Business Machines Corporation System and method for maintaining performance data in a data processing system
US5495601A (en) 1992-12-11 1996-02-27 International Business Machines Corporation Method to off-load host-based DBMS predicate evaluation to a disk controller
JP3422370B2 (en) 1992-12-14 2003-06-30 Hitachi, Ltd. Disk cache controller
GB2273584B (en) 1992-12-16 1997-04-16 Quantel Ltd A data storage apparatus
US5771367A (en) 1992-12-17 1998-06-23 International Business Machines Corporation Storage controller and method for improved failure recovery using cross-coupled cache memories and nonvolatile stores
US5689678A (en) 1993-03-11 1997-11-18 Emc Corporation Distributed storage array system having a plurality of modular control units
US5715393A (en) 1993-08-16 1998-02-03 Motorola, Inc. Method for remote system process monitoring
US5432922A (en) 1993-08-23 1995-07-11 International Business Machines Corporation Digital storage system and method having alternating deferred updating of mirrored storage disks
US5619694A (en) 1993-08-26 1997-04-08 Nec Corporation Case database storage/retrieval system
JP3078972B2 (en) 1993-11-05 2000-08-21 Fujitsu Ltd. Disk array device
EP0728333A1 (en) 1993-11-09 1996-08-28 Arcada Software Data backup and restore system for a computer network
US5911150A (en) 1994-01-25 1999-06-08 Data General Corporation Data storage tape back-up for data processing systems using a single driver interface unit
US5583994A (en) 1994-02-07 1996-12-10 Regents Of The University Of California System for efficient delivery of multimedia information using hierarchical network of servers selectively caching program for a selected time period
US5566316A (en) 1994-02-10 1996-10-15 Storage Technology Corporation Method and apparatus for hierarchical management of data storage elements in an array storage device
JPH07271513A (en) 1994-03-29 1995-10-20 Fujitsu Ltd Disk control method and device therefor
JP3745398B2 (en) 1994-06-17 2006-02-15 Fujitsu Ltd. File disk block control method
US5504882A (en) 1994-06-20 1996-04-02 International Business Machines Corporation Fault tolerant data storage subsystem employing hierarchically arranged controllers
JP3564732B2 (en) 1994-06-30 2004-09-15 Sony Corporation Disk control method and apparatus
US5435004A (en) 1994-07-21 1995-07-18 International Business Machines Corporation Computerized system and method for data backup
DE69519205T2 (en) 1994-07-22 2001-05-23 Koninkl Kpn Nv Method for creating connections in a communication network
WO1996005595A1 (en) 1994-08-08 1996-02-22 Sony Corporation Data recording method, data recording apparatus, disc recording medium, computer system and data copying preventing method
US5537533A (en) 1994-08-11 1996-07-16 Miralink Corporation System and method for remote mirroring of digital data from a primary network server to a remote network server
US5625818A (en) 1994-09-30 1997-04-29 Apple Computer, Inc. System for managing local database updates published to different online information services in different formats from a central platform
US5835953A (en) 1994-10-13 1998-11-10 Vinca Corporation Backup system that takes a snapshot of the locations in a mass storage device that has been identified for updating prior to updating
US5671439A (en) 1995-01-10 1997-09-23 Micron Electronics, Inc. Multi-drive virtual mass storage device and method of operating same
US5682478A (en) 1995-01-19 1997-10-28 Microsoft Corporation Method and apparatus for supporting multiple, simultaneous services over multiple, simultaneous connections between a client and network server
GB9501378D0 (en) 1995-01-24 1995-03-15 Ibm A system and method for establishing a communication channel over a heterogeneous network between a source node and a destination node
US5850522A (en) 1995-02-03 1998-12-15 Dex Information Systems, Inc. System for physical storage architecture providing simultaneous access to common file by storing update data in update partitions and merging desired updates into common partition
US5680580A (en) 1995-02-28 1997-10-21 International Business Machines Corporation Remote copy system for setting request interconnect bit in each adapter within storage controller and initiating request connect frame in response to the setting bit
US5594732A (en) 1995-03-03 1997-01-14 Intecom, Incorporated Bridging and signalling subsystems and methods for private and hybrid communications systems including multimedia systems
US5692155A (en) 1995-04-19 1997-11-25 International Business Machines Corporation Method and apparatus for suspending multiple duplex pairs during back up processing to insure storage devices remain synchronized in a sequence consistent order
US6006017A (en) 1995-05-02 1999-12-21 Motorola Inc. System for determining the frequency of repetitions of polling active stations relative to the polling of inactive stations
JP3283724B2 (en) 1995-05-10 2002-05-20 Mitsubishi Electric Corporation Mirror disk control method and mirror disk device
US5659787A (en) 1995-05-26 1997-08-19 Sensormatic Electronics Corporation Data communication network with highly efficient polling procedure
US5710918A (en) 1995-06-07 1998-01-20 International Business Machines Corporation Method for distributed task fulfillment of web browser requests
US6003030A (en) 1995-06-07 1999-12-14 Intervu, Inc. System and method for optimized storage and retrieval of data on a distributed computer network
US5765200A (en) 1995-06-07 1998-06-09 International Business Machines Corporation Logical positioning within a storage device by a storage controller
US5761410A (en) 1995-06-28 1998-06-02 International Business Machines Corporation Storage management mechanism that detects write failures that occur on sector boundaries
GB2302966A (en) 1995-06-30 1997-02-05 Ibm Transaction processing with a reduced-kernel operating system
US5875456A (en) 1995-08-17 1999-02-23 Nstor Corporation Storage device array and methods for striping and unstriping data and for adding and removing disks online to/from a raid storage array
US5832492A (en) 1995-09-05 1998-11-03 Compaq Computer Corporation Method of scheduling interrupts to the linked lists of transfer descriptors scheduled at intervals on a serial bus
WO1997011426A1 (en) 1995-09-18 1997-03-27 Cyberstorage Systems, Inc. Universal storage management system
US5768623A (en) 1995-09-19 1998-06-16 International Business Machines Corporation System and method for sharing multiple storage arrays by dedicating adapters as primary controller and secondary controller for arrays reside in different host computers
US5805919A (en) 1995-10-05 1998-09-08 Micropolis Corporation Method and system for interleaving the distribution of data segments from different logical volumes on a single physical drive
US5740397A (en) 1995-10-11 1998-04-14 Arco Computer Products, Inc. IDE disk drive adapter for computer backup and fault tolerance
US5819020A (en) 1995-10-16 1998-10-06 Network Specialists, Inc. Real time backup system
US5828475A (en) 1995-10-25 1998-10-27 Mcdata Corporation Bypass switching and messaging mechanism for providing intermix data transfer for a fiber optic switch using a bypass bus and buffer
US6047350A (en) 1995-11-20 2000-04-04 Advanced Micro Devices, Inc. Computer system which performs intelligent byte slicing on a multi-byte wide bus
US5710885A (en) 1995-11-28 1998-01-20 Ncr Corporation Network management system with improved node discovery and monitoring
US5774680A (en) 1995-12-11 1998-06-30 Compaq Computer Corporation Interfacing direct memory access devices to a non-ISA bus
US5809328A (en) 1995-12-21 1998-09-15 Unisys Corp. Apparatus for fibre channel transmission having interface logic, buffer memory, multiplexor/control device, fibre channel controller, gigabit link module, microprocessor, and bus control device
JP3287203B2 (en) 1996-01-10 2002-06-04 Hitachi, Ltd. External storage controller and data transfer method between external storage controllers
DE19603474C2 (en) 1996-01-31 1999-05-27 Siemens Ag Method for converting messages in different formats in communication systems
US5787304A (en) 1996-02-05 1998-07-28 International Business Machines Corporation Multipath I/O storage systems with multipath I/O request mechanisms
US5761507A (en) 1996-03-05 1998-06-02 International Business Machines Corporation Client/server architecture supporting concurrent servers within a server with a transaction manager providing server/connection decoupling
US6063128A (en) 1996-03-06 2000-05-16 Bentley Systems, Incorporated Object-oriented computerized modeling system
US5852715A (en) 1996-03-19 1998-12-22 Emc Corporation System for currently updating database by one host and reading the database by different host for the purpose of implementing decision support functions
US5673322A (en) 1996-03-22 1997-09-30 Bell Communications Research, Inc. System and method for providing protocol translation and filtering to access the world wide web from wireless or low-bandwidth networks
US5764913A (en) 1996-04-05 1998-06-09 Microsoft Corporation Computer network status monitoring system
JP3641872B2 (en) 1996-04-08 2005-04-27 Hitachi, Ltd. Storage system
US5835718A (en) 1996-04-10 1998-11-10 At&T Corp URL rewriting pseudo proxy server
US5894554A (en) 1996-04-23 1999-04-13 Infospinner, Inc. System for managing dynamic web page generation requests by intercepting request at web server and routing to page server thereby releasing web server to process other requests
US5790774A (en) 1996-05-21 1998-08-04 Storage Computer Corporation Data storage system with dedicated allocation of parity storage and parity reads and writes only on operations requiring parity information
US5720027A (en) 1996-05-21 1998-02-17 Storage Computer Corporation Redundant disc computer having targeted data broadcast
KR970076238A (en) 1996-05-23 Forman Jeffrey L Servers, methods and program products thereof for creating and managing multiple copies of client data files
US5901327A (en) 1996-05-28 1999-05-04 Emc Corporation Bundling of write data from channel commands in a command chain for transmission over a data link between data storage systems for remote data mirroring
US6052797A (en) 1996-05-28 2000-04-18 Emc Corporation Remotely mirrored data storage system with a count indicative of data consistency
US5673382A (en) 1996-05-30 1997-09-30 International Business Machines Corporation Automated management of off-site storage volumes for disaster recovery
US6101497A (en) 1996-05-31 2000-08-08 Emc Corporation Method and apparatus for independent and simultaneous access to a common data set
US5933653A (en) 1996-05-31 1999-08-03 Emc Corporation Method and apparatus for mirroring data in a remote data storage system
US5857208A (en) 1996-05-31 1999-01-05 Emc Corporation Method and apparatus for performing point in time backup operation in a computer system
US5809332A (en) 1996-06-03 1998-09-15 Emc Corporation Supplemental communication between host processor and mass storage controller using modified diagnostic commands
US5765204A (en) 1996-06-05 1998-06-09 International Business Machines Corporation Method and apparatus for adaptive localization of frequently accessed, randomly addressed data
US5732238A (en) 1996-06-12 1998-03-24 Storage Computer Corporation Non-volatile cache for providing data integrity in operation with a volatile demand paging cache in a data storage system
US5748897A (en) 1996-07-02 1998-05-05 Sun Microsystems, Inc. Apparatus and method for operating an aggregation of server computers using a dual-role proxy server computer
US5848251A (en) 1996-08-06 1998-12-08 Compaq Computer Corporation Secondary channel for command information for fibre channel system interface bus
US5959994A (en) 1996-08-19 1999-09-28 Ncr Corporation ATM/SONET network enhanced as a universal computer system interconnect
JP2868080B2 (en) 1996-09-12 1999-03-10 Mitsubishi Electric Corporation Communication monitoring control device and communication monitoring control method
US5844554A (en) * 1996-09-17 1998-12-01 Bt Squared Technologies, Inc. Methods and systems for user interfaces and constraint handling configurations software
US5787485A (en) 1996-09-17 1998-07-28 Marathon Technologies Corporation Producing a mirrored copy using reference labels
US5812754A (en) 1996-09-18 1998-09-22 Silicon Graphics, Inc. Raid system with fibre channel arbitrated loop
US5893919A (en) 1996-09-27 1999-04-13 Storage Computer Corporation Apparatus and method for storing data with selectable data protection using mirroring and selectable parity inhibition
US5940865A (en) 1996-10-14 1999-08-17 Fujitsu Limited Apparatus and method for accessing plural storage devices in predetermined order by slot allocation
US5787470A (en) 1996-10-18 1998-07-28 At&T Corp Inter-cache protocol for improved WEB performance
US5953538A (en) 1996-11-12 1999-09-14 Digital Equipment Corporation Method and apparatus providing DMA transfers between devices coupled to different host bus bridges
US6065100A (en) 1996-11-12 2000-05-16 Micro-Design International Caching apparatus and method for enhancing retrieval of data from an optical storage device
US5909540A (en) * 1996-11-22 1999-06-01 Mangosoft Corporation System and method for providing highly available data storage using globally addressable memory
US6061356A (en) 1996-11-25 2000-05-09 Alcatel Internetworking, Inc. Method and apparatus for switching routable frames between disparate media
US5794254A (en) 1996-12-03 1998-08-11 Fairbanks Systems Group Incremental computer file backup using a two-step comparison of first two characters in the block and a signature with pre-stored character and signature sets
US5857213A (en) 1996-12-06 1999-01-05 International Business Machines Corporation Method for extraction of a variable length record from fixed length sectors on a disk drive and for reblocking remaining records in a disk track
US5933614A (en) 1996-12-31 1999-08-03 Compaq Computer Corporation Isolation of PCI and EISA masters by masking control and interrupt lines
US5961593A (en) 1997-01-22 1999-10-05 Lucent Technologies, Inc. System and method for providing anonymous personalized browsing by a proxy system in a network
US5897661A (en) 1997-02-25 1999-04-27 International Business Machines Corporation Logical volume manager and method having enhanced update capability with dynamic allocation of storage and minimal storage of metadata information
US5969841A (en) 1997-03-20 1999-10-19 Methode Electronics, Inc. Gigabaud link module with received power detect signal
US5926833A (en) 1997-03-24 1999-07-20 International Business Machines Corporation Method and system allowing direct data access to a shared data storage subsystem by heterogeneous computing systems
US6073209A (en) 1997-03-31 2000-06-06 Ark Research Corporation Data storage controller providing multiple hosts with access to multiple storage subsystems
US6052736A (en) 1997-03-31 2000-04-18 International Business Machines Corp. Adaptive message routing in a multiple network environment with a master router
US6098146A (en) 1997-04-11 2000-08-01 Dell Usa, L. P. Intelligent backplane for collecting and reporting information in an SSA system
US6003065A (en) 1997-04-24 1999-12-14 Sun Microsystems, Inc. Method and system for distributed processing of applications on host and peripheral devices
US6021436A (en) 1997-05-09 2000-02-01 Emc Corporation Automatic method for polling a plurality of heterogeneous computer systems
US5983316A (en) 1997-05-29 1999-11-09 Hewlett-Packard Company Computing system having a system node that utilizes both a logical volume manager and a resource monitor for managing a storage pool
US5948108A (en) 1997-06-12 1999-09-07 Tandem Computers, Incorporated Method and system for providing fault tolerant access between clients and a server
US6085193A (en) 1997-09-29 2000-07-04 International Business Machines Corporation Method and system for dynamically prefetching information via a server hierarchy
US6065096A (en) 1997-09-30 2000-05-16 Lsi Logic Corporation Integrated single chip dual mode raid controller
US5974566A (en) 1997-10-07 1999-10-26 International Business Machines Corporation Method and apparatus for providing persistent fault-tolerant proxy login to a web-based distributed file service
US6052727A (en) 1997-10-27 2000-04-18 Dell U.S.A., L.P. Method of discovering client systems on a local area network
US6418478B1 (en) * 1997-10-30 2002-07-09 Commvault Systems, Inc. Pipelined high speed data transfer mechanism
US6057863A (en) 1997-10-31 2000-05-02 Compaq Computer Corporation Dual purpose apparatus, method and system for accelerated graphics port and fibre channel arbitrated loop interfaces
US6009478A (en) 1997-11-04 1999-12-28 Adaptec, Inc. File array communications interface for communicating between a host computer and an adapter
US6066181A (en) 1997-12-08 2000-05-23 Analysis & Technology, Inc. Java native interface code generator
GB2332288A (en) 1997-12-10 1999-06-16 Northern Telecom Ltd agent enabling technology
US6167358A (en) * 1997-12-19 2000-12-26 Nowonder, Inc. System and method for remotely monitoring a plurality of computer-based systems
US6029000A (en) 1997-12-22 2000-02-22 Texas Instruments Incorporated Mobile communication system with cross compiler and cross linker
US6065085A (en) 1998-01-27 2000-05-16 Lsi Logic Corporation Bus bridge architecture for a data processing system capable of sharing processing load among a plurality of devices
US6041381A (en) 1998-02-05 2000-03-21 Crossroads Systems, Inc. Fibre channel to SCSI addressing method and system
US6038630A (en) 1998-03-24 2000-03-14 International Business Machines Corporation Shared access control device for integrated system with multiple functional units accessing external structures over multiple data buses
US6081834A (en) 1998-04-15 2000-06-27 Unisys Corporation Network data path interface method and system for enhanced data transmission
US6065087A (en) 1998-05-21 2000-05-16 Hewlett-Packard Company Architecture for a high-performance network/bus multiplexer interconnecting a network and a bus that transport data using multiple protocols
US6446141B1 (en) * 1999-03-25 2002-09-03 Dell Products, L.P. Storage server system including ranking of data source
US6487638B2 (en) * 2001-01-26 2002-11-26 Dell Products, L.P. System and method for time weighted access frequency based caching for memory controllers

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999034297A1 (en) * 1997-12-31 1999-07-08 Crossroads Systems, Inc. Storage router and method for providing virtual local storage

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SCHULZ GREG: "SAN and NAS; Complementary Technologies - SAN and NAS provide Storage and Data Sharing" MTI WHITEPAPER, [Online] 1 May 2000 (2000-05-01), pages 1-11, XP002201566 Retrieved from the Internet: <URL:http://www.mti.com/white_papers/WP20002.PDF> [retrieved on 2002-06-07] *
See also references of EP1382176A2 *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1543424B1 (en) * 2002-08-09 2012-12-05 Network Appliance, Inc. Storage virtualization by layering virtual disk objects on a file system
EP1543424A2 (en) * 2002-08-09 2005-06-22 Network Appliance, Inc. Storage virtualization by layering virtual disk objects on a file system
EP1761853A2 (en) * 2004-06-28 2007-03-14 Emc Corporation Low cost flexible network accessed storage architecture
EP1761853A4 (en) * 2004-06-28 2008-06-11 Emc Corp Low cost flexible network accessed storage architecture
WO2007103533A1 (en) * 2006-03-08 2007-09-13 Omneon Video Networks Gateway server
US9824027B2 (en) 2006-12-06 2017-11-21 Sandisk Technologies Llc Apparatus, system, and method for a storage area network
US9734086B2 (en) 2006-12-06 2017-08-15 Sandisk Technologies Llc Apparatus, system, and method for a device shared between multiple independent hosts
US8495292B2 (en) 2006-12-06 2013-07-23 Fusion-Io, Inc. Apparatus, system, and method for an in-server storage area network
US11847066B2 (en) 2006-12-06 2023-12-19 Unification Technologies Llc Apparatus, system, and method for managing commands of solid-state storage using bank interleave
US11640359B2 (en) 2006-12-06 2023-05-02 Unification Technologies Llc Systems and methods for identifying storage resources that are not in use
US11573909B2 (en) 2006-12-06 2023-02-07 Unification Technologies Llc Apparatus, system, and method for managing commands of solid-state storage using bank interleave
WO2008070802A3 (en) * 2006-12-06 2008-10-09 David Flynn Apparatus, system, and method for an in-server storage area network
US9407516B2 (en) 2011-01-10 2016-08-02 Storone Ltd. Large scale storage system
US9729666B2 (en) 2011-01-10 2017-08-08 Storone Ltd. Large scale storage system and method of operating thereof
US9426204B2 (en) * 2011-04-22 2016-08-23 Korea Aerospace Research Institute Network connection system for sharing data among independent networks
US20130173732A1 (en) * 2011-04-22 2013-07-04 Kwang Sik Seo Network connection system for sharing data among independent networks
US9697091B2 (en) 2012-06-25 2017-07-04 Storone Ltd. System and method for datacenters disaster recovery
US9448900B2 (en) 2012-06-25 2016-09-20 Storone Ltd. System and method for datacenters disaster recovery
US10169021B2 (en) 2013-03-21 2019-01-01 Storone Ltd. System and method for deploying a data-path-related plug-in for a logical storage entity of a storage system
US9612851B2 (en) 2013-03-21 2017-04-04 Storone Ltd. Deploying data-path-related plug-ins
US11960412B2 (en) 2022-10-19 2024-04-16 Unification Technologies Llc Systems and methods for identifying storage resources that are not in use

Also Published As

Publication number Publication date
WO2002067529A3 (en) 2003-10-16
US6606690B2 (en) 2003-08-12
AU2001241588A1 (en) 2002-09-04
US20020156984A1 (en) 2002-10-24
EP1382176A2 (en) 2004-01-21

Similar Documents

Publication Publication Date Title
US6606690B2 (en) System and method for accessing a storage area network as network attached storage
EP1908261B1 (en) Client failure fencing mechanism for fencing network file system data in a host-cluster environment
US10326846B1 (en) Method and apparatus for web based storage on-demand
US20070022314A1 (en) Architecture and method for configuring a simplified cluster over a network with fencing and quorum
US6977927B1 (en) Method and system of allocating storage resources in a storage area network
US8205043B2 (en) Single nodename cluster system for fibre channel
EP1747657B1 (en) System and method for configuring a storage network utilizing a multi-protocol storage appliance
CN100544342C (en) Storage system
US7353353B2 (en) File security management
US7971089B2 (en) Switching connection of a boot disk to a substitute server and moving the failed server to a server domain pool
JP4758424B2 (en) System and method capable of utilizing a block-based protocol in a virtual storage appliance running within a physical storage appliance
US7120654B2 (en) System and method for network-free file replication in a storage area network
JP2005502096A (en) File switch and exchange file system
US9602600B1 (en) Method and apparatus for web based storage on-demand
US6804819B1 (en) Method, system, and computer program product for a data propagation platform and applications of same
US7523201B2 (en) System and method for optimized lun masking
US7707263B1 (en) System and method for associating a network address with a storage device
US7539711B1 (en) Streaming video data with fast-forward and no-fast-forward portions
WO2019209625A1 (en) Methods for managing group objects with different service level objectives for an application and devices thereof
US7533175B1 (en) Network address resolution and forwarding TCP/IP packets over a fibre channel network
Implementing the IBM TotalStorage NAS 300G
Chaturvedi SAN-The Network for Storage
Buchanan et al. Networking Operating Systems

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 2001912847

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWP Wipo information: published in national office

Ref document number: 2001912847

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: JP