Using Gateway for NFS in a server cluster

A server cluster is a group of independent computer systems, known as nodes, running Windows 2000 Advanced Server and working together as a single system to ensure that mission-critical applications and resources remain available to clients. Every node is attached to one or more cluster storage devices. Server clusters enable users and administrators to access and manage the nodes as a single system rather than as separate computers.

Gateway for NFS can put the capabilities of a server cluster to good use. By installing and properly configuring Gateway for NFS on the nodes of a server cluster, you can achieve both static load balancing and high availability of Gateway for NFS shares.

To use Gateway for NFS in a server cluster, you must install Gateway for NFS on all nodes in the cluster and then configure identical network file system (NFS) shares on each node. This achieves static load balancing, because users can connect to any one of several identical shares to reach a particular NFS exported directory. It also allows the remaining nodes in the cluster to take over the client connections of a node that fails until that node is restored, greatly increasing the availability of the Gateway for NFS installation.

As mentioned earlier, the shares on the nodes must be identical. That is, Gateway for NFS on each node must be configured to share the same NFS directories using the same share names, drive letters, and other properties. Because the shares are identical, when one node in the cluster fails, all the Gateway for NFS shares on that node can automatically be taken over by another node in the cluster.

Example

For the purposes of this example, there are two UNIX servers named unix1 and unix2. The computer named unix1 has exported a directory named research, and the computer named unix2 has exported a directory named operations. Gateway for NFS is installed on two computers, Node1 and Node2, which are nodes in a cluster. Node1 is configured with a virtual server named Server1, and Node2 is configured with a virtual server named Server2. Gateway for NFS on each of the two nodes is configured to share the research directory exported by unix1 as NFSresearch and to share the operations directory exported by unix2 as NFSoperations.

Load balancing is achieved by directing half the clients to connect to the shares on Server1 and the other half to connect to the shares on Server2. That is, half the clients should connect to the NFSresearch share on \\Server1 (net use * \\Server1\NFSresearch), while the other half should connect to the corresponding share on \\Server2 (net use * \\Server2\NFSresearch).

If Node1 fails, its virtual server, Server1, is transferred to Node2. Because the directories shared by Node1 (through Server1) are shared using identical share names and drive letters on Node2 (through Server2), Node2 automatically takes over the shares from Node1 until Node1 is restored. In the case of the NFSresearch share, for example, clients connected to \\Server1\NFSresearch continue to access their files without interruption when Node1 fails, because Node2 serves the share in its place. The failover is completely transparent to users, who do not need to reconnect to the share.
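
Concretely, the logon script for the first half of the clients could map the share from Server1, and the script for the second half could map it from Server2. These are the same net use commands shown above; the asterisk simply tells net use to assign the next available drive letter:

    rem First half of the clients connect through Server1:
    net use * \\Server1\NFSresearch

    rem Second half of the clients connect through Server2:
    net use * \\Server2\NFSresearch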

To ensure that the Gateway for NFS shares on all nodes of a cluster are set up with identical properties, you can configure the shares with the gwshare utility in a script file that you run on each node.
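
A sketch of such a script, using the share names and exports from the example above, is shown below. The drive letters (R: and S:) and the exact gwshare argument syntax are assumptions for illustration only; run gwshare /? on your system to confirm the actual syntax before adapting this script.

    rem makeshares.cmd -- creates identical Gateway for NFS shares on this node.
    rem Run the same script, unmodified, on every node in the cluster so that
    rem share names, drive letters, and exported directories match exactly.
    rem NOTE: the gwshare arguments below are illustrative; confirm the real
    rem syntax with gwshare /? before use.

    rem Share the research directory exported by unix1 as NFSresearch.
    gwshare NFSresearch R: unix1:/research

    rem Share the operations directory exported by unix2 as NFSoperations.
    gwshare NFSoperations S: unix2:/operations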