Appropriate Scenarios for Using FRS

FRS works well in the following scenarios.

When the data is read-only or changes infrequently

Because changes occur infrequently, the data is usually consistent. In addition, FRS has less data to replicate, so network bandwidth is not heavily affected.

When the sites are geographically dispersed and consistency is not an issue

Geographically dispersed sites might have lower-bandwidth connections, but if your organization does not require the data in those sites to always be consistent with each other, you can configure replication in those sites on a schedule that makes sense for your organization. For example, if your organization has sites in Los Angeles and Zimbabwe, you can place one or more replicas of the data on servers in those sites and schedule replication to occur at night or during periods of low bandwidth use. Because replication in this scenario could take hours or days to update every member, the delay must be acceptable to your organization.
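FRS replication schedules are normally configured with the administrative tools rather than scripted directly; the short Python sketch below only illustrates the kind of off-peak window check that a wrapper script might apply before starting any bandwidth-heavy work. The window boundaries and the function name are illustrative assumptions, not part of FRS.

```python
from datetime import datetime, time

# Hypothetical off-peak window (22:00 to 05:00 local time); FRS does not use
# this code, it only illustrates the scheduling idea described above.
WINDOW_START = time(22, 0)
WINDOW_END = time(5, 0)

def in_replication_window(now: datetime) -> bool:
    """Return True if 'now' falls inside the off-peak window.

    The window wraps past midnight, so it is checked as two ranges.
    """
    t = now.time()
    return t >= WINDOW_START or t < WINDOW_END

if __name__ == "__main__":
    now = datetime.now()
    if in_replication_window(now):
        print(f"{now:%H:%M} is inside the off-peak window; replication may run")
    else:
        print(f"{now:%H:%M} is outside the window; defer replication")
```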

When each file is changed by only one person from one location

Replication conflicts rarely occur if only a single user changes a given file from a single location. Some common scenarios for single authorship are redirected My Documents folders and other home directories. However, if users roam between sites, replication latency could cause the file to be temporarily inconsistent between sites.

When replication takes place among a small number of servers in the same site

Replication latency is reduced by frequently replicating data using high-speed connections. As a result, data tends to be more consistent.

As a file server failover configuration when some data inconsistency between servers can be tolerated

It is possible to use DFS and FRS to replicate read-write user data so that if one file server fails, another can take its place. However, before deploying such a configuration, take the following factors into account to determine whether DFS and FRS are appropriate for the planned scenario.

The issues to consider are:

The first issue is write conflicts. If the same file is changed on two servers before the changes have replicated, FRS resolves the conflict by keeping the most recent version, and the other changes are lost. In some scenarios, this can still be acceptable. The key question is how likely it is that conflicting edits will be made by two different client computers to the same file before the data has had time to replicate.
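A back-of-envelope estimate can help answer that question. The Python sketch below models two sites editing a shared replica set and approximates how often both sites change the same file within one replication-latency window. All of the numbers (edit rates, file counts, latency) and the simple independence model are illustrative assumptions, not measurements of FRS behavior.

```python
# Back-of-envelope conflict estimate (illustrative assumptions only).
edits_per_user_per_day = 10   # assumed saves per user per working day
users_per_site = 25           # assumed active users at each of two sites
shared_files = 20_000         # assumed files in the replica set
latency_hours = 4             # assumed time for a change to reach all members

windows_per_day = 24 / latency_hours
# Expected edits per file per latency window, per site, assuming edits are
# spread uniformly and independently across the shared files.
edits_per_file_per_window = (edits_per_user_per_day * users_per_site
                             / shared_files / windows_per_day)

# Rough chance that both sites touch the same file in the same window.
p_conflict = edits_per_file_per_window ** 2
expected_conflicts_per_day = p_conflict * shared_files * windows_per_day

print(f"Edits per file per window (per site): {edits_per_file_per_window:.4f}")
print(f"Expected conflicting files per day:   {expected_conflicts_per_day:.2f}")
```

With these example numbers the replica set would see roughly one conflicting file every two days; raising the latency or the edit rate increases that figure quickly, which is the trade-off the key question above is probing.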

Another approach is to use a mechanism (such as scripts) that ensures only one of the link targets has its shared folder online at a time. In this case, DFS can only ever route a client computer to one file server, so such write conflicts cannot occur. Failover is provided by choosing another member of the replica set to bring its file share online, while the failed member is disconnected and its file share is removed.
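One way to implement that idea on Windows is a small wrapper around the built-in net share command, which creates or deletes a shared folder. The Python sketch below is a minimal illustration only: the share name and path are hypothetical, and a production script would also need failure detection and coordination logic so that exactly one replica member exposes the share at any time.

```python
import subprocess

# Hypothetical share name and path used only for this sketch.
SHARE_NAME = "UserData"
SHARE_PATH = r"D:\ReplicaRoot\UserData"

def bring_share_online() -> None:
    """Create the shared folder on this member (Windows 'net share')."""
    subprocess.run(["net", "share", f"{SHARE_NAME}={SHARE_PATH}"], check=True)

def take_share_offline() -> None:
    """Remove the shared folder so DFS can no longer route clients here."""
    subprocess.run(["net", "share", SHARE_NAME, "/delete"], check=True)

if __name__ == "__main__":
    # On the surviving member, after a failure of the other member is detected:
    bring_share_online()
    # On the failed member, once it is reachable again:
    # take_share_offline()
```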

The final issue to consider in this scenario is bandwidth usage. Because users are updating files, there is no clear bound on how much replication traffic they may generate, and this should be considered carefully in replica sets that are intended to span a wide area network (WAN).
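Because FRS replicates the entire file whenever any part of it changes, a rough traffic estimate can be made from how often users save files and how large those files are. The figures and the simple arithmetic in the sketch below are illustrative assumptions rather than measured FRS numbers, but they show how quickly read-write replication can add up on a WAN link.

```python
# Rough WAN replication-traffic estimate (illustrative assumptions only).
users = 100                   # assumed users whose saves must cross the WAN
saves_per_user_per_day = 15   # assumed file saves per user per working day
avg_file_size_mb = 2          # assumed average size of a saved file
remote_members = 2            # assumed replica members on the far side of the WAN

# FRS sends the whole file for each change, once per downstream member.
daily_traffic_gb = (users * saves_per_user_per_day
                    * avg_file_size_mb * remote_members) / 1024

working_hours = 8
avg_rate_mbps = daily_traffic_gb * 1024 * 8 / (working_hours * 3600)

print(f"Replication traffic per day: {daily_traffic_gb:.1f} GB")
print(f"Average rate over {working_hours} working hours: {avg_rate_mbps:.2f} Mbit/s")
```

With these example numbers the replica set generates close to 6 GB of replication traffic a day, averaging under 2 Mbit/s during working hours, but a handful of users saving large files can change that picture dramatically.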