Windows Server 2012 R2 Failover Clustering

Providing high availability for applications and services is one of the most critical responsibilities that IT administrators have in today’s data centers. Planned or unplanned downtime may cause businesses to lose money, customers, and reputation.

Highly available systems demand the implementation of fault-tolerant processes and operations that minimize interruptions by eliminating single points of failure and detecting failures as they happen. This is what failover clustering is all about. Our first article dedicated to Windows Server 2012 R2 failover clustering describes the main components of a failover cluster implementation, the quorum configuration options, and the shared storage preparation.

 

Main Components of a Failover Cluster

When configuring a Windows Server 2012 R2 failover cluster, it is essential to carefully consider the main components that make up the cluster configuration. Let’s review the most important ones:

  • Nodes. These are the member servers of a failover cluster. These servers communicate with each other and run the cluster services, resources, and applications associated with the cluster.
  • Networks. Refers to the networks that cluster nodes use to communicate with one another, the clients, and the storage. Three different networks can be configured to provide enhanced functionality to the cluster:
  • Private network: Dedicated to internal cluster communication. It is used by the nodes to exchange heartbeats and interact with other nodes in the cluster. The failover cluster authenticates all internal communication.
  • Public network: This network allows network clients access to cluster applications and services. It is possible to have a mixed public and private network, although it is not recommended as bottleneck and contention issues may strain the network connections.
  • Storage network: These are dedicated channels to shared storage. iSCSI storage requires special attention because it uses the same IP protocol and Ethernet devices available to the other networks. However, the storage network should be completely isolated from any other network in the cluster. Configuring redundant connections on all these networks increases cluster resilience.
  • Storage. This is the cluster storage system that is typically shared between cluster nodes. The failover cluster storage options on Windows Server 2012 R2 are:
  • iSCSI: The iSCSI protocol encapsulates SCSI commands into data packets that are transmitted using Ethernet and IP protocols. Packets are sent over the network using a point-to-point connection. Windows Server 2012 supports implementing iSCSI target software as a feature. Once the iSCSI target is configured, the cluster nodes can connect to the shared storage using the iSCSI initiator software that is also part of the Windows Server 2012 operating system. Keep in mind that, in most production networks with high loads, system administrators will opt for hardware iSCSI host bus adapters (HBAs) over software iSCSI.
  • Fibre Channel: Fibre Channel SANs typically have better performance than iSCSI SANs, but are much more expensive. Specialized hardware and cabling are needed, with options for point-to-point, switched, and arbitrated loop connections.
  • Shared serial attached SCSI: Implementing shared serial attached SCSI requires that two cluster nodes be physically close to each other. You may be limited by the number of connections for cluster nodes on the shared storage devices.
  • Shared .vhdx: Used with virtual machine guest clustering. A shared virtual hard disk should be located on a Cluster Shared Volume (CSV) or on a Scale-Out File Server cluster. From there, it can be added to the virtual machines participating in a guest cluster by connecting it to the SCSI interface. Older .vhd drives are not supported.
  • Services and applications. These represent the components that the failover cluster protects by providing high availability. Clients access services and applications and expect them to be available when needed. When a node fails, failover moves services and applications to another node to ensure that those clustered services and applications continue to be available to network clients.

Windows Server 2012 R2 Failover Clustering Quorum

Quorum defines the minimum number of nodes that must participate concurrently in the cluster to provide failover protection. Each node casts a vote, and if there are enough votes, the cluster can start or continue running. For example, a five-node cluster needs at least three votes to keep running. When there is an even number of nodes, the cluster can be configured to allow an additional witness vote from a disk or a file share. Each node contains an updated copy of the cluster configuration, which includes the number of votes that are required for the cluster to function properly.

There are four quorum modes in Windows Server 2012:

Node majority

Each node that is online and connected to the network represents a vote. The cluster operates only with a majority, or more than half of the votes. Node majority is recommended for clusters with an odd number of servers.

Node and disk majority

Each node that is online and connected to the network represents a vote, but there is also a disk witness that is allowed to vote. The cluster runs successfully only with a majority, that is, when it has more than half of the votes. This configuration relies on the nodes being able to communicate with one another in the cluster and with the disk witness. It is recommended for clusters with an even number of nodes.

Node and file share majority

Each node that is online and connected to the network represents a vote, but there is also a file share that is allowed to vote. As in previous modes, the cluster operates only with a majority of the votes. This mode works in a similar way to node and disk majority but, instead of a disk witness, the cluster uses a file share witness.

No majority: disk only

The cluster has quorum if one node is available and in communication with a specific disk in the cluster storage. Only the nodes that are also in communication with that disk can join the cluster. This represents a single point of failure and it is the least desirable option.

On Windows Server 2012, the installation wizard automatically selects the quorum mode during the installation process. Once the failover cluster installation completes, you will have one of these two modes:

  • Node majority: if there is an odd number of nodes in the cluster.
  • Node and disk majority: if there is an even number of nodes in the cluster.

At any time you can switch to a different mode to accommodate changes in your network and cluster arrangement.
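
Once a cluster exists, the current quorum configuration can also be reviewed and changed from Windows PowerShell. The following is only a sketch; the disk resource name used with Set-ClusterQuorum is an example and will differ in your environment:

# Display the current quorum mode and witness resource
Get-ClusterQuorum

# Switch to node and disk majority using an existing clustered disk (example name)
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 2"

# Switch back to node majority
Set-ClusterQuorum -NodeMajority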

Windows Server 2012 R2 Dynamic Quorum

Windows Server 2012 R2 introduces significant changes to the way cluster quorum functions. When you install a Windows Server 2012 R2 failover cluster, dynamic quorum is selected by default. This process defines the quorum majority based on the number of nodes in the cluster and configures the disk witness vote dynamically as nodes are added to or removed from the cluster. If a cluster has an odd number of votes, the disk witness does not have a vote in the cluster; with an even number, the disk witness does have a vote. In other words, the cluster automatically decides whether to use the witness vote based on the number of voting nodes that are available in the cluster. Dynamic quorum allows a cluster to recalculate quorum when a node fails in order to keep the cluster running successfully, even when the number of nodes remaining in the cluster drops below 50 percent of the initial configuration. Another benefit of dynamic quorum is that, when you add or evict nodes from the cluster, there is no need to change the quorum settings manually. The previous quorum modes that require manual configuration are still available, in case you feel some nostalgia for the old methodology.

Windows Server 2012 R2 also allows you to start cluster nodes that do not have a majority by using the force quorum resiliency feature. This can be used when a cluster breaks into subsets of cluster nodes that are not aware of each other, a situation also known as a split-brain cluster scenario.
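
For reference, forcing quorum is exposed through the failover clustering cmdlets. The sketch below is intended only for disaster-recovery situations where a surviving partition must be brought online without a majority:

# Force the cluster service to start on the local node even though a majority is not available
Start-ClusterNode -FixQuorum

# When the nodes from the other partition return, start them without letting them form their own quorum
Start-ClusterNode -PreventQuorum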

Using Windows Server 2012 R2 iSCSI Target

For shared storage, our demonstration lab uses the iSCSI Target feature on Windows Server 2012 R2. To verify the status of the iSCSI Target feature, run the following command from Windows PowerShell:

  • Get-WindowsFeature FS-iSCSITarget-Server

The figure above shows that the iSCSI Target feature has not been installed on the server yet. To install it, run the following Windows PowerShell command:

  • Install-WindowsFeature FS-iSCSITarget-Server

Configuring the iSCSI Targets

After the iSCSI Target feature has been installed, you can go to Server Manager to complete the configuration. Here are the steps:

  1. In the Server Manager, in the navigation pane, click File and Storage Services.

  2. In the File and Storage Services pane, click iSCSI.

  3. In the iSCSI VIRTUAL DISKS pane, click TASKS, and then in the TASKS drop-down list box, click New iSCSI Virtual Disk.

  4. In the New iSCSI Virtual Disk Wizard, on the Select iSCSI virtual disk location page, under Storage location, click drive E, and then click Next.

  5. On the Specify iSCSI virtual disk name page, in the Name text box, type iLUN0, and then click Next.

  6. On the Specify iSCSI virtual disk size page, in the Size text box, type 500; in the drop-down list box, if necessary switch to GB, select Dynamically expanding and then click Next.

  7. On the Assign iSCSI target page, click New iSCSI target, and then click Next.

  8. On the Specify target name page, in the Name box, type iSAN, and then click Next.

  9. On the Specify access servers page, click Add.

  10. In the Select a method to identify the initiator dialog box, click Enter a value for the selected type, in the Type drop-down list box, click IP Address, in the Value text box, type 192.168.1.200, and then click OK.

  11. On the Specify access servers page, click Add.

  12. In the Select a method to identify the initiator dialog box, click Enter a value for the selected type; in the Type drop-down list box, click IP Address; in the Value text box, type 192.168.1.201, and then click OK.

  13. On the Specify access servers page, confirm that you have two IP addresses. These correspond to the two cluster nodes that will be using their iSCSI initiators to connect to the shared storage. Click Next.

  14. On the Enable Authentication page, click Next.

  15. On the Confirm selections page, click Create.

  16. On the View results page, wait until creation completes, and then click Close.

  17. In the iSCSI VIRTUAL DISKS pane, click TASKS, and then in the TASKS drop-down list box, click New iSCSI Virtual Disk.

  18. In the New iSCSI Virtual Disk Wizard, on the Select iSCSI virtual disk location page; under Storage location, click drive E, and then click Next.

  19. On the Specify iSCSI virtual disk name page, in the Name box, type iLUN1, and then click Next.

  20. On the Specify iSCSI virtual disk size page, in the Size box, type 300; in the drop-down list box, if necessary, switch to GB, select Dynamically expanding, and then click Next.

  21. On the Assign iSCSI target page, click iSAN, and then click Next.

  22. On the Confirm selection page, click Create.

  23. On the View results page, wait until the new iSCSI virtual disk is created, and then click Close.

By repeating steps 17 through 23, another 1 GB iSCSI virtual disk was created to be used as the disk witness in the failover cluster. The three drives are shown in the figure below.
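
The same shared storage can also be provisioned from Windows PowerShell with the iSCSI Target cmdlets. The sketch below assumes the names, sizes, and initiator IP addresses used in the wizard; the .vhdx paths are examples and will vary with your storage location:

# Create the virtual disks that will back the iSCSI LUNs
New-IscsiVirtualDisk -Path E:\iSCSIVirtualDisks\iLUN0.vhdx -SizeBytes 500GB
New-IscsiVirtualDisk -Path E:\iSCSIVirtualDisks\iLUN1.vhdx -SizeBytes 300GB
New-IscsiVirtualDisk -Path E:\iSCSIVirtualDisks\Witness.vhdx -SizeBytes 1GB

# Create the target and restrict access to the two cluster nodes by IP address
New-IscsiServerTarget -TargetName iSAN -InitiatorIds "IPAddress:192.168.1.200","IPAddress:192.168.1.201"

# Map the virtual disks to the target
Add-IscsiVirtualDiskTargetMapping -TargetName iSAN -Path E:\iSCSIVirtualDisks\iLUN0.vhdx
Add-IscsiVirtualDiskTargetMapping -TargetName iSAN -Path E:\iSCSIVirtualDisks\iLUN1.vhdx
Add-IscsiVirtualDiskTargetMapping -TargetName iSAN -Path E:\iSCSIVirtualDisks\Witness.vhdx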

Closing Remarks

Failover clustering is a critical technology to provide high availability of services and applications. This article introduced the Windows Server 2012 R2 failover clustering components and the quorum configuration modes. It also illustrated the implementation of the iSCSI Target feature to provide the shared storage for a failover cluster. Our next article will demonstrate step by step how to connect the servers to the shared storage and how to install and configure Windows Server 2012 R2 failover clustering.

 

This second article on Windows Server 2012 R2 failover clustering describes the step-by-step process to connect the servers to shared storage and the installation of a Windows Server 2012 R2 failover cluster. After the cluster is created, Windows PowerShell is used to demonstrate a generic application role configuration.

 

Requirements and Recommendations for a Successful Failover Cluster Implementation

A Windows Server 2012 R2 failover cluster can have from two to 64 servers, also known as nodes. Once configured, these computers work together to increase the availability of applications and services. However, the requirements for a failover cluster configuration are more stringent than those of any other Windows Server network service that you may manage.

Let’s review some of the most important limitations:

  • It is recommended to install similar hardware on each node.
  • All nodes must run the same edition of Windows Server 2012 or Windows Server 2012 R2. The edition can be Standard or Datacenter, but editions cannot be mixed in the same cluster.
  • Equally important, all nodes must use the same installation option, either Server Core or full GUI installation, but not a mix of both.
  • Every node in the cluster should also have similar software updates and service packs.
  • Each cluster node must use a matching processor architecture. This means that you cannot mix Intel and AMD processor families in the same cluster.
  • When using serial attached SCSI or Fibre Channel storage, the controllers or host bus adapters (HBA) should be identical in all nodes. The controllers should also run the same firmware version.
  • If Internet SCSI (iSCSI) is used for storage, each node should have at least one network adapter or host bus adapter committed exclusively to the cluster storage. The network dedicated to iSCSI storage connections should not carry any other network communication traffic. It is recommended to use a minimum of two network adapters per node. Gigabit Ethernet (GigE) or higher is strongly suggested for better performance.
  • Each node should have identical network adapters installed, supporting the same IP protocol version, speed, duplex, and flow control options.
  • The network adapters in each node must obtain their IP addresses using a consistent method: either all are configured with static IP addresses or all obtain dynamic IP addresses from a DHCP server.
  • Each server in the cluster must be a member of the same Active Directory domain and use the same DNS server for name resolution.
  • The networks and hardware equipment used to connect the servers in the cluster should be redundant, so that the nodes maintain communication with one another after a single link fails, a node crashes, or a network device malfunctions.
  • In order to access Microsoft support, all the hardware components in your cluster should bear the "Certified for Windows Server 2012" logo, and they must pass the Validate a Configuration Wizard tests. More on this later in the article.

Connecting the Servers to Shared Storage

Our lab for this demonstration uses two physical Windows Server 2012 R2 nodes named ServerA1 and ServerA2. Before installing the failover clustering feature, let’s connect the servers to the iSCSI target that contains the shared storage created in the first article of this series. Starting with ServerA1, here are the steps:

  1. In the Server Manager, click Tools, and then click the iSCSI Initiator. If prompted, click Yes in the Microsoft iSCSI dialog box.

  2. In the iSCSI Initiator Properties, click the Discovery tab and then click Discover Portal.

  3. In the Discover Target Portal dialog box, in the IP address or DNS name box, type 192.168.1.100, and then click OK. This is the IP address of the iSCSI Target server.

  4. Click the Targets tab, click Refresh, select iqn.1991-05.com.microsoft:dc1-isan-target, and then click Connect.

  5. In the Connect to Target box, make sure that Add this connection to the list of Favorite Targets is selected, and then click OK.

  6. In the iSCSI Initiator Properties, verify that the Status is Connected and click OK.

Steps 1 through 6 must also be executed on ServerA2 so that both servers can have access to the shared storage available from the iSCSI Target Server.
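
If you prefer scripting, the same connection can be made on each node with the iSCSI initiator cmdlets. A minimal sketch, assuming the target portal address and IQN shown above:

# Make sure the iSCSI initiator service is running and starts automatically
Start-Service MSiSCSI
Set-Service MSiSCSI -StartupType Automatic

# Register the iSCSI Target server as a target portal and discover its targets
New-IscsiTargetPortal -TargetPortalAddress 192.168.1.100
Get-IscsiTarget

# Connect to the target and keep the connection persistent across reboots
Connect-IscsiTarget -NodeAddress iqn.1991-05.com.microsoft:dc1-isan-target -IsPersistent $true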

Next, let’s configure the volumes using Disk Management on ServerA1.

  1. In the Server Manager, click Tools, and then click Computer Management.

  2. Expand Storage, then click Disk Management and verify that you have three new disks that need to be configured. These are the iSCSI Target disks.

  3. Right-click Disk 9, and then click Online.

  4. Right-click Disk 9, and then click Initialize disk. In the Initialize Disk dialog box, click OK.

  5. Right-click the unallocated space next to Disk 9, and then click New Simple Volume.

  6. On the Welcome page, click Next.

  7. On the Specify Volume Size page, click Next.

  8. On the Assign Drive Letter or Path page, click Next.

  9. On the Format Partition page, in the Volume Label box, type CSV. Select the Perform a quick format check box, and then click Next.

  10. Click Finish.

Repeat steps 1 through 10 for Disks 10 and 11. For Disk 10, change the label to Data, and for Disk 11, change the label to Witness. If you run your own lab, the disk numbers are likely to be different, but the steps are identical. Once all the steps are completed on ServerA1, go to ServerA2 and, from Disk Management, right-click each disk and bring it online.

Both servers should show the disks configured as the figure below.
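
The disk preparation can also be scripted on ServerA1 with the Storage module cmdlets. A minimal sketch, assuming disk number 9 and the CSV label used above; disk numbers will differ in other labs:

# Bring the disk online, initialize it, and create and format a single NTFS volume
Set-Disk -Number 9 -IsOffline $false
Initialize-Disk -Number 9 -PartitionStyle GPT
New-Partition -DiskNumber 9 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "CSV" -Confirm:$false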

Installing the Windows Server 2012 R2 Failover Clustering Feature

Now that both servers are connected to the shared storage, the next phase is to install the failover clustering feature on ServerA1 and ServerA2 using either Windows PowerShell or Server Manager.

The process is exactly the same on both servers, so let’s demonstrate it on ServerA1.

  1. Using Windows PowerShell, verify that the failover clustering feature is not yet installed on the server by running the following command:
  • Get-WindowsFeature Failover-Clustering | FT -AutoSize

  2. To install the failover clustering feature, run this command from PowerShell:
  • Install-WindowsFeature Failover-Clustering -IncludeManagementTools
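
Because the feature must be present on both nodes, it can also be installed on ServerA1 and ServerA2 in one step using PowerShell remoting. This is just a sketch and assumes remoting is enabled on both servers:

Invoke-Command -ComputerName ServerA1, ServerA2 -ScriptBlock {
    Install-WindowsFeature Failover-Clustering -IncludeManagementTools
}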

Validating the Servers for Failover Clustering

Once the failover clustering feature is installed on both servers, running the wizard to validate the servers for failover clustering allows you to generate a detailed report indicating possible areas that may need to be fixed before creating the cluster. Let’s run the Validate a Configuration Wizard from ServerA1.

  1. In the Server Manager, click Tools, and then click Failover Cluster Manager.

  2. In the Actions pane of the Failover Cluster Manager, click Validate Configuration.

  3. In the Validate a Configuration Wizard, click Next.

  4. On the Select Servers or a Cluster page, in the Enter name box, type ServerA1, and then click Add.

  5. In the Enter name box, type ServerA2, and then click Add.

  6. Verify that ServerA1 and ServerA2 are shown in the Selected servers box, and click Next.

  7. Verify that Run all tests (recommended) is selected, and then click Next.

  8. On the Confirmation page, click Next.

  9. Wait for the validation tests to finish. This may take several minutes. On the Summary page, click View Report. It is recommended that you keep this report for future reference.

  10. Verify that all tests are completed without errors. You can click on areas of the report to find out more details on the configurations that show warnings.

  11. On the Summary page, click to remove the checkmark next to Create the cluster now using the validated nodes, and click Finish.
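
The same validation tests can also be launched from Windows PowerShell, which writes the full validation report to a file. A quick sketch:

# Run the full set of cluster validation tests against both nodes
Test-Cluster -Node ServerA1, ServerA2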

Creating the Failover Cluster

Even though there were some warnings, the servers did pass the validation tests, so we can now proceed to create our cluster. The following steps will be executed using Failover Cluster Manager on ServerA1, but either node would be fine to complete this process.

  1. In the Failover Cluster Manager, in the center pane, under Management, click Create Cluster.

  2. On the Before You Begin page of the Create Cluster Wizard, read the information and click Next.

  3. In the Enter server name box, type ServerA1, ServerA2 and then click Add.

  4. Verify the entries, and then click Next.

  5. In Access Point for Administering the Cluster, in the Cluster Name box, type ClusterA. Under Address, type 192.168.1.210, and then click Next.

  6. In the Confirmation dialog box, verify the information, and then click Next.

  7. On the Summary page, confirm that the cluster was successfully created, and click Finish to return to the Failover Cluster Manager.

After the Create Cluster Wizard is done, you can verify that a computer object with the cluster’s name has been created in Active Directory. See figure below.

Also, a host name is automatically registered in DNS for the new cluster. See figure below.
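
For comparison, the entire cluster creation can be performed with a single Windows PowerShell command, using the same node names, cluster name, and IP address as the wizard:

New-Cluster -Name ClusterA -Node ServerA1, ServerA2 -StaticAddress 192.168.1.210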

The failover cluster feature predefines specific roles that can be configured for failover protection, including DFS Namespace Server, DHCP Server, File Server, iSCSI Target Server, WINS Server, Hyper-V Replica Broker, and Virtual Machines. It is also possible to cluster applications and services that are not cluster-aware by using the Generic Application or Generic Service role, respectively. The figure below shows the roles representing services and applications that can be configured for high availability.

Either the Failover Cluster Manager or Windows PowerShell can be used to configure these roles. The following code provides an example of applying the Generic Application role using Windows PowerShell.

Add-ClusterGenericApplicationRole -CommandLine notepad.exe -Name notepad -StaticAddress 192.168.1.225

The following command can be used to verify that the generic application is online:

Get-ClusterResource "notepad application" | fl

Failover Cluster Manager also shows that the generic application is up and running. See the figure below.

Failover Clustered File Server Options

Windows Server 2012 R2 supports two different clustered file server implementations: Scale-Out File Server for application data and File Server for general use.

Scale-Out File Server for Application Data

Also known as an active-active cluster, this feature was introduced in Windows Server 2012 and is the recommended clustered file server option for deploying Hyper-V nodes and Microsoft SQL Server over Server Message Block (SMB). This high-performance solution allows you to store server application data on file shares that are concurrently available online on all nodes. Because the aggregated bandwidth of all the nodes becomes the maximum cluster bandwidth, the performance boost can be very significant, and you can increase the total bandwidth by bringing additional nodes into the cluster. These scale-out file shares require SMB 3.0 or higher and are not available in any version of Windows Server prior to Windows Server 2012.

File Server for General Use

This is the traditional failover clustering solution that has been available in previous versions of Windows Server, in which only one node is active at a time in an active-passive configuration. It supports some important features that cannot be implemented on Scale-Out File Servers, such as Data Deduplication, DFS Replication, Dynamic Access Control, Work Folders, NFS shares, BranchCache, and File Server Resource Manager file screening and quota management.
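
Either clustered file server type can be added from Windows PowerShell once the cluster has suitable storage. The role names, disk resource name, and IP address below are only examples:

# Scale-Out File Server for application data (active-active, stores shares on CSVs)
Add-ClusterScaleOutFileServerRole -Name SOFS1

# File Server for general use (active-passive, bound to a clustered disk and a client access point)
Add-ClusterFileServerRole -Name FS1 -Storage "Cluster Disk 2" -StaticAddress 192.168.1.230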

Closing Remarks

Installing the Windows Server 2012 R2 failover clustering feature has some strict hardware and software requirements. This article demonstrated how to connect the cluster nodes to shared storage, how to create a cluster, and how to configure a generic application role using Windows PowerShell. There is more to do now that the cluster is up and running, as we can configure additional services and applications for failover protection. After all, that is the whole idea of setting up the cluster.

Our next and final article in this series will walk through the configuration of a highly available file server. And saving the best for last, you will see the implementation of cluster shared volumes (CSV) and how they are used on a Hyper-V cluster to provide failover protection in a virtualized environment. Live migration will be tested to validate the functionality of the Hyper-V cluster.