
Creating a Citrix ADC cluster

To create a cluster, start by taking one of the Citrix ADC appliances that you want to add to the cluster. On this node, you must create the cluster instance and define the cluster IP address. This node is the first cluster node and is called the cluster configuration coordinator (CCO). All configurations that are performed on the cluster IP address are stored on this node and then propagated to the other cluster nodes.

The CCO responsibility in a cluster is not fixed to a specific node. It can change over time, depending on the following factors:

  • The priority of the node. The node with the highest priority (lowest priority number) is made the CCO. Therefore, if a node with a priority number lower than that of the existing CCO is added, the new node takes over as the CCO (see the example after this list).

    Note

    Node priority can be configured from NetScaler 10.1 onwards.

  • If the current CCO goes down, the node with the next lowest priority number takes over as the CCO. If the priority is not set or if there are multiple nodes with the lowest priority number, the CCO is selected from one of the available nodes.
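
For example, node priority can be assigned when a node is added or changed later by using the set cluster node command. The node ID and priority value here are illustrative:

    set cluster node 1 -priority 10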

Note

The configurations of the appliance (including SNIP addresses and VLANs) are cleared by implicitly running the clear ns config extended command. However, the default VLAN and NSVLAN are not cleared from the appliance. Therefore, if you want the NSVLAN on the cluster, make sure it is created before the appliance is added to the cluster. For an L3 cluster (cluster nodes on different networks), networking configurations are not cleared from the appliance.
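
For example, an NSVLAN can be created on the appliance before it is added to the cluster by using the set ns config command. The VLAN ID and interface here are illustrative values, and the NSVLAN configuration takes effect only after the configuration is saved and the appliance is restarted:

    set ns config -nsvlan 100 -ifnum 1/1 -tagged NO
    save ns config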

Important

HA Monitor (HAMON) on a cluster setup is used to monitor the health of an interface on each node. The HAMON parameter must be enabled on each node to monitor the state of the interface. If the operational state of a HAMON-enabled interface goes down for any reason, the respective cluster node is marked as unhealthy (NOT UP) and that node cannot serve traffic.
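
For example, to enable HAMON on an interface of a node, you can use the set interface command (the interface ID is illustrative):

    set interface 1/1 -haMonitor ON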

To create a cluster by using the command line interface

  1. Log on to an appliance (for example, an appliance with NSIP address 10.102.29.60) that you want to add to the cluster.

  2. Add a cluster instance.

    add cluster instance <clId> -quorumType <NONE | MAJORITY> -inc <ENABLED | DISABLED> -backplanebasedview <ENABLED | DISABLED>

    Note

    • The cluster instance ID must be unique within a LAN.
    • The -quorumType parameter must be set to MAJORITY and not NONE in the following scenarios:
      • Topologies which do not have redundant links between cluster nodes. These topologies might be prone to network partition due to a single point of failure.
      • During any cluster operations such as node addition or removal.
    • For an L3 cluster, make sure the -inc parameter is set to ENABLED. The -inc parameter must be disabled for an L2 cluster.
    • When the -backplanebasedview parameter is enabled, the operational view (the set of nodes that serve traffic) is decided based on heartbeats received only on the backplane interface. By default, this parameter is disabled. When it is disabled, a node does not depend on heartbeats received on the backplane interface alone.
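
    For example, the following command creates cluster instance 1 for an L2 cluster (the instance ID and parameter values are illustrative):

    add cluster instance 1 -quorumType MAJORITY -inc DISABLED -backplanebasedview DISABLED
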
  3. [Only for an L3 cluster] Create a node group. In the next step, the newly added cluster node must be associated with this node group.

    Note

    This node group includes all or a subset of the Citrix ADC appliances that belong to the same network.

    add cluster nodegroup <name>
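
    For example, the following command creates the node group ng1 that is used in the L3 cluster examples in the next step:

    add cluster nodegroup ng1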

  4. Add the Citrix ADC appliance to the cluster.

    add cluster node <nodeId> <IPAddress> -state <state> -backplane <interface_name> -nodegroup <name>

    Note

    For an L3 cluster:

    • The node group parameter must be set to the name of the node group that is created.
    • The backplane parameter is mandatory for nodes that are associated with a node group that has more than one node, so that the nodes within the network can communicate with each other.

    Example: Adding a node for an L2 cluster (all cluster nodes are in the same network).

    add cluster node 0 10.102.29.60 -state PASSIVE -backplane 0/1/1

    Adding a node for an L3 cluster, which includes a single node from each network. Here, you do not have to set the backplane.

    add cluster node 0 10.102.29.60 -state PASSIVE -nodegroup ng1

    Adding a node for an L3 cluster, which includes multiple nodes from each network. Here, you have to set the backplane so that nodes within a network can communicate with each other.

    add cluster node 0 10.102.29.60 -state PASSIVE -backplane 0/1/1 -nodegroup ng1

  5. Add the cluster IP address (for example, 10.102.29.61) on this node.

    add ns ip <IPAddress> <netmask> -type clip

    Example

    > add ns ip 10.102.29.61 255.255.255.255 -type clip 
  6. Enable the cluster instance.

    enable cluster instance <clId>
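
    Example (the instance ID 1 is illustrative and matches the cluster instance created in step 2):

    enable cluster instance 1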

  7. Save the configuration.

    save ns config

  8. Warm reboot the appliance.

    reboot -warm

    Verify the cluster configuration by using the show cluster instance command. The output of the command must display the NSIP address of the appliance as a node of the cluster.
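
    For example (the instance ID is illustrative; sample output of this command appears later in this article):

    show cluster instance 1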

  9. After the node is UP, log on to the CLIP and change the RPC credentials for both the cluster IP address and the node IP address. For more information about changing an RPC node password, see Change an RPC node password.
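
    For example, the RPC node password can be changed with a command of the following form (the IP address is illustrative and the password value is a placeholder):

    set rpcNode 10.102.29.60 -password <password>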

To create a cluster by using the GUI

  1. Log on to an appliance (for example, an appliance with NSIP address 10.102.29.60) that you intend to add to the cluster.
  2. Navigate to System > Cluster.
  3. In the details pane, click the Manage Cluster link.
  4. In the Cluster Configuration dialog box, set the parameters required to create a cluster. For a description of a parameter, hover the mouse cursor over the corresponding text box.
  5. Click Create.
  6. In the Configure cluster instance dialog box, select the Enable cluster instance check box.
  7. In the Cluster Nodes pane, select the node and click Open.
  8. In the Configure Cluster Node dialog box, set the State.
  9. Click OK, and then click Save.
  10. Warm reboot the appliance.
  11. After the node is UP, log on to the CLIP and change the RPC credentials for both the cluster IP address and the node IP address. For more information about changing an RPC node password, see Change an RPC node password.

Strict mode support for sync status of the cluster

You can configure a cluster node to display errors when applying the configuration. A new parameter, syncStatusStrictMode, is introduced in both the add cluster instance and set cluster instance commands to track the status of each node in a cluster. By default, the syncStatusStrictMode parameter is disabled.

To enable the strict mode by using the CLI

At the command prompt, type:

set cluster instance <clId> [-syncStatusStrictMode (ENABLED | DISABLED)]

Example:

set cluster instance 1 -syncStatusStrictMode ENABLED

To view the status of strict mode by using the CLI

> show cluster instance 1

1)  Cluster ID: 1
    Dead Interval: 3 secs
    Hello Interval: 200 msecs
    Preemption: DISABLED
    Propagation: ENABLED
    Quorum Type: MAJORITY
    INC State: DISABLED
    Process Local: DISABLED
    Retain Connections: NO
    Heterogeneous: NO
    Backplane based view: DISABLED
    Cluster sync strict mode: ENABLED
    Cluster Status: ENABLED(admin), ENABLED(operational), UP

    WARNING(s):
    (1) - There are no spotted SNIPs configured on the cluster. Spotted SNIPs can help improve cluster performance

    Member Nodes:
    Node ID     Node IP           Health    Admin State    Operational State
    -------     -------           ------    -----------    -----------------
    1)  1       192.0.2.20        UP        ACTIVE         ACTIVE(Configuration Coordinator)
    2)  2       192.0.2.21        UP        ACTIVE         ACTIVE
    3)  3       192.0.2.19*       UP        ACTIVE         ACTIVE

To view the sync failure reason of a cluster node by using the GUI

  1. Navigate to System > Cluster > Cluster Nodes.
  2. In the Cluster Nodes page, scroll to the extreme right to view the details of the synchronization failure reason of the cluster nodes.