ADC

Support matrix and usage guidelines

This document lists the different hypervisors and features supported on a Citrix ADC VPX instance. It also describes their usage guidelines and limitations.

Table 1. VPX instance on Citrix Hypervisor

| Citrix Hypervisor version | SysID | VPX models |
| --- | --- | --- |
| 7.1 | 450000 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G |

Table 2. VPX instance on VMware ESXi server

| VMware ESXi version | SysID | VPX models |
| --- | --- | --- |
| ESXi 7.0 update 3m, build number 21686933 (supported from Citrix ADC release 12.1 build 65.x onwards) | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 7.0 update 3f, build number 20036589 (supported from Citrix ADC release 12.1 build 65.x onwards) | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 7.0 update 3d, build number 19482537 (supported from Citrix ADC release 12.1 build 65.x onwards) | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 7.0 update 2d, build number 18538813 (supported from Citrix ADC release 12.1 build 63.x onwards) | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 7.0 update 2a, build number 17867351 (supported from Citrix ADC release 12.1 build 62.x onwards) | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 7.0 update 1d, build number 17551050 (supported from Citrix ADC release 12.1 build 62.x onwards) | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 7.0 update 1c, build number 17325551 (supported from Citrix ADC release 12.1 build 62.x onwards) | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 6.0, build numbers 3620759 and 5050593 (supported from Citrix ADC release 12.0 build 51.24 onwards); build number 6765062 (supported from Citrix ADC release 12.0 build 56.20 onwards) | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 6.5, build number 4564106 and patch 7967591; build number 8294253 (supported from Citrix ADC release 12.1 build 55.13 onwards) | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 6.7, build numbers 8941472 and 13006603 (supported from Citrix ADC release 12.1 build 55.13 onwards) | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |

Table 3. VPX on Microsoft Hyper-V

| Hyper-V version | SysID | VPX models |
| --- | --- | --- |
| 2012, 2012 R2 | 450020 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000 |

Table 4. VPX instance on generic KVM

| Generic KVM version | SysID | VPX models |
| --- | --- | --- |
| RHEL 7.4; RHEL 7.5 (from Citrix ADC version 12.1 50.x onwards); Ubuntu 16.04 | 450070 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |

Note:

The VPX instance is qualified for the hypervisor release versions listed in tables 1–4, not for patch releases within a version. However, the VPX instance is expected to work seamlessly with patch releases of a supported version. If it does not, log a support case for troubleshooting and debugging.

Table 5. VPX instance on AWS

| AWS version | SysID | VPX models |
| --- | --- | --- |
| N/A | 450040 | VPX 10, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 15G, VPX BYOL |

Table 6. VPX instance on Azure

| Azure version | SysID | VPX models |
| --- | --- | --- |
| N/A | 450020 | VPX 10, VPX 200, VPX 1000, VPX 3000, VPX BYOL |

Table 7. VPX feature matrix


  • Clustering support is available on SRIOV for client- and server-facing interfaces and not for the backplane.

  • Interface DOWN events are not recorded in Citrix ADC VPX instances.

  • For Static LA, traffic might still be sent on the interface whose physical status is DOWN.

  • For LACP, the peer device detects an interface DOWN event through the LACP timeout mechanism:

    • Short timeout: 3 seconds
    • Long timeout: 90 seconds
  • For LACP, interfaces should not be shared across VMs.

  • For dynamic routing, the convergence time depends on the routing protocol, because link events are not detected.

  • Monitored static route functionality fails if monitors are not bound to static routes, because the route state depends on the VLAN status, which in turn depends on the link status.

  • In high availability, partial failure detection does not happen on link failure; a high availability split-brain condition might occur instead.

  • When any link event (disable/enable, reset) is generated from a VPX instance, the physical status of the link does not change. For static LA, any traffic initiated by the peer gets dropped on the instance.

  • For the VLAN tagging feature to work, do the following:

    On the vSwitch of the VMware ESX server, set the port group’s VLAN ID to 1–4095. For more information about setting a VLAN ID on the vSwitch of the VMware ESX server, see VMware ESX Server 3 802.1Q VLAN Solutions.
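On a newer ESXi host with a standard vSwitch, the equivalent setting can also be applied from the ESXi shell. This is a sketch only; the vSwitch and port group names are placeholders, not names from this document:

```
# Trunk all VLANs (VLAN ID 4095) to the port group that serves the VPX instance.
# "VPX-PG" is an illustrative port group name; substitute your own.
esxcli network vswitch standard portgroup set --portgroup-name=VPX-PG --vlan-id=4095
```

Setting a specific ID in the 1–4094 range instead of 4095 restricts the port group to that single VLAN.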

Table 8. Supported browsers

| Operating system | Browser and versions |
| --- | --- |
| Windows 7 | Internet Explorer 8, 9, 10, and 11; Mozilla Firefox 3.6.25 and above; Google Chrome 15 and above |
| Windows 64 bit | Internet Explorer 8, 9; Google Chrome 15 and above |
| Mac | Mozilla Firefox 12 and above; Safari 5.1.3; Google Chrome 15 and above |

Usage guidelines

Follow these usage guidelines:

  • See the VMware ESXi CPU Considerations section in the document Performance Best Practices for VMware vSphere 6.5. Here’s an extract:

    Placing virtual machines with high CPU/memory demand on an overcommitted host/cluster is not recommended. In most environments ESXi allows significant levels of CPU overcommitment (that is, running more vCPUs on a host than the total number of physical processor cores in that host) without impacting virtual machine performance. If an ESXi host becomes CPU saturated (that is, the virtual machines and other loads on the host demand all the CPU resources the host has), latency-sensitive workloads might not perform well. In this case you might want to reduce the CPU load, for example by powering off some virtual machines or migrating them to a different host (or allowing DRS to migrate them automatically).

  • Citrix recommends the latest hardware compatibility version to take advantage of the latest feature sets of the ESXi hypervisor for the virtual machine. For more information about hardware and ESXi version compatibility, see the VMware documentation.

  • The Citrix ADC VPX is a latency-sensitive, high-performance virtual appliance. To deliver its expected performance, the appliance requires vCPU reservation, memory reservation, and vCPU pinning on the host. Also, hyper-threading must be disabled on the host. If the host does not meet these requirements, issues such as high-availability failover, CPU spikes within the VPX instance, sluggishness in accessing the VPX CLI, pitboss daemon crashes, packet drops, and low throughput occur.

  • A hypervisor is considered over-provisioned if one of the following two conditions is met:
    • The total number of virtual cores (vCPU) provisioned on the host is greater than the total number of physical cores (pCPUs).

    • The provisioned VMs together consume more vCPUs than the total number of pCPUs.

      At times, if an instance is over-provisioned, the hypervisor might not be able to guarantee the resources reserved for the instance (such as CPU and memory) due to hypervisor scheduling overheads, bugs, or limitations of the hypervisor. This can cause a lack of CPU resources for Citrix ADC and might lead to the issues mentioned earlier under Usage guidelines. As an administrator, you are recommended to reduce the tenancy on the host so that the total number of vCPUs provisioned on the host is less than or equal to the total number of pCPUs.

      Example: For the ESX hypervisor, if the %RDY% parameter of a VPX vCPU is greater than 0 in the esxtop command output, the ESX host has scheduling overheads, which can cause latency-related issues for the VPX instance.

      In such a situation, reduce the tenancy on the host so that %RDY% always returns to 0. Alternatively, contact the hypervisor vendor to investigate why the resource reservation is not being honored.
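The over-provisioning check described above reduces to comparing vCPU and pCPU counts. A minimal sketch, with an illustrative function name (not a Citrix or VMware API):

```python
def is_over_provisioned(vcpus_per_vm, physical_cores):
    """Return True if the host meets the over-provisioning condition:
    the vCPUs provisioned across all VMs exceed the host's pCPUs."""
    total_vcpus = sum(vcpus_per_vm)
    return total_vcpus > physical_cores

# Three VMs with 4 vCPUs each on a 10-pCPU host: 12 > 10, over-provisioned.
print(is_over_provisioned([4, 4, 4], 10))  # True
# Two such VMs: 8 <= 10, within capacity.
print(is_over_provisioned([4, 4], 10))     # False
```

Keeping this check false (total vCPUs at or below total pCPUs) is what the tenancy-reduction recommendation amounts to.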

  • Hot adding is supported only for PV and SRIOV interfaces on Citrix ADC.

  • Hot removing either through the AWS Web console or AWS CLI is not supported for PV and SRIOV interfaces on Citrix ADC. The behavior of the instances can be unpredictable if hot-removal is attempted.

  • You can use two commands (set ns vpxparam and show ns vpxparam) to control the packet engine (non-management) CPU usage behavior of VPX instances in hypervisor and cloud environments:

    • set ns vpxparam -cpuyield (YES | NO | DEFAULT): Allow each VM to use CPU resources that have been allocated to another VM but are not being used.

      Set ns vpxparam parameters:

      -cpuyield: Release or do not release allocated but unused CPU resources.

      • YES: Allow allocated but unused CPU resources to be used by another VM.

      • NO: Reserve all CPU resources for the VM to which they have been allocated. With this option, the hypervisor or cloud environment reports a higher percentage of VPX CPU usage.

      • DEFAULT: Same as NO.

      Note

      On all the Citrix ADC VPX platforms, the vCPU usage on the host system is 100 percent. Type the set ns vpxparam -cpuyield YES command to override this usage.

      If you want to set the cluster nodes to “yield”, you must perform the following additional configurations on the cluster configuration coordinator (CCO):

      • If a cluster is formed, all the nodes come up with “yield=DEFAULT”.
      • If a cluster is formed using nodes that are already set to “yield=YES”, the nodes are still added to the cluster with the “DEFAULT” yield setting.

      Note:

      If you want to set the cluster nodes to “yield=YES”, you can perform suitable configurations only after forming the cluster but not before the cluster is formed.

    • show ns vpxparam: Display the current vpxparam settings.
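Putting the two commands together, a typical sequence on the VPX command line is to set the yield behavior and then verify it (commands as named in this document; no output is shown here):

```
> set ns vpxparam -cpuyield YES
> show ns vpxparam
```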
