Support matrix and usage guidelines

This document lists the hypervisors and features supported on a Citrix ADC VPX instance, and describes the usage guidelines and known limitations.

Table 1. VPX instance on Citrix Hypervisor

| Citrix Hypervisor version | SysID | VPX models |
| --- | --- | --- |
| 8.2 (supported from 13.0-64.x onwards), 8.0, 7.6, 7.1 | 450000 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G |

Table 2. VPX instance on VMware ESXi hypervisor

| ESXi version | ESXi release date (MM/DD/YYYY) | ESXi build number | Citrix ADC VPX version | SysID | VPX models |
| --- | --- | --- | --- | --- | --- |
| ESXi 8.0u1 | 04/18/2023 | 21495797 | 13.0-90.x onwards | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 8.0c | 03/30/2023 | 21493926 | 13.0-90.x onwards | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 8.0 | 10/11/2022 | 20513097 | 13.0-90.x onwards | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 7.0 update 3n | 07/06/2023 | 21930508 | 13.0-91.x onwards | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 7.0 update 3m | 05/03/2023 | 21686933 | 13.0-91.x onwards | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 7.0 update 3i | 12/08/2022 | 20842708 | 13.0-90.x onwards | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 7.0 update 3f | 07/12/2022 | 20036589 | 13.0-86.x onwards | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 7.0 update 3d | 03/29/2022 | 19482537 | 13.0-86.x onwards | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 7.0 update 3c | 01/27/2022 | 19193900 | 13.0-85.x onwards | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 7.0 update 2d | 09/14/2021 | 18538813 | 13.0-83.x onwards | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 7.0 update 2a | 12/17/2020 | 17867351 | 13.0-82.x onwards | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 7.0 update 1d | 12/17/2020 | 17551050 | 13.0-82.x onwards | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 7.0 update 1c | 12/17/2020 | 17325551 | 13.0-82.x onwards | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 7.0 update 1b | 10/06/2020 | 16850804 | 13.0-76.x onwards | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 7.0b | 06/23/2020 | 16324942 | 13.0-71.x onwards | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 7.0 GA | 04/02/2020 | 15843807 | 13.0-71.x onwards | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 6.7 P04 | 11/19/2020 | 17167734 | 13.0-67.x onwards | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 6.7 P03 | 08/20/2020 | 16713306 | 13.0-67.x onwards | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 6.7 P02 | 04/28/2020 | 16075168 | 13.0-67.x onwards | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 6.7 P01 | 12/05/2019 | 15160138 | 13.0-67.x onwards | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 6.7 Update 3 | 08/20/2019 | 14320388 | 13.0-58.x onwards | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 6.7 U2 | 04/11/2019 | 13006603 | 13.0-47.x onwards | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 6.5 GA | 11/15/2016 | 4564106 | 13.0-47.x onwards | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 6.5 U1g | 03/20/2018 | 7967591 | 13.0-47.x onwards | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 6.0 Update 3 | 02/24/2017 | 5050593 | 12.0-51.x onwards | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |
| ESXi 6.0 Express Patch 11 | 10/05/2017 | 6765062 | 12.0-56.x onwards | 450010 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |

Table 3. VPX instance on Microsoft Hyper-V

| Hyper-V version | SysID | VPX models |
| --- | --- | --- |
| 2012, 2012 R2, 2016, 2019 | 450020 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000 |

Table 4. VPX instance on generic KVM

| Generic KVM version | SysID | VPX models |
| --- | --- | --- |
| RHEL 7.4, RHEL 7.5 (from Citrix ADC version 12.1 50.x onwards), RHEL 7.6, RHEL 8.2, Ubuntu 16.04, Ubuntu 18.04, RHV 4.2 | 450070 | VPX 10, VPX 25, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX 8000, VPX 10G, VPX 15G, VPX 25G, VPX 40G, VPX 100G |

Points to note:

Consider the following points while using KVM hypervisors.

  • The VPX instance is qualified for the hypervisor release versions mentioned in tables 1–4, and not for patch releases within a version. However, the VPX instance is expected to work seamlessly with patch releases of a supported version. If it does not, log a support case for troubleshooting and debugging.

  • Use the ip link command to configure RHEL 8.2 network bridges (see the first sketch after this list).

  • Before using RHEL 7.6, complete the following steps on the KVM host (see the GRUB sketch after this list):

    1. Edit /etc/default/grub and append "kvm_intel.preemption_timer=0" to the GRUB_CMDLINE_LINUX variable.
    2. Regenerate grub.cfg with the command "# grub2-mkconfig -o /boot/grub2/grub.cfg".
    3. Restart the host machine.

  • Before using Ubuntu 18.04, complete the following steps on the KVM host (see the GRUB sketch after this list):

    1. Edit /etc/default/grub and append "kvm_intel.preemption_timer=0" to the GRUB_CMDLINE_LINUX variable.
    2. Regenerate grub.cfg with the command "# grub-mkconfig -o /boot/grub/grub.cfg".
    3. Restart the host machine.
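For example, a minimal RHEL 8.2 bridge setup with ip link might look like the following sketch (br0 and eth1 are placeholder names for the bridge and the physical interface on your host):

    # Create the bridge device and bring it up
    ip link add name br0 type bridge
    ip link set dev br0 up
    # Attach the physical interface to the bridge and bring it up
    ip link set dev eth1 master br0
    ip link set dev eth1 up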
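On both RHEL 7.6 and Ubuntu 18.04, the resulting GRUB edit looks like the following sketch (the existing kernel arguments shown are placeholders; keep whatever your file already contains):

    # /etc/default/grub
    GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet kvm_intel.preemption_timer=0"

    # Regenerate the GRUB configuration, then restart the host:
    grub2-mkconfig -o /boot/grub2/grub.cfg    # RHEL 7.6
    grub-mkconfig -o /boot/grub/grub.cfg      # Ubuntu 18.04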

Table 5. VPX instance on AWS

| AWS version | SysID | VPX models |
| --- | --- | --- |
| N/A | 450040 | VPX 10, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX BYOL. VPX 8000, VPX 10G, VPX 15G, and VPX 25G are available only with BYOL on EC2 instance types (C5, M5, and C5n). |

Note:

The VPX 25G offering doesn’t give the desired 25G throughput in AWS, but can give a higher SSL transaction rate compared to the VPX 15G offering.

Table 6. VPX instance on Azure

| Azure version | SysID | VPX models |
| --- | --- | --- |
| N/A | 450020 | VPX 10, VPX 200, VPX 1000, VPX 3000, VPX 5000, VPX BYOL |

Table 7. VPX feature matrix

[Figure: VPX feature matrix]

The superscript numbers used in the preceding feature matrix refer to the following points:

  1. Clustering support is available on SRIOV for client-facing and server-facing interfaces and not for the backplane.

  2. Interface DOWN events are not recorded in Citrix ADC VPX instances.

  3. For Static LA, traffic might still be sent on the interface whose physical status is DOWN.

  4. For LACP, the peer device knows the interface DOWN event based on the LACP timeout mechanism.

    • Short timeout: 3 seconds
    • Long timeout: 90 seconds
  5. For LACP, do not share interfaces across VMs.

  6. For Dynamic routing, convergence time depends on the Routing Protocol since link events are not detected.

  7. Monitored static route functionality fails if you do not bind monitors to static routes, because the route state depends on the VLAN status. The VLAN status depends on the link status.

  8. Partial failure detection does not happen in high availability if there’s a link failure. A high availability split-brain condition might occur if there’s a link failure.

    • When any link event (disable/enable, reset) is generated from a VPX instance, the physical status of the link does not change. For static LA, any traffic initiated by the peer gets dropped on the instance.

    • For the VLAN tagging feature to work, do the following:

    On the VMware ESX, set the port group’s VLAN ID to 1–4095 on the vSwitch of the VMware ESX server. For more information about setting a VLAN ID on the vSwitch of the VMware ESX server, see VMware ESX Server 3 802.1Q VLAN Solutions.
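    On a standard vSwitch, the port group VLAN ID can also be set from the ESXi shell, as in the following sketch ("VPX-PG" is a placeholder port group name; VLAN ID 4095 passes all VLAN tags through to the VPX instance):

        # Set the VLAN ID on the VPX-facing port group of a standard vSwitch
        esxcli network vswitch standard portgroup set --portgroup-name "VPX-PG" --vlan-id 4095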

Table 8. Supported browsers

| Operating system | Browser and versions |
| --- | --- |
| Windows 7 | Internet Explorer 8, 9, 10, and 11; Mozilla Firefox 3.6.25 and above; Google Chrome 15 and above |
| Windows 64-bit | Internet Explorer 8, 9; Google Chrome 15 and above |
| Mac | Mozilla Firefox 12 and above; Safari 5.1.3; Google Chrome 15 and above |

Usage guidelines

Follow these usage guidelines:

  • We recommend deploying a VPX instance on local disks of the server or on SAN-based storage volumes.

See the VMware ESXi CPU Considerations section in the document Performance Best Practices for VMware vSphere 6.5. Here’s an extract:

  • It is not recommended that virtual machines with high CPU/Memory demand sit on a Host/Cluster that is overcommitted.

  • In most environments, ESXi allows significant levels of CPU overcommitment without impacting virtual machine performance. On a host, you can run more vCPUs than the total number of physical processor cores in that host.

  • If an ESXi host becomes CPU saturated, that is, the virtual machines and other loads on the host demand all the CPU resources the host has, latency-sensitive workloads might not perform well. In this case you might want to reduce the CPU load, for example, by powering off some virtual machines or migrating them to a different host, or allowing DRS to migrate them automatically.

  • Citrix recommends the latest hardware compatibility version so that the virtual machine can use the latest feature sets of the ESXi hypervisor. For more information about hardware and ESXi version compatibility, see the VMware documentation.

  • The Citrix ADC VPX is a latency-sensitive, high-performance virtual appliance. To deliver its expected performance, the appliance requires vCPU reservation, memory reservation, and vCPU pinning on the host. Also, hyper-threading must be disabled on the host. If the host does not meet these requirements, issues such as high-availability failover, CPU spikes within the VPX instance, sluggishness in accessing the VPX CLI, pit boss daemon crashes, packet drops, and low throughput can occur.
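    These reservations and pinning settings are normally applied through the vSphere Client. As a rough illustration only, the equivalent .vmx advanced settings look like the following sketch; the values (MHz, MB, and host logical CPU numbers) are placeholders that you must size for your own host:

        sched.cpu.min = "8000"            # CPU reservation, in MHz (placeholder value)
        sched.mem.min = "16384"           # Memory reservation, in MB (placeholder value)
        sched.cpu.affinity = "8,9,10,11"  # Pin vCPUs to these host logical CPUs (placeholder values)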

A hypervisor is considered over-provisioned if one of the following two conditions is met:

  • The total number of virtual cores (vCPUs) provisioned on the host is greater than the total number of physical cores (pCPUs).

  • The provisioned VMs together consume more vCPUs than the total number of pCPUs.

    If an instance is over-provisioned, the hypervisor might not guarantee the resources reserved for the instance (such as CPU and memory) because of hypervisor scheduling overheads, bugs, or limitations of the hypervisor. This behavior can starve the Citrix ADC of CPU resources and might lead to the issues mentioned in the first point under Usage guidelines. We recommend that administrators reduce the tenancy on the host so that the total number of vCPUs provisioned on the host is less than or equal to the total number of pCPUs. For example, on a host with 16 pCPUs, provision no more than 16 vCPUs across all VMs.

    Example

    For the ESX hypervisor, if the %RDY% parameter of a VPX vCPU is greater than 0 in the esxtop command output, the ESX host has scheduling overheads, which can cause latency-related issues for the VPX instance.

    In such a situation, reduce the tenancy on the host so that %RDY% always returns to 0. Alternatively, contact the hypervisor vendor to triage the reason for not honoring the configured resource reservation.
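    For example, you can sample the scheduler statistics with esxtop in batch mode from the ESXi shell (a sketch; in the batch CSV output, the "% Ready" columns correspond to %RDY%):

        # Capture five esxtop iterations in batch mode
        esxtop -b -n 5 > /tmp/esxtop-sample.csv
        # Inspect the "% Ready" columns for the VPX VM's worlds;
        # persistently nonzero values indicate scheduling overhead.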

  • Hot adding is supported only for PV and SRIOV interfaces with Citrix ADC on AWS. VPX instances with ENA interfaces do not support hot-plug, and the behavior of the instances can be unpredictable if hot-plugging is attempted.
  • Hot removing either through the AWS Web console or AWS CLI interface is not supported with PV, SRIOV, and ENA interfaces for Citrix ADC. The behavior of the instances can be unpredictable if hot-removal is attempted.

Commands to control the packet engine CPU usage

You can use two commands (set ns vpxparam and show ns vpxparam) to control the packet engine (non-management) CPU usage behavior of VPX instances in hypervisor and cloud environments:

  • set ns vpxparam [-cpuyield (YES | NO | DEFAULT)] [-masterclockcpu1 (YES | NO)]

    Allow each VM to use CPU resources that have been allocated to another VM but are not being used.

    set ns vpxparam parameters:

    -cpuyield: Release or do not release allocated but unused CPU resources.

    • YES: Allow allocated but unused CPU resources to be used by another VM.

    • NO: Reserve all CPU resources for the VM to which they have been allocated. With this option, the VPX instance shows a higher CPU usage percentage in hypervisor and cloud environments.

    • DEFAULT: NO.

    Note:

    On all the Citrix ADC VPX platforms, the vCPU usage on the host system is 100 percent. Type the set ns vpxparam -cpuyield YES command to override this usage.

    If you want to set the cluster nodes to “yield”, you must perform the following extra configurations on the CCO:

    • If a cluster is formed, all the nodes come up with “yield=DEFAULT”.
    • If a cluster is formed using nodes that are already set to “yield=YES”, the nodes are added to the cluster using the “DEFAULT” yield.

    Note:

    If you want to set the cluster nodes to “yield=YES”, you can do so only after the cluster is formed, not before.

    -masterclockcpu1: You can move the main clock source from CPU0 (management CPU) to CPU1. This parameter has the following options:

    • YES: Allow the VM to move the main clock source from CPU0 to CPU1.

    • NO: VM uses CPU0 for the main clock source. By default, CPU0 is the main clock source.

  • show ns vpxparam

    Display the current vpxparam settings.
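    For example, a brief usage sketch from the VPX command line (the settings shown are illustrative):

        set ns vpxparam -cpuyield YES -masterclockcpu1 NO
        show ns vpxparam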
