In Service Software Upgrade support for high availability for performing zero downtime upgrade
During a regular upgrade process in a high availability setup, at some point, both nodes run different software builds. These two builds can have the same or different internal high availability version numbers.
If the two builds have different high availability version numbers, connection failover (even if it is enabled) is not supported for existing data connections. In other words, all existing data connections are lost, which leads to downtime.
To address this issue, In Service Software Upgrade (ISSU) can be used for high availability setups. ISSU introduces a migration functionality that replaces the force failover step in the upgrade process. The migration functionality honors the existing connections and includes the force failover operation.
After the migration operation is performed, the new primary node receives all traffic (request and response) related to the existing connections but steers it to the old primary node. The old primary node processes the data traffic and then sends it directly to the destination.
How the enhanced ISSU works
The regular upgrade process in a high availability setup consists of the following sequential steps:
Upgrade the secondary node. This step includes software upgrade of the secondary node and restart of the node.
Force failover. Running the force failover makes the upgraded secondary node the new primary node, and the primary node the new secondary node.
Upgrade the new secondary node. This step includes software upgrade of the new secondary node and restart of the node.
During the time frame between step 1 and step 3, both nodes run different software builds. These two builds can have the same or different internal high availability versions.
If the two builds have different high availability version numbers, connection failover (even if it is enabled) is not supported for existing data connections. In other words, all existing data connections are lost, which leads to downtime.
The ISSU upgrade process in a high availability setup consists of the following steps:
Upgrade the secondary node. This step includes software upgrade of the secondary node and restart of the node.
ISSU migration operation. This step includes the force failover operation and takes care of the existing connections. After you perform the migration operation, the new primary node receives all traffic (request and response) related to the existing connections but steers it to the old primary node through a GRE tunnel over the configured SYNC VLAN. The old primary node processes the data traffic and then sends it directly to the destination. The ISSU migration operation is complete when all the existing connections are closed.
Upgrade the new secondary node. This step includes software upgrade of the new secondary node and restart of the node.
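The three ISSU steps above can be sketched as a small orchestration routine. Everything here is illustrative: the node names, the `run_cli` callable (which might wrap an SSH session to each appliance), and the placeholder upgrade command are assumptions, not part of the Citrix ADC product.

```python
# Sketch of the ISSU upgrade sequence for an HA pair. run_cli is a
# caller-supplied helper that executes a CLI command on the named node;
# node names and the upgrade command are placeholders.

def issu_upgrade(run_cli, secondary="adc-secondary", primary="adc-primary"):
    """Run the three ISSU steps in order and return the command log."""
    log = []

    def step(node, command):
        log.append((node, command))
        run_cli(node, command)

    # Step 1: upgrade the current secondary node (includes a restart).
    step(secondary, "upgrade-to-new-build")  # placeholder upgrade command

    # Step 2: the ISSU migration replaces the plain force failover;
    # it can be started from either node.
    step(secondary, "start ns migration")

    # Step 3: upgrade the new secondary node (the old primary).
    step(primary, "upgrade-to-new-build")

    return log
```

A caller would pass a `run_cli` that actually reaches the appliances; the returned log makes the sequence easy to audit or dry-run.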
Before you begin
Before you start the ISSU process in a high availability setup, go through the following prerequisites and limitations:
- Make sure the SYNC VLAN is configured on both nodes of the high availability setup. For more information, see Restricting high availability synchronization traffic to a VLAN.
- ISSU is not supported in Microsoft Azure cloud because Microsoft Azure does not support GRE tunneling.
- High availability config propagation and synchronization do not work during ISSU.
- ISSU is not supported for an IPv6 high availability setup.
ISSU is not supported for the following sessions:
- Jumbo frames
- IPv6 sessions
- Large scale NAT (LSN)
Configuration steps
ISSU includes a migration feature, which replaces the force failover operation in the regular upgrade process of a high availability setup. The migration functionality honors the existing connections and includes the force failover operation.
During the ISSU process of a high availability setup, you run the migration operation just after you upgrade the secondary node. You can perform the migration operation from either of the two nodes.
CLI Procedure
To perform the high availability migration operation by using the CLI:
At the command prompt, type:
start ns migration
GUI Procedure
To perform the high availability migration operation by using the GUI:
Navigate to System, click the System Information tab, click the Migration tab, and then click Start Migration.
Display ISSU statistics
You can view the ISSU statistics to monitor the current ISSU process in a high availability setup. The ISSU statistics display the following information:
- Current status of ISSU migration operation
- Start time of the ISSU migration operation
- End time of the ISSU migration operation
- Start time of the ISSU rollback operation
You can view the ISSU statistics on either of the HA nodes by using the CLI or GUI.
CLI Procedure
To display the ISSU statistics by using the CLI:
At the command prompt, type:
show ns migration
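The article does not reproduce the output of `show ns migration`, so the small parser below assumes a hypothetical `Field: value` layout covering the four statistics listed above. The sample text and field names are illustrative; adjust them to the actual output of your build.

```python
# Hypothetical sample of `show ns migration` output; the real format
# may differ. "-" stands for a timestamp that is not yet set.
SAMPLE = """\
Migration Status: IN PROGRESS
Migration Start Time: Fri Jan 10 10:15:02 2025
Migration End Time: -
Rollback Start Time: -
"""

def parse_migration_stats(text):
    """Return the ISSU statistics as a dict of field name -> value."""
    stats = {}
    for line in text.splitlines():
        if ":" in line:
            # Split on the first colon only, so timestamps keep theirs.
            key, value = line.split(":", 1)
            stats[key.strip()] = value.strip()
    return stats
```

A monitoring script could poll this periodically and alert when the status leaves `IN PROGRESS`.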
GUI Procedure
To display the ISSU statistics by using the GUI:
Navigate to System, click the System Information tab, click the Migration tab, and then click Show Migration.
Rollback of the ISSU process
High availability (HA) setups now support rollback of the In Service Software Upgrade (ISSU) process. The ISSU rollback feature is helpful if you observe that, during the ISSU migration operation, the HA setup is not stable or is not performing at the expected optimal level.
The ISSU rollback is applicable only while the ISSU migration operation is in progress. It does not work after the ISSU migration operation has completed.
The ISSU rollback functions differently based on the state of the ISSU migration operation when the ISSU rollback operation is triggered:
Force failover has not yet happened during the ISSU migration operation. The ISSU rollback stops the ISSU migration operation and removes any internal migration data stored on both nodes. The current primary node remains the primary node and continues to process data traffic related to existing and new connections.
Force failover has happened during the ISSU migration operation. If the high availability failover has happened during the ISSU migration operation, the new primary node (say, N1) processes traffic related to new connections. The old primary node (now the secondary node, say, N2) processes traffic related to the old connections (connections that existed before the ISSU migration operation).
The ISSU rollback stops the ISSU migration operation and triggers a force failover. The new primary node (N2) then starts processing traffic related to new connections. It also continues to process traffic related to the old connections (connections established before the ISSU migration operation). In other words, the connections established before the ISSU migration operation are not lost.
The new secondary node (N1) removes all of its existing connections (connections created during the ISSU migration operation) and does not process any traffic. In other words, connections that were established after the force failover of the ISSU migration operation are lost.
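The two rollback cases above can be condensed into a small model. The node labels N1 and N2 follow the text; the connection bookkeeping and return values are illustrative only, not an actual Citrix ADC interface.

```python
def rollback(failover_done, pre_migration_conns, during_migration_conns):
    """Model the outcome of `stop ns migration`.

    Returns (primary_after_rollback, surviving_conns, lost_conns).
    "N1" is the node that became primary during the ISSU migration,
    "N2" is the old primary (new secondary after the failover).
    """
    if not failover_done:
        # Case 1: the force failover has not happened yet. The current
        # primary stays primary; no connections are lost.
        survivors = pre_migration_conns + during_migration_conns
        return ("old primary", survivors, [])
    # Case 2: the failover already happened. Rollback fails back to N2,
    # which keeps the pre-migration connections; connections created on
    # N1 during the migration are removed and lost.
    return ("N2", pre_migration_conns, during_migration_conns)
```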
Configuration steps
You can use Citrix ADC CLI or GUI to perform the ISSU rollback operation.
CLI Procedure
To perform the ISSU rollback operation by using the CLI:
At the command prompt, type:
stop ns migration
GUI Procedure
To perform the ISSU rollback operation by using the GUI:
Navigate to System, click the System Information tab, click the Migration tab, and then click Stop Migration.
SNMP traps for In Service Software Upgrade process
The In Service Software Upgrade (ISSU) process for a high availability setup supports the following SNMP trap messages at the start and end of the ISSU migration operation.
| SNMP Trap | Description |
| --- | --- |
| migrationStarted | This SNMP trap is generated and sent to the configured SNMP trap listeners when the ISSU migration operation starts. |
| migrationComplete | This SNMP trap is generated and sent to the configured SNMP trap listeners when the ISSU migration operation completes. |
The primary node (before the start of the ISSU process) always generates these two SNMP traps and sends them to the configured SNMP trap listeners.
There are no SNMP alarms associated with the ISSU SNMP traps. In other words, these traps are generated irrespective of any SNMP alarm. You only have to configure the SNMP trap listeners.
For more information on configuring SNMP trap listeners, see SNMP traps on Citrix ADC.
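As a sketch of how a monitoring script might react to the two trap names in the table above, the dispatcher below maps trap names to handler actions. Receiving actual SNMP traps requires a listener (for example, snmptrapd) and is out of scope here; the handler actions and function names are assumptions for illustration.

```python
# Illustrative dispatch for the two ISSU trap names. Unknown trap
# names are ignored, mirroring a listener that only cares about ISSU.

def handle_trap(trap_name, log=None):
    """Append a message to log for a recognized ISSU trap; return log."""
    log = [] if log is None else log
    handlers = {
        "migrationStarted": "ISSU migration started",
        "migrationComplete": "ISSU migration complete",
    }
    message = handlers.get(trap_name)
    if message is not None:
        log.append(message)
    return log
```

A real deployment would call `handle_trap` from whatever mechanism delivers decoded trap varbinds, for example an snmptrapd traphandle script.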