PoC Guide: Google Cloud Platform (GCP) Zone Selection Support with Citrix DaaS

Overview

Citrix DaaS supports zone selection on Google Cloud Platform (GCP) to enable sole-tenant node functionality. In Citrix Studio, you specify the zones where you want to create VMs. Sole-tenant nodes allow you to group your VMs together on the same hardware or to separate them from VMs in other projects. Sole-tenant nodes also enable you to comply with network access control policy, security, and privacy requirements such as HIPAA.

This document covers:

  • Configuring a Google Cloud environment to support zone selection with Citrix DaaS.

  • Provisioning Virtual Machines on Sole Tenant nodes.

  • Common error conditions and how to resolve them.

Prerequisites

You must have existing knowledge of Google Cloud and Citrix DaaS for provisioning machine catalogs in a Google Cloud Project.

To set up a GCP project for Citrix DaaS, follow the instructions here.

Google Cloud sole tenant

Sole tenancy provides exclusive access to a sole-tenant node, which is a physical Compute Engine server that is dedicated to hosting only your project’s VMs. Sole Tenant nodes allow you to group your VMs together on the same hardware or to separate them from VMs in other projects. These nodes can help you meet dedicated hardware requirements for Bring Your Own License (BYOL) scenarios.

Sole Tenant nodes enable customers to comply with network access control policy, security, and privacy requirements such as HIPAA. Customers can create VMs in desired locations where Sole Tenant nodes are allocated. This functionality supports Windows 10-based VDI deployments. A detailed description of sole tenancy can be found on the Google documentation site.

Reserving a Google Cloud sole tenant node

To reserve a Sole Tenant Node, access the Google Cloud Console menu, select Compute Engine, and then select Sole-tenant nodes:

Select tenant nodes

Sole-tenant nodes screen

Sole tenants in Google Cloud are captured in Node Groups. The first step in reserving a sole-tenant platform is to create a node group. In the GCP Console, select Create Node Group:

Sole tenant nodes

Creating a node group

Start by configuring the new node group. Citrix recommends that the Region and Zone selected for your new node group allow access to your domain controller and the subnets utilized for provisioning catalogs. Consider the following:

  • Fill in a name for the node group. In this example we used mh-sole-tenant-node-group-1.

  • Select a Region. For example, us-east1.

  • Select a Zone where the reserved system resides. For example, us-east1-b.

All node groups are associated with a node template, which is used to indicate the performance characteristics of the systems reserved in the node group. These characteristics include the number of virtual CPUs, the quantity of memory dedicated to the node, and the machine type used for machines created on the node.

Select the drop-down menu for the Node template. Then select Create node template:

Select node group template

Create a node template

Enter a name for the new template. For example, mh-sole-tenant-node-group-1-template-n1.

The next step is to select a Node Type. In the drop-down menu, select the Node type most applicable to your needs.

Note:

You can refer to this Google documentation page for more information on the different node types.
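
If you want to confirm which node types are available before choosing one, you can also query them from the Cloud Shell. This is a minimal sketch, assuming the gcloud CLI is authenticated against your project; the zone is the example zone used above:

# List the sole-tenant node types offered in the example zone
gcloud compute sole-tenancy node-types list --filter="zone:us-east1-b"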

Once you have chosen a node type, click Create:

Create node group template

Finish creating the node group

After creating the node template, the Create node group screen reappears. Click Create:

Create node group
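
If you prefer the command line, the node template and node group can also be created from the Cloud Shell. The following is a sketch using the example names from this guide; the n1-node-96-624 node type is an assumed placeholder, so substitute the node type you selected above:

# Create the node template (node type shown is an assumed example)
gcloud compute sole-tenancy node-templates create mh-sole-tenant-node-group-1-template-n1 --node-type=n1-node-96-624 --region=us-east1

# Create a node group containing one node that uses the template
gcloud compute sole-tenancy node-groups create mh-sole-tenant-node-group-1 --node-template=mh-sole-tenant-node-group-1-template-n1 --target-size=1 --zone=us-east1-b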

Creating the VDA master image

For the catalog creation process to deploy machines on the sole-tenant node, extra steps must be performed when creating and preparing the machine image for the provisioned catalog.

Machine Instances in Google Cloud have a property called Node affinity labels. Instances that are used as master images for catalogs deployed to sole-tenant environments need to have a Node affinity label that matches the name of the target node group. There are two ways to apply the affinity label:

  1. Set the label in the Google Cloud Console when creating an Instance.

  2. Use the gcloud command line to set the label on instances that already exist.

An example of both approaches follows.

Set the node affinity label at instance creation

This section does not cover all the steps necessary for creating a GCP Instance. It provides sufficient information and context for you to understand the process of setting the Node affinity label. Recall that in the examples above, the node group was named mh-sole-tenant-node-group-1. This is the node group name we need to apply as the Node affinity label on the Instance.

New instance screen

The new instance screen appears. A section for managing settings related to management, security, disks, networking, and sole tenancy appears at the bottom of the screen.

To configure sole tenancy for the new Instance:

  1. Click the section once to open the Management settings panel.

  2. Then click Sole tenancy to see the related settings panel.

Select management options

Sole tenancy settings

The panel for setting the Node affinity label appears. Click Browse to see the available Node Groups in the currently selected Google Cloud project:

Assign an affinity label

Select node group screen

The Google Cloud Project used for these examples contains one node group, the one that was created in the earlier example.

To select the node group:

  1. Click the desired node group from the list.

  2. Then click Select at the bottom of the panel.

Select node group

Set the affinity label

After clicking Select in the previous step, you are returned to the Instance creation screen. The Node affinity labels field contains the needed value to ensure that catalogs created from this master image are deployed to the indicated node group:

Select affinity label

Set the node affinity label for an existing instance

To set the Node affinity label for an existing Instance, access the Google Cloud Shell and use the gcloud compute instances command.

More information about the gcloud compute instances command can be found on the Google Developer Tools page.

Include three pieces of information with the gcloud command:

  • Name of the VM. This example uses an existing VM named s2019-vda-base.

  • Name of the Node group. The node group name, previously defined, is mh-sole-tenant-node-group-1.

  • The Zone where the Instance resides. In this example the VM resides in the us-east1-b zone.

How to open the Google Cloud shell

The buttons appearing in the following image are present at the top right of the Google Cloud Console window. Click the Cloud Shell button:

Google Cloud Shell icon

Fresh Cloud shell window

When the Cloud Shell first opens it looks similar to:

Google Cloud Shell terminal window

Use the gcloud command to set the node affinity label

Run this command in the Cloud Shell window:

gcloud compute instances set-scheduling "s2019-vda-base" --node-group="mh-sole-tenant-node-group-1" --zone="us-east1-b"

Verify the instance affinity label setting

Finally, verify the details for the s2019-vda-base instance:

VM instance details
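
The same check can be made from the Cloud Shell instead of the console. A minimal sketch, using the example instance and zone from this guide; it prints only the node affinity settings written by the set-scheduling command:

# Show the node affinities currently set on the instance
gcloud compute instances describe s2019-vda-base --zone=us-east1-b --format="yaml(scheduling.nodeAffinities)"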

Google shared VPCs

If you intend to use Google Sole-tenants with a Shared VPC, refer to the GCP Shared VPC Support with Citrix DaaS document. Shared VPC support requires extra configuration steps related to Google Cloud permissions and service accounts.

Create a machine catalog

After performing the previous steps in this document, you can create a machine catalog. Use the following steps to access Citrix Cloud and navigate to the Citrix Studio Console.

Citrix Studio main page

In Citrix Studio, select Machine Catalogs:

Select machine catalogs

Create the machine catalog

Select Create Machine Catalog:

Create machine catalog

Introduction screen

Click Next to begin the configuration process:

Studio Introduction screen

Machine Type

Select an operating system type for the machines in the catalog. Click Next:

Select OS

Machine management

Accept the default setting that the catalog utilizes power managed machines. Then select the MCS resources. In this example we are using the resources named GCP1-useast1 (Zone: My Resource Location). Click Next:

Select machine management

Note: These resources come from a previously created host connection, representing the network and other resources like the domain controller and reserved sole tenants. These elements are used when deploying the catalog. The process of creating the host connection is not covered in this document. More information can be found on the Connections and resources page.

Master image

The next step is to select the master image for the catalog. Recall that to utilize the reserved Node Group, we must select an image that has the Node affinity value set accordingly. For this example we use the image prepared earlier, s2019-vda-base.

Click Next:

Select master image

Storage

This screen indicates the storage type that will be used for the virtual machines in the machine catalog. For this example we use the Standard Persistent Disk.

Click Next:

Select storage type

Virtual machines

This screen indicates the number of virtual machines and the zones to which the machines are deployed. In this example we have specified three machines in the catalog. When using sole-tenant node groups for machine catalogs, it is imperative that you select only zones containing reserved node groups. In our example we have a single node group, so only the zone where that node group resides is selected.

Click Next:

Select VM

Disk Settings

This screen provides the option to enable Write-back cache. For this example we are not enabling this setting.

Click Next:

Select Disk Settings

Active Directory computer account

During the provisioning process MCS communicates with the domain controller to create host names for all the machines being created:

  1. Select the Domain into which the machines are created.

  2. Specify the Account naming scheme used when generating the machine names.

Since the catalog in this example has three machines and we have specified a naming scheme of MySTVms-##, the machines are named:

  • MySTVms-01

  • MySTVms-02

  • MySTVms-03

Click Next:

Select computer account

Domain credentials

Specify the credentials used to communicate with the domain controller, mentioned in the previous step:

  1. Select Enter Credentials.

  2. Supply the credentials, then click OK.

Select domain credentials

Summary

This screen displays a summary of the key information gathered during the catalog creation process. The final step is to enter a catalog name and an optional description. In this example the catalog name is My Sole Tenant Catalog.

Enter the catalog name and click Finish:

Studio Summary screen

When the catalog creation process finishes, the Citrix Studio Console resembles the following:

Studio Console showing machine catalogs

Verify in Google Cloud console

Use the Google Console to verify that the machines were created on the node group as expected:

Verify machines
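
The same verification can be done from the Cloud Shell. A sketch using the example node group and zone from this guide; the output lists each physical node in the group together with the instances scheduled onto it:

# List the nodes in the group and the VMs placed on them
gcloud compute sole-tenancy node-groups list-nodes mh-sole-tenant-node-group-1 --zone=us-east1-b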

Migrating non-sole tenant catalogs

Currently, migrating machine catalogs from the Google Cloud general/shared space to sole-tenant nodes is not possible.

Commonly encountered issues and errors

Working with any complex system containing interdependencies can result in unexpected situations. This section covers a few of the common issues and errors encountered when setting up and configuring Citrix DaaS with GCP sole-tenants.

Catalog created successfully but machines are not provisioned to reserved node group

If you have successfully created a catalog but the machines were not provisioned to the reserved node group, the most likely reasons are the following (a command-line spot check follows the list):

  • The node affinity label was not set on the master image.

  • The node affinity label value does not match the name of the Node group.

  • Incorrect zones were selected in the Virtual Machines screen during the catalog creation process.
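
To verify the first two items, the describe command shown earlier in this guide displays the node affinity label currently set on the master image. For the third item, listing the node groups shows which zones actually contain a reservation. A minimal sketch, assuming the gcloud CLI is pointed at the project used for provisioning:

# List the reserved node groups and the zones they occupy
gcloud compute sole-tenancy node-groups list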

Catalog creation fails with ‘Instance could not be scheduled due to absence of sole-tenant nodes in specified project and zone’

This situation presents itself with this error when View details is selected in the Citrix Studio dialog window:

System.Reflection.TargetInvocationException: One or more errors occurred. Citrix.MachineCreationAPI.MachineCreationException: One or more errors occurred. System.AggregateException: One or more errors occurred. Citrix.Provisioning.Gcp.Client.Exceptions.OperationException: Instance could not be scheduled due to absence of sole-tenant nodes in specified project and zone.

Possible reasons for receiving this message include the zones selected during catalog creation not containing a reserved sole-tenant node group, or the node affinity label on the master image not matching the name of a node group in the selected zone.

Updating an existing catalog fails with ‘Instance could not be scheduled due to absence of sole-tenant nodes in specified project and zone’

There are two cases in which this occurs:

  1. You are upgrading an existing sole tenant catalog that has already been provisioned using Sole Tenancy and Zone Selection. The causes of this are the same as those found in the earlier entry Catalog creation fails with ‘Instance could not be scheduled due to absence of sole-tenant nodes in specified project and zone’.

  2. You are upgrading an existing non-sole tenant catalog and do not have a sole-tenant node reserved in each zone already provisioned with machines for the catalog. This case is considered a migration, where the intent is to migrate machines from the Google Cloud common/shared runtime space to a sole-tenant node group. As noted in Migrating non-sole tenant catalogs, this is not possible.

Unknown errors during catalog provisioning

If you encounter a dialog like this when creating the catalog:

Image prep error

And selecting View details produces a screen resembling:

Image prep error details

There are a few things you can check (a command-line comparison follows the list):

  • Ensure that the Machine Type specified in the Node Group Template matches the Machine Type for the master image Instance.

  • Ensure that the Machine Type for the master image has 2 or more CPUs.
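
One way to make this comparison from the command line is sketched below; the instance, template, zone, and region are the example values used in this guide:

# Machine type of the master image instance
gcloud compute instances describe s2019-vda-base --zone=us-east1-b --format="value(machineType)"

# Node type (and any machine type restriction) defined on the node template
gcloud compute sole-tenancy node-templates describe mh-sole-tenant-node-group-1-template-n1 --region=us-east1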

Test plan

This section contains some exercises you can try to get a feel for Citrix DaaS support of Google Cloud sole-tenants.

Single tenant catalog

Reserve a node group in a single zone and provision both a persistent and a non-persistent catalog to that zone. During the steps below, monitor the node group using the Google Cloud Console to ensure proper behavior:

  1. Power off the machines.

  2. Add machines.

  3. Power all machines on.

  4. Power all machines off.

  5. Delete some machines.

  6. Delete the machine catalog.

  7. Update the catalog.

  8. Update the catalog from non-sole tenant template to sole tenant template.

  9. Update the catalog from sole tenant template to non-sole tenant template.

Two zone catalog

Similar to the exercise above, but reserve two node groups and provision a persistent catalog in one zone and a non-persistent catalog in another zone. During the steps below, monitor the node groups using the Google Cloud Console to ensure proper behavior:

  1. Power off the machines.

  2. Add machines.

  3. Power all machines on.

  4. Power all machines off.

  5. Delete some machines.

  6. Delete the machine catalog.

  7. Update the catalog.

  8. Update the catalog from non-sole tenant template to sole tenant template.

  9. Update the catalog from sole tenant template to non-sole tenant template.
