Installing RES HyperDrive Hypervisor Tools

RES Software have finally released RES HyperDrive and it’s available as a virtual appliance for VMware vSphere, VMware Workstation, Citrix XenServer and Microsoft Hyper-V. One thing that might surprise you is that the relevant hypervisor integration tools are not preinstalled. It surprised me, but then I thought about it: how would RES know what particular build of vSphere or XenServer you’ll be running?!

Installing the integration tools will allow the RES HyperDrive appliance to play nicely with the hypervisor and permit migration, dynamic memory support and so on. Whilst there are numerous articles on the internet, I thought I’d put a few notes together in one place to make it easy for people to evaluate or deploy the virtual appliance into production.

VMware Workstation 8 and vSphere 5

Installing VMware Tools into both VMware Workstation and VMware vSphere is relatively straightforward; the full command sequence is also summarised after the steps below.

  1. Mount the VMware Tools CD/DVD by choosing the “Install VMware Tools” option within the VMware Workstation/vSphere client
  2. Logon to the HyperDrive console or access the virtual appliance via SSH
  3. Mount the CD image with mount /dev/cdrom /mnt
  4. Copy the installation files to the temp directory cp /mnt/VMwareTools-x.x.x-xxxxx.tar.gz /tmp
  5. Change to the temp directory cd /tmp
  6. Extract the tarball tar zxpf /tmp/VMwareTools-x.x.x-xxxxx.tar.gz
  7. Dismount the CD image with umount /dev/cdrom
  8. Install VMware Tools by running ./vmware-tools-distrib/vmware-install.pl
  9. Accept all the defaults and select a default resolution for X support (not that it appears to be used on the appliance!)
  10. Reboot the virtual appliance
  11. If using vSphere the tools should be reported as installed
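For reference, here’s the whole console sequence from steps 3 to 8 as a single block (a sketch; the VMwareTools tarball name will vary with your Workstation/vSphere build):

[code]mount /dev/cdrom /mnt
cp /mnt/VMwareTools-x.x.x-xxxxx.tar.gz /tmp
cd /tmp
tar zxpf /tmp/VMwareTools-x.x.x-xxxxx.tar.gz
umount /dev/cdrom
./vmware-tools-distrib/vmware-install.pl[/code]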

If you need to reconfigure VMware Tools, simply rerun the /usr/bin/vmware-config-tools.pl command as and when required.

Citrix XenServer 6

This is where the fun begins.. Installing the XenTools into the XenServer virtual appliance image is just as easy as VMware Workstation/vSphere (the console commands are summarised after the steps below).

  1. Mount the Citrix XenTools CD/DVD by choosing the “Install XenServer Tools” option, or manually mounting the xs-tools.iso image, within XenCenter
  2. Logon to the HyperDrive console or access the virtual appliance via SSH
  3. Mount the CD image with mount /dev/cdrom /mnt
  4. Install XenTools by running /mnt/Linux/install.sh
  5. Accept all the default prompts
  6. Reboot the virtual appliance
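Again for reference, the console side of the steps above boils down to this (a sketch; the umount before the reboot is my own tidy-up and isn’t strictly required):

[code]mount /dev/cdrom /mnt
/mnt/Linux/install.sh
umount /dev/cdrom
reboot[/code]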

Once you reboot the appliance you may find that XenCenter still reports that XenTools are not installed?! After some investigation (and not being a Linux expert) it appears that the kernel used to build the appliance is not paravirtualization (PV) aware and runs in HVM mode instead. Therefore, without recompiling the kernel it’s impossible to get the XenServer tools installed to support live migrations. Hopefully this is a simple oversight on RES’s part and will be rectified in future builds (I hope!).

In the meantime I will try to find out how to recompile the HyperDrive appliance kernel. I’ll let you know if I’m successful, but it doesn’t look particularly easy. There appears to be an issue with the configured Nomadesk YUM repository and it won’t retrieve the package list correctly to install the additional kernel. My recommendation at this point would be to utilise the VMware appliances wherever possible, or live with the fact that you’ll need to shut the HyperDrive appliance down to move it to another host.. Sad smile

I’ve not played with the Hyper-V appliance as we have no customers running it, but please let me know if the integration tools work and how to install them!

Replacing the RES HyperDrive SSL Certificate

We’ve had to replace numerous HyperDrive SSL certificates already, as the self-signed SSL certificates generated by the RES HyperDrive appliance won’t cut it if you want to use the appliance in production or if iOS/OS X devices are deployed. Fellow RES guru Rob Aarts has an article published on RESguru.com, but I’ve had differing experiences and our process is slightly different.

Unfortunately (for seemingly me in particular) I always appear to receive an “SSL key not valid” error when trying to import the certificate via the wizard (Nomadesk are aware of the problem and are investigating).

RES do have a KB article (login required) that details how to manually replace the certificate. There are some fairly simple steps to follow, but as with all the RES HyperDrive documentation so far, there are some holes in it if you’ve never performed the actions before.

In this post I will assume that you have your SSL certificate in two parts: the public certificate (.crt file) and the private key (.key file). If you need to know how to generate these files from a .pfx file, I suggest you refer to the instructions in the Replacing the Default XenServer WSS Certificate post first and look for the “Converting the Certificate to a .CRT and .KEY Pair” section. Note: there must not be a password on the .key file!
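As a quick sketch of that conversion (the certificate.pfx filename is an assumption; adjust to suit), OpenSSL can split a .pfx into the two parts, with -nodes ensuring the key is written without a password:

[code]openssl pkcs12 -in certificate.pfx -clcerts -nokeys -out localhost.crt
openssl pkcs12 -in certificate.pfx -nocerts -nodes -out localhost.key[/code]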

Additionally you will need to be comfortable with Transferring Files to RES HyperDrive and probably Remotely Administering RES HyperDrive.

Preparing the Files

The RES HyperDrive appliance requires three files: the public certificate file, the private key file and the CA intermediaries. These files need to be named localhost.crt, localhost.key and ca-bundle.crt respectively.

It is probably easier to rename these files before copying them to the appliance (and it’ll keep the post shorter!).

Backup the Self-Signed Certificate

Once connected to the RES HyperDrive appliance console you can backup the existing certificate files with the following commands:

mv /etc/pki/tls/certs/localhost.crt /etc/pki/tls/certs/localhost.crt.old
mv /etc/pki/tls/certs/ca-bundle.crt /etc/pki/tls/certs/ca-bundle.crt.old
mv /etc/pki/tls/private/localhost.key /etc/pki/tls/private/localhost.key.old

If you get any permission errors, remember to elevate to root with the su - command first.

Transfer the Files

The next step is to transfer the files to the HyperDrive appliance. I’ll assume that you’ve copied these to the appliance via SSH/SCP and they reside in the /home/hyperdrive directory. If you’ve used RES Automation Manager you can put them wherever needed 😉

Move the Files

Now that we’ve backed up the original self-signed certificate and copied the new files in they’ll need to be relocated. Move the files with the following commands:

mv /home/hyperdrive/localhost.crt /etc/pki/tls/certs/
mv /home/hyperdrive/ca-bundle.crt /etc/pki/tls/certs/
mv /home/hyperdrive/localhost.key /etc/pki/tls/private/

Fixing Permissions

I don’t actually know what permissions are needed by RES HyperDrive but my assumption is that they probably need to mirror what was there before. Fix the permissions by running the following commands:

chmod 0644 /etc/pki/tls/certs/localhost.crt
chmod 0644 /etc/pki/tls/certs/ca-bundle.crt
chmod 0600 /etc/pki/tls/private/localhost.key

If you copied the files in via SSH/SCP then they will be owned by the hyperdrive account. To reset the file ownership back to root, run:

chown root:root /etc/pki/tls/certs/localhost.crt
chown root:root /etc/pki/tls/certs/ca-bundle.crt
chown root:root /etc/pki/tls/private/localhost.key

Restart the Web Server

Once the files have been replaced and updated, restart the web server by running the service httpd restart command and BINGO!
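If you want to double-check that the new certificate is actually being served, something like the following from any machine with OpenSSL should show the new subject and validity dates (substitute your appliance’s FQDN):

[code]echo | openssl s_client -connect <ApplianceFQDN>:443 2>/dev/null | openssl x509 -noout -subject -dates[/code]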

Pre-canned RES AM Building Blocks

If you have integrated RES HyperDrive with an existing RES Automation Manager installation (remember you get a complimentary RES AM license) I’ve included a building block (click the red brick to download) that will perform the required configuration for you. Note: remember to replace the localhost.crt, ca-bundle.crt and localhost.key files in the \virtualengine.co.uk\RES HyperDrive\ resources folder before running it!

[wpdm_file id=10]

Automating Citrix Provisioning Server Install with RES AM

Here is a blog post I put together on automating the build of Citrix Provisioning Services using RES Automation Manager 2012. Before we get into the details I thought I’d mention that a number of resources and solutions I found on the way helped me out. A big thanks to their authors!

Before you can begin you will need to make sure you have the following prerequisites in place:

  • Provisioning Server Software (PVS 6.1 used for this example);
  • Windows Server 2003 upwards (Windows 2008 R2 SP1 used in this example);
  • .NET 3.5 or higher is installed;
  • RES Automation Manager 2012;
  • The latest Citrix Licensing server.

I’ve split the automated process into two distinct parts, creating the PVS database and installing PVS, to make it easier to digest. If you’re lazy or just want to crack on you can just download the building blocks and get going! Note: you will need to update the resource references to the PVS 6.1 installation files.

Creating the PVS Database

Before we can automate the PVS installation we need to have a database in place for the PVS servers to connect to. Unfortunately for us there’s not an easy way to accomplish this, as we need to generate an SQL script with our required database values. As we’re invoking the creation process from RES Automation Manager 2012 we can utilise parameters to prompt the administrator for these values at run time.

To create the SQL script we first need to install the Provisioning Services software on a clean Windows 2008 R2 server, or if you have an install already you can obtain it from here. Once installed we can run C:\Program Files\Citrix\Provisioning Services\DBscript.exe to launch the Provisioning Services Database Script Generator. Exciting stuff I know!!!

[screenshot]

If we complete the details with placeholders (as above) for the database name and farm name, DBscript will create the required .SQL script with values that we can use within our RES Automation Manager jobs. Click OK and it will create the CreateProvisioningServerDatabase.sql file in the path specified, complete with embedded placeholders.

We can now import this file as a resource into the RES Automation Manager console. Note: remember to tick the ‘Parse Environment variable and parameters’ checkbox. If you forget to do this we’ll attempt to create a database with a name of $[PVSDB] which probably won’t work (not that I’ve checked!).

To create the required SQL database we can utilise the CreateProvisioningServerDatabase.sql file with the built-in RES Automation Manager database connector task(s) or via SQLCMD on the local Microsoft SQL instance. As we’re cheap and can’t assume that you’re licensed for the relevant connector, we’ve utilised SQLCMD in the building blocks. For more details on this, download them and have a look.
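As a rough illustration of the SQLCMD route (the script path here is purely illustrative, and this assumes the resource has already been downloaded with its parameters resolved), the invocation boils down to:

[code]sqlcmd -S $[DBSERVER] -E -i "C:\Temp\CreateProvisioningServerDatabase.sql"[/code]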

After the database has been created we need to add SQL permissions to the database (if using a network user for the SOAP and STREAM services). This is achieved with a couple of SQL statements (see the building blocks for more information). If we’re using a Windows service account to run these services, the user will be configured later during the install… and now the fun begins!
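If you need to grant those permissions by hand, a hedged example would be something like the following (DOMAIN\svcPVS is purely an illustrative account name):

[code]sqlcmd -S $[DBSERVER] -E -Q "CREATE LOGIN [DOMAIN\svcPVS] FROM WINDOWS"
sqlcmd -S $[DBSERVER] -E -d $[PVSDB] -Q "CREATE USER [DOMAIN\svcPVS] FOR LOGIN [DOMAIN\svcPVS]; EXEC sp_addrolemember 'db_datareader', 'DOMAIN\svcPVS'; EXEC sp_addrolemember 'db_datawriter', 'DOMAIN\svcPVS'"[/code]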

Installing and Configuring PVS

Now that the database is created we can move on to installing the software, then configuring and adding servers to the farm. Installing the software is no problem; however, configuring and adding servers to the farm is a bit more involved. The method I used for configuring the servers was to utilise the configwizard.ans file, which holds all the configuration items. By running %PROGRAMFILES%\Citrix\Provisioning Services\configwizard.exe /s, the answer file is created at C:\ProgramData\Citrix\Provisioning Services\configwizard.ans.

Once we have the configwizard.ans file we can edit it and embed our RES Automation Manager 2012 parameters within it. If you’d like to know what options can be configured in the answer file, run configwizard.exe /c. The configuration wizard will write a C:\ProgramData\Citrix\Provisioning Services\configwizard.out file. Again, all this information is in our building blocks.

I used two different answer files: one for the first server joining the farm and the other for all subsequent servers. Below is an example of the first server’s configwizard.ans file:

[code]IPServiceType=$[IPServiceType]
PXEServiceType=$[PXEServiceType]
FarmConfiguration=2
DatabaseServer=$[DBSERVER]
DatabaseInstance=
FarmExisting=$[PVSFARM]
ExistingSite=$[PVSSITE]
ADGroup=$[DOMAIN]/Builtin/Administrators
Store=$[PVSSTORE]
DefaultPath=$[STOREDRIVE]$[STORELOCATION]
UserName=$[SERVICEACCOUNTUSER]
UserPass=$[SERVICEACCOUNTUSERPASSWORD]
network=$[NETWORKACCOUNT]
Database=$[DBCONFIGUSER]
PasswordManagementInterval=7
StreamNetworkAdapterIP=$[STREAMINGSERVERIP]
IpcPortBase=6890
IpcPortCount=20
SoapPort=54321
BootstrapFile=C:\ProgramData\Citrix\Provisioning Services\Tftpboot\ARDBP32.BIN
LS1=$[STREAMINGSERVERIP],0.0.0.0,0.0.0.0,6910
AdvancedVerbose=0
AdvancedInterruptSafeMode=0
AdvancedMemorySupport=1
AdvancedRebootFromHD=0
AdvancedRecoverSeconds=50
AdvancedLoginPolling=5000
AdvancedLoginGeneral=30000[/code]

Once the answer file(s) have been created and modified, import them into the RES Automation Manager resources. Note: remember to select the ‘Parse Environment variable and parameters’ checkbox!

Finally, to automate the actual PVS install we need to make sure we download these resources to the C:\ProgramData\Citrix\Provisioning Services\ directory on the target server. Then we kick off the configuration wizard, which applies the configuration, by running configwizard.exe /a. Once complete, the services should start automatically, and when you start the PVS console and connect you should be presented with the new farm – well, hopefully anyway!!

Problems Encountered

If you do have problems using the answer file and the install fails, the best place to start troubleshooting is the C:\ProgramData\Citrix\Provisioning Services\Log directory. If it all goes wrong you will notice that there is only one file here: configwizard.log. At the end of this file it should hopefully give you some meaningful reason for the failure. If all works fine and the services start, you should see around eight log files and have a big smile on your face :D.

I did have other issues whilst getting this to work. Here are a few notes in case they help:

  • No device license available – when a new machine is booted using Provisioning Server you will see the error in the StreamProcess log on the PVS server, and on the device a pop-up message will say “No device License currently available for this computer; a system shutdown will be initiated in 96 hours”. I found the resolution to this problem was to upgrade the license server to the latest build.
  • PVS Console install does not install via AM job – ensure that UAC is disabled and use a security context to run the job instead of the local System account.
  • After a server install I could not mount vDisks on the PVS server and would get an error similar to “Cannot mount Vdisk mapi error”. Looking at Device Manager, I noticed that the Citrix Virtual Hard Disk Enumerator driver was not installed correctly. To resolve this, first remove the device, then go to %PROGRAMFILES%\Citrix\Provisioning Services\Drivers, right-click and install cfsdep2.inf; then go back to Device Manager, add legacy hardware, select “Have Disk” and point to the same location (the file is cvhdbusp6.inf). It should then hopefully install the device without any issues. Alternatively (the preferred option with RES AM), create a module to download CFSDep2.cat, CFSDep2.inf and CFSDep2.sys to C:\Windows\System32\drivers before installing Provisioning Server and all should be okay.
  • When using a service account, make sure that this user is given the required permissions, i.e. read/write on the PVS store directory on the PVS servers and db_datareader/db_datawriter on the database (although the latter can be handled for you if you select the configure user for database option).

The building blocks have now been updated as there was a problem with the service account password passing through to the answer file; this should now be resolved. I have also added a module to remove the answer file, as the password is stored in it in plain text.

Hope this helps, Enjoy ! Smile Simon

[wpdm_file id=7]

Upgrading RES AM Linux Agents

There comes a time when RES Automation Manager Linux agents need upgrading. A typical example is with the GA release of RES HyperDrive. Now that RES Automation Manager 2012 SR1 has been released, there is a newer Linux agent that isn’t (currently) in the RES HyperDrive appliance.

If you’re like me, you’ll want to upgrade this. The Getting Started with RES Automation Manager Agent for Linux guide will point you in the right direction, but unless you’re a fairly competent Linux administrator you may struggle with certain aspects. For example, to upgrade the RES AM Linux agent all you need to do is:

1. Stop the currently installed RES Automation Manager Agent for Linux by using the command /etc/init.d/resamad stop.
2. Uninstall the RES Automation Manager Agent for Linux.
3. Install the new version of the RES Automation Manager Agent for Linux.
4. Start the new RES Automation Manager Agent for Linux.

So there you have it – simple!

I’ll actually take you through the individual steps to upgrade the Linux agent installed in a RES HyperDrive appliance. These steps are equally applicable to any Linux installation but this will no doubt be a common scenario. As an overview the steps required are:

  1. Find installed RES Automation Manager Agent for Linux version;
  2. Uninstall existing RES Automation Manager Agent for Linux;
  3. Copy new RES Automation Manager Agent for Linux;
  4. Extract RES Automation Manager Agent for Linux;
  5. Install RES Automation Manager Agent for Linux;
  6. Configure RES Automation Manager Agent for Linux;
  7. Start the RES Automation Manager Agent for Linux.

Connecting

Firstly you’ll need to connect to the RES HyperDrive virtual appliance via SSH (see Remotely Administering RES HyperDrive) or connect to the console session.

Uninstall Existing Version

To uninstall the existing RES Automation Manager Agent for Linux you’ll need to find the currently installed version before you can actually remove it. To find the existing version run:

[code]rpm -qa | grep -i res-am[/code]

This will display the current version. Make a note as you’ll need it in a minute or two! Here’s an example screenshot from the RC2 appliance:

[screenshot]

To uninstall the agent run:

[code]rpm -e <res-am-agent-version>[/code]

The <res-am-agent-version> is listed by the first command, for example res-am-agent-6.5-0.102354. If successful, the agent service should be stopped and the agent uninstalled.

Note: I have seen multiple agents listed in both the RC2 and GA releases. It looks like an oversight and the 6.4-2 version is not actually installed. If you want to remove both entries then the second rpm -e command may give you an error, but the entry will be removed from the list.

Copy Agent Files

You will need to download the latest Linux agent version from the RES support portal as they’re not included in the management console like the Windows clients. Once you’ve downloaded the tarball, copy it to the RES HyperDrive appliance (see Transferring Files to RES HyperDrive) into the /home/hyperdrive directory.

From your SSH/console session run:

[code]mv /home/hyperdrive/res-am-agent-<version>.tgz /tmp[/code]

This will move the file into the /tmp directory. Note: if you don’t have permissions to do this, run the ‘su -’ command first, enter the root password and try again.

Extracting the Agent Installer

As the RES Automation Manager Agent for Linux is compressed it needs extracting before it can be installed. Change the working directory and extract the archive by running the tar command:

[code]cd /tmp
tar xvzf ./res-am-agent-<version>.tgz[/code]

This expands the files into the /tmp/AIX, /tmp/RedHat and /tmp/Suse directories. As CentOS is based on RedHat 5, we need the RedHat agent. Install the new version by running:

[code]rpm -i /tmp/RedHat/Release5/x86_64/res-am-agent-<version>.x86_64.rpm[/code]

Configuring the Agent

To connect the RES Automation Manager Agent for Linux, we either need to enable auto discovery or specify a Dispatcher list. If you wish to enable auto discovery you can configure the agent with the following command:

[code]/usr/local/bin/resamad -d m[/code]

If you wish to specify a Dispatcher run this instead:

[code]/usr/local/bin/resamad -dd<Dispatcher>[/code]

For example, if your Dispatcher was called RESAMDISP01 (with an IP address of 192.168.0.100) you could either run

[code]/usr/local/bin/resamad -ddRESAMDISP01[/code]

or

[code]/usr/local/bin/resamad -dd192.168.0.100[/code]

Starting/Stopping the Agent

After the upgrade you’ll probably need to start the agent. To do this you can simply run:

[code]service resamad start[/code]

If you check the RES Automation Manager console you should see your agent online. The version shown below (6.00.111676) is the RES Automation Manager 2012 SR1 Agent for Linux.

[screenshot]

If you need to restart the RES Automation Manager Agent for Linux, run service resamad stop and then service resamad start. Why there is no service resamad restart command I don’t know! If I wasn’t lazy I’d create the required script, but as the appliance is supposed to be “rip and replace” I don’t think I’ll bother 🙂
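In the meantime, chaining the two commands does the job:

[code]service resamad stop; service resamad start[/code]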

Phew – hopefully someone finds this useful? Iain

Transferring Files to RES HyperDrive

As I’ve discussed previously, connecting to the RES HyperDrive appliance via SSH is more involved than is typical for other Linux appliances. My assumption is that, as SSH is used by OS X clients and is exposed to the big bad world, it needs to be secured. And tightly!

There have been numerous times when I’ve needed to transfer files to or from the virtual appliance. This normally involves copying SSL certificates and keys or grabbing log files etc. Various people have asked me how they can achieve this, so I thought I’d document the process. It’s fairly straightforward and, assuming you have your SSH private key and have downloaded WinSCP (or your SCP client of choice), you’re all set. WinSCP transfers files over SSH and therefore the process is almost identical to the earlier Remotely Administering RES HyperDrive post.

Note: if you have RES Automation Manager 2012 deployed then you can always transfer files to the appliance with the built-in Linux/Unix Resource Download task. If you don’t, or want to know how to do this manually, feel free to continue..

After launching WinSCP you need to enter the connection information. Enter the hostname/FQDN, port number, username and private key as highlighted below (replace the hostname accordingly!). Make sure that you enter the username as hyperdrive and leave the password blank:

[screenshot]

When you connect by clicking the Login button you’ll be asked whether you trust the server’s key, so go ahead and do so. Once connected you should be able to transfer and drag ‘n drop files from left to right.

[screenshot]

As we’re connecting as the hyperdrive user account we can only really copy files into the hyperdrive user’s home directory (/home/hyperdrive). After you’ve copied the files into the home directory you’ll need to move them via the command line, i.e. via the console/SSH (don’t forget to change the owner and permissions as required!). Reading files is generally less of an issue, but you might need to relocate them into the /home/hyperdrive directory before you can copy them out; diagnostic or log files for example.
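To make that concrete, relocating an uploaded certificate might look like this (a sketch; the filename is illustrative and mirrors the SSL replacement post above):

[code]su -
mv /home/hyperdrive/localhost.crt /etc/pki/tls/certs/
chown root:root /etc/pki/tls/certs/localhost.crt
chmod 0644 /etc/pki/tls/certs/localhost.crt[/code]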

Good luck! Iain

Remotely Administering RES HyperDrive

Connecting to and administering a RES HyperDrive appliance can be frustrating the first time you try. Therefore, I thought I’d put a few notes together on how to connect and transfer files to the appliance. If you’re planning on deploying SSL wildcard certificates, you’re going to need to know how to do this. Whilst you can always use the XenServer or vSphere console, connecting via SSH has many benefits.

The first thing to realise is that the HyperDrive appliance listens on TCP port 80 for Windows client synchronisation and TCP port 8080 for OS X/Mac client sync. The OS X client tunnels over SSH and therefore the default SSH port (TCP port 22) is not used.

Secondly, you will also need the SSH RSA key to connect. After successfully completing the configuration wizard, you are offered the option to download the PuTTY or OpenSSH keys. Don’t worry if you never saved these somewhere safe, as you can always download them later from the https://<ApplianceFQDN>/va/keys/putty or https://<ApplianceFQDN>/va/keys/rsa URLs (this doesn’t appear to work on the RC release). Note that you will need to authenticate with the root password to download the keys (notice the typo?!):

[screenshot]

Once you have the private key you can configure the PuTTY client. Fire up the PuTTY client and enter the RES HyperDrive appliance IP address or FQDN. You must make sure that the port is set to TCP port 8080.

[screenshot]

Before continuing you need to import the SSH private key. Expand the Connection > SSH > Auth node and select the saved key file you downloaded earlier:

[screenshot]

When you click the Open button you should be connected to the RES HyperDrive appliance and asked whether you trust the server’s key:

[screenshot]

Click Yes to trust the key and continue. You’ll be prompted for a username. As you’re authenticating with a key you won’t need to enter a password. You may not be aware, but you’re actually authenticating with the local “hyperdrive” user account key. Therefore, you must use a username of hyperdrive to connect. If you enter any other user account, e.g. root, you’ll be denied access.

[screenshot]

Once you have access to the appliance you can switch users (su/sudo) to perform the required administrative tasks. Enjoy!

Multi-Homing RES HyperDrive

In certain situations you may wish to install two network adapters into a RES HyperDrive appliance. For example, you may not want to route internal traffic via the same gateway interface as external traffic. In this scenario there are some things that you need to be aware of. The RES HyperDrive documentation intimates that the primary NIC is the internet facing interface.

This isn’t necessarily the case and either NIC can be used. What you do need to be aware of, though, is that if you configure a default gateway on all NICs, CentOS 5.3 will use the highest interface’s gateway as the default route. Therefore, if you specify a default gateway on both NICs in a multi-homed deployment, the eth1 gateway will be used for the default route. If you look closely at the example above you will notice (there are 2 x 10.0.1.1 firewalls!) that the eth1 interface has no gateway specified. I recommend that you do leave the internet-facing NIC with the default gateway, but whether this is eth0 or eth1 is up to you.

If you wish to manually alter the IP addressing information you can find the configuration scripts in the /etc/sysconfig/network-scripts/ directory. There will be an ifcfg-eth0 and ifcfg-eth1 file, one for each attached NIC. Use your favourite text editor to update the appropriate file.
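As an illustration (all addresses here are made up), an internal-facing eth1 with no default gateway might look like this:

[code]# /etc/sysconfig/network-scripts/ifcfg-eth1 - illustrative values only
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.1.50
NETMASK=255.255.255.0
ONBOOT=yes
# deliberately no GATEWAY= line; the default route stays on the internet-facing NIC[/code]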

Once you have configured the correct IP address(es) and gateway you will need to add static routes to the “internal” network(s). The RES HyperDrive CentOS installation stores static routes in the route-<interface> file in the /etc/sysconfig/network-scripts/ directory. As an example, if our internal networks were 172.16.0.0/255.255.0.0 and 172.17.0.0/255.255.0.0 we would create the following entries in the /etc/sysconfig/network-scripts/route-eth1 file (assuming the internal gateway is actually 192.168.1.1 and not 10.0.1.1!):

[code]172.16.0.0/16 via 192.168.1.1 dev eth1
172.17.0.0/16 via 192.168.1.1 dev eth1[/code]

Once you’ve made all your changes you can restart the networking stack by running service network restart, or reboot the appliance. If you want to view the routing table, just run the route command. Simples!

RES Automation Manager Emergency Patch Management

I previously covered the reasons why you probably wouldn’t use RES Automation Manager for patch management (see here). Max Ranzau (AKA @RESguru) made a great point that you can certainly use Automation Manager to push out individual patches easily. With the release of the Microsoft RDP critical patch MS12-020, and an exploit apparently in the wild, RES Automation Manager certainly still has its place in your patch management strategy.

Assuming that you haven’t exposed port 3389 directly to the internet you may feel that you’re somewhat “safe.” I actually think that the greater risk comes from worms that will be run from within the corporate network firewalls. All it takes is for one machine to be compromised… How many desktops and servers do you have inside the corporate network that have RDP access enabled?

Microsoft provides some workarounds that will give you time to test the patch prior to deployment. Fortunately, RES Automation Manager gives you the following options for dealing with this exploit using the built-in Automation Manager tasks/tasklets (a sketch of the underlying commands follows the list):

    1. Deploy the patch within minutes and/or
    2. Disable RDP connections completely and/or
    3. Enable/modify the Windows firewall rules to block RDP connections and/or
    4. Enable Network Level Authentication for RDP connections.
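As a hedged illustration of options 2 to 4, these standard Windows commands could be wrapped in an AM task (the firewall rule group applies to Vista/2008 and later):

[code]:: 2. Disable RDP connections completely
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 1 /f

:: 3. Block RDP connections with the Windows firewall (Vista/2008 and later)
netsh advfirewall firewall set rule group="remote desktop" new enable=no

:: 4. Require Network Level Authentication for RDP connections
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v UserAuthentication /t REG_DWORD /d 1 /f[/code]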

One thing is for certain, you need to be acting and mitigating this risk now. I think it’s only a matter of time before things get interesting. Who remembers Slammer?! I know people who are still mentally scarred by its long lasting effects!

GPOs could help you with some of this, but nothing is going to be able to deploy any of (or a mixture of) the above workarounds within minutes. How will you be sure that your workarounds are in place on all machines? RES Automation Manager will give you near instant feedback on what tasks failed and provide you with the data to target those computers. Remember, if you use RDP/Remote Assistance for support then you’re probably limited to option #1 (or maybe #4).

If you don’t have RES Automation Manager today, you probably wish you did! You’ve been warned Smile with tongue out..

Iain

PVS Image Management

There has been a bit of banter on the Twittersphere about how people manage and document their PVS images. It was suggested (by more than just me Smile) that RES Automation Manager could be utilised for this task. This post is not a best practice guide on how to create, update or document your images, rather a use case on how and why we use/recommend RES Automation Manager. Heaven forbid, you might even decide to do away with Provisioning Services as a result. Either way RES Automation Manager will play very nicely with or without PVS, but I couldn’t fit that into 140 characters!

Provisioning Services Private Image Management

Maintaining the gold or private mode PVS image can be a complex task for a number of reasons. Simplifying any of these potential hurdles can only be a good thing, right?

  1. A certain level of skill is required to both create and maintain images. There are numerous tasks that need to be completed and in some cases, performed in a particular order. As a result, this task is typically left or assigned to the senior administrators.
  2. Application upgrades can taint or stain the master image. Some applications require an uninstallation of the old version and installation of the new product MSI. I don’t need to tell you that uninstallers are not always reliable or clean everything out when run!
  3. How are changes to the gold image documented to ensure that they’re incorporated into all other PVS images? It is typical that there will be more than one image for deployment. For example, hardware differences will typically require separate images.
  4. Ad-hoc and emergency changes can wreak havoc with your PVS images. How quick and easy is it to push an update out to 100 XenApp servers streamed from a central image? If we make changes whilst the servers are running then they’ll be lost when the write cache is erased, meaning we either have to reapply the change after every reboot or update our gold image pronto! This gets a lot more interesting if the servers are rebooted on a nightly basis and the write cache cleaned!

RES Automation Manager

If you know me by now, you probably know that I’m going to say that RES Automation Manager is the answer to all your prayers! Now whilst it can certainly address the above “issues” (and I would recommend it in conjunction with Provisioning Services any day of the week), there are other processes and solutions that may address one or more of the above, and deploying RES Automation Manager won’t automagically fix them. A good example of this is documentation. If your internal processes mandate that all changes are documented and you bypass this process, there is nothing to stop you bypassing it even if Automation Manager is installed!

What RES Automation Manager Won’t Do

Thought I’d better get this bit out of the way before you get all the way to the end and are disappointed! RES Automation Manager is a Run Book Automation tool and not an imaging/deployment tool. This means that we cannot (directly) deploy an Operating System from RES Automation Manager. Fortunately for us there are many technologies out there that can, e.g. Windows Deployment Services/Microsoft Deployment Toolkit, which BTW can be combined with RES Automation Manager – take a look at this White Paper. Why reinvent the wheel?!

What RES Automation Manager Will Do

So once we have our Operating System deployed and the RES Automation Manager agent installed (we can do this with WDS/MDT as mentioned earlier) what benefits will this give us? Well, at a simplistic level, RES Automation Manager can automate the entire server configuration and application deployment process. This process can also include installing XenApp and XenApp Prep as well as any other applications. This obviously takes some additional time but gives us a clean, repeatable process for deploying a XenApp server from scratch. It’s a strategic decision and not a tactical one!

Why is this important? Typically it comes down to issues #2 and #3 so let’s take them one at a time..

Issue #1

RES Automation Manager can reduce this complexity by removing Provisioning Services altogether. I’m not suggesting that you remove this from your infrastructure. Not even for one minute. However, if you don’t need to have a clean image after every reboot, getting shot of PVS may be an option. We have automated the complete server deployment and can typically provision a new server in a few hours from start to finish; Operating System, XenApp and applications. OK, it’s a few hours of time, but there is no user interaction required. I’m guessing that it’s probably not that often that you need to add a new server within 30 minutes?

This benefits the typical IT department as these are now regular servers. They’re supported in the same way as other servers and they have a proper OS install etc. There are downsides too. Now we need to patch and maintain multiple OS instances and not just one master image. Isn’t this part of the reason you deployed Provisioning Services in the first place?

Issue #2

By having a repeatable process for building our XenApp server(s) from scratch we can avoid tainting our image. If we need to cut a new image then we can deploy a completely clean server and deploy the required applications as required. We don’t need to uninstall and reinstall or upgrade applications. I’m not advocating this as a best practice, but I know lots of admins that are a lot happier with this process. It doesn’t need to be performed for all updates, but you now have an option as to whether you update the master image or cut a new one. If you have not automated the entire deployment and configuration process, recreating a new image from scratch probably doesn’t make you feel warm and fluffy inside!

When you finally get run over by a bus (it’s going to happen one day, as everyone keeps saying) pretty much anyone with an ounce of intelligence can deploy a new server or reverse engineer the Modules and Tasks in the Run Book to discover how things are tied together.

Issue #3

By virtue of automating the entire configuration and deployment process with RES Automation Manager, you have actually documented every step in the process. RES Automation Manager includes the ability to create an Instant Report of any or all Run Books, Projects and/or Modules. These reports are very detailed (small example here) and typically run to 1,000+ pages. For us consultants, this feature alone is worth its weight in gold. Did I mention that it’s available in RES Workspace Manager too? Winking smile

Issue #4

Finding out when and by whom changes were made. Whether they’re changes to the gold image or ad-hoc emergency fixes, it doesn’t matter: the audit trail of the changes is vitally important, especially with change management processes. You won’t be surprised to hear that RES Automation Manager has a built-in Audit Trail which allows you to view all actions performed in RES Automation Manager – how handy is that when a witch hunt is on (oh, that never happens now, does it!?).

Issue #5

As usual I’ve saved the best until last and you didn’t see number 5 coming! The “pièce de résistance” if you like. This might get a bit confusing so strap yourselves in ready…

Emergency changes to running PVS instances are a pain. Depending on your configuration, changes may be lost after a reboot, and depending on your requirements you may reboot nightly or even weekly. If there is a configuration change that needs to be made then ultimately we need to update the master image. We can implement the change on the running instances, but it will be lost at some point when the write cache is cleared. Until the master image is updated we will need to implement the change, potentially after every reboot.

Because RES Automation Manager is a Run Book Automation tool we can implement this change across all running instances within minutes. “WAIT!”, I hear you cry, “These changes will be lost after a reboot!” Correct. But now we have achieved two things; documented the change and can automate the update to the master image at some point in the future.

Why did I say at “some point in the future?” Fortunately for us there is a hidden gem within RES Automation Manager called Snapshot Intelligence. With a name like that it better be good right!?

As the RES Automation Manager database has a record of all jobs that have executed on a given agent it can detect a snapshot. Whether this is a virtual snapshot or a backup restoration, it makes no odds. In our PVS world, if RES Automation Manager jobs have been run on a machine and the PVS instance is reset back to our master image state (write cache cleared), RES Automation Manager will detect this as a snapshot. You with me so far..?!

Once a snapshot is detected, RES Automation Manager can automatically reapply the job history (I’ll pause whilst you take this in and wait for the penny to drop!).

So if we automate all the emergency or ad-hoc updates with Automation Manager we can automatically reapply these after every reboot? Yes. No need to update the master image for every change? Yes.

In fact it gets better than that. When we update our master image we can run the exact same job history (automatically if you wish) to update the gold image. If you want to cut a new image from scratch we’ve got that covered too. Above all, if everything is automated with RES Automation Manager it’s automatically documented too. Needless to say, you get all the usual audit logging and change history.

Summary

So, in summary, using RES Automation Manager in combination with Citrix Provisioning Services has huge benefits, but there’s obviously a cost associated. Would I recommend it? Absolutely! For all of the above reasons. Is it worth it? Unfortunately, I can’t tell you that as only you know your environment.

Can RES Automation Manager replace Provisioning Services? Not entirely, as you’ll still need WDS/MDT (or equivalent) to deploy the OS. It also depends on your reasons for deploying PVS in the first place. If it’s for near-instant deployment, removing local disks, reducing the storage footprint or a clean image on every reboot, you’ll probably be using it for a long while yet. If your reasons are purely for “single image” management then you could potentially replace PVS in favour of a “traditional” deployment. Would I recommend this? It depends!

I know we’ve been focused on Provisioning Services in this article, but RES Automation Manager will help you with the rest of your infrastructure automation: desktops, laptops, servers; Exchange and Active Directory etc. You may have XenDesktop, Quest vWorkspace or VMware View for your virtual desktops. The same principle applies and you may even be using PVS in combination with these. Anyway, I don’t need to preach to the converted!

I will say that it should be a strategic decision to deploy RES Automation Manager. Don’t underestimate the amount of time it takes to automate and test. But I guess you already spend a lot of time testing your images?

You can find some video overviews/introductions on RES Automation Manager on Citrix TV and RES Tutorials. If you don’t want to take the time to download, install and configure RES Automation Manager but want to take a quick look, you can always request access to the RES Showcase. Some background and example videos on the Showcase platform can be also found here.

I’ll get off my soapbox now and crawl back to whence I came! Please feel free to comment and I’d love to hear your thoughts. Iain

RES Workspace Manager VDX Options Explained

When delivering RES Workspace Manager training there can be some confusion over some of the settings available when integrating it with the RES Virtual Desktop Extender (VDX). The purpose of this post is to attempt to clarify which option does what!

[screenshot]

  1. This is the global option that enables or disables the RES Workspace Manager and VDX integration, i.e. the ability to run applications as a “Workspace Extension”. Note: if this option is disabled and the RES VDX Engine is installed, there is potentially nothing stopping it from running; you just won’t get the RES WM integration, i.e. licensing etc.
  2. If this option is enabled then the VDX Engine process will be started. By disabling this option you will effectively be turning on the legacy RES Subscriber/Workspace Extender functionality only.
  3. If you have the VDX Engine installed, this option will enforce the VDX settings to take precedence over the RES Subscriber/Workspace Extender, e.g. options 7 – 13 and the Z-ordering improvements. Don’t disable this option if deploying RES VDX!
  4. Depending on the setting chosen the following taskbar behaviour will occur:
    1. Autodetect: The behaviour as defined in the Display Settings section will be honoured, i.e. the configuration of the remote session display.
    2. Yes: The taskbar of the local session will always be hidden regardless of the remote session display configuration.
    3. No: The taskbar of the local session will never be hidden. Note: with the RES Subscriber/VDX Shell, the taskbar is hidden by default.
  5. Depending on the setting chosen, the following client application pass-through behaviour will occur:
    1. Autodetect: If a running client application window would be obscured by the remote session, it will automatically be displayed in the remote session.
    2. Yes: All running client applications will be automatically displayed in the remote session.
    3. No: No existing client applications/processes will be displayed in the remote session when it’s first started. Note: this does not prevent reverse seamless applications from being launched from the remote session!
  6. If selected, when a user logs off a remote session, all locally running client applications will be closed (or an attempt will be made to close them!) before the remote session is ended.
  7. Enable this option if you wish users to be able to access the system tray icon and enumerate the applications present in the user’s local client Start Menu.
  8. Enable this option if you wish users to be able to access the system tray icon and enumerate the applications present in the user’s local client Desktop.
  9. Enable this option if you wish users to be able to access the system tray icon and enumerate the applications present in the local client System Tray.
  10. Client applications can be excluded from the VDX seamless window integration by entering the process name, e.g. pnagent.exe.
    1. Multiple processes should be separated with semicolons.
    2. Processes can still be launched, but will not be displayed in the taskbar of the remote session and Z-ordering will not be implemented.
  11. If you wish client applications to be unavailable via the System Tray integration, enter the process(es) here.
    1. Multiple processes should be separated with semicolons.
    2. If a client process is excluded, it will not be displayed in the System Tray, Start Menu or Desktop folders.
  12. Allows you to override the default VDX pop-up balloon title (a).
  13. Allows you to override the default VDX pop-up balloon text (b).
    1. This text is also displayed at the top of the VDX system tray Client Start Menu/Desktop window (c).

[screenshot]

[screenshot]

Display Settings

The following information has been taken from the RES VDX User Guide and is used when configuring the Local Client Taskbar integration in option #4 above:

VDX supports several display scenarios. Each scenario may require specific settings. For an overview of these settings, see Setting up the Behavior of the RES VDX Engine (on page 10). Some examples of supported scenarios are:

A single display

In this scenario, both the local desktop and the remote desktop are run on the same display. The remote desktop is the visible desktop, showing both remote applications and local applications. The local taskbar is obscured if the remote session is maximized. With the remote session at a smaller size, the client taskbar is shown twice, once in its original position on the client and once integrated in the remote session.

A dual display setup with the remote desktop spanning both displays

In this scenario, both the local desktop and the remote desktop span both displays. The remote desktop is the visible desktop, showing both remote applications and local applications. The local taskbar is obscured if the remote session is maximized on the first monitor or if both sessions are maximized. 

A dual display setup with the local desktop running on the primary display and the remote desktop running on the secondary display

Both taskbars are displayed. 

A dual display setup with one remote desktop running on the primary display and another remote desktop running on the secondary display

Both remote desktop taskbars are displayed. Client windows can be moved from desktop to desktop.

I hope this helps clarify things for someone! Iain