Replacing the RES HyperDrive SSL Certificate

We’ve had to replace numerous HyperDrive SSL certificates already, as the self-signed SSL certificates generated by the RES HyperDrive appliance won’t cut it if you want to use the appliance in production or if iOS/OS X devices are deployed. Fellow RES guru Rob Aarts has an article published on RESguru.com, but I’ve had differing experiences and our process is slightly different.

Unfortunately (seemingly for me in particular) I always appear to receive an “SSL key not valid” error when trying to import the certificate via the wizard (Nomadesk are aware of the problem and are investigating):

RES do have a KB article (login required) that details how to manually replace the certificate. There are some fairly simple steps that you follow, but as with all the RES HyperDrive documentation so far, there are some holes in it if you’ve never performed the actions before.

In this post I will assume that you have your SSL certificate in two parts: the public certificate (.crt file) and the private key (.key file). If you need to know how to generate these files from a .pfx file, I suggest you refer to the instructions in the Replacing the Default XenServer WSS Certificate post first and look for the “Converting the Certificate to a .CRT and .KEY Pair” section. Note: there must not be a password on the .key file!

Additionally you will need to be comfortable with Transferring Files to RES HyperDrive and probably Remotely Administering RES HyperDrive.

Preparing the Files

The RES HyperDrive appliance requires three files: the public certificate file, the private key file and the CA intermediaries bundle. These files need to be named localhost.crt, localhost.key and ca-bundle.crt respectively.

It is probably easier to rename these files before copying them to the appliance (and it’ll keep the post shorter!).
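Before you transfer anything, it’s worth sanity-checking the pair with OpenSSL on any machine that has it installed. A minimal sketch, assuming the renamed files above. First, check there’s no pass phrase on the key (this should not prompt for one):

openssl rsa -in localhost.key -noout

Then confirm the certificate and key actually match; the two modulus hashes should be identical:

openssl x509 -in localhost.crt -noout -modulus | openssl md5
openssl rsa -in localhost.key -noout -modulus | openssl md5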

Backup the Self-Signed Certificate

Once connected to the RES HyperDrive appliance console you can backup the existing certificate files with the following commands:

mv /etc/pki/tls/certs/localhost.crt /etc/pki/tls/certs/localhost.crt.old
mv /etc/pki/tls/certs/ca-bundle.crt /etc/pki/tls/certs/ca-bundle.crt.old
mv /etc/pki/tls/private/localhost.key /etc/pki/tls/private/localhost.key.old

If you get any permissions errors, remember to elevate to root with the su - command first.

Transfer the Files

The next step is to transfer the files to the HyperDrive appliance. I’ll assume that you’ve copied these to the appliance via SSH/SCP and they reside in the /home/hyperdrive directory. If you’ve used RES Automation Manager you can put them wherever needed 😉

Move the Files

Now that we’ve backed up the original self-signed certificate and copied the new files in, they’ll need to be relocated. Move the files with the following commands:

mv /home/hyperdrive/localhost.crt /etc/pki/tls/certs/
mv /home/hyperdrive/ca-bundle.crt /etc/pki/tls/certs/
mv /home/hyperdrive/localhost.key /etc/pki/tls/private/

Fixing Permissions

I don’t actually know what permissions RES HyperDrive needs, but my assumption is that they should mirror what was there before. Fix the permissions by running the following commands:

chmod 0644 /etc/pki/tls/certs/localhost.crt
chmod 0644 /etc/pki/tls/certs/ca-bundle.crt
chmod 0600 /etc/pki/tls/private/localhost.key

If you copied the files in via SSH/SCP then they will be owned by the hyperdrive account. To reset ownership back to root, run:

chown root:root /etc/pki/tls/certs/localhost.crt
chown root:root /etc/pki/tls/certs/ca-bundle.crt
chown root:root /etc/pki/tls/private/localhost.key
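If you want to double-check, compare the new files against the .old backups taken earlier; the permissions and ownership should mirror one another:

ls -l /etc/pki/tls/certs/localhost.crt* /etc/pki/tls/certs/ca-bundle.crt* /etc/pki/tls/private/localhost.key*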

Restart the Web Server

Once the files have been replaced and updated, restart the web server by running the service httpd restart command and BINGO!
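If you want to confirm the appliance is actually serving the new certificate, you can check from any machine with OpenSSL installed. A quick sketch (the hostname is a placeholder for your appliance’s FQDN, and I’m assuming HTTPS on the default port 443):

echo | openssl s_client -connect hyperdrive.example.com:443 2>/dev/null | openssl x509 -noout -subject -issuer -dates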

Pre-canned RES AM Building Blocks

If you have integrated RES HyperDrive with an existing RES Automation Manager installation (remember you get a complimentary RES AM license) I’ve included a building block (click the red brick to download) that will perform the required configuration for you. Note: remember to replace the localhost.crt, ca-bundle.crt and localhost.key files in the \virtualengine.co.uk\RES HyperDrive\ resources folder before running it!

[wpdm_file id=10]

Automating Citrix Provisioning Server Install with RES AM

Here is a blog post I put together on automating the build of Citrix Provisioning Services using RES Automation Manager 2012. Before we get into the details I thought I’d mention a few resources and solutions I found along the way which helped me out. A big thanks to:

Before you can begin you will need to make sure you have the following prerequisites in place:

  • Provisioning Server software (PVS 6.1 used for this example);
  • Windows Server 2003 upwards (Windows 2008 R2 SP1 used in this example);
  • .NET 3.5 or higher;
  • RES Automation Manager 2012;
  • The latest Citrix Licensing server.

I’ve split the automated process into two distinct parts, creating the PVS database and installing PVS, to make it easier to digest. If you’re lazy or just want to crack on you can just download the building blocks and get going! Note: you will need to update the resource references to the PVS 6.1 installation files.

Creating the PVS Database

Before you can automate the PVS installation we need to have a database in place for the PVS servers to connect to. Unfortunately for us there’s not an easy way to accomplish this, as we need to generate an SQL script with our required database values. As we’re invoking the creation process from RES Automation Manager 2012, we can utilise parameters to prompt the administrator for these values at run time.

To create the SQL script we first need to install the Provisioning Services software on a clean Windows 2008 R2 server (or use an existing installation if you have one). Once installed we can run C:\Program Files\Citrix\Provisioning Services\DBscript.exe to launch the Provisioning Services Database Script Generator. Exciting stuff, I know!

image

If we complete the details with placeholders (as above) for the database name and farm name, DBscript will create the required .SQL script with values that we can use within our RES Automation Manager jobs. Click OK and it will create the CreateProvisioningServerDatabase.sql file in the path specified, complete with embedded placeholders.

We can now import this file as a resource into the RES Automation Manager console. Note: remember to tick the ‘Parse Environment variable and parameters’ checkbox. If you forget to do this we’ll attempt to create a database with a name of $[PVSDB] which probably won’t work (not that I’ve checked!).

To create the required SQL database we can utilise the CreateProvisioningServerDatabase.sql file with the built-in RES Automation Manager database connector task(s) or via SQLCMD on the local Microsoft SQL instance. As we’re cheap and can’t assume that you’re licensed for the relevant connector, we’ve utilised SQLCMD in the building blocks. For more details on this, download them and have a look.
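By way of illustration, the SQLCMD route boils down to something like this; the server name and download path are hypothetical, and the building blocks wrap this in an AM task that first downloads the parsed resource:

[code]sqlcmd -S DBSERVER01 -E -i "C:\Temp\CreateProvisioningServerDatabase.sql"[/code]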

After the database has been created we need to add SQL permissions to the database (if using a network user for the SOAP and STREAM services). This is achieved with a couple of SQL statements (see the building blocks for more information). If we’re using a Windows service account to run these services, the user will be configured later during the install… And now the fun begins;
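For the curious, those permission statements amount to something along these lines; the account and database names are hypothetical (and assume the domain login already exists on the SQL instance), and the exact statements in the building blocks may differ:

[code]sqlcmd -S DBSERVER01 -E -d ProvisioningServices -Q "CREATE USER [DOMAIN\svc-pvs] FOR LOGIN [DOMAIN\svc-pvs]; EXEC sp_addrolemember 'db_datareader', 'DOMAIN\svc-pvs'; EXEC sp_addrolemember 'db_datawriter', 'DOMAIN\svc-pvs';"[/code]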

Installing and Configuring PVS

Now that the database is created we can move on to installing the software, configuring it and adding servers to the farm. Installing the software is no problem; however, configuring and adding servers to the farm is a bit more involved. The method I used for configuring the servers utilises the configwizard.ans file, which holds all the configuration items. Running %PROGRAMFILES%\Citrix\Provisioning Services\configwizard.exe /s records your answers and creates the answer file at C:\ProgramData\Citrix\Provisioning Services\configwizard.ans.

Once we have the configwizard.ans file we can edit it and embed our RES Automation Manager 2012 parameters within it. If you’d like to know what options can be configured in the answer file, run configwizard.exe /c; the configuration wizard will write a C:\ProgramData\Citrix\Provisioning Services\configwizard.out file. Again, all this information is in our building blocks.
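To recap those two commands in one runnable form (paths as per a default PVS 6.1 install):

[code]:: Record your answers and create the answer file (configwizard.ans)
"%PROGRAMFILES%\Citrix\Provisioning Services\configwizard.exe" /s

:: Document the supported answer file options (writes configwizard.out)
"%PROGRAMFILES%\Citrix\Provisioning Services\configwizard.exe" /c[/code]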

I used two different answer files: one for the first server joining the farm and another for all subsequent servers. Below is an example of the first server’s configwizard.ans file:

[code]IPServiceType=$[IPServiceType]
PXEServiceType=$[PXEServiceType]
FarmConfiguration=2
DatabaseServer=$[DBSERVER]
DatabaseInstance=
FarmExisting=$[PVSFARM]
ExistingSite=$[PVSSITE]
ADGroup=$[DOMAIN]/Builtin/Administrators
Store=$[PVSSTORE]
DefaultPath=$[STOREDRIVE]$[STORELOCATION]
UserName=$[SERVICEACCOUNTUSER]
UserPass=$[SERVICEACCOUNTUSERPASSWORD]
network=$[NETWORKACCOUNT]
Database=$[DBCONFIGUSER]
PasswordManagementInterval=7
StreamNetworkAdapterIP=$[STREAMINGSERVERIP]
IpcPortBase=6890
IpcPortCount=20
SoapPort=54321
BootstrapFile=C:\ProgramData\Citrix\Provisioning Services\Tftpboot\ARDBP32.BIN
LS1=$[STREAMINGSERVERIP],0.0.0.0,0.0.0.0,6910
AdvancedVerbose=0
AdvancedInterrultSafeMode=0
AdvancedMemorySupport=1
AdvancedRebootFromHD=0
AdvancedRecoverSeconds=50
AdvancedLoginPolling=5000
AdvancedLoginGeneral=30000[/code]

Once the answer file/files have been created and modified, import them into the RES Automation Manager resources. Note: remember to select the ‘Parse Environment variable and parameters’ checkbox!

Finally, to automate the actual PVS install, we need to make sure we download these resources to the C:\ProgramData\Citrix\Provisioning Services\ directory on the target server. Then we kick off the configuration wizard, which applies the configuration, by running configwizard.exe /a. Once complete, the services should start automatically, and when you start the PVS console and connect you should be presented with the new farm. Well, hopefully anyway!
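As a sketch, the final step on each target server looks something like this (the download of the parsed answer file is handled by the AM resource task beforehand):

[code]:: Apply the configuration silently from the downloaded answer file
"%PROGRAMFILES%\Citrix\Provisioning Services\configwizard.exe" /a

:: If it fails, the log is the first place to look
type "C:\ProgramData\Citrix\Provisioning Services\Log\configwizard.log"[/code]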

Problems Encountered

If you do have problems using the answer file and the install fails, the best place to start troubleshooting is the C:\ProgramData\Citrix\Provisioning Services\Log directory. If it all goes wrong you will notice that there is only one file here: configwizard.log. At the end of this file it should hopefully give you some meaningful reason for the failure. If all works fine and the services start, you should see around eight log files and have a big smile on your face :D.

I did have other issues whilst getting this to work. Here are a few notes in case they help:

  • No device license available – when a new machine is booted using Provisioning Server you will see the error in the StreamProcess log on the PVS server, and a pop-up message on the device will say “No device License currently available for this computer. A system shutdown will be initiated in 96 hours”. I found the resolution to this problem was to upgrade the license server to the latest build.
  • PVS console does not install via an AM job – ensure that UAC is disabled and use a security context to run the job instead of the local System account.
  • After a server install I could not mount vDisks on the PVS server and got an error similar to “Cannot mount Vdisk mapi error”. Looking at Device Manager, I noticed that the Citrix Virtual Hard Disk Enumerator driver was not installed correctly. To resolve this, first remove the device, then go to %PROGRAMFILES%\Citrix\Provisioning Services\Drivers, right-click and install cfsdep2.inf. Then go back to Device Manager, add legacy hardware, select “Have Disk” and point to the same location, this time selecting cvhdbusp6.inf. It should then install the device without any issues. Alternatively (the preferred option with RES AM), create a module to download CFSDep2.cat, CFSDep2.inf and CFSDep2.sys to C:\Windows\System32\drivers before installing Provisioning Server and all should be okay.
  • When using a service account, make sure the user is given the required permissions, i.e. read/write on the PVS store directory on the PVS servers, and db_datareader and db_datawriter on the database (although the latter can be configured for you if you select “configure user for database”).

The building blocks have now been updated as there was a problem with the service account password passing through to the answer file; this should be resolved. I have also added a module to remove the answer file, as the password is stored in plain text.

Hope this helps. Enjoy! 🙂 Simon

[wpdm_file id=7]

Upgrading RES AM Linux Agents

There comes a time when RES Automation Manager Linux agents need upgrading. A typical example is with the GA release of RES HyperDrive. Now that RES Automation Manager 2012 SR1 has been released, there is a newer Linux agent that isn’t (currently) in the RES HyperDrive appliance.

If you’re like me, you’ll want to upgrade this. The Getting Started with RES Automation Manager Agent for Linux guide will point you in the right direction, but unless you’re a fairly competent Linux administrator you may struggle with certain aspects. For example, to upgrade the RES AM Linux agent all you need to do is:

1. Stop the currently installed RES Automation Manager Agent for Linux by using the command /etc/init.d/resamad stop.
2. Uninstall the RES Automation Manager Agent for Linux.
3. Install the new version of the RES Automation Manager Agent for Linux.
4. Start the new RES Automation Manager Agent for Linux.

So there you have it – simple!

I’ll actually take you through the individual steps to upgrade the Linux agent installed in a RES HyperDrive appliance. These steps are equally applicable to any Linux installation but this will no doubt be a common scenario. As an overview the steps required are:

  1. Find installed RES Automation Manager Agent for Linux version;
  2. Uninstall existing RES Automation Manager Agent for Linux;
  3. Copy new RES Automation Manager Agent for Linux;
  4. Extract RES Automation Manager Agent for Linux;
  5. Install RES Automation Manager Agent for Linux;
  6. Configure RES Automation Manager Agent for Linux;
  7. Start the RES Automation Manager Agent for Linux.

Connecting

Firstly you’ll need to connect to the RES HyperDrive virtual appliance via SSH (see Remotely Administering RES HyperDrive) or connect to the console session.

Uninstall Existing Version

To uninstall the existing RES Automation Manager Agent for Linux you’ll need to find the currently installed version before you can actually remove it. To find the existing version run:

[code]rpm -qa | grep -i res-am[/code]

This will display the current version. Make a note as you’ll need it in a minute or two! Here’s an example screenshot from the RC2 appliance:

image

To uninstall the agent run:

[code]rpm -e <res-am-agent-version>[/code]

The <res-am-agent-version> is listed by the first command, for example res-am-agent-6.5-0.102354. If successful, the agent service should be stopped and the agent uninstalled.

Note: I have seen multiple agents installed in both the RC2 and GA releases. It looks like an oversight and the 6.4-2 version is not actually installed. If you want to remove both entries then the second rpm -e command may give you an error, but it will be removed from the list.

Copy Agent Files

You will need to download the latest Linux agent version from the RES support portal as they’re not included in the management console like the Windows clients. Once you’ve downloaded the tarball, copy it to the RES HyperDrive appliance (see Transferring Files to RES HyperDrive) into the /home/hyperdrive directory.

From your SSH/console session run:

[code]mv /home/hyperdrive/res-am-agent-<version>.tgz /tmp[/code]

This will move the file into the /tmp directory. Note: If you don’t have permissions to do this, run the ‘su -’ command first, enter the root password and try again.

Extracting the Agent Installer

As the RES Automation Manager Agent for Linux is compressed it needs extracting before it can be installed. Change the working directory and extract the archive by running the tar command:

[code]cd /tmp
tar xvzf ./res-am-agent-<version>.tgz[/code]

This expands the files into the /tmp/AIX, /tmp/RedHat and /tmp/Suse directories. As the appliance’s CentOS build is based on Red Hat Enterprise Linux 5, we need the RedHat agent. Install the new agent version by running:

[code]rpm -i /tmp/RedHat/Release5/x86_64/res-am-agent-<version>.x86_64.rpm[/code]

Configuring the Agent

To connect the RES Automation Manager Agent for Linux, we either need to enable auto discovery or specify a Dispatcher list. If you wish to enable auto discovery you can configure the agent with the following command:

[code]/usr/local/bin/resamad -d m[/code]

If you wish to specify a Dispatcher run this instead:

[code]/usr/local/bin/resamad -dd<Dispatcher>[/code]

For example, if your Dispatcher was called RESAMDISP01 (with an IP address of 192.168.0.100) you could either run

[code]/usr/local/bin/resamad -ddRESAMDISP01[/code]

or

[code]/usr/local/bin/resamad -dd192.168.0.100[/code]

Starting/Stopping the Agent

After the upgrade you’ll probably need to start the agent. To do this you can simply run:

[code]service resamad start[/code]
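To confirm the agent daemon is actually running on the appliance itself, a quick check:

[code]ps -ef | grep -i resamad[/code]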

If you check the RES Automation Manager console you should see your agent online. The version shown below (6.00.111676) is the RES Automation Manager 2012 SR1 Agent for Linux.

image

If you need to restart the RES Automation Manager Agent for Linux run service resamad stop and then service resamad start. Why there is no service resamad restart command I don’t know! If I wasn’t lazy I’d create the required script but as the appliance is supposed to be “rip and replace” I don’t think I’ll bother 🙂
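In the meantime, chaining the two commands gives you a poor man’s restart:

[code]service resamad stop && service resamad start[/code]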

Phew – hopefully someone finds this useful? Iain

Citrix HDX RealTime Media Engine Fails to Install

Since the recent release of the Citrix HDX RealTime Optimization Pack for Lync, one of my colleagues, Simon Pettit, has been installing and configuring it on our development XenDesktop environment. The Citrix HDX RealTime Optimization Pack for Lync can be installed with both Citrix XenApp and Citrix XenDesktop. In our scenario we’re installing onto Citrix XenDesktop, but it is equally applicable to Citrix XenApp. The installation has two components:

  1. HDX RealTime Connector (HDX RealTime Connector LC.msi), which is installed on the XenDesktop virtual machine;
  2. HDX RealTime Media Engine (Citrix HDX RealTime Media Engine.msi), which should be installed on the endpoint device connecting to the XenDesktop virtual machine.

Once Step 1 had been completed, we were in a position to complete Step 2 – and this is where I started to have issues. No sooner had I started the installation than I was faced with this warning message.

image

Now I knew for a fact that I had the Citrix Receiver installed and working, so the warning message was in fact lying!! But why?

So I next decided to crack open the MSI in one of my must-have tools, InstEd, and see what logic the MSI was using to determine that the Citrix Receiver wasn’t installed. The first place I always look is the CustomAction table; this is where some ISVs love to try and cheat the built-in methods within Windows Installer, i.e. using the AppSearch table and the like. Please don’t use custom actions; I hate them with a passion!

Looking in the CustomAction table we can see two actions, “CheckCitrixPluginVersion” and “CitrixPluginNotFound”, which look like our culprits. You should also notice that the source of all this evil is the file “Install.vbs”.

SNAGHTMLaba1627

The “Install.vbs” file can be found in the Binary table, where such evils are normally hidden 😈!! Now, as this file is embedded into the MSI, we CAN’T easily see what logic the ISV has implemented. Did I mention that this is why I hate them with a passion 😠?

SNAGHTMLac1fcec

Now luckily we can use InstEd to export the contents of the table by simply right-clicking on the table, selecting “Export Table” and choosing the location to export it to. If we now browse to the destination folder we will see a “Binary” directory (named after the table) which contains the files from the Binary table.

SNAGHTMLaca14a1

We can now open the file in our favourite text editor and take a peek inside to see what’s going on. So let’s look at the contents of this file:

[code]Sub CheckRunningApp()
    Set objWMIService = GetObject("winmgmts:\\.\root\cimv2")

    Set colProcesses = objWMIService.ExecQuery _
        ("SELECT * FROM Win32_Process WHERE " & _
         "Name = 'MediaEngineHost.exe'")

    If colProcesses.Count > 0 Then
        Session.Property("APP_RUNNING") = "1"
    End If
End Sub

Sub CheckCitrixVersion()
    Dim strComputer
    Dim oReg
    Dim strKeyPath
    Dim strCitrixReceiverVersion
    Dim strMinCitrixReceiverVersion
    Dim strCitrixInstallFolder
    Dim strKeyCitrixPathPath
    Dim strKeyCitrixVerPath
    Dim bHasAccess
    Const HKEY_LOCAL_MACHINE = &H80000002

    strComputer = "."
    strMinCitrixReceiverVersion = "11.2"
    strCitrixReceiverVersion = ""
    strCitrixInstallFolder = ""
    strKeyCitrixPathPath = ""
    strKeyCitrixVerPath = ""
    bHasAccess = false

    Set oReg = GetObject("winmgmts:{impersonationLevel=impersonate}!\\" & strComputer & "\root\default:StdRegProv")
    strKeyPath = "SOFTWARE\Wow6432Node\Citrix\ICA Client"
    oReg.CheckAccess HKEY_LOCAL_MACHINE, strKeyPath, &H20000, bHasAccess

    If (Err.number = 0 and bHasAccess = true) Then
        strKeyCitrixVerPath = "SOFTWARE\Wow6432Node\Citrix\InstallDetect\{A9852000-047D-11DD-95FF-0800200C9A66}"
        strKeyCitrixPathPath = "SOFTWARE\Wow6432Node\Citrix\Install\ICA Client"
    Else
        Err.Clear
        strKeyPath = "SOFTWARE\Citrix\ICA Client"
        oReg.CheckAccess HKEY_LOCAL_MACHINE, strKeyPath, &H20000, bHasAccess

        If (Err.number = 0 and bHasAccess = true) Then
            strKeyCitrixVerPath = "SOFTWARE\Citrix\InstallDetect\{A9852000-047D-11DD-95FF-0800200C9A66}"
            strKeyCitrixPathPath = "SOFTWARE\Citrix\Install\ICA Client"
        End If
    End If

    If (Err.number = 0 and bHasAccess = true) Then
        oReg.GetStringValue HKEY_LOCAL_MACHINE, strKeyCitrixVerPath, "DisplayVersion", strCitrixReceiverVersion

        If (StrComp(strCitrixReceiverVersion, strMinCitrixReceiverVersion) >= 0) Then
            Session.Property("CITRIX_VERSION_112") = "1"
        Else
            Session.Property("CITRIX_VERSION_112") = "0"
        End If

        Err.Clear
        oReg.GetStringValue HKEY_LOCAL_MACHINE, strKeyCitrixPathPath, "InstallFolder", strCitrixInstallFolder

        If (Err.number = 0) Then
            Session.Property("CITRIX_PATH") = strCitrixInstallFolder
        Else
            Session.Property("CITRIX_PATH") = "0"
        End If
    Else
        Session.Property("CITRIX_VERSION_112") = "0"
        Session.Property("CITRIX_PATH") = "0"
    End If
End Sub

Sub SetMEHostLocationsValue()
    installPath = Session.Property("INSTALLDIR")

    If IsNull(installPath) Or Len(installPath) = 0 Then
        Exit Sub
    End If

    locationsStr = Session.Property("MEHOST_LOCATIONS_ENTRIES")

    If IsNull(locationsStr) Or Len(locationsStr) = 0 Then
        newLocVal = installPath

        tempVarStr = Session.Property("%TEMP")
        If Not IsNull(tempVarStr) and Len(tempVarStr) > 0 Then
            newLocVal = newLocVal + ";%TEMP%"
        End If
    Else
        If InStr(locationsStr, installPath) = 0 Then
            newLocVal = installPath + ";" + locationsStr
        Else
            newLocVal = locationsStr
        End If
    End If

    Session.Property("MEHOST_LOCATIONS_ENTRIES") = newLocVal
End Sub[/code]

Now I’m not going to talk through the logic of the VBScript line by line as you can do that yourselves. However, I will draw your attention to these bits:

[code]strKeyPath = "SOFTWARE\Wow6432Node\Citrix\ICA Client"
oReg.CheckAccess HKEY_LOCAL_MACHINE, strKeyPath, &H20000, bHasAccess[/code]

And:

[code]strKeyPath = "SOFTWARE\Citrix\ICA Client"
oReg.CheckAccess HKEY_LOCAL_MACHINE, strKeyPath, &H20000, bHasAccess[/code]

You can see the VBScript is checking in HKEY_LOCAL_MACHINE for the existence of certain registry values that determine if the Citrix Receiver is installed (or not) and setting Windows Installer Properties that instruct the MSI to display the warning message we originally received.

Having now found where it is determining this information, I checked those related HKLM registry keys on my local machine and, surprise surprise, they weren’t there! So that’s why I was getting the warning message (I could have potentially just used ProcMon when running the installer – but this blog also gives you some handy insight into the workings of Windows Installer 😉). Interestingly, those very same registry keys are present in HKCU!
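If you want to check this quickly on your own endpoint, reg.exe will do the job. A sketch: the Wow6432Node path applies to 64-bit Windows, and paths may vary with your Receiver version:

[code]:: Machine-wide install - the hive the MSI custom action checks
reg query "HKLM\SOFTWARE\Wow6432Node\Citrix\ICA Client"

:: Per-user install - where my keys had actually ended up
reg query "HKCU\SOFTWARE\Citrix\ICA Client"[/code]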

image

I hope you’re still with me? If you are, it begs the question: why, when I installed the Citrix Receiver, did these registry keys appear in HKCU? If you were looking at the above screenshot very closely, you will also notice that the Citrix Receiver files are installed into my profile too!?

SNAGHTMLaecc77c

Well, as it turns out, the answer is simple, if flawed in my view (but that’s another post). I installed the Citrix Receiver as my “standard” user account, i.e. with no admin privileges, and as such the Citrix Receiver installed its files into my profile and its registry keys into HKCU. The HDX RealTime Media Engine installer is obviously unaware that this is at all possible, hence why it’s only checking HKLM.

So in conclusion, for the HDX RealTime Media Engine install to work without the warning message, you need to have installed the Citrix Receiver as an administrator. This ensures the files are installed into either “%ProgramFiles%” or “%ProgramFiles(x86)%” and the registry keys into HKEY_LOCAL_MACHINE. If you’re (thinking of) operating a BYOC scheme you will probably need to be aware of this, as your users won’t be – and who knows what they’re doing!

Thanks for staying with me on this one – but I hope it was worth the wait 🙂.

Nathan

Transferring Files to RES HyperDrive

As I’ve discussed previously, connecting to the RES HyperDrive appliance via SSH is more involved than is typical for other Linux appliances. My assumption is that, as SSH is used by OS X clients and is exposed to the big bad world, it needs to be secured. And tightly!

There have been numerous times that I’ve needed to transfer files to or from the virtual appliance. This normally involves copying SSL certificates and keys in, or grabbing log files out. Various people have asked me how they can achieve this, so I thought I’d document the process. It’s fairly straightforward: assuming you have your SSH private key and have downloaded WinSCP (or your SCP client of choice), you’re all set. WinSCP transfers files over SSH and therefore the process is almost identical to the earlier Remotely Administering RES HyperDrive post.

Note: If you have RES Automation Manager 2012 deployed then you can always transfer files to the appliance with the built-in Linux/Unix Resource Download task. If you don’t or want to know how to do this manually, feel free to continue..

After launching WinSCP you need to enter the connection information. Enter the hostname/FQDN, port number, username and private key as highlighted below (replace the hostname accordingly!). Make sure that you enter the username as hyperdrive and leave the password blank!:

image

When you connect by clicking the Login button you’ll be asked whether you trust the server’s key, so go ahead and do so. Once connected you should be able to drag ‘n’ drop files from left to right.

image

As we’re connecting as the hyperdrive user account, we can only really copy files into the hyperdrive user’s home directory (/home/hyperdrive). After you’ve copied the files into the home directory you’ll need to move them via the command line, i.e. via the console/SSH (don’t forget to change the owner and permissions as required!). Reading files is generally less of an issue, but you might need to relocate them into the /home/hyperdrive directory before you can copy them out; diagnostic or log files, for example.
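If you’d rather script the transfer than drive WinSCP, plain OpenSSH scp works too. A sketch, assuming the OpenSSH-format key downloaded from the appliance and a hypothetical hostname (remember SSH listens on TCP port 8080):

[code]scp -P 8080 -i ~/hyperdrive_rsa localhost.crt localhost.key hyperdrive@hyperdrive.example.com:/home/hyperdrive/[/code]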

Good luck! Iain

Remotely Administering RES HyperDrive

Connecting and administering a RES HyperDrive appliance can be frustrating the first time you try. Therefore, I thought I’d put a few notes together on how to connect and transfer files to the appliance. If you’re planning on deploying SSL wildcard certificates, you’re going to need to know how to do this. Whilst you can always use the XenServer or vSphere console, connecting via SSH has many benefits.

The first thing to realise is that the HyperDrive appliance listens on TCP port 80 for Windows client synchronisation and TCP port 8080 for OS X/Mac client sync. The OS X client tunnels over SSH and therefore the default SSH port (TCP 22) is not used.

Secondly, you will also need the SSH RSA key to connect. After successfully completing the configuration wizard you are offered the option to download the PuTTY or OpenSSH keys. Don’t worry if you never saved these somewhere safe, as you can always download them later from the https://<ApplianceFQDN>/va/keys/putty or https://<ApplianceFQDN>/va/keys/rsa URLs (this doesn’t appear to work on the RC release). Note that you will need to authenticate with the root password to download the keys (notice the typo?!):

image

Once you have the private key you can configure the PuTTY client. Fire up the PuTTY client and enter the RES HyperDrive appliance IP address or FQDN. You must make sure that the port is set to TCP port 8080.

image

Before continuing you need to import the SSH private key. Expand the Connection > SSH > Auth node and select the saved key file you downloaded earlier:

image

When you click the Open button you should be connected to the RES HyperDrive appliance and asked whether you trust the server’s key:

image

Click Yes to trust the key and continue. You’ll be prompted for a username; as you’re authenticating with a key, you won’t need to enter a password. You may not be aware, but you’re actually authenticating with the local “hyperdrive” user account’s key. Therefore, you must use a username of hyperdrive to connect. If you enter any other user account, e.g. root, you’ll be denied access.

image

Once you have access to the appliance you can switch user (su/sudo) to perform the required administrative tasks. Enjoy!
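Incidentally, if you’re connecting from OS X or Linux rather than PuTTY, the OpenSSH equivalent is a one-liner; the key filename and hostname are placeholders, but note the port:

[code]ssh -p 8080 -i ~/hyperdrive_rsa hyperdrive@hyperdrive.example.com[/code]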

Multi-Homing RES HyperDrive

In certain situations you may wish to install two network adapters into a RES HyperDrive appliance. For example, you may not want to route internal traffic via the same gateway interface as external traffic. In this scenario there are some things that you need to be aware of. The RES HyperDrive documentation intimates that the primary NIC is the internet-facing interface.

This isn’t necessarily the case; either NIC can be used. What you do need to be aware of, though, is that if you configure a default gateway on all NICs, CentOS 5.3 will use the highest interface’s gateway as the default route. Therefore, if you specify a default gateway on both NICs in a multi-homed deployment, the eth1 gateway will be used for the default route. If you look closely at the example above you will notice (there are 2 x 10.0.1.1 firewalls!) that the eth1 interface has no gateway specified. I recommend that you do leave the internet-facing NIC with the default gateway, but whether this is eth0 or eth1 is up to you.

If you wish to manually alter the IP addressing information, you can find the configuration scripts in the /etc/sysconfig/network-scripts/ directory. There will be an ifcfg-eth0 and ifcfg-eth1 file for each attached NIC. Use your favourite text editor to update the appropriate file.
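As a sketch, an internal-facing ifcfg-eth1 might look like this; the addresses are examples only, and note the deliberate absence of a GATEWAY= entry:

[code]DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.50
NETMASK=255.255.255.0
# No GATEWAY= here; the default route stays on the internet-facing NIC[/code]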

Once you have configured the correct IP address(es) and gateway you will need to add static routes to the “internal” network(s). The RES HyperDrive CentOS installation stores static routes in the route-<interface> file in the /etc/sysconfig/network-scripts/ directory. As an example, if our internal networks were 172.16.0.0/255.255.0.0 and 172.17.0.0/255.255.0.0 we would create the following entries in the /etc/sysconfig/network-scripts/route-eth1 file (assuming the internal gateway is actually 192.168.1.1 and not 10.0.1.1!):

[code]172.16.0.0/16 via 192.168.1.1 dev eth1
172.17.0.0/16 via 192.168.1.1 dev eth1[/code]

Once you’ve made all your changes you can restart the networking stack by running the service network restart or reboot the appliance. If you want to view the routing table, just run the route command. Simples!

FOR THE LOVE OF GOD…

I promised myself it would never come to this and I’m writing this against my better judgement. However, when my independence, professionalism and credibility are called into question I feel that it warrants a response. The root of this seems to stem from some tweets I sent last night as can be seen here:

image

In particular the “I think you’ll find it’s more one sided (as usual)” comment hit a nerve or two. I awoke this morning and nearly spat my coffee everywhere after reading this Direct Message (identity purposely removed):

image

I would not normally air this in public but I could not send a DM response back as they chose to “unfollow” me after sending the message. Therefore, I can only make this public response.

To set the record straight, I’m not on anyone’s side. Virtual Engine as a company do not sell licenses of any product. We deliver consultancy and implementation services of various products. We’re here to ensure that our customers get the right solution for their requirements and do recommend AppSense, Liquidware Labs, RES or any other product that fits. Each product suite has its strengths and weaknesses. Period.

Believe me, I am highly critical of the RES suite of products (just ask anyone attending one of my training courses or a member of Product Management!). The simple fact that they OEM’d some of the technology for HyperDrive and led everyone to believe differently doesn’t sit well. I don’t understand the reasoning; surely they knew that this was going to be uncovered at some point? That is what the above tweets say (just not in so many words!).

What my “one sided” comment was referring to is the seemingly non-stop “bashing” of the competition from the boys in green. This “one-upmanship” and these playground antics are tiresome.

I don’t understand what the purpose of this is and if anyone can enlighten me, I’ll gladly listen.

I can only perceive that it’s for one of two reasons: 1) increasing sales, or 2) attempting to throw so much mud that it sticks and forces the company out of business. Now, I hope that it’s not the second option, as competition is good for everyone; the end users and the vendors. It will probably never work and, even if it did, I wouldn’t want that on my conscience.

If the purpose is to increase sales then I think this approach is ultimately flawed too. Constantly being negative will eventually turn the customers and the channel off. Sure keep a very close eye on your competitors. However, don’t constantly criticise their approach or their ill-informed decisions. Use these perceived misadventures to your advantage and outmanoeuvre them with a better solution! That’s what successful businesses are all about.

So How Can AppSense Fix This?

In my opinion it’s very simple: people would like to know why they should be buying AppSense’s products. What are the differentiators between their offerings and the competition? Some might call this good old-fashioned marketing! For example, to the majority it doesn’t matter that a product has OEM’d components/technologies or is written in native code etc. What people want is a product that works and does what they need it to do.

Now I will go on the record and state again that the AppSense suite of products are great and they have some fantastic technology. There are new technologies coming down the line that our existing and potential customers can leverage so please do bang the drum (and very loudly too) about how great DataNow and the other products are. Do tell us why we should be buying them! Just please, please, for the Love of God, focus on the marketing of products and not spreading FUD.

Rant over! If you feel offended then that’s not my intention, and I’m happy to discuss any of this with anyone if you feel it’s off the mark or factually incorrect. You can contact me via the usual channels or leave a comment. Now let’s start afresh and move on.. Iain

RES Automation Manager Emergency Patch Management

I previously covered the reasons why you probably wouldn’t use RES Automation Manager for patch management (see here). Max Ranzau (AKA @RESguru) made a great point that you can certainly use Automation Manager to push out individual patches easily. With the release of the critical Microsoft RDP patch MS12-020, and an exploit apparently in the wild, this proves that RES Automation Manager certainly still has its place in your patch management strategy.

Assuming that you haven’t exposed port 3389 directly to the internet, you may feel that you’re somewhat “safe.” I actually think that the greater risk comes from worms run from within the corporate network, inside the firewalls. All it takes is for one machine to be compromised… How many desktops and servers do you have inside the corporate network that have RDP access enabled?

Microsoft provides some workarounds that will give you time to test the patch prior to deployment. Fortunately, RES Automation Manager gives you the following options in dealing with this exploit using the built-in Automation Manager tasks/tasklets:

    1. Deploy the patch within minutes and/or
    2. Disable RDP connections completely and/or
    3. Enable/modify the Windows firewall rules to block RDP connections and/or
    4. Enable Network Level Authentication for RDP connections (options #2 and #4 are sketched below).
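By way of illustration, options #2 and #4 boil down to a couple of well-known registry values that you could wrap in an Automation Manager task; shown here as raw reg.exe commands, so test before unleashing them:

[code]:: Option 2: disable inbound RDP connections entirely
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 1 /f

:: Option 4: require Network Level Authentication for RDP connections
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v UserAuthentication /t REG_DWORD /d 1 /f[/code]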

One thing is for certain, you need to be acting and mitigating this risk now. I think it’s only a matter of time before things get interesting. Who remembers Slammer?! I know people who are still mentally scarred by its long lasting effects!

GPOs could help you with some of this, but nothing is going to be able to deploy any (or a mixture) of the above workarounds within minutes. How will you be sure that your workarounds are in place on all machines? RES Automation Manager will give you near-instant feedback on which tasks failed and provide you with the data to target those computers. Remember, if you use RDP/Remote Assistance for support then you’re probably limited to option #1 (or maybe #4).

If you don’t have RES Automation Manager today, you probably wish you did! You’ve been warned 😛.

Iain

PVS Image Management

There has been a bit of banter on the Twittersphere about how people manage and document their PVS images. It was suggested (by more than just me 🙂) that RES Automation Manager could be utilised for this task. This post is not a best practice guide on how to create, update or document your images, rather a use case on how and why we use/recommend RES Automation Manager. Heaven forbid, you might even decide to do away with Provisioning Services as a result. Either way, RES Automation Manager will play very nicely with or without PVS – but I couldn’t fit that into 140 characters!

Provisioning Services Private Image Management

Maintaining the gold or private mode PVS image can be a complex task for a number of reasons. Simplifying any of these potential hurdles can only be a good thing, right?

  1. A certain level of skill is required to both create and maintain images. There are numerous tasks that need to be completed and in some cases, performed in a particular order. As a result, this task is typically left or assigned to the senior administrators.
  2. Application upgrades can taint or stain the master image. Some applications require an uninstallation of the old version and installation of the new product MSI. I don’t need to tell you that uninstallers are not always reliable or clean everything out when run!
  3. How are changes to the gold image documented to ensure that they’re incorporated into all other PVS images? It is typical that there will be more than one image for deployment. For example, hardware differences will typically require separate images.
  4. Ad-hoc and emergency changes can wreak havoc with your PVS images. How quick and easy is it to push an update out to 100 XenApp servers streamed from a central image? If we make changes whilst the servers are running then they’ll be lost when the write cache is erased meaning we either have to reapply this change after every reboot or update our gold image pronto! This will get a lot more interesting if the servers are rebooted on a nightly basis and the write cache cleaned!

RES Automation Manager

If you know me by now, you probably know that I’m going to say that RES Automation Manager is the answer to all your prayers! Now whilst it can certainly address the above “issues” (and I would recommend it in conjunction with Provisioning Services any day of the week), there are other processes and solutions that may address one or more of the above, and deploying RES Automation Manager won’t automagically fix them. A good example of this is documentation. If your internal processes mandate that all changes are documented and you bypass this process, there is nothing to stop you bypassing it even when Automation Manager is installed!

What RES Automation Manager Won’t Do

Thought I’d better get this bit out of the way before you get all the way to the end and are disappointed! RES Automation Manager is a Run Book Automation tool and not an imaging/deployment tool. This means that we cannot (directly) deploy an Operating System from RES Automation Manager. Fortunately for us there are many technologies out there that can, e.g. Windows Deployment Services/Microsoft Deployment Toolkit, which BTW can be combined with RES Automation Manager – take a look at this White Paper. Why reinvent the wheel?!

What RES Automation Manager Will Do

So once we have our Operating System deployed and the RES Automation Manager agent installed (we can do this with WDS/MDT as mentioned earlier) what benefits will this give us? Well, at a simplistic level, RES Automation Manager can automate the entire server configuration and application deployment process. This process can also include installing XenApp and XenApp Prep as well as any other applications. This obviously takes some additional time but gives us a clean, repeatable process for deploying a XenApp server from scratch. It’s a strategic decision and not a tactical one!

Why is this important? Typically it comes down to issues #2 and #3, but let’s take them all one at a time…

Issue #1

RES Automation Manager can reduce this complexity by removing Provisioning Services altogether. I’m not suggesting that you remove this from your infrastructure. Not even for one minute. However, if you don’t need to have a clean image after every reboot, getting shot of PVS may be an option. We have automated the complete server deployment and can typically provision a new server in a few hours from start to finish; Operating System, XenApp and applications. OK, it’s a few hours of time, but there is no user interaction required. I’m guessing that it’s probably not that often that you need to add a new server within 30 minutes?

This benefits the typical IT department as these are now regular servers. They’re supported in the same way as other servers and they have a proper OS install etc. There are downsides too. Now we need to patch and maintain multiple OS instances and not just one master image. Isn’t this part of the reason you deployed Provisioning Services in the first place?

Issue #2

By having a repeatable process for building our XenApp server(s) from scratch we can avoid tainting our image. If we need to cut a new image then we can deploy a completely clean server and deploy the required applications as required. We don’t need to uninstall and reinstall or upgrade applications. I’m not advocating this as a best practice, but I know lots of admins that are a lot happier with this process. It doesn’t need to be performed for all updates, but you now have an option as to whether you update the master image or cut a new one. If you have not automated the entire deployment and configuration process, recreating a new image from scratch probably doesn’t make you feel warm and fluffy inside!

When you finally get run over by a bus (it’s going to happen one day, as everyone keeps saying), pretty much anyone with an ounce of intelligence can deploy a new server or reverse engineer the Modules and Tasks in the Run Book to discover how things are tied together.

Issue #3

By virtue of automating the entire configuration and deployment process with RES Automation Manager, you have actually documented every step in the process. RES Automation Manager includes the ability to create an Instant Report of any or all Run Books, Projects and/or Modules. These reports are very detailed (small example here) and typically run to 1,000+ pages. For us consultants, this feature alone is worth its weight in gold. Did I mention that it’s available in RES Workspace Manager too? 😉

Issue #4

Finding out when, and by whom, changes were made. Whether they’re changes to the gold image or ad-hoc emergency fixes, it doesn’t matter; the audit trail of those changes is vitally important, especially with change management processes. Well, you won’t be surprised to hear that RES Automation Manager has a built-in Audit Trail which allows you to view all actions performed in RES Automation Manager – how handy is that when a witch hunt is on (oh, that never happens now, does it!?).

Issue #5

As usual I’ve saved the best until last, and you didn’t see number 5 coming! The “pièce de résistance”, if you like. This might get a bit confusing, so strap yourselves in…

Emergency changes to running PVS instances are a pain. Depending on your configuration, changes may be lost after a reboot; and depending on your requirements, you may reboot nightly or even weekly. If there is a configuration change that needs to be made then ultimately we need to update the master image. We can implement the change on the running instances, but it will be lost at some point when the write cache is cleared. Until the master image is updated we will need to reimplement the change, potentially after every reboot.

Because RES Automation Manager is a Run Book Automation tool, we can implement this change across all running instances within minutes. “WAIT!”, I hear you cry, “these changes will be lost after a reboot!” Correct. But now we have achieved two things: documented the change, and we can automate the update to the master image at some point in the future.

Why did I say “at some point in the future”? Fortunately for us there is a hidden gem within RES Automation Manager called Snapshot Intelligence. With a name like that it had better be good, right!?

As the RES Automation Manager database has a record of all jobs that have executed on a given agent it can detect a snapshot. Whether this is a virtual snapshot or a backup restoration, it makes no odds. In our PVS world, if RES Automation Manager jobs have been run on a machine and the PVS instance is reset back to our master image state (write cache cleared), RES Automation Manager will detect this as a snapshot. You with me so far..?!

Once a snapshot is detected, RES Automation Manager can automatically reapply the job history (I’ll pause whilst you take this in and wait for the penny to drop!).

So if we automate all the emergency or ad-hoc updates with Automation Manager we can automatically reapply these after every reboot? Yes. No need to update the master image for every change? Yes.

In fact it gets better than that. When we update our master image we can run the exact same job history (automatically if you wish) to update the gold image. If you want to cut a new image from scratch we’ve got that covered too. Above all, if everything is automated with RES Automation Manager it’s automatically documented too. Needless to say, you get all the usual audit logging and change history.

Summary

So, in summary, using RES Automation Manager in combination with Citrix Provisioning Services has huge benefits, but there’s obviously a cost associated. Would I recommend it? Absolutely! For all of the above reasons. Is it worth it? Unfortunately, I can’t tell you that as only you know your environment.

Can RES Automation Manager replace Provisioning Services? Not entirely, as you’ll still need WDS/MDT (or equivalent) to deploy the OS. It also depends on your reasons for deploying PVS in the first place. If it’s for near-instant deployment, removing local disks, reducing the storage footprint or a clean image on every reboot, you’ll probably be using it for a long while yet. If your reasons are purely for “single image” management then you could potentially replace PVS in favour of a “traditional” deployment. Would I recommend this? It depends!

I know we’ve been focused on Provisioning Services in this article, but RES Automation Manager will help you with the rest of your infrastructure automation too: desktops, laptops, servers; Exchange and Active Directory etc. You may have XenDesktop, Quest vWorkspace or VMware View for your virtual desktops. The same principle applies, and you may even be using PVS in combination with these. Anyway, I don’t need to preach to the converted!

I will say that it should be a strategic decision to deploy RES Automation Manager. Don’t underestimate the amount of time it takes to automate and test. But I guess you already spend a lot of time testing your images?

You can find some video overviews/introductions of RES Automation Manager on Citrix TV and RES Tutorials. If you don’t want to take the time to download, install and configure RES Automation Manager but want to take a quick look, you can always request access to the RES Showcase. Some background and example videos on the Showcase platform can also be found here.

I’ll get off my soapbox now and crawl back to whence I came! Please feel free to comment and I’d love to hear your thoughts. Iain