Configuring CredSSP for Deploying XenDesktop via DSC

During my presentation at E2EVC in Berlin, we released new open-source PowerShell Desired State Configuration (DSC) resources for Citrix XenDesktop 7. Unfortunately for everyone there, I never actually finished the presentation.

The number of questions and level of interaction in the session were awesome. Even with the attempted sabotage by @drtritsch when he tried to blow the camera up, we recovered and soldiered on. Ultimately – having not completed the presentation and missing a few key implementation details – I thought it would be prudent to at least document how you can use the new resources! You know me – always a true professional :).

Credential Delegation

If you are using the new Citrix XenDesktop DSC resources in conjunction with WMF 4.0 then you will need to configure CredSSP on any machines that you will be installing the Citrix XenDesktop delivery controller role on. These machines will need to be configured to delegate credentials to all other delivery controllers and the Microsoft SQL server machine hosting the XenDesktop databases.

The reason for this is that the underlying Citrix XenDesktop PowerShell cmdlets have to run under Active Directory domain credentials. Unfortunately for us, the DSC Local Configuration Manager (LCM) runs under the LOCALSYSTEM context and Citrix do not provide us with a -Credential parameter [sigh].

This means that we have to resort to using PowerShell remoting to connect to the local machine with alternate credentials! Unfortunately, this “loopback” mechanism means that if we subsequently attempt to connect to another XenDesktop delivery controller or even the Microsoft SQL server, we are subjected to the “double-hop” restriction.
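
If you want to test the delegation by hand before pushing a configuration, the built-in WSMan cmdlets will configure CredSSP for you. This is only a sketch of the idea – the computer names are placeholders for your own controllers and SQL server:

## On each delivery controller: accept delegated (CredSSP) connections..
Enable-WSManCredSSP -Role Server -Force;
## ..and permit delegating fresh credentials onwards (NetBIOS and FQDN)
Enable-WSManCredSSP -Role Client -DelegateComputer 'XD7CONTROLLER','XD7CONTROLLER.lab.local','XD7SQL' -Force;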

You can see the “loopback” implementation details in the code. The majority of the custom resources invoke a script block on the local machine when credentials are supplied like so:

## If credentials have been supplied, add the splatted parameters needed to
## invoke the script block on the local machine via remoting
if ($Credential) {
    AddInvokeScriptBlockCredentials -Hashtable $invokeCommandParams -Credential $Credential;
}
else {
    ## No credentials; strip the $using: scope so the script block runs locally as-is
    $invokeCommandParams['ScriptBlock'] = [System.Management.Automation.ScriptBlock]::Create($scriptBlock.ToString().Replace('$using:','$'));
}
Write-Verbose ($localizedData.InvokingScriptBlockWithParams -f [System.String]::Join("','", @($Name, $Enabled, $Ensure)));
return Invoke-Command @invokeCommandParams;

Here, if the -Credential parameter is supplied in the configuration document, we add splatted parameters to invoke the script block on the local computer via remoting, specifying the credentials. If the -Credential parameter is not supplied, we just execute the script block as-is (after stripping out all the $using: statements).
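
For illustration only – this is a paraphrase and not the module’s actual helper – AddInvokeScriptBlockCredentials adds something along these lines to the splat:

function AddInvokeScriptBlockCredentials {
    param (
        [System.Collections.Hashtable] $Hashtable,
        [System.Management.Automation.PSCredential] $Credential
    )
    ## "Loopback" remoting to the local machine with CredSSP authentication,
    ## so that onward hops to the other controllers/SQL server are permitted
    $Hashtable['ComputerName'] = $env:COMPUTERNAME;
    $Hashtable['Credential'] = $Credential;
    $Hashtable['Authentication'] = 'CredSSP';
}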

Example Configuration

You can see an example of the CredSSP implementation in the example files used in the E2E presentation. Here, all node names in the $ConfigurationData that have a role of ‘Controller’ are put into an array (both NetBIOS and FQDN). Finally, the Microsoft SQL server’s name is added to ensure we can delegate credentials to create the databases.

## Need delegated access to all Controllers (NetBIOS and FQDN) and the database server
$credSSPDelegatedComputers = $ConfigurationData.AllNodes | Where Role -eq 'Controller' | ForEach {
    Write-Output $_.NodeName
    if ($_.NodeName.Contains('.')) { ## Output NetBIOS name as well
        Write-Output ('{0}' -f $_.NodeName.Split('.')[0]);
    }
    else { ## Output FQDN as well
        Write-Output ('{0}.{1}' -f $_.NodeName, $ConfigurationData.NonNodeData.XenDesktop.Site.DomainName);
    }
};
$credSSPDelegatedComputers += $ConfigurationData.NonNodeData.XenDesktop.Site.DatabaseServer;

When enumerating the configuration for each delivery controller, the first delivery controller creates the site and all remaining delivery controllers join the (now existing) site.

node ($AllNodes | Where Role -eq 'Controller' | Select -First 1).NodeName {
    XD7LabSite FirstSiteController {
        Credential = $Credential;
        DatabaseServer = $ConfigurationData.NonNodeData.XenDesktop.Site.DatabaseServer;
        DelegatedComputers = $credSSPDelegatedComputers;
        LicenseServer = ($ConfigurationData.AllNodes | Where Role -eq 'Licensing' | Select -First 1).NodeName;
        SiteAdministrators = $ConfigurationData.NonNodeData.XenDesktop.Site.Administrators;
        SiteName = $ConfigurationData.NonNodeData.XenDesktop.Site.Name;
        XenDesktopMediaPath = $Node.MediaPath;
    }
    ...
}
...
node ($AllNodes | Where Role -eq 'Controller' | Select -Skip 1 | ForEach { $_.NodeName } ) {
    XD7LabController AdditionalSiteController {
        Credential = $Credential;
        DelegatedComputers = $credSSPDelegatedComputers;
        ExistingControllerAddress = ($ConfigurationData.AllNodes | Where Role -eq 'Controller' | Select -First 1).NodeName;
        SiteName = $ConfigurationData.NonNodeData.XenDesktop.Site.Name;
        XenDesktopMediaPath = $Node.MediaPath;
    }
}

The list of computers to which credentials may be delegated is then passed to the composite DSC resource with the -DelegatedComputers parameter. Within the CitrixXenDesktop7Lab composite resources you can see the CredSSP implementation where the CredSSP client is actually configured:

Import-DscResource -ModuleName xCredSSP, CitrixXenDesktop7;

xCredSSP CredSSPServer {
    Role = 'Server';
}
    
xCredSSP CredSSPClient {
    Role = 'Client';
    DelegateComputers = $DelegatedComputers;
}

Easy when you know how, eh?! How you configure CredSSP is up to you; you don’t have to use the xCredSSP DSC resource and could use Group Policy if you wanted to. Just remember that if you’re running WMF 4.0 then CredSSP has to be configured somehow!

WMF 5.0

If you’re lucky enough to be able to use WMF 5.0 then all this CredSSP configuration becomes a moot point. In the latest WMF 5.0 preview the PowerShell team added a default -PsDscRunAsCredential parameter to all resources that handles the impersonation for us. Yay! \o/

This means that if you use the CitrixXenDesktop7 DSC resources on WMF 5.0 machines you should not specify the -Credential parameter in your configuration, but leverage the built-in -PsDscRunAsCredential parameter instead. This also means that the xCredSSP resource is no longer required and can be removed from your configuration or any composite DSC resources.
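
As a sketch only, reusing the earlier example, the additional controller node would simply become:

node ($AllNodes | Where Role -eq 'Controller' | Select -Skip 1 | ForEach { $_.NodeName } ) {
    XD7LabController AdditionalSiteController {
        ExistingControllerAddress = ($ConfigurationData.AllNodes | Where Role -eq 'Controller' | Select -First 1).NodeName;
        SiteName = $ConfigurationData.NonNodeData.XenDesktop.Site.Name;
        XenDesktopMediaPath = $Node.MediaPath;
        PsDscRunAsCredential = $Credential; ## built into WMF 5.0; replaces -Credential and the CredSSP plumbing
    }
}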

Remember, the CitrixXenDesktop7 and CitrixXenDesktop7Lab resources are open-source projects. We would love to see the community contribute, add resources and round out the implementation. Join us and get involved!

Let’s Get Chocolatey

It’s taken a little while, but we are pleased to announce that the Virtual Engine Toolkit (VET) and the App-V Configuration Editor (ACE) are now available on Chocolatey! If you have Chocolatey installed on your systems then you can get going by running the commands listed below.

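Assuming the same package IDs used with Install-Package later in this post (vet and ace), the Chocolatey commands are simply:

choco install vet
choco install ace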

Microsoft Package Management

It gets even better: if you’re running the Windows 10 Insider preview or the latest WMF 5.0 April 2015 preview, or have installed the latest experimental OneGet build on Windows 7 and up, you can download these directly – today!

To do this, run the Install-Package ace or Install-Package vet commands, like so:

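For reference, the commands are just these (assuming the experimental Chocolatey provider is available to PackageManagement/OneGet):

Install-Package ace
Install-Package vet
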
Other Packages

It doesn’t end here though. We have also published the following packages, which you might find useful, to the public Chocolatey feed:

Now – go get Chocolatey 😀

The Anywhere Home Drive

I thought I’d put pen to paper to discuss a topic that was raised during the most recent UK Citrix User Group meeting. Discussions in the afternoon session were focused around user data, mobility and possibly the end of the user home drive as we know it. There has been lots of talk about how insecure DropBox is and the desire for businesses to implement an on-premise solution. Obviously there are other commercial and open source services/products in the market today that were not covered, e.g. Box, Google Drive and SkyDrive etc.

Ultimately we had AppSense, Citrix and RES in the same room demonstrating their respective products; DataNow, ShareFile and HyperDrive.

The ways in which these products are implemented varies significantly and they’re seemingly targeted at different markets or are trying to solve slightly different problems (some of which I’m not convinced really exist). For example, AppSense take a data aggregation perspective whereas RES are outwardly targeting DropBox with their current release.

From a UEM perspective, both AppSense and RES obviously want users’ personalisation moved to a solution that also enables ubiquitous access within their respective UEM products. Synchronisation of favourites and signatures etc. regardless of device would be a massive boon for corporate users. Microsoft are doing some of this with Windows 8 today via SkyDrive.

My feeling is that regardless of what solution is implemented, the vendors need to make their product easier to use than “free” consumer products. I’m sure the majority of corporate users will happily utilise an alternative solution if it’s as easy or easier to use than their existing tool of choice. Key to this process is ensuring that a user does not have to move their data to a new solution and that their existing workflows continue to work. How many linked spreadsheets are entrenched within businesses today?

Out of the solutions demonstrated the other week AppSense appear to have a head start. DataNow will currently (only) make a user’s existing home drive/directory available via the DataNow client on iOS and Android devices. This will be extended to other SMB shares and SharePoint in the future with the DataNow Enterprise product. You never know, it might actually make using SharePoint bearable!

Contrary to the DataNow product, the HyperDrive appliance is targeted directly at replacing DropBox with an on-premise alternative. The caveat here is that data still needs to be moved from its existing location to a secured area managed by HyperDrive. Now don’t get me wrong; DropBox users do this already so this should not be a huge undertaking. However, moving and duplicating data is not a long-term solution.

Citrix announced at Synergy in Barcelona that the “Control Plane” that federates authentication and management for its ShareFile product is now available within the EU (Germany). For European organisations that was always going to be a necessary requirement. One can only assume that this will be extended to further geographies in the near future?

The introduction of Storage Zones for ShareFile, enabling on-premise data storage, is something of a (welcome) surprise considering the relationship Citrix have with both AppSense and RES. OK; I’m not really surprised, but it has happened quicker than I thought it would. Unfortunately for Citrix, until the Control Plane is able to run on-site it is likely there will still be some reluctance from enterprises, as you’re still reliant on Citrix data centres for access to your data.

In summary, all three of the products still need some development before they’re adopted by enterprises en masse, if at all.

  • The DataNow Enterprise product isn’t currently shipping (the Essentials version is) so it’s hard to recommend but looks promising.
  • HyperDrive is functional although a “little rough around the edges” but this should be expected for a v1 product. As a direct replacement for DropBox it is workable.
  • ShareFile has some fantastic opportunities with the Citrix Receiver integration but you’ll need both ShareFile Enterprise and CloudGateway Enterprise to get the most out of it. Would a smaller “point” solution be cheaper and easier to implement?

So who can deliver first…?

Move Machine Based Context Menus to Per User (Part I)

WARNING! This post requires you to edit the registry. Using Registry Editor incorrectly can cause serious problems that might require you to reinstall your operating system. Virtual Engine cannot guarantee that problems resulting from the incorrect use of Registry Editor can be solved. Use Registry Editor at your own risk. Be sure to back up the registry before you begin. If this hasn’t scared you off, keep reading…

In my time installing and configuring applications for multi-user environments like XenApp or RDS, I have come across many applications that create context menus within Windows Explorer to help a user quickly perform a function. The screen shot below shows how WinRAR has added context menus in Windows Explorer that allow the user to easily create a .RAR file having selected file(s) or folder(s).

[Image: WinRAR context menus in Windows Explorer]

Generally these context menus are machine-based, i.e. any user that logs in to a XenApp server will be able to see and use them. On the face of things you might ask yourself why this would be a problem? Well, suppose the application is strictly licensed for particular/named users. You wouldn’t want everyone having the option to use the context menus, otherwise you would need to license the application for all users! In this case what you’d really like is to only have them available to users who are licensed or who you deem need them. A typical example of this might be Adobe Acrobat Professional, which adds a context menu to combine documents into a single PDF.

The good news is there is a way of moving them from being machine-based to per-user with some fancy manipulation of various registry keys. So let’s begin, using our example of WinRAR, and see how this can be done.

Firstly, we need to understand where context menus are located within the registry. From my experience, when you right-click on file(s) within Windows Explorer the context menus will be found in:

HKEY_CLASSES_ROOT\*\shellex\ContextMenuHandlers

Here’s an example with WinRAR:

[Image: WinRAR context menu handler under HKEY_CLASSES_ROOT\*\shellex\ContextMenuHandlers]

When you right-click on folder(s) within Windows Explorer the context menus will be found in one or both of the following registry locations:

HKEY_CLASSES_ROOT\Folder\ShellEx\ContextMenuHandlers
HKEY_CLASSES_ROOT\Directory\ShellEx\ContextMenuHandlers

Below is a screen shot showing these registry keys for WinRAR:

[Image: WinRAR context menu handlers under the Folder and Directory keys]

So now we know where they are located, we should open up the Registry Editor (REGEDIT.EXE) and export the context menu registry keys that we would like to make per-user to .REG files (saving them to a location for safekeeping should you need to revert back!).

What we need to do next is take a copy of those same registry (.REG) files so we can edit them. Open the copies in, say, Notepad and replace HKEY_CLASSES_ROOT with HKEY_CURRENT_USER\Software\Classes (this is where the equivalent registry keys are kept for a user). It should now look something like this, using WinRAR as the example. Once completed, save and close the .REG file.

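Here’s a mocked-up example – the CLSID is a placeholder, so keep whatever value appears in your exported .REG file:

Windows Registry Editor Version 5.00

; CLSID below is a placeholder - use the value from your original export
[HKEY_CURRENT_USER\Software\Classes\*\shellex\ContextMenuHandlers\WinRAR]
@="{00000000-0000-0000-0000-000000000000}"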

Now we get dangerous (well, not really if you’re in the registry all the time adding, deleting and generally tinkering – sound familiar?!?). The next step requires us to alter the permissions of those context menu registry keys located in:

HKEY_CLASSES_ROOT\Folder\ShellEx\ContextMenuHandlers
HKEY_CLASSES_ROOT\Directory\ShellEx\ContextMenuHandlers
HKEY_CLASSES_ROOT\*\shellex\ContextMenuHandlers

Again using WINRAR as the example I would open up REGEDIT.EXE and browse to the following locations:

HKEY_CLASSES_ROOT\Folder\ShellEx\ContextMenuHandlers\WinRAR
HKEY_CLASSES_ROOT\Directory\ShellEx\ContextMenuHandlers\WinRAR
HKEY_CLASSES_ROOT\*\shellex\ContextMenuHandlers\WinRAR

Modify the ‘Users’ permissions from ‘Read’ to ‘Deny’ on each registry key (as listed above) like so:

[Image: ‘Users’ permission set to Deny on the WinRAR registry key]

Having changed those permissions, you have successfully removed the context menus on a per-machine basis – or, more precisely, denied access to users and administrators. I’m no fan of doing things manually so I try and automate where possible. My tool of choice for changing registry key permissions in an automated fashion would be RES Automation Manager, which has a built-in task to manage registry key functions, e.g. registry permissions. Unfortunately there appears to be a bug in RES Automation Manager for this task – which has been logged with RES Support – when the registry key contains “*”, i.e.

HKEY_CLASSES_ROOT\*\shellex\ContextMenuHandlers\WinRAR

So I turned to the ever-reliable SetACL from Helge Klein (follow Helge on Twitter here) to set the required registry permissions, and added the command line into my RES Automation Manager job. For existing users of RES Automation Manager I’ve attached a handy building block (just click on the big red brick) that can be used and adapted to your needs to change those permissions as described above.
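
For the curious, the SetACL command line looks something like this – a sketch, so double-check the switches against the SetACL documentation for your version (the ‘Users’ trustee may also need localising):

SetACL.exe -on "hkcr\*\shellex\ContextMenuHandlers\WinRAR" -ot reg -actn ace -ace "n:Users;p:read;m:deny"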

[wpdm_file id=9]

In Part II of this blog post I’ll describe how you go about targeting these same context menus at specific users.

Enjoy!

Nathan

Installing RES HyperDrive Hypervisor Tools

RES Software have finally released RES HyperDrive and it’s available as a virtual appliance for VMware vSphere, VMware Workstation, Citrix XenServer and Microsoft Hyper-V. One thing that might surprise you is that the relevant hypervisor integration tools are not preinstalled. It surprised me, but then I thought about it; how will RES know what particular build of vSphere or XenServer you’ll be running?!

Installing the integration tools will allow the RES HyperDrive appliance to play nicely with the hypervisor and permit migration and dynamic memory support etc. Whilst there are numerous articles on the internet, I thought I’d put a few notes together in one place to make it easy for people to evaluate or deploy the virtual appliance into production.

VMware Workstation 8 and vSphere 5

Installing VMware Tools into both VMware Workstation and VMware vSphere is relatively straight forward.

  1. Mount the VMware Tools CD/DVD by choosing the “Install VMware Tools” option within VMware Workstation/vSphere client
  2. Logon to the HyperDrive console or access the virtual appliance via SSH
  3. Mount the CD image with mount /dev/cdrom /mnt
  4. Copy the installation files to the temp directory cp /mnt/VMwareTools-x.x.x-xxxxx.tar.gz /tmp
  5. Change the temp directory cd /tmp
  6. Extract the tarball: tar zxpf /tmp/VMwareTools-x.x.x-xxxxx.tar.gz
  7. Dismount the CD image with umount /dev/cdrom
  8. Install VMware Tools by running ./vmware-tools-distrib/vmware-install.pl
  9. Accept all the defaults and select a default resolution for X support (not that it appears to be used on the appliance!)
  10. Reboot the virtual appliance
  11. If using vSphere the tools should be reported as installed

If you need to reconfigure the VMware tools configuration simply rerun the /usr/bin/vmware-config-tools.pl command as and when required.

Citrix XenServer 6

This is where the fun begins… Installing the XenTools into the XenServer virtual appliance image is just as easy as VMware Workstation/vSphere.

  1. Mount the Citrix XenTools CD/DVD by choosing the “Install XenServer Tools” option, or manually mount the xs-tools.iso image within XenCenter
  2. Logon to the HyperDrive console or access the virtual appliance via SSH
  3. Mount the CD image with mount /dev/cdrom /mnt
  4. Install XenTools by running /mnt/Linux/install.sh
  5. Accept all the default prompts
  6. Reboot the virtual appliance

Once you reboot the appliance you may find that XenCenter still reports that XenTools are not installed?! After some investigation (and not being a Linux expert) it appears that the kernel used to build the appliance is not paravirtualization (PV) aware and runs in HVM mode instead. Therefore, without recompiling the kernel it’s impossible to get the XenServer tools installed to support live migrations. Hopefully this is a simple oversight on RES’s part and will be rectified in future builds (I hope!).

In the meantime I will try to find out how to recompile the HyperDrive appliance kernel. I’ll let you know if I’m successful, but it doesn’t look particularly easy. There appears to be an issue with the configured Nomadesk YUM repository and it won’t retrieve the package list correctly to install the additional kernel. My recommendation at this point would be to utilise the VMware appliances wherever possible, or live with the fact that you’ll need to shut the HyperDrive appliance down to move it to another host.

I’ve not played with the Hyper-V appliance as we have no customers running it, but please let me know if it works and how to install the tools!

Automating Citrix Provisioning Server Install with RES AM

Here is a blog post I put together on automating the build of Citrix Provisioning Services using RES Automation Manager 2012. Before we get into the details I thought I’d mention a few resources and solutions I found on the way which helped me out. A big thanks to:

Before you can begin you will need to make sure you have the following prerequisites in place:

  • Provisioning Server Software (PVS 6.1 used for this example);
  • Windows Server 2003 upwards (Windows 2008 R2 SP1 used in this example);
  • .NET 3.5 or higher is installed;
  • RES Automation Manager 2012;
  • Use the latest Citrix Licensing server.

I’ve split the automated process into two distinct parts, creating the PVS database and installing PVS, to make it easier to digest. If you’re lazy or just want to crack on you can just download the building blocks and get going! Note: you will need to update the resource references to the PVS 6.1 installation files.

Creating the PVS Database

Before we can automate the PVS installation we need to have a database in place for the PVS servers to connect to. Unfortunately for us there’s not an easy way to accomplish this, as we need to generate an SQL script with our required database values. As we’re invoking the creation process from RES Automation Manager 2012, we can utilise parameters to prompt the administrator for these values at run time.

To create the SQL script we first need to install the Provisioning Services software on a clean Windows 2008 R2 server (or, if you have an install already, you can obtain it from here). Once installed we can run C:\Program Files\Citrix\Provisioning Services\DBscript.exe to launch the Provisioning Services Database Script Generator. Exciting stuff I know!!!

[Image: Provisioning Services Database Script Generator]

If we complete the details with placeholders (as above) for the database name and farm name, DBscript will create the required .SQL script with values that we can use within our RES Automation Manager jobs. Click OK and it will create the CreateProvisioningServerDatabase.sql file in the path specified, complete with embedded placeholders.

We can now import this file as a resource into the RES Automation Manager console. Note: remember to tick the ‘Parse Environment variable and parameters’ checkbox. If you forget to do this we’ll attempt to create a database with a name of $[PVSDB] which probably won’t work (not that I’ve checked!).

To create the required SQL database we can utilise the CreateProvisioningServerDatabase.sql file with the built in RES Automation Manager database connector task(s) or via SQLCMD on the local Microsoft SQL instance. As we’re cheap and can’t assume that you’re licensed for the relevant connector, we’ve utilised SQLCMD in the building blocks. For more details on this, download them and have a look.
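
As a minimal sketch of the SQLCMD route (assuming a trusted connection and that the script resource has been downloaded to the working directory):

[code]sqlcmd -S $[DBSERVER] -E -i CreateProvisioningServerDatabase.sql[/code]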

After the database has been created we need to add SQL permissions to the database (if using a network user for the SOAP and STREAM services). This is achieved with a couple of SQL statements (see the building blocks for more information). If we’re using a Windows service account to run these services, the user will be configured later during the install… And now the fun begins!

Installing and Configuring PVS

Now that the database is created we can move on to installing the software, configuring it and adding servers to the farm. Installing the software is no problem; however, configuring and adding servers to the farm is a bit more involved. The method I used for configuring the servers was to utilise the configwizard.ans file, which holds all the configuration items. Running “%PROGRAMFILES%\Citrix\Provisioning Services\configwizard.exe” /s records the answers given to C:\ProgramData\Citrix\Provisioning Services\configwizard.ans.

Once we have the configwizard.ans file we can edit it and embed our RES Automation Manager 2012 parameters within it. If you’d like to know what options can be configured in the answer file, run configwizard.exe /c; the configuration wizard will write a C:\ProgramData\Citrix\Provisioning Services\configwizard.out file. Again, all this information is in our building blocks.

I used two different answer files: one for the first server joining the farm and the other for all subsequent servers. Below is an example of the first server’s configwizard.ans file:

[code]IPServiceType=$[IPServiceType]
PXEServiceType=$[PXEServiceType]
FarmConfiguration=2
DatabaseServer=$[DBSERVER]
DatabaseInstance=
FarmExisting=$[PVSFARM]
ExistingSite=$[PVSSITE]
ADGroup=$[DOMAIN]/Builtin/Administrators
Store=$[PVSSTORE]
DefaultPath=$[STOREDRIVE]$[STORELOCATION]
UserName=$[SERVICEACCOUNTUSER]
UserPass=$[SERVICEACCOUNTUSERPASSWORD]
network=$[NETWORKACCOUNT]
Database=$[DBCONFIGUSER]
PasswordManagementInterval=7
StreamNetworkAdapterIP=$[STREAMINGSERVERIP]
IpcPortBase=6890
IpcPortCount=20
SoapPort=54321
BootstrapFile=C:\ProgramData\Citrix\Provisioning Services\Tftpboot\ARDBP32.BIN
LS1=$[STREAMINGSERVERIP],0.0.0.0,0.0.0.0,6910
AdvancedVerbose=0
AdvancedInterrultSafeMode=0
AdvancedMemorySupport=1
AdvancedRebootFromHD=0
AdvancedRecoverSeconds=50
AdvancedLoginPolling=5000
AdvancedLoginGeneral=30000[/code]

Once the answer file(s) have been created and modified, import them into the RES Automation Manager resources. Note: remember to select the ‘Parse Environment variable and parameters’ checkbox!

Finally, to automate the actual PVS install, we need to make sure we download these resources to the C:\ProgramData\Citrix\Provisioning Services\ directory on the target server. Then we kick off the configuration wizard, which will apply the configuration, by running configwizard.exe /a. Once complete, the services should start automatically, and when you start the PVS console and connect you should be presented with the new farm – well, hopefully anyway!!
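
The final step is then little more than a file copy and a command execution; something along these lines (a sketch, not the exact building block contents):

[code]copy configwizard.ans "C:\ProgramData\Citrix\Provisioning Services\"
"%PROGRAMFILES%\Citrix\Provisioning Services\configwizard.exe" /a[/code]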

Problems Encountered

If you do have problems using the answer file and the install fails, the best place to start troubleshooting is the C:\ProgramData\Citrix\Provisioning Services\Log directory. If it all goes wrong you will notice that there will be only one file here: configwizard.log. At the end of this file it should hopefully give you some meaningful reason for the failure. If all works fine and the services start, you should see around 8 log files and have a big smile on your face :D.

I did have other issues whilst getting this to work. Here are a few notes in case they help:

  • No device license available when a new machine is booted via Provisioning Server: you will see the error in the stream process log on the PVS server, and on the device a pop-up message will say “No device License currently available for this computer; a system shutdown will be initiated in 96 hours”. I found the resolution to this problem was to upgrade the license server to the latest build.
  • The PVS console does not install via an AM job – ensure that UAC is disabled and use a security context to run the job instead of the local System account.
  • After a server install I could not mount vDisks on the PVS server and would get an error similar to “Cannot mount Vdisk mapi error”. Looking in Device Manager, I noticed that the Citrix Virtual Hard Disk Enumerator driver was not installed correctly. To resolve this, first remove the device, then go to %PROGRAMFILES%\Citrix\Provisioning Services\Drivers, right-click cfsdep2.inf and install it; then go back to Device Manager, add legacy hardware, select “Have Disk” and point to the same location (the file is cvhdbusp6.inf). It should then install the device without any issues. Alternatively – the preferred option with RES AM – create a module to download CFSDep2.cat, CFSDep2.inf and CFSDep2.sys to C:\windows\system32\drivers before installing Provisioning Server, and all should be okay.
  • When using a service account, make sure the user is given the required permissions, i.e. read/write on the PVS store directory on the PVS servers, and db_datareader and db_datawriter on the database (although the latter can be handled for you if you select the option to configure the user for the database).

The building blocks have now been updated, as there was a problem with the service account password passing through to the answer file; this should be resolved. I have also added a module to remove the answer file, as the password is stored in plain text.

Hope this helps. Enjoy! Simon

[wpdm_file id=7]

Citrix HDX RealTime Media Engine Fails to Install

Since the recent release of the Citrix HDX RealTime Optimization Pack for Lync, one of my colleagues, Simon Pettit, has been installing and configuring it on our development XenDesktop environment. The Citrix HDX RealTime Optimization Pack for Lync can be installed with both Citrix XenApp and Citrix XenDesktop. In our scenario we’re installing onto Citrix XenDesktop but it is equally applicable to Citrix XenApp. The installation has two components:

  1. HDX RealTime Connector (HDX RealTime Connector LC.msi) that’s installed on the XenDesktop virtual machine;
  2. HDX RealTime Media Engine (Citrix HDX RealTime Media Engine.msi) that should be installed on the endpoint device connecting to the XenDesktop virtual machine.

Once Step 1 had been completed, we were then in a position to complete Step 2 – and this is where I started to have issues. No sooner had I started the installation than I was faced with this warning message.

[Image: warning message stating the Citrix Receiver is not installed]

Now I knew for a fact that I had the Citrix Receiver installed and working, so the warning message was in fact lying!! But why?

So I next decided to crack open the MSI in one of my must-have tools, InstEd, and see what logic the MSI was using to determine why the Citrix Receiver wasn’t installed. The first place I always look is the CustomAction table; this is where some ISVs love to try and cheat the built-in methods within Windows Installer, i.e. using the AppSearch table and the like. Please don’t use custom actions; I hate them with a passion!

Looking in the CustomAction table we can see two actions, “CheckCitrixPluginVersion” and “CitrixPluginNotFound”, which look like our culprits. You should also notice that the source of all this evil is the file “Install.vbs”.

[Image: CustomAction table shown in InstEd]

The “Install.vbs” file can be found in the Binary table, where such evils are normally hidden!! Now, as this file is embedded into the MSI, we CAN’T easily see what logic the ISV has implemented. Did I mention that this is why I hate them with a passion?

[Image: Install.vbs entry in the Binary table]

Now, luckily, we can use InstEd to export the contents of the table by simply right-clicking on the table and selecting “Export Table” (and selecting the location to export the table to). If we now browse to the destination folder we will see a “Binary” directory (named after the table) which contains the files from the Binary table.

[Image: exporting the Binary table with InstEd]

We can now open the file up in our favourite text editor and take a peek inside to see what’s going on. So let’s look at the contents of this file:

[code]Sub CheckRunningApp()
    Set objWMIService = GetObject("winmgmts:\\.\root\cimv2")

    Set colProcesses = objWMIService.ExecQuery _
        ("SELECT * FROM Win32_Process WHERE " & _
         "Name = 'MediaEngineHost.exe'")

    If colProcesses.Count > 0 Then
        Session.Property("APP_RUNNING") = "1"
    End If
End Sub

Sub CheckCitrixVersion()
    Dim strComputer
    Dim oReg
    Dim strKeyPath
    Dim strCitrixReceiverVersion
    Dim strMinCitrixReceiverVersion
    Dim strCitrixInstallFolder
    Dim strKeyCitrixPathPath
    Dim strKeyCitrixVerPath
    Dim bHasAccess
    Const HKEY_LOCAL_MACHINE = &H80000002

    strComputer = "."
    strMinCitrixReceiverVersion = "11.2"
    strCitrixReceiverVersion = ""
    strCitrixInstallFolder = ""
    strKeyCitrixPathPath = ""
    strKeyCitrixVerPath = ""
    bHasAccess = False

    Set oReg = GetObject("winmgmts:{impersonationLevel=impersonate}!\\" & strComputer & "\root\default:StdRegProv")
    strKeyPath = "SOFTWARE\Wow6432Node\Citrix\ICA Client"
    oReg.CheckAccess HKEY_LOCAL_MACHINE, strKeyPath, &H20000, bHasAccess

    If (Err.Number = 0 And bHasAccess = True) Then
        strKeyCitrixVerPath = "SOFTWARE\Wow6432Node\Citrix\InstallDetect\{A9852000-047D-11DD-95FF-0800200C9A66}"
        strKeyCitrixPathPath = "SOFTWARE\Wow6432Node\Citrix\Install\ICA Client"
    Else
        Err.Clear
        strKeyPath = "SOFTWARE\Citrix\ICA Client"
        oReg.CheckAccess HKEY_LOCAL_MACHINE, strKeyPath, &H20000, bHasAccess

        If (Err.Number = 0 And bHasAccess = True) Then
            strKeyCitrixVerPath = "SOFTWARE\Citrix\InstallDetect\{A9852000-047D-11DD-95FF-0800200C9A66}"
            strKeyCitrixPathPath = "SOFTWARE\Citrix\Install\ICA Client"
        End If
    End If

    If (Err.Number = 0 And bHasAccess = True) Then
        oReg.GetStringValue HKEY_LOCAL_MACHINE, strKeyCitrixVerPath, "DisplayVersion", strCitrixReceiverVersion

        If (StrComp(strCitrixReceiverVersion, strMinCitrixReceiverVersion) >= 0) Then
            Session.Property("CITRIX_VERSION_112") = "1"
        Else
            Session.Property("CITRIX_VERSION_112") = "0"
        End If

        Err.Clear
        oReg.GetStringValue HKEY_LOCAL_MACHINE, strKeyCitrixPathPath, "InstallFolder", strCitrixInstallFolder

        If (Err.Number = 0) Then
            Session.Property("CITRIX_PATH") = strCitrixInstallFolder
        Else
            Session.Property("CITRIX_PATH") = "0"
        End If
    Else
        Session.Property("CITRIX_VERSION_112") = "0"
        Session.Property("CITRIX_PATH") = "0"
    End If
End Sub

Sub SetMEHostLocationsValue()
    installPath = Session.Property("INSTALLDIR")

    If IsNull(installPath) Or Len(installPath) = 0 Then
        Exit Sub
    End If

    locationsStr = Session.Property("MEHOST_LOCATIONS_ENTRIES")

    If IsNull(locationsStr) Or Len(locationsStr) = 0 Then
        newLocVal = installPath

        tempVarStr = Session.Property("%TEMP")
        If Not IsNull(tempVarStr) And Len(tempVarStr) > 0 Then
            newLocVal = newLocVal + ";%TEMP%"
        End If
    Else
        If InStr(locationsStr, installPath) = 0 Then
            newLocVal = installPath + ";" + locationsStr
        Else
            newLocVal = locationsStr
        End If
    End If

    Session.Property("MEHOST_LOCATIONS_ENTRIES") = newLocVal
End Sub[/code]

Now I’m not going to talk through the logic of the VBScript line by line as you can do that yourselves. However, I will draw your attention to these bits:

[code]strKeyPath = "SOFTWARE\Wow6432Node\Citrix\ICA Client"
oReg.CheckAccess HKEY_LOCAL_MACHINE, strKeyPath, &H20000, bHasAccess[/code]

And:

[code]strKeyPath = "SOFTWARE\Citrix\ICA Client"
oReg.CheckAccess HKEY_LOCAL_MACHINE, strKeyPath, &H20000, bHasAccess[/code]

You can see the VBScript is checking HKEY_LOCAL_MACHINE for the existence of certain registry values that determine whether the Citrix Receiver is installed (or not), and setting Windows Installer properties that instruct the MSI to display the warning message we originally received.

Having now found out where it is determining this information, I checked those related HKLM registry keys on my local machine and, surprise surprise, they weren’t there! So it’s taken me a while to find out why I’m getting the warning message (I could have potentially just used ProcMon when running the installer – but this blog also gives you some handy insight into the workings of Windows Installer). Interestingly, those very same registry keys are present in HKCU!
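
A quick way to see this for yourself – a sketch assuming a 64-bit machine (hence the Wow6432Node path from the VBScript above) and that the per-user keys mirror the HKLM paths:

[code]Test-Path 'HKLM:\SOFTWARE\Wow6432Node\Citrix\ICA Client'  ## False here: this is all the MSI checks
Test-Path 'HKCU:\Software\Citrix\ICA Client'              ## True here: where Receiver actually registered[/code]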

[Image: Citrix Receiver registry keys under HKCU]

I hope you’re still with me? If you are, it begs the question: why, when I installed the Citrix Receiver, did these registry keys appear in HKCU? If you were looking at the above screen shot very closely, you will also notice that the Citrix Receiver files are installed into my profile too!?

[Image: Citrix Receiver files installed in the user profile]

Well, as it turns out, the answer is simple, if not flawed in my view (but that’s another post). I installed the Citrix Receiver as my “standard” user account, i.e. with no admin privileges, and as such the Citrix Receiver installed the files into my profile and the registry keys into HKCU. The HDX RealTime Media Engine installer is obviously unaware that this is at all possible, hence why it’s only checking HKLM.

So, in conclusion, for the HDX RealTime Media Engine install to work without the warning message, you need to have installed the Citrix Receiver as an administrator. This will ensure the files are installed into either “%ProgramFiles%” or “%ProgramFiles(x86)%” and the registry keys into HKEY_LOCAL_MACHINE. If you’re (thinking of) operating a BYOC scheme you will probably need to be aware of this, as your users won’t be and who knows what they’re doing!

Thanks for staying with me on this one – but I hope it was worth the wait.

Nathan

PVS Image Management

There has been a bit of banter on the Twittersphere about how people manage and document their PVS images. It was suggested (by more than just me) that RES Automation Manager could be utilised for this task. This post is not a best practice guide as to how to create, update or document your images, rather a use case on how and why we use/recommend RES Automation Manager. Heaven forbid, you might even decide to do away with Provisioning Services as a result. Either way, RES Automation Manager will play very nicely with or without PVS, but I couldn’t fit that into 140 characters!

Provisioning Services Private Image Management

Maintaining the gold or private mode PVS image can be a complex task for a number of reasons. Simplifying any of these potential hurdles can only be a good thing, right?

  1. A certain level of skill is required to both create and maintain images. There are numerous tasks that need to be completed and in some cases, performed in a particular order. As a result, this task is typically left or assigned to the senior administrators.
  2. Application upgrades can taint or stain the master image. Some applications require an uninstallation of the old version and installation of the new product MSI. I don’t need to tell you that uninstallers are not always reliable or clean everything out when run!
  3. How are changes to the gold image documented to ensure that they’re incorporated into all other PVS images? It is typical that there will be more than one image for deployment. For example, hardware differences will typically require separate images.
  4. Ad-hoc and emergency changes can wreak havoc with your PVS images. How quick and easy is it to push an update out to 100 XenApp servers streamed from a central image? If we make changes whilst the servers are running then they’ll be lost when the write cache is erased meaning we either have to reapply this change after every reboot or update our gold image pronto! This will get a lot more interesting if the servers are rebooted on a nightly basis and the write cache cleaned!

RES Automation Manager

If you know me by now, you probably know that I’m going to say that RES Automation Manager is the answer to all your prayers! Now, whilst it can certainly address the above “issues” (and I would recommend it in conjunction with Provisioning Services any day of the week), there are other processes and solutions that may address one or more of the above, and deploying RES Automation Manager won’t automagically fix them. A good example of this is documentation. If your internal processes mandate that all changes are documented and you bypass this process, there is nothing to stop you bypassing it even if Automation Manager is installed!

What RES Automation Manager Won’t Do

Thought I’d better get this bit out of the way before you get all the way to the end and are disappointed! RES Automation Manager is a Run Book Automation tool and not an imaging/deployment tool. This means that we cannot (directly) deploy an Operating System from RES Automation Manager. Fortunately for us there are many technologies out there that can, e.g. Windows Deployment Services/Microsoft Deployment Toolkit, which BTW can be combined with RES Automation Manager – take a look at this White Paper. Why reinvent the wheel?!

What RES Automation Manager Will Do

So once we have our Operating System deployed and the RES Automation Manager agent installed (we can do this with WDS/MDT as mentioned earlier) what benefits will this give us? Well, at a simplistic level, RES Automation Manager can automate the entire server configuration and application deployment process. This process can also include installing XenApp and XenApp Prep as well as any other applications. This obviously takes some additional time but gives us a clean, repeatable process for deploying a XenApp server from scratch. It’s a strategic decision and not a tactical one!

Why is this important? Typically it comes down to the issues above, so let’s take them one at a time…

Issue #1

RES Automation Manager can reduce this complexity by removing Provisioning Services altogether. I’m not suggesting that you remove this from your infrastructure. Not even for one minute. However, if you don’t need to have a clean image after every reboot, getting shot of PVS may be an option. We have automated the complete server deployment and can typically provision a new server in a few hours from start to finish: Operating System, XenApp and applications. OK, it’s a few hours of time, but there is no user interaction required. I’m guessing that it’s probably not that often that you need to add a new server within 30 minutes?

This benefits the typical IT department as these are now regular servers. They’re supported in the same way as other servers and they have a proper OS install etc. There are downsides too. Now we need to patch and maintain multiple OS instances and not just one master image. Isn’t this part of the reason you deployed Provisioning Services in the first place?

Issue #2

By having a repeatable process for building our XenApp server(s) from scratch, we can avoid tainting our image. If we need to cut a new image then we can deploy a completely clean server and install the required applications as needed. We don’t need to uninstall, reinstall or upgrade applications. I’m not advocating this as a best practice, but I know lots of admins that are a lot happier with this process. It doesn’t need to be performed for all updates, but you now have an option as to whether you update the master image or cut a new one. If you have not automated the entire deployment and configuration process, recreating a new image from scratch probably doesn’t make you feel warm and fluffy inside!

When you finally get run over by a bus (it’s going to happen one day, as everyone keeps saying), pretty much anyone with an ounce of intelligence can deploy a new server or reverse engineer the Modules and Tasks in the Run Book to discover how things are tied together.

Issue #3

By virtue of automating the entire configuration and deployment process with RES Automation Manager, you have actually documented every step in the process. RES Automation Manager includes the ability to create an Instant Report of any or all Run Books, Projects and/or Modules. These reports are very detailed (small example here) and typically run to 1,000+ pages. For us consultants, this feature alone is worth its weight in gold. Did I mention that it’s available in RES Workspace Manager too?

Issue #4

This issue is about finding out when and by whom changes were made. Whether they are changes to the gold image or ad-hoc emergency fixes, the audit trail is vitally important, especially where change management processes are concerned. Would you be surprised to hear that RES Automation Manager has an in-built Audit Trail which allows you to view all actions performed in RES Automation Manager? How handy is that when a witch hunt is on (oh, that never happens now, does it!?).

Issue #5

As usual I’ve saved the best until last and you didn’t see number 5 coming! The “pièce de résistance” if you like. This might get a bit confusing so strap yourselves in ready…

Emergency changes to running PVS instances are a pain. Depending on your configuration, changes may be lost after a reboot and, depending on your requirements, you may reboot nightly or even weekly. If there is a configuration change that needs to be made then ultimately we need to update the master image. We can implement the change on the running instances, but it will be lost at some point when the write cache is cleared. Until the master image is updated we will need to implement the change, potentially after every reboot.

Because RES Automation Manager is a Run Book Automation tool we can implement this change across all running instances within minutes. “WAIT!”, I hear you cry, “These changes will be lost after a reboot!” Correct. But now we have achieved two things; documented the change and can automate the update to the master image at some point in the future.

Why did I say “at some point in the future”? Fortunately for us there is a hidden gem within RES Automation Manager called Snapshot Intelligence. With a name like that it better be good, right!?

As the RES Automation Manager database has a record of all jobs that have executed on a given agent it can detect a snapshot. Whether this is a virtual snapshot or a backup restoration, it makes no odds. In our PVS world, if RES Automation Manager jobs have been run on a machine and the PVS instance is reset back to our master image state (write cache cleared), RES Automation Manager will detect this as a snapshot. You with me so far..?!

Once a snapshot is detected, RES Automation Manager can automatically reapply the job history (I’ll pause whilst you take this in and wait for the penny to drop!).

So if we automate all the emergency or ad-hoc updates with Automation Manager we can automatically reapply these after every reboot? Yes. No need to update the master image for every change? Yes.

In fact it gets better than that. When we update our master image we can run the exact same job history (automatically if you wish) to update the gold image. If you want to cut a new image from scratch we’ve got that covered too. Above all, if everything is automated with RES Automation Manager it’s automatically documented too. Needless to say, you get all the usual audit logging and change history.

Summary

So, in summary, using RES Automation Manager in combination with Citrix Provisioning Services has huge benefits, but there’s obviously a cost associated. Would I recommend it? Absolutely! For all of the above reasons. Is it worth it? Unfortunately, I can’t tell you that as only you know your environment.

Can RES Automation Manager replace Provisioning Services? Not entirely, as you’ll still need WDS/MDT (or equivalent) to deploy the OS. It also depends on your reasons for deploying PVS in the first place. If it’s for near-instant deployment, removing local disks, reducing the storage footprint or having a clean image on every reboot, you’ll probably be using it for a long while yet. If your reasons are purely for “single image” management then you could potentially replace PVS in favour of a “traditional” deployment. Would I recommend this? It depends!

I know we’ve been focused on Provisioning Services in this article but RES Automation Manager will help you with the rest of your infrastructure automation. Desktops, laptops, servers; Exchange and Active Directory etc. You may have XenDesktop, Quest vWorkspace or VMware View for your virtual desktops. The same principle applies and you may even be using PVS in combination with these. Anyway, I don’t need to preach to the converted!

I will say that it should be a strategic decision to deploy RES Automation Manager. Don’t underestimate the amount of time it takes to automate and test. But I guess you already spend a lot of time testing your images?

You can find some video overviews/introductions on RES Automation Manager on Citrix TV and RES Tutorials. If you don’t want to take the time to download, install and configure RES Automation Manager but want to take a quick look, you can always request access to the RES Showcase. Some background and example videos on the Showcase platform can be also found here.

I’ll get off my soapbox now and crawl back to whence I came! Please feel free to comment and I’d love to hear your thoughts. Iain

Replacing the default XenServer WSS Certificate

Something a little bit different from the normal RES-related posts this time. During the deployment of the Demo Showcase platform we needed to replace the SSL certificate used by the XenServer Web Self Service (WSS) servers. Reviewing the WSS documentation revealed very little about how to achieve this. As you can see, the user and installation guides offer very little guidance!

[Image: excerpt from the WSS documentation]

Much to my surprise, I couldn’t locate a web resource that details how to do this, i.e. generate the required ssl.crt and ssl.key files. There are lots of snippets of information but no simple post that details the requirements or the steps to perform. This is my attempt to rectify the situation!

Pre-requisites

Before you begin, it is assumed that you have the following prepared/installed:

  1. The required SSL certificate has been exported into .PFX format (and you know the private key password!);
  2. You have OpenSSL installed;
  3. WinSCP (or other SCP client) is installed.

Converting the Certificate to a .CRT and .KEY Pair

The WSS appliance expects the certificate and private key to be provided as two separate files rather than combined in a single .PFX (or .PEM) file. We can generate the correct files by utilising the OpenSSL tools. The secret to this part is to ensure that the generated .KEY file is not encoded with a password; if it is, you’ll receive an error when attempting to start the web service on the WSS appliance.

To export the certificate (.CRT) component from the .PFX file run the following OpenSSL command: openssl pkcs12 -in <ssl-certificate.pfx> -clcerts -nokeys -out <ssl.crt>

To export the private key (.KEY) without a password, run the following OpenSSL command: openssl pkcs12 -in <ssl-certificate.pfx> -nodes -nocerts -out <ssl.key>
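
As a worked example (wss.pfx is a hypothetical filename – substitute your own export):

openssl pkcs12 -in wss.pfx -clcerts -nokeys -out ssl.crt
openssl pkcs12 -in wss.pfx -nodes -nocerts -out ssl.key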

Transferring the Certificate Files to the WSS Appliance

Once you have the required .CRT/.KEY file pair, you’ll need to copy them to the Web Self Service appliance. This is a fairly straightforward process but requires enabling the SSH daemon on the appliance first. To do this you’ll need to connect to the WSS appliance console via XenCenter. Once you’ve logged onto the console, issue the following command: service sshd start

You’ll also want to stop the Web Self Service process by running the following command: service webss stop

After the SSH service has started and WSS services are stopped, you can now copy the .CRT and .KEY files to the /root/sse/conf directory via WinSCP (or your tool of choice). Note: you might want to rename the original .CRT and .KEY files before copying the replacements in!

Restart the WSS services by executing: service webss start

All being well, you should receive no errors and when browsing to the WSS homepage you should not be warned about the SSL certificate! Here’s an example using a certificate with the Common Name set as the default sse-https-server.

[Image: WSS homepage with a trusted certificate for sse-https-server]

Simples! I hope someone finds this useful one day! Iain

The case of the missing Office 2007 Quick Access Toolbars

Recently I’ve been involved with one of our customers who is migrating from RES PowerFuse 2008 and roaming profiles on XenApp to RES Workspace Manager 2011 (WM) utilising Zero Profile Technology (ZPT) and a mandatory profile.

One of the issues they had in the new environment was that the Office 2007 Quick Access Toolbars (QAT) weren’t being captured by WM using the built-in templates provided, which capture the locations set out below:

Windows XP and Windows Server 2003 (v1 profiles):
%USERPROFILE%\Local Settings\Application Data\Microsoft\Office

Vista, Windows 7 and Windows Server 2008 (v2 profiles):
%USERPROFILE%\Appdata\Local\Microsoft\Office

So everything appeared to be configured correctly, but still the QAT files weren’t being captured – or, worse still, weren’t being saved in those locations.

Upon further investigation the QAT files were actually being saved in ‘%APPDATA%\Microsoft\Office’, which seemed very odd to me as I was sure one of the problems that people found when using roaming profiles with Office 2007 was that the QAT didn’t roam! This led me to thinking that maybe Microsoft had at some point released a hotfix that did in fact allow the QAT files to roam. So let’s turn to the interweb and see what that brings up and, lo and behold, Microsoft did just that and released a hotfix http://support.microsoft.com/kb/958062 (included in Office 2007 SP2). Setting the registry key as described in the Microsoft article forces the QAT files to be saved in the user’s profile, which would then allow them to roam.

Once I had this vital information I quickly found that this particular customer had uncovered this fix and had implemented it using RES PowerFuse 2008 (clever boys/gals!!). Because we had copied and upgraded the RES PowerFuse 2008 datastore to WM 2011, this user registry setting was being applied to the new environment. To resolve the problem we simply removed that registry setting and let the power of the templates supplied with RES WM 2011 do their thing and capture the QAT files in the default location.

NOTE: One thing I will add about the supplied templates is that they don’t capture the QAT files for Outlook 2007, i.e. Olkaddritem.qat, Olkapptitem.qat, Olkdistitem.qat, Olklogitem.qat, Olkmailitem.qat, Olkpostitem.qat and Olktaskitem.qat.

To resolve this issue you can add the file filter ‘%LOCALAPPDATA%\Microsoft\Office\Olk*.qat’ to the capture settings for Outlook 2007. Hopefully this will get rolled into the default Office 2007 application templates by RES in a future release.

Hope this helps.

Nathan