BigHand Digital Dictation and RES Automation Manager – BHF File locations

Whilst this post is targeted at retrieving archived BigHand dictation file locations, it does demonstrate how we can use RES Automation Manager parameters and its Microsoft SQL integration to good effect.

Locating a Dictation File

BigHand holds dictations in a proprietary format with the extension .BHF.  This is essentially just a WAV file with minor modifications.  Once a user has completed a dictation it will disappear from their view after a pre-determined number of days.  If you need to retrieve one of these files once it has been removed, or if a system corruption means you need to re-import dictations, there is no easy way to identify the file, as the files have GUID filenames and can be sitting on any file store server in your BigHand estate.  Using a combination of SQL and Automation Manager you can make everyone’s life easier.

Create SQL View

First you will need to create a SQL view on your BigHand SQL instance called Find_Dictation_Location as per the below script:
CREATE VIEW dbo.Find_Dictation_Location
AS
SELECT a.BH_FirstName, a.BH_LastName, a.BH_UserName, b.BH_Title,
       CONVERT(varchar(10), b.BH_CreationDate, 103) AS BH_CreationDate,
       b.BH_CompletionDate, b.BH_Deleted AS BH_Deleted_Task, b.BH_Destination,
       b.BH_Description, b.BH_MatterNumber, b.BH_DocumentType, b.BH_Confidential,
       b.BH_FileRequired, b.BH_Open, c.BH_FileGuid, c.BH_Location, c.BH_Version,
       CAST(d.BH_URL AS nvarchar(4000))
           + CAST(SUBSTRING(c.BH_Path, 2, LEN(c.BH_Path)) AS nvarchar(4000)) AS Path,
       c.BH_Deleted AS BH_Deleted_File
FROM dbo.BH_Users a WITH (NOLOCK)
     INNER JOIN dbo.BH_Tasks b WITH (NOLOCK) ON a.BH_UserGuid = b.BH_Author
     INNER JOIN dbo.BH_FileLocations c WITH (NOLOCK) ON b.BH_TaskGuid = c.BH_FileGuid
     INNER JOIN dbo.BH_Locations d WITH (NOLOCK) ON c.BH_Location = d.BH_Location
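The Path column is simply the location URL glued to the stored file path with its first character dropped (that’s what the SUBSTRING(c.BH_Path, 2, …) does). In Python terms, with illustrative values only:

```python
def build_path(url: str, bh_path: str) -> str:
    """Mimic the view's Path column: BH_URL plus BH_Path minus its leading character."""
    return url + bh_path[1:]

# Hypothetical values - real BH_Path entries begin with a placeholder
# character that the view strips off before appending to the URL
print(build_path(r"\\bighand-fs01\store", r".\{3F2504E0-4F89-11D3-9A0C-0305E82C3301}.BHF"))
```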

Automation Manager SQL Query

Next you need to run a SQL query against that view to retrieve the URL of the file, but the SQL query needs to be edited every time you use it, so let’s create an AM job to do this.
1. Create a Module and Add a SQL Statement (Query) task from the advanced folder
2. On the settings tab add the following SQL query:
SELECT BH_UserName, BH_Title, BH_CreationDate, Path, BH_Deleted_File
FROM Find_Dictation_Location
WHERE BH_UserName LIKE '%$[Username]%'
  AND BH_Title LIKE '%$[Title]%'
  AND BH_CreationDate LIKE '%$[Date]%'
Note the parameter references in the SQL query ($[Username], $[Title] and $[Date]).  These are the fields required to identify the dictation.  Obviously these can be changed to any of the fields available in the SQL view.
So that AM can insert the correct variables into the script you will need to create parameters for:
a. Username (Username)
• This will allow a full or part entry of the user who created the dictation
b. Dictation title (Title)
• This will allow a full or part entry of the Dictation title
c. Time and Date (Date)
• This must be the exact creation date; you can set a mask for the parameter on the input tab in the format 00/00/0000.
Once these are created, you will be able to run the query against your SQL instance.  It will collect information on the dictations you specified by looking up the relevant GUIDs in the database and create a field entry (Path) that has the URL built up from the separate BH_Locations and BH_FileLocations tables.  This will then be presented in the Job results for the query.
View results in Automation Manager Job History

Quest vWorkspace and RES VDX Integration

We’ve come across some issues with the integration of RES VDX and Quest’s vWorkspace on a couple of customer sites recently. Some old timers might remember an issue that occurred when using the then RES Subscriber/Workspace Extender with Provision Networks Virtual Access Suite (VAS). The issue fixed by Q201760 “RES Subscriber and RES PowerFuse Workspace Extender do not work with Provision Networks VAS” is the same issue experienced with vWorkspace.

Both the legacy VAS client and the newer vWorkspace client are virtually the same, and the registry key is still called ‘HKLM\Software\Provision Networks\Terminal Services Client\’. However, to integrate the newer VDX components we need to reference the newer VDX DLL(s) rather than the Workspace Extender DLL.

To get the VDX integration to work with vWorkspace we need to create one of the following keys on the local machine. Note: these registry keys are HKEY_LOCAL_MACHINE.

For 32-bit clients:
KEY: HKLM\Software\Provision Networks\Terminal Services Client\Addins\RESVDX
VALUE: Name
TYPE: REG_SZ
DATA: C:\Program Files\RES Software\VDX Plugin\VDXRDPPlugin_x86.dll

For 64-bit clients:
KEY: HKLM\Software\Wow6432Node\Provision Networks\Terminal Services Client\Addins\RESVDX
VALUE: Name
TYPE: REG_SZ
DATA: C:\Program Files\RES Software\VDX Plugin\VDXRDPPlugin_x86.dll
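If you’d rather script this than set the values by hand, the keys can be captured in a .reg file like the sketch below; merge the section that matches your client’s bitness, and point the data at wherever VDXRDPPlugin_x86.dll is actually installed (on x64 machines the 32-bit plugin typically lives under Program Files (x86)):

```reg
Windows Registry Editor Version 5.00

; 32-bit clients
[HKEY_LOCAL_MACHINE\Software\Provision Networks\Terminal Services Client\Addins\RESVDX]
"Name"="C:\\Program Files\\RES Software\\VDX Plugin\\VDXRDPPlugin_x86.dll"

; 64-bit clients (the vWorkspace client is 32-bit, hence Wow6432Node)
[HKEY_LOCAL_MACHINE\Software\Wow6432Node\Provision Networks\Terminal Services Client\Addins\RESVDX]
"Name"="C:\\Program Files\\RES Software\\VDX Plugin\\VDXRDPPlugin_x86.dll"
```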

Simples once you know how! Iain

Using Software Restriction Policies to Block Scripts

When we are implementing RES Workspace Manager POCs/Pilots on a customer’s site, one of the first things we try to do is create a new AD organisational unit (OU) where our test PCs or XenApp/RDS servers will be placed. One reason we do this is that it allows us to block any existing AD group policies (GPOs) that might impact the POC, e.g. startup/shutdown/logon/logoff scripts; especially as these might be the cause of the slow logins we are trying to improve using Workspace Manager.

For computer-related GPOs we use “block inheritance” on the new OU. For user-related GPOs we employ the “GPO loopback > replace” technique.

These methods work very well, but something I’ve come across on customer sites is that the login script has been set in the AD properties for each user, not within any GPO that you are trying to block, as you can see in the screen shot below. Generally this is the “old school” method of doing this, but it’s still out there!

image

This causes us some headaches, especially when these users are asked to start testing the POC/Pilot and the first thing that happens is they start complaining that it takes an age to log in. Why? Because the script is mapping 24 network drives and 15 printers at logon!!

Therefore, we need to stop this script from running on our POC/Pilot environment. We could do this by simply removing the line from their AD properties but what happens if they still want to use the existing environment that relies on this script to map drives and printers? We need to find another way of doing it…in steps “Microsoft Software Restriction Policies”.

Using Software Restriction Policies will allow us to block these logon scripts without affecting the users’ ability to use the existing environment; here’s how.

Firstly we need to add the Software Restriction Policy to a GPO so that it will apply; the easiest way is to add it to the new GPO we created earlier that applies the computer-related settings.

Using the Group Policy Management Console (GPMC), edit the GPO and expand “Computer Configuration/Windows Settings/Security Settings/Software Restriction Policies”.

image

Right click on “Software Restriction Policies” and select “New Software Restriction Policies”.

image

At this point you will see some additional settings available.

image

Right click on “Additional Rules” and select “New Path Rule”.

image

You now need to tell the policy which path to block scripts from running from. Most likely these scripts will be located in the NETLOGON share on your domain controllers (DCs); the problem now being which DC the script will run from, should you have more than one in your environment. Easy: we can use the %LOGONSERVER% environment variable, which holds the DC that logged on the current user. The security level should obviously be set to “Disallowed”.
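So the path rule in this example ends up looking like the following (a sketch; adjust the path if your scripts live somewhere other than NETLOGON):

```
Path:           %LOGONSERVER%\NETLOGON
Security level: Disallowed
Description:    Block logon scripts running from the authenticating DC's NETLOGON share
```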

image

That’s about it!! Now when you logon to the POC/Pilot environment you can be sure any unwanted logon/logoff scripts will be blocked from running.

Nathan

Change RDS User Logon Modes using RES Automation Manager

[wpdm_file id=6]If you are using Windows 2008 R2 Remote Desktop Services you might have noticed that there are various user logon modes available on the Remote Desktop Session Host; which you can see from the screen shot below:

These are all well and good should you wish to manually change the user logon mode, i.e. allow reconnections but prevent new logons until the server is restarted for, say, maintenance purposes. But if you are already using RES Automation Manager, why not complete this task in a more automated fashion? This can be easily achieved by adding the following tasks and module parameter.

Module Tasks:

image

Module Parameters:

RDSConnModParam

Module Task 1 (Enable Logons):
  1. Add Task > Remote Terminal Server Logons.
  2. Settings > Enable Logons.
  3. Condition Expression > User Logon Mode = 1.
  4. Condition Expression > Computer Function = Terminal Server.
  5. If condition is TRUE then > Execute this task, but skip all remaining tasks in this module.
  6. If condition is FALSE then > Skip this task.
Module Task 2 (Disable Logons) :
  1. Add Task > Remote Terminal Server Logons.
  2. Settings > Disable Logons.
  3. Condition Expression > User Logon Mode = 2.
  4. Condition Expression > Computer Function = Terminal Server.
  5. If condition is TRUE then > Execute this task, but skip all remaining tasks in this module.
  6. If condition is FALSE then > Skip this task.
Module Task 3 (Allow connections) :
  1. Add Task > Registry Settings (Apply).
  2. Settings > Add the following registry value.
  3. [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server]
    “TSServerDrainMode”=dword:00000000.
  4. Condition Expression > User Logon Mode = 3.
  5. Condition Expression > Computer Function = Terminal Server.
  6. If condition is TRUE then > Execute this task, but skip all remaining tasks in this module.
  7. If condition is FALSE then > Skip this task.
Module Task 4 (Allow reconnections, but prevent new logons) :
  1. Add Task > Registry Settings (Apply).
  2. Settings > Add the following registry value.
  3. [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server]
    “TSServerDrainMode”=dword:00000002.
  4. Condition Expression > User Logon Mode = 4.
  5. Condition Expression > Computer Function = Terminal Server.
  6. If condition is TRUE then > Execute this task, but skip all remaining tasks in this module.
  7. If condition is FALSE then > Skip this task.
Module Task 5 (Allow reconnections, but prevent new logons until server is restarted) :
  1. Add Task > Registry Settings (Apply).
  2. Settings > Add the following registry value.
  3. [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server]
    “TSServerDrainMode”=dword:00000003.
  4. Condition Expression > User Logon Mode = 5.
  5. Condition Expression > Computer Function = Terminal Server.
  6. If condition is TRUE then > Execute this task, but skip all remaining tasks in this module.
  7. If condition is FALSE then > Skip this task.

Once you have created the module, when you come to schedule the job it’s then a simple matter of selecting which logon mode you would like to apply from the job parameters. Of course you can schedule the job on individual agents, multiple agents or teams of agents.
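For reference, the same logon modes can also be toggled outside Automation Manager from an elevated command prompt on the Session Host itself, using the built-in change logon utility (flags as per Windows Server 2008 R2):

```batch
rem Show the current logon mode
change logon /query
rem Allow all connections
change logon /enable
rem Disable all logons
change logon /disable
rem Allow reconnections, but prevent new logons
change logon /drain
rem Allow reconnections, but prevent new logons until the server is restarted
change logon /drainuntilrestart
```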

image

To make life even easier I’ve created a handy building block that you can import into your environment.

[wpdm_file id=6]

Any questions just ask.

Enjoy

Nathan

Patch Management with RES Automation Manager

“Can I use RES Automation Manager for patch management?” It’s a question that I get asked; lots. As you may well know my stock answer to nearly all questions is “YES! However…” In this particular case I’m going to stick with my stock answer and I’ll explain why here.

Using RES Automation Manager to manage application deployment and patching is absolutely the right thing to be doing, but not for Microsoft products, i.e. Windows and Office updates etc. RES Automation Manager provides us with the advanced scheduling capabilities that are required when deploying updates. We can send feedback to users, control reboots and also create our own dependencies and prerequisites using the built-in Conditions. Creating installation and/or update modules for Adobe Reader, Flash and Java etc. is a fairly straightforward process.

So why can’t we use RES Automation Manager for MS updates? Let’s get one thing clear – there is nothing stopping you from doing this. However, do you really (and I mean REALLY!) want to be creating modules for every hotfix, patch and Service Pack that Microsoft releases? What about controlling installation orders and prerequisites? What about testing and ensuring the update is really needed before attempting to install it? When I last checked WSUS for Windows and Office updates there were more than 20,000 updates. Hopefully you get the point!

What can we do then? By leveraging the metadata that Microsoft has already created, we don’t need to recreate any of this information ourselves; I’m guessing the product teams inside Microsoft are the best people to create it! We use WSUS internally to manage our patching for Microsoft products on clients, servers, hosted servers and training labs. We recommend that our customers either use WSUS or an existing patch management tool if one is already in place. We then manage all other application installations and updates as Modules within RES Automation Manager and use its advanced scheduling capabilities to push them out.

Surely if we’re going to implement WSUS for Microsoft patching, it would make sense to patch all our products with an extension to WSUS? Not necessarily. Remember you’re getting a whole load more with RES Automation Manager than just pushing out patches; think Run Books and Evaluators. We can deploy software and machine configurations, provision resources such as Active Directory accounts and Exchange mailboxes and a whole lot more.

For those that want to extend WSUS there are products available in the market to integrate 3rd party patches such as Adobe and Java. Check out the EminentWare web site for one such example. You can roll your own application updates as well with the free Local Update Publisher on SourceForge.

At the end of the day, if you have to package a single internally developed application or products not supported by 3rd party tools, you might as well extend that list to common middleware such as Flash and/or Java and save the money on the 3rd party integration. Leverage WSUS for what it’s good at and use RES Automation Manager for what it’s very good at; the rest!

Migrating RES Databases to a New SQL Server

We have recently had the requirement to move the RES Workspace Manager and RES Automation Manager SQL databases to a new Microsoft SQL Server 2008 R2 server. This process is not complicated, but there does seem to be a lack of documentation available. At a high level we need to perform the following actions:

  1. Backup the RES Workspace Manager and Automation Manager databases on the source server;
  2. Restore the RES Workspace Manager and Automation Manager databases on the destination server;
  3. Fix the SQL permissions, i.e. recreate the users and redelegate access;
  4. Update the RES Automation Manager Dispatchers to point to the new database server;
  5. Update the RES Workspace Manager Agents to point to the new database server.

I’m not going to cover Steps 1 and 2 as these are well documented on Microsoft’s web site and many other various blogs. In this particular instance we’re moving from SQL 2008 to SQL 2008 R2 and I’ve restored copies of the Workspace Manager and Automation Manager databases on the new server. The SQL user account for the Workspace Manager database (RES-WM) is ‘RES-WM’ and the user account for the RES Automation Manager database is ‘RES-AM’ (note that naming the database and user accounts the same is not best practice but it helps in our lab environment!).

Migrating RES Automation Manager

We’ll start with the RES AM database as we’ll then use this to update the RES WM information! Firstly we need to check that the correct user permissions have been granted on the new database server. When creating the SQL user accounts you’ll need to ensure that the password policies are set correctly:

In short make sure that the user password policies are disabled (unless you want to be forever updating your Dispatchers!). If you forgot to uncheck this and you can’t seem to change it, you can run the following SQL script via the SQL Management Studio (remember to change the RES-AM reference to your SQL user account!):

USE master
GO
ALTER LOGIN [RES-AM] WITH PASSWORD = 'samepassword'
GO
ALTER LOGIN [RES-AM] WITH CHECK_POLICY = OFF, CHECK_EXPIRATION = OFF;
GO

After this is complete you will need to ensure that the RES-AM user account has DB Owner (DBO) rights to the database via the “User Mapping” page of the user account:
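If the restored database complains that the RES-AM user is “orphaned” (i.e. the database user no longer maps to a server login on the new instance), the mapping can be repaired rather than recreated. A sketch, assuming the login already exists and you’ve kept the same account names as our lab:

```sql
USE [RES-AM]
GO
-- Re-attach the restored database user to the server login of the same name
ALTER USER [RES-AM] WITH LOGIN = [RES-AM];
GO
-- Grant db_owner if the restore didn't carry the role membership across
EXEC sp_addrolemember 'db_owner', 'RES-AM';
GO
```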

Once we’re happy with this we can focus our attention on the RES Automation Manager console. As it’s only the RES AM Consoles and Dispatchers that talk directly to the SQL database, we do not need to worry about the RES AM Agents. From the RES AM management console select the Infrastructure > Setup > Database node and enter the new SQL database server name (if the username/password has changed you can update them here).

After you click the Connect button the management console will reload and ask if you want to use the connection information permanently.

The final piece of the puzzle is to update all the Dispatchers and Consoles. From the Infrastructure > Engines node we need to repair each Dispatcher and update the SQL connection information.

After the Dispatchers have been updated don’t forget to update the consoles in the same manner. You could also run a registry job via AM to update the connection information (remember to do this from the old database, as the Dispatchers will still be communicating with the old database until updated!) or push out a new MSI from the Components node.

Migrating RES Workspace Manager

Note: RES Workspace Manager 2011 has the built-in ability to migrate the existing database to a new SQL server and/or database; this can be found in Setup > Datastore > Connections > click on the ‘…’ next to the Primary datastore to display the migration wizard. At the end of the migration process, after various other prompts, you will also be asked if you wish to create a handy building block that can be used in a RES Automation Manager Module.  You can use this Module to migrate RES Workspace Manager Agents running an older version of RES Workspace Manager that does not yet contain the Datastore Migration Wizard. The only downside I’ve found with the migrate method is that you have to activate the licenses again; if this is going to cause issues, follow the procedure set out below. [Nathan Sperry]

RES Workspace Manager is slightly different as all RES WM agents talk directly to the SQL server rather than via a Dispatcher. After migrating the RES Workspace Manager database (as above) and fixing the user permissions, we need to update the RES WM agents’ registry settings via RES Automation Manager! For this task I created a module that updates the required registry value and restarts the RES Workspace Manager agent service.

You will notice that there are two Registry Settings tasks with conditions; one is for 64-bit machines and the other for 32-bit. Note: if the authentication details have changed you’ll need to add the relevant registry settings to both Registry tasks.

Note: if you have a mixture of freshly deployed RES Workspace Manager agents and agents upgraded from RES PowerFuse 2010 or earlier, then the registry settings are in different locations and you may need more tasks/conditions!
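As a sketch, the body of each Registry Settings task might look something like the fragment below. Treat the key path and value name here as illustrative only; check an existing agent’s registry under HKLM\SOFTWARE\RES to confirm the exact key and value names for your Workspace Manager version before building the module:

```reg
Windows Registry Editor Version 5.00

; 32-bit machines - key path and value name are illustrative, verify against a working agent
[HKEY_LOCAL_MACHINE\SOFTWARE\RES\Workspace Manager]
"DBServer"="NEWSQLSERVER"

; 64-bit machines
[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\RES\Workspace Manager]
"DBServer"="NEWSQLSERVER"
```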

Iain

RES Workspace Manager Zones and USB Devices

Have you ever had a requirement to base a Device Zone within RES Workspace Manager on a particular hardware device? If so, you may have already discovered that if it is a storage-based device then you’re good to go. However, if it’s not of a “removable storage” type then we’re seemingly out of luck. Not quite…

Whilst on a customer site, a requirement arose that necessitated detecting whether a particular USB device was connected, so that we could configure an application for the hardware device. After a lot of digging and searching, I discovered that device information is located in the [HKEY_LOCAL_MACHINE\System\CurrentControlSet\Enum] registry subkey(s). The problem with these registry entries is that they exist if the device has ever been connected; they don’t indicate that the device is currently connected, so it’s back to the drawing board…

On further investigation, the [HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services] key lists all the currently connected devices (all internal devices are listed in here, so don’t be too surprised how many there are!). The difficult bit comes in determining how your USB device is enumerated. As a general rule, all attached USB devices will probably be listed under keys beginning with USB. For example, the USB microphone I’m using is listed under the ..\Services\USBAudio subkey and a USB printer under the ..\Services\USBPrint subkey (more detail on hunting this information down might be a future blog post). In this instance I’m going to pick on the Samson C03U microphone and show you how you can create a Device Zone in RES Workspace Manager that will allow you to set configuration options only when it’s connected. Looking in the registry, the [HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\USBAudio\Enum] key exposes the following settings:

I know that my microphone is identified as USB\VID_17A0&PID_0100&MI_00\6&1895ccd4&0&0000. We can see that the Samson microphone is listed in here. When the device is unplugged the instance disappears like so:

So to create our Device Zone in RES Workspace Manager for detecting the presence of a Samson microphone, all we need to do is create a zone based on the presence of this particular registry setting? Nearly! By creating a zone on the information from the first screenshot, it will only be true for the microphone on my desk, not any Samson C03U mike that might be on someone else’s desk. Experience has shown that everything listed after the ‘&MI_00\6&’ is device specific, i.e. a serial number or unique identifier, and can safely be ignored (unless you want to tie the zone to a unique device). Therefore, if we create a Zone based on the presence of the ‘USB\VID_17A0&PID_0100&MI_00\6&*’ (note the wildcard) value it should work for all Samson C03U microphones.

Done? Almost (and you knew I was going to say that!). The value of ‘0’ (zero) in the first screenshot depicts the order in which a device is attached. Therefore, if I just so happened to have another USB audio device, the Samson mike might be listed as the second device under ‘1’ or the third device under ‘2’ etc. In order to ensure that we account for this we need to add multiple entries in like so:

If I had 4 USB audio devices, then depending on the order they were attached my Zone may fail to detect that the Samson microphone was attached. I could have added 10 or so entries but hopefully you get the idea! If you have varying models of devices, then it’s likely that the PID_ (product ID) portion of the values will change. In this case, you’ll need to make sure the rules also incorporate any variations.
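To see how the wildcard generalises across instances of the same model, here’s a quick, purely illustrative Python sketch of the matching logic (Workspace Manager does its own matching internally; the device IDs below are hypothetical examples in the same format):

```python
from fnmatch import fnmatchcase

# Match any Samson C03U instance: everything after the vendor/product/interface
# IDs is instance-specific, so it is covered by the trailing wildcard
pattern = r"USB\VID_17A0&PID_0100&MI_00*"

devices = [
    r"USB\VID_17A0&PID_0100&MI_00\6&1895ccd4&0&0000",  # my Samson C03U
    r"USB\VID_17A0&PID_0100&MI_00\7&0badf00d&0&0000",  # same model, another desk
    r"USB\VID_0D8C&PID_013C&MI_00\8&2c342a9&0&0000",   # a different make of mic
]

matches = [d for d in devices if fnmatchcase(d, pattern)]
# The first two match; the third (different VID/PID) does not
print(matches)
```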

It would be nice to have pattern matching on the registry keys/values like we have within RES Automation Manager. Perhaps it’s an enhancement request, but in all honesty, why can’t we natively select any attached hardware device rather than being restricted to USB storage devices? Perhaps I’ll take this up with product management.

Good luck! Iain

RES Workspace Manager Registry Import Bug

[UPDATE 01/08/2011 – RES have released a fixpack for RES Workspace Manager 2011 SR1 that resolves the issues highlighted in this post. I don’t have any word on whether this fix will be rolled into the next Service Release of RES PowerFuse 2010 (SR5?). I hope so as we can then remove this post. In the meantime, please contact RES Software support to obtain this fix (assuming you’re running WM 2011 SR1!)]

An issue has been discovered in RES Workspace Manager 2011 and earlier versions (e.g. PowerFuse) when importing .REG files. Ironically, this was discovered when converting existing Group Policy Objects via the Virtual Engine Toolkit (VET). RES Workspace Manager does not implement the removal/deletion of registry keys or values correctly. It has been reported to RES Software and they have acknowledged there is an issue. It is not an issue with the Virtual Engine Toolkit but a problem with any .REG file, i.e. ones migrated from logon scripts etc. [UPDATE – for clarification purposes, a support ticket had been raised prior to this post and RES are working on a fix.]

The following snippet from a REG file (some entries removed for clarity) should toggle the removal of NoInternetIcon and NoNetHood values and also toggle the removal of the \Software\Policies\Microsoft\Windows\NetCache\AssignedOfflineFolders key.

Windows Registry Editor Version 5.00
; Created by the Virtual Engine Toolkit v0.9.7.0
; Creation date: 05-30-2011 17:04:08

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer]
“DisablePersonalDirChange”=dword:00000001
“NoDesktopCleanupWizard”=dword:00000001
“NoInternetIcon”=-
“NoNetHood”=-

[-HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows\NetCache\AssignedOfflineFolders]

What actually happens is RES Workspace Manager sets the two DWORD values to 1 but doesn’t process the removal of the NoInternetIcon and NoNetHood values or the deletion of the \Software\Policies\Microsoft\Windows\NetCache\AssignedOfflineFolders key, as shown below. This results in completely unexpected behaviour and might impact any Proof of Concept or pilot deployments.

Hopefully this issue will get resolved promptly but in the meantime, please be vigilant!

Iain

Add computers to a local group using RES Automation Manager

One of the many built-in tasks provided in RES Automation Manager is the ability to add users or groups to a local group on a server or desktop. The task does what it says on the tin, but one lesser-known fact (well, to me at least, until I tried it) is that you can also add computers to a local group using the same task. A very simple example of this would be when using Remote Desktop Services and the Remote Desktop Connection Broker. In this example each Remote Desktop Session Host that is participating in the farm needs to be added to a local group on the Remote Desktop Connection Broker called “Session Broker Computers”.

To add a computer to the local group you simply need to append a dollar sign ($) to the computer name in the “User(s) and/or group(s) to add” field, as you can see in the screen shot below.
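Outside Automation Manager, the command-line equivalent on the Connection Broker would look like this (a sketch; substitute your own domain and Session Host name):

```batch
rem Add the RDSH01 computer account to the local group on the Connection Broker
net localgroup "Session Broker Computers" MYDOMAIN\RDSH01$ /add
```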

It’s as simple as that!!

Nathan

Updating your XenApp farm using RES Automation Manager

When publishing an application across multiple servers in a XenApp farm, one of the key elements to a trouble-free environment is having consistency across the farm.  RES Automation Manager can help with getting this right.

So, you have your software package imported into Automation Manager and you deploy to each XenApp server in turn.  Here I will go through a method of updating that software and running maintenance automatically with no outage and (most importantly) an easy life for everyone.

First we should set out exactly what we need to do.  There are several stages to this process:

  1. Disable access to the XenApp server.
  2. Ensure users are not logged on.
  3. Switch to install mode.
  4. Check the version and run the installation.
  5. Switch to execute mode.
  6. Enable logons to the server.

These stages can easily be broken down into 3 Automation Manager Modules; let’s take a look at how to get them setup.

Module 1
Disable logons, wait for the server to be free from users and switch to install mode.
We have three tasks to run here:

Task 1 – Disable logons
Click Add to create the first task of the module.  Select Remote terminal server logins (Change) from the Configuration folder and select:
Disable remote logins

Make sure you name the task appropriately and you’re done.

Task 2 – Wait for the server to be free from users
Best practice when installing software on a XenApp server suggests that you should not have any user sessions on that server.  Therefore we need to wait for users to leave.  Now I’m not all that interested in sitting in front of the Access Management Console hitting refresh every 5 minutes until everyone is off, so let’s get AM to do that for us!

We need to make use of the PowerShell capabilities for this, so click Add and select Windows PowerShell Script (Execute) from the Advanced folder.  One of the recent improvements to AM is the ability to input a script directly into the task; this helps keep the whole process in a single place.  You can also easily save the script as a resource and point the task to it (good if you have a single script being reused in multiple tasks).

On the settings tab select “Use Windows PowerShell script from “Script” tab” to enable the Script tab, or point the task to the .ps1 file if you have already saved it as a resource.

By default, the PowerShell scripts that you run using this Task need to be digitally signed.  Select Override execution policy for this Task to temporarily lower the PowerShell execution policy to “Unrestricted” and use an unsigned PowerShell script.  After execution of the Task, the PowerShell security will be reverted to the previous security level.

Next you will need to set the timeout.  Since you want the script to run until all users are logged off, you need to set this to the maximum allowed, which is 300 minutes.  It would be nice if there was the option to disable the timeout if required, but there isn’t at the moment, so 5 hours will have to do for now.

Once you’re done here you’ll need to click the script tab and add the script.

The Script

The script checks the session count on the XenApp server every 5 minutes.  If any user accounts are disconnected, it will log them off.  The script will loop until there are zero sessions or the timeout is reached.  This is key to allowing a RES AM job that waits for a condition to occur before moving on, since the AM native conditions will only be evaluated once at the start of the job.

Although you will not be able to view the output of the script while it is running, it is kept in the job history to help with diagnosis.

Script File (Zip)
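The gist of the script can be sketched as below. This is illustrative only (the zipped script above is the reference version); it uses the built-in qwinsta/logoff utilities rather than any Citrix-specific cmdlets, and the session-parsing regexes are simplified:

```powershell
# Sketch: loop until no user sessions remain, logging off disconnected sessions
$deadline = (Get-Date).AddMinutes(300)   # mirrors the 300-minute task timeout

do {
    # Log off any disconnected sessions (session ID precedes the 'Disc' state)
    qwinsta | Where-Object { $_ -match '\s(\d+)\s+Disc' } | ForEach-Object {
        if ($_ -match '\s(\d+)\s+Disc') { logoff $Matches[1] }
    }

    # Count remaining user sessions, ignoring the services/console/listener rows
    $count = @(qwinsta | Select-Object -Skip 1 |
        Where-Object { $_ -notmatch 'services|console|Listen' }).Count

    Write-Output "$(Get-Date -Format T): $count user session(s) remaining"
    if ($count -gt 0) { Start-Sleep -Seconds 300 }   # check again in 5 minutes
} while ($count -gt 0 -and (Get-Date) -lt $deadline)
```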

Task 3 – Switch to install mode
The final Task in this Module will switch the XenApp server to install mode.  Using the Command (Execute) task from the advanced folder we will use the following command:

change user /install

Make sure you tick Use Windows command interpreter.

This command is not strictly required anymore; XenApp is intelligent enough to recognise an install process and switch to install mode automatically. But since AM uses the system account, and since you won’t always want to just run an MSI or an EXE, it’s better to set it and be sure.

At this point your XenApp server is ready to accept whatever installs and modifications you need to apply.  You could use this set of tasks and finish with an email notification telling you the server is ready to manually have other modules run against it.  However, we are looking for a fully automated process.

Module 2
Check version and run Installation, configuration etc.

Module 2 is the set of tasks you want to run against the box.   Here I am going to use an MSI that I built using a combination of VSS and WixBuild to demonstrate a fully automated software update process.

To start with, I save the MSI as a resource.  The resource type should be “stored in datastore”; this way AM assigns a GUID to the resource, and I will explain why you need this GUID later.

Next I need to add a new Module and create the task, in my case this was a Windows Installer package task from the Provisioning folder.  On the settings tab click the Browse button next to the Filename field and select the resource, configure any other settings (such as Properties or Transforms on the Parameters tab) and click OK.

Note:  With MSI Installs I would always recommend using the Log tab to set the required level of logging and click “Remember as Default”.  This way you will have the installer log files available in the job history should you ever need to diagnose an issue.

Once you have added all the required tasks (including any reboots needed) you are almost ready, just one final module to create.

Module 3
Switch to execute mode and enable logon to the server

This is the final module.  All you need now is for the XenApp server to be made available.  For this you will need a module with 2 tasks (as in Module 1) with the following:

Task 1 – Command (Execute), but this time we will run “change user /execute”

Task 2 – Remote terminal server logins (Change).  This time we are going to Enable remote logins.

Run Book

Run Books are used to create a chain of jobs; each job can be run on a different agent in the same run book.

Add a Run Book, then on the Jobs tab add Module 1 (Disable logons, wait for the server to be free from users and switch to install mode), select the XenApp server as the agent and Click OK.  Repeat this for Module 2 (Check version and run Installation) and 3 (Switch to execute mode and enable logon to the server).

Once these are added, the jobs can be cloned and the Agent name changed so that you have Modules 1, 2 and 3 running on each XenApp server in turn, or use Teams and split the job to schedule the upgrades/updates in batches.

Schedule and New versions

This Run book can then be scheduled to run at an appropriate time using Job scheduling.

Once the schedule is in place all you need to do to update the Citrix farm is open up the resource files that Module 2 points to and update them.  Since AM has assigned a GUID to the resources the new files will automatically be associated with the task.  Next time the Job runs each Citrix server will disable logins, wait for each user to log off (or log off disconnected sessions) run the new MSI to update the software and re-enable itself.

You meanwhile, can sit back and relax.

If you want to avoid running the Modules during every schedule (as there may not have been an update to the software) then you can use a combination of evaluators and conditions to ensure that the specified tasks/modules do or do not run as required. Make sure the first task is an Installed programs query (found in the System state folder), then configure an evaluator that checks for the latest version number and sets a parameter to “True” or “False”.  Once this is in, you then set a condition on each individual task to run dependent on the evaluator.  Using this method you can quickly build up a single Run Book that runs all your regular Citrix maintenance.

Dan