RES Workspace Manager – JETCMD Example Use Case

We recently announced the release of our new free Job Execution Tool (JET). If you haven’t had a chance to read the announcement yet, you can find it here. In this post, I’m going to detail one of the various use cases for JETCMD. This should give you an idea of why we developed it in the first place and how powerful it can be.

My use case focuses on using JETCMD in conjunction with RES Workspace Manager to invoke a RES Automation Manager job at logoff time. The purpose of this job is to delete the locally cached user profile from the machine the user has just logged off from. With the constant debate raging on whether we should be using mandatory or local (temporary) profiles, we now have a way of deleting local profiles at logoff without fudging HKLM settings in the registry (perhaps another blog post?!). Regardless of how or why we should or shouldn’t be doing this, your immediate reaction at this point might be that RES Workspace Manager already integrates with RES Automation Manager! You would be correct; however, we have the following restrictions:

  1. The integration only allows job scheduling at user login or when launching a managed application; not at logoff;
  2. Run Books can’t be invoked – only Projects and Modules;
  3. You cannot dynamically pass job parameters at runtime, i.e. they are embedded into the Automation Task in RES Workspace Manager.

So how can JETCMD help us? As you’ve probably guessed, JETCMD allows us to both invoke the appropriate RES Automation Manager job at logoff and pass the required parameter(s) to delete the relevant profile. Here are the outline steps involved in accomplishing this task:

  1. Create a Module, Project or Run Book in the RES Automation Manager console that will be invoked by JETCMD (making note of the job GUID);
  2. As JETCMD is part of the free Virtual Engine Toolkit (VET), you’ll need to download it from here and install it;
  3. Add JETCMD.exe as a “Custom Resource” within RES Workspace Manager. Note: JETCMD.EXE can be found in the “%ProgramFiles%\Virtual Engine\JETCMD” or “%ProgramFiles(x86)%\Virtual Engine\JETCMD” folder, depending on whether it was installed on a 32 or 64-bit system;
  4. Optionally create “Environment Variables” within RES Workspace Manager, which will hold various command line parameters to pass to JETCMD such as TYPE, JOBGUID, USER and PASSWORD. Using this method makes the task you create in RES Workspace Manager extremely powerful and flexible;
  5. Create the “Execute Command” in RES Workspace Manager that runs JETCMD at logoff. Ultimately this will invoke and schedule the target RES Automation Manager job.

Now let’s cover each step in slightly more detail:

STEP 1

I’m not going to cover how you create a Module, Project or Run Book in the RES Automation Manager console in detail. Rather, I am assuming that you have already created one that you would like to be invoked by JETCMD (and have noted down the relevant job GUID!). For the sake of my use case, I’m using a great tool called DelProf2 from Helge Klein (@HelgeKlein) within a RES Automation Manager Module, via a “Command (Execute)” task.


The task’s command line might look a little cryptic, so I’ll briefly explain each element (a reconstruction follows this list):

  1. The $Workspace{E2EBA027-FA6B-42AB-9358-C7656E99822E} reference is just a resource link that points to the DELPROF2.EXE that has been added as a resource within the RES Automation Manager console. When the RES Automation Manager task executes it will download the executable and run it;
  2. The /U switch will be passed to DELPROF2 and signifies that we would like it to run unattended, i.e. with no confirmation;
  3. To only delete a particular user profile, the /id switch is used to include only profile directories whose names match the given pattern;
  4. Used in conjunction with the /id switch, $[UserName] is the RES Automation Manager parameter holding the username of the profile I wish to delete (refer to JET_PARAMNAME in Step 5 for more details).
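Putting those elements together, the task’s command line is along these lines (a reconstruction based on the points above – the $Workspace{…} token resolves to the downloaded DELPROF2.EXE at runtime, and /u and /id are DelProf2’s unattended and include-pattern switches):

[code]$Workspace{E2EBA027-FA6B-42AB-9358-C7656E99822E} /u /id:$[UserName][/code]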

STEP 2

Nothing really to explain here other than download and install VET on your machine – simples!

STEP 3

Before we carry on with this step, let’s quickly describe what a “Custom Resource” is. In short, it’s a file imported into the RES Workspace Manager datastore that is distributed to agents and can be referenced via the %RESCUSTOMRESOURCES% environment variable; see the RES Workspace Manager Administration Guide for the full definition.


To import JETCMD as a custom resource, just follow these steps:

  1. In the RES Workspace Manager console, click on “Administration” in the navigation pane;
  2. Click on the “Custom Resources” node in the same left hand pane;
  3. Right click in the right hand pane, select “New” and browse to the location of the JETCMD.exe;
  4. The JETCMD.exe should now appear as a “Custom Resource” in the RES Workspace Manager console.

STEP 4

Now while I did say this step was optional, I would highly recommend using “Environment Variables” to hold the parameters that JETCMD will use. This gives more visibility of these values and the potential to change them dynamically depending on the ACLs applied to the “Environment Variables”. For example, utilising environment variables will allow you to change the target RES Automation Manager job based on location or security group without having to create additional tasks!

  1. In the RES Workspace Manager console, click on “Composition” in the navigation pane;
  2. Click on the “Environment Variables” node in the same left hand pane;
  3. Right click in the right hand pane, select “New”;
  4. Give the “Environment Variable” a suitable Name, Administrative Note and Value and change the ACLs etc. accordingly. In my instance I’ll accept the defaults;
  5. I’m going to create six “Environment Variables” to make managing my “Execute Command” as simple and flexible as possible, as you can see below;
  • JET_TYPE = the RES Automation Manager job type to invoke, i.e. Module, Project or Run Book;
  • JET_JOBGUID = the Module, Project or Run Book GUID, which can be found in the RES Automation Manager console;
  • JET_RESAMUSER = the RES Automation Manager console user to authenticate as;
  • JET_RESAMPWD = the password for the user specified in JET_RESAMUSER;
  • JET_PARAMNAME = parameter name(s) specified within the RES Automation Manager job;
  • JET_PARAMVALUE = parameter value(s) used in conjunction with JET_PARAMNAME (refer to $[UserName] in Step 1).
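To illustrate, the values for my use case might end up looking something like this (illustrative values only – the GUID is a placeholder for your own Module’s GUID, the obfuscated password comes from JETPWD.EXE as covered in the JETCMD post, and %USERNAME% resolves to the user logging off):

[code]JET_TYPE       = module
JET_JOBGUID    = {00000000-0000-0000-0000-000000000000}
JET_RESAMUSER  = RESAMUser
JET_RESAMPWD   = <obfuscated password generated with JETPWD.EXE>
JET_PARAMNAME  = UserName
JET_PARAMVALUE = %USERNAME%[/code]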

STEP 5

The last step involves creating an “Execute Command” in the RES Workspace Manager console that will be executed at logoff.

  1. In the RES Workspace Manager console, click on “Composition” from the left hand pane;
  2. Click on the “Execute Command” node in the same left hand pane;
  3. Right click in the right hand pane, select “New”;
  4. Adjust the “Execute Command” options so “Run Hidden” and “Run Task At Logoff” are selected;
  5. The most important factor here is obviously the command line specified. As I’m using “Environment Variables”, my command line looks like this: [code]%RESCUSTOMRESOURCES%\JETCMD.EXE /type:%JET_TYPE% /jobguid:%JET_JOBGUID% /agent:LOCAL /user:%JET_RESAMUSER% /password:%JET_RESAMPWD% /encrypted /paramname:%JET_PARAMNAME% /paramvalue:%JET_PARAMVALUE%[/code] You’ll notice I’m also using the %RESCUSTOMRESOURCES% “Environment Variable”, which is used internally by RES Workspace Manager to resolve the location of the “Custom Resources” folder.

And there you have it, a common use case for JETCMD with RES Workspace Manager. I hope by now you can see how flexible and powerful JETCMD can be.

We’d love to hear your comments!

Nathan.

RES Automation Manager Quick Tip – appending to existing registry values

I was recently asked (by one of our existing RES Automation Manager customers) how to go about appending to an existing registry value using RES Automation Manager. Well, the answer is simple really – by using the @REGISTRY function. I’ll detail how to use this function in this blog post.

  1. Firstly start the RES Automation Manager console;
  2. Select “Modules” from the left hand pane, Right Click and select “Add”;
  3. Give the module a suitable name, then select the “Tasks” tab, Right Click and select “Add”.
  4. Select the task “Registry Setting (Apply, Query)” and select “Apply”.
  5. You will now be presented with a dialog where you can select various methods to add the required registry value you wish to append to. In my example I’m going to APPEND a new string to the START of the existing USERINIT registry value. Select “HKEY_LOCAL_MACHINE” from the left hand pane, Right Click and select “Open HKEY_LOCAL_MACHINE…”.
  6. Browse to “HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon” and select “Userinit”; this will add the value to the current dialog box.
  7. Now we are going to add the @REGISTRY function to the Userinit value by Right Clicking on “Userinit” in the right hand pane and selecting “Modify”.
  8. In the “Value Data” field, Right Click and select “Insert Functions” > “@[REGISTRY(;)]”.
  9. RES Automation Manager now provides a nice GUI that allows you to browse to the registry value you wish to retrieve when the job is executed on the agent. In my case this is the registry value I selected in Step 6, as this is the value I’d like to append to.
  10. Now I simply add the new value that I wish to append, either before or after the @REGISTRY function depending on where I’d like my value to appear – in my case this value is “MyNewValuetoAppend”: [code]MyNewValuetoAppend,@[REGISTRY(HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon)][/code]
  11. Once the job has been scheduled and completed, the resulting registry value looks like this: [code]MyNewValuetoAppend,C:\Windows\system32\userinit.exe[/code]

That’s all there is to it!

Nathan

Introducing JETCMD

Complementary to the recently announced Job Execution Tool (JET), there is also a command line version. The purpose of this tool is slightly different to the GUI version: whilst the GUI version is all about passing parameters to modules, projects or run books chosen by the end user at run time, JETCMD is all about automation! Note: just like JET, the RES Automation Manager console (WMC.EXE) must be installed to schedule jobs with JETCMD.


So how is JETCMD.EXE different from the standard WMC.EXE?! There are three big differences:

  1. To schedule a job via WMC.EXE you have to specify the target RES Automation Manager agent or team GUID. If you just want to schedule a job on the local agent there is no way to specify this. JETCMD allows you to simply specify the local agent without knowing its GUID or name in advance.
  2. There is no way to pass an encrypted password to WMC.EXE. JETCMD allows you to pass an encrypted password so it’s relatively safe to include in scripts.
  3. Parameters cannot be passed to a RES Automation Manager dispatcher without the use of a .CSV file. Therefore, it’s difficult to pass parameter values via the command line.

So why and where would you use it?! As always, we only develop additions to the Virtual Engine Toolkit when we or our customers need them! We primarily use JETCMD during Windows 7 (soon to be Windows 8) builds but I’m sure there are many other use cases…

As an example, during Windows 7 deployment we might wish to automatically schedule a RES Automation Manager module, project or run book post setup, e.g. in SetupComplete.cmd. Unfortunately, we have no way of specifying the job to run on the local agent via WMC.EXE as we don’t know its GUID – DOH! To get around this restriction we can just use JETCMD as a wrapper for WMC.
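As a minimal sketch (the GUID below is a placeholder for your own job’s GUID, and the path assumes the default VET install location), a SetupComplete.cmd entry might look like this:

[code]REM %WINDIR%\Setup\Scripts\SetupComplete.cmd
"%ProgramFiles%\Virtual Engine\JETCMD\JETCMD.EXE" /type:project /jobguid:{00000000-0000-0000-0000-000000000000} /agent:local[/code]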

Another great example is when you’d like to schedule a RES Automation Manager module, project or run book as a logoff task in RES Workspace Manager, e.g. to delete any cached local profiles. The RES Automation Manager integration in RES Workspace Manager only allows for tasks to be scheduled at login or when launching managed applications – using JETCMD as an “Execute Command” at logoff gets around this drawback.

Here are some examples. To schedule a run book on the local agent using pass-through authentication we can use something like this (obviously the run book needs to exist!):

[code]JETCMD.EXE /type:runbook /jobguid:{5E5906A7-00D5-49E4-909B-CEB1810BF37} /agent:local[/code]

Or if you want/need to use RES Automation Manager authentication we could specify this instead:

[code]JETCMD.EXE /type:runbook /jobguid:{5E5906A7-00D5-49E4-909B-CEB1810BF37} /agent:local /username:RESAMUser /password:RESAMPassword[/code]

If you want to pass parameters to the same run book the command line would look something like this:

[code]JETCMD.EXE /type:runbook /jobguid:{5E5906A7-00D5-49E4-909B-CEB1810BF37} /agent:local /username:RESAMUser /password:RESAMPassword /paramname:"""MessageTitle""","""MessageBody""" /paramvalue:"""Job Successful""","""The target job has finished"""[/code]

You may ask why we don’t utilise team membership for this? The simple answer is that if the RES Automation Manager agent is installed within an image (or however you identify RES AM agents), automatically invoking a job can be problematic. The detailed answer is that team membership rules won’t necessarily be triggered if the agent is not seen as a “new” agent. A classic example of this is when a machine is re-imaged and it’s already a member of the target team. Humans are rubbish at remembering to remove a RES Automation Manager agent from a team before re-imaging (as well as removing the computer’s AD account!).

As we’re security conscious, we also don’t like specifying the RES Automation Manager console username and password in clear text if we can avoid it. Therefore, we allow you to obfuscate the password so users cannot manually open the RES Automation Manager console with credentials embedded in any scripts. Encoding the password is optional, so the choice is entirely down to you.

If you do wish to obfuscate the password then you’ll need to use JETPWD.EXE (yet another tool!). This tool is used to generate obfuscated passwords for both JET and JETCMD.

These new tools will be included in the upcoming Virtual Engine Toolkit v1.2 release. Until then, if you’d like a copy please complete the Contact Us form or email me and we’ll happily send you a copy!

Introducing the Job Execution Tool (JET)

We’re pleased to announce that we have beta copies of the new RES Automation Manager Job Execution Tool (JET) and Job Execution Tool Command Line (JETCMD) available. The premise of these two new utilities is to overcome a particular shortcoming in the RES Automation Manager product suite: unattended job scheduling. These new tools overcome the following problems:

  1. When scheduling a job unattended you have to specify the target agent that you wish a job to run on as there is no way to flag the “local” machine.
  2. When integrating RES Automation Manager jobs with RES Workspace Manager you have to supply parameter values up front. There is no way to prompt the user for parameter values from RES Workspace Manager.
  3. When parameter values are required at run-time, jobs have to be scheduled via the RES Automation Manager GUI.

JET

The main JET utility is a graphical interface, dynamically built around an XML configuration file. Its initial intention was to schedule RES Automation Manager jobs via managed RES Workspace Manager shortcuts, with the ability to prompt for parameter values. Many customers have asked whether we can provide access to Run Books without granting access to the RES Automation Manager console. Unfortunately for us, there is no way to prompt for RES Automation Manager parameters when integrated with RES Workspace Manager. We can integrate parameters, but they’re a one-way operation and set in stone, i.e. they cannot be changed on the fly, only configured up front.

JET overcomes this issue and permits execution of Modules, Projects or Run Books on the local agent without having to know the local agent’s name or GUID, or launch the RES Automation Manager console. If you don’t want the target job to run on the local agent, that can all be predefined in the target Run Book.

Here are a couple of scenarios where we have used JET to-date:

  1. Invoking AD object creation tasks (users etc.) via a managed desktop shortcut without requiring access to the RES Automation Manager console. This provides a simple, self-service menu and reduces training, as Helpdesk staff don’t have to understand yet another console.
  2. During imaging processes, allowing technicians to select the target department/application set for the end user. Otherwise, the technician would have to manually schedule a RES Automation Manager project on the target device – and know its name, GUID and so on.

The graphical interface is fully customisable (within reason) so you can change all the wording and even the main graphic if required. This helps end-user adoption as it can be branded with the corporate imagery etc. As an example, we’ve used it (customer artwork removed!) at the end of the imaging process: after deployment, the relevant departmental applications are installed by selecting the department.


These new tools will be included in the upcoming Virtual Engine Toolkit v1.2 release. Until then, if you’d like a copy please complete the Contact Us form or email me and we’ll happily send you a copy!

Move Machine Based Context Menus to Per User (Part I)

WARNING! This post requires you to edit the registry. Using Registry Editor incorrectly can cause serious problems that might require you to reinstall your operating system. Virtual Engine cannot guarantee that problems resulting from the incorrect use of Registry Editor can be solved. Use Registry Editor at your own risk. Be sure to back up the registry before you begin. If this hasn’t scared you off keep reading….

In my time installing and configuring applications for multi-user environments like XenApp or RDS, I’ve come across many applications that create context menus within Windows Explorer to help a user quickly perform a function. WinRAR, for example, adds context menus in Windows Explorer that allow the user to easily create a .RAR file having selected file(s) or folder(s).

Generally these context menus are machine based, i.e. any user that logs in to a XenApp server will be able to see and use them. On the face of things you might ask yourself why this would be a problem? Well, suppose the application is strictly licensed for particular/named users. You wouldn’t want everyone having the option to use the menus, otherwise you would need to license the application for all users! In this case what you’d really like is to have them available only to users who are licensed, or who you deem need them. A typical example might be Adobe Acrobat Professional, which adds a context menu to combine documents into a single PDF.

The good news is there is a way of moving them from machine based to per user with some fancy manipulation of various registry keys. So let’s begin, using our example of WinRAR, and see how this can be done.

Firstly, we need to understand where context menus are located within the registry. From my experience, when you right click on file(s) within Windows Explorer the context menus will be found in:

HKEY_CLASSES_ROOT\*\shellex\ContextMenuHandlers

WinRAR, for example, creates its own WinRAR subkey beneath ContextMenuHandlers.


When you right click on folder(s) within Windows Explorer the context menus will be found in one or both of the following registry locations:

HKEY_CLASSES_ROOT\Folder\ShellEx\ContextMenuHandlers
HKEY_CLASSES_ROOT\Directory\ShellEx\ContextMenuHandlers

Again, WinRAR creates WinRAR subkeys in both of these locations.


So now we know where they’re located, open up the Registry Editor (REGEDIT.EXE) and export the context menu registry keys that you would like to make per user to .REG files (saving them to a safe location should you need to revert!).

What we need to do next is take a copy of those same registry (.REG) files so we can edit them. Open the copies in, say, Notepad and replace HKEY_CLASSES_ROOT with HKEY_CURRENT_USER\Software\Classes (this is where the equivalent registry keys are kept for each user). Using WinRAR as the example, it should now look something like the sketch below. Once completed, save and close the .REG file.

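As a reconstruction (the CLSID below is a placeholder – yours will be whatever value you exported from HKEY_CLASSES_ROOT), the edited file might contain:

[code]Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Classes\*\shellex\ContextMenuHandlers\WinRAR]
@="{00000000-0000-0000-0000-000000000000}"[/code]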

Now we get dangerous (well, not really if you’re in the registry all the time adding, deleting and generally tinkering – sound familiar?!). The next step requires us to alter the permissions of the context menu registry keys located in:

HKEY_CLASSES_ROOT\Folder\ShellEx\ContextMenuHandlers
HKEY_CLASSES_ROOT\Directory\ShellEx\ContextMenuHandlers
HKEY_CLASSES_ROOT\*\shellex\ContextMenuHandlers

Again, using WinRAR as the example, I would open REGEDIT.EXE and browse to the following locations:

HKEY_CLASSES_ROOT\Folder\ShellEx\ContextMenuHandlers\WinRAR
HKEY_CLASSES_ROOT\Directory\ShellEx\ContextMenuHandlers\WinRAR
HKEY_CLASSES_ROOT\*\shellex\ContextMenuHandlers\WinRAR

Modify the ‘Users’ permissions from ‘Read’ to ‘Deny’ on each registry key listed above.


Having changed those permissions, you have successfully removed the context menus on a per machine basis – or, more precisely, denied access to them for users and administrators. I’m no fan of doing things manually, so I try to automate where possible. My tool of choice for changing registry key permissions in an automated fashion would be RES Automation Manager, which has a built-in task to manage registry key functions, e.g. registry permissions. Unfortunately there appears to be a bug – which has been logged with RES Support – in RES Automation Manager for this task when the registry key contains “*”, i.e.

HKEY_CLASSES_ROOT\*\shellex\ContextMenuHandlers\WinRAR

So I turned to the ever reliable SetACL from Helge Klein (follow Helge on Twitter here) to set the required registry permissions, and added the command line to my RES Automation Manager job. For any existing users of RES Automation Manager, I’ve attached a handy building block (just click on the big red brick) that can be used and adapted to change those permissions as described above.
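For reference, the SetACL command line is along these lines (a sketch – run one line per registry key, and note the ‘Users’ trustee name will differ on non-English systems):

[code]SetACL.exe -on "HKCR\*\shellex\ContextMenuHandlers\WinRAR" -ot reg -actn ace -ace "n:Users;p:read;m:deny"[/code]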

[wpdm_file id=9]

In Part II of this blog post I’ll describe how to go about targeting these same context menus at specific users.

Enjoy!

Nathan

Automating Citrix Provisioning Server Install with RES AM

Here is a blog post I put together on automating the build of Citrix Provisioning Services using RES Automation Manager 2012. Before we get into the details, a big thanks to everyone whose resources and solutions helped me out along the way.

Before you can begin you will need to make sure you have the following prerequisites in place:

  • Provisioning Server Software (PVS 6.1 used for this example);
  • Windows Server 2003 upwards (Windows 2008 R2 SP1 used in this example);
  • .NET 3.5 or higher is installed;
  • RES Automation Manager 2012;
  • The latest Citrix Licensing server.

I’ve split the automated process into two distinct parts – creating the PVS database and installing PVS – to make it easier to digest. If you’re lazy or just want to crack on, you can just download the building blocks and get going! Note: you will need to update the resource references to the PVS 6.1 installation files.

Creating the PVS Database

Before we can automate the PVS installation, we need to have a database in place for the PVS servers to connect to. Unfortunately for us there’s no easy way to accomplish this, as we need to generate a SQL script with our required database values. As we’re invoking the creation process from RES Automation Manager 2012, we can utilise parameters to prompt the administrator for these values at run time.

To create the SQL script we first need to install the Provisioning Services software on a clean Windows 2008 R2 server (or use an existing installation if you have one). Once installed, run C:\Program Files\Citrix\Provisioning Services\DBscript.exe to launch the Provisioning Services Database Script Generator. Exciting stuff, I know!


If we complete the DBscript details with parameter placeholders for the database name and farm name, DBscript will create the required .SQL script with values we can use within our RES Automation Manager jobs. Click OK and it will create the CreateProvisioningServerDatabase.sql file in the path specified, complete with embedded placeholders.

We can now import this file as a resource into the RES Automation Manager console. Note: remember to tick the ‘Parse Environment variable and parameters’ checkbox. If you forget to do this, we’ll attempt to create a database with a name of $[PVSDB], which probably won’t work (not that I’ve checked!).

To create the required SQL database we can utilise the CreateProvisioningServerDatabase.sql file with the built-in RES Automation Manager database connector task(s), or via SQLCMD on the local Microsoft SQL instance. As we’re cheap and can’t assume that you’re licensed for the relevant connector, we’ve utilised SQLCMD in the building blocks. For more details on this, download them and have a look.
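In essence, the SQLCMD invocation is along these lines (a sketch assuming Windows authentication against the local default instance, with the .sql resource downloaded to the working directory):

[code]SQLCMD -S localhost -E -i CreateProvisioningServerDatabase.sql[/code]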

After the database has been created, we need to add SQL permissions to the database (if using a network user for the SOAP and STREAM services). This is achieved with a couple of SQL statements (see the building blocks for more information). If we’re using a Windows service account to run these services, the user will be configured later during the install… And now the fun begins!

Installing and Configuring PVS

Now that the database is created we can move on to installing the software, then configuring and adding servers to the farm. Installing the software is no problem; however, configuring and adding servers to the farm is a bit more involved. The method I used for configuring the servers utilises the configwizard.ans file, which holds all the configuration items. Running "%PROGRAMFILES%\Citrix\Provisioning Services\configwizard.exe" /s creates the answer file at C:\ProgramData\Citrix\Provisioning Services\configwizard.ans.

Once we have the configwizard.ans file, we can edit it and embed our RES Automation Manager 2012 parameters within it. If you’d like to know what options can be configured in the answer file, run configwizard.exe /c – the configuration wizard will write a C:\ProgramData\Citrix\Provisioning Services\configwizard.out file. Again, all this information is in our building blocks.

I used two different answer files: one for the first server joining the farm and another for all subsequent servers. Below is an example of the first server’s configwizard.ans file:

[code]IPServiceType=$[IPServiceType]
PXEServiceType=$[PXEServiceType]
FarmConfiguration=2
DatabaseServer=$[DBSERVER]
DatabaseInstance=
FarmExisting=$[PVSFARM]
ExistingSite=$[PVSSITE]
ADGroup=$[DOMAIN]/Builtin/Administrators
Store=$[PVSSTORE]
DefaultPath=$[STOREDRIVE]$[STORELOCATION]
UserName=$[SERVICEACCOUNTUSER]
UserPass=$[SERVICEACCOUNTUSERPASSWORD]
network=$[NETWORKACCOUNT]
Database=$[DBCONFIGUSER]
PasswordManagementInterval=7
StreamNetworkAdapterIP=$[STREAMINGSERVERIP]
IpcPortBase=6890
IpcPortCount=20
SoapPort=54321
BootstrapFile=C:\ProgramData\Citrix\Provisioning Services\Tftpboot\ARDBP32.BIN
LS1=$[STREAMINGSERVERIP],0.0.0.0,0.0.0.0,6910
AdvancedVerbose=0
AdvancedInterrultSafeMode=0
AdvancedMemorySupport=1
AdvancedRebootFromHD=0
AdvancedRecoverSeconds=50
AdvancedLoginPolling=5000
AdvancedLoginGeneral=30000[/code]

Once the answer file(s) have been created and modified, import them into the RES Automation Manager resources. Note: remember to select the ‘Parse Environment variable and parameters’ checkbox!

Finally, to automate the actual PVS install, we need to make sure we download these resources to the C:\ProgramData\Citrix\Provisioning Services\ directory on the target server, then kick off the configuration wizard to apply the configuration by running configwizard.exe /a. Once complete, the services should start automatically, and when you start the PVS console and connect you should be presented with the new farm – well, hopefully anyway!
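In other words, the final task in the module boils down to running (assuming a default PVS install path):

[code]"%PROGRAMFILES%\Citrix\Provisioning Services\configwizard.exe" /a[/code]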

Problems Encountered

If you do have problems using the answer file and the install fails, the best place to start troubleshooting is the C:\ProgramData\Citrix\Provisioning Services\Log directory. If it all goes wrong, you’ll notice there is only one file here: configwizard.log. At the end of this file it should hopefully give you some meaningful reason for the failure. If all works fine and the services start, you should see around 8 log files and have a big smile on your face :D.

I did have other issues whilst getting this to work. Here are a few notes in case they help:

  • No device license available: when a new machine is booted from Provisioning Server, you will see an error in the streamprocess log on the PVS server, and a pop-up message on the device will say “No device License currently available for this computer a system shutdown will be initiated in 96 hours”. I found the resolution to this problem was to upgrade the license server to the latest build.
  • The PVS console does not install via an AM job – ensure that UAC is disabled and use a security context to run the job instead of the local System account.
  • After a server install I could not mount vDisks on the PVS server and got an error similar to “Cannot mount Vdisk mapi error”. Looking at Device Manager, I noticed that the Citrix Virtual Hard Disk Enumerator driver was not installed correctly. To resolve this, first remove the device, then go to %PROGRAMFILES%\Citrix\Provisioning Services\Drivers, right click and install cfsdep2.inf; then go back to Device Manager, add legacy hardware, select “Have Disk” and point to the same location, selecting cvhdbusp6.inf. It should then install the device without any issues. Alternatively – the preferred option with RES AM – create a module to download CFSDep2.cat, CFSDep2.inf and CFSDep2.sys to C:\Windows\System32\drivers before installing Provisioning Server, and all should be okay.
  • When using a service account, make sure the user is given the required permissions, i.e. read/write on the PVS store directory on the PVS servers, and db_datareader and db_datawriter on the database (although the latter can be handled if you select “configure user for database”).

The building blocks have now been updated, as there was a problem with the service account password passing through to the answer file; this should now be resolved. I’ve also added a module to remove the answer file, as the password it contains is in plain text.

Hope this helps. Enjoy! Simon

[wpdm_file id=7]

Upgrading RES AM Linux Agents

There comes a time when RES Automation Manager Linux agents need upgrading. A typical example is with the GA release of RES HyperDrive. Now that RES Automation Manager 2012 SR1 has been released, there is a newer Linux agent that isn’t (currently) in the RES HyperDrive appliance.

If you’re like me, you’ll want to upgrade this. The Getting Started with RES Automation Manager Agent for Linux guide will point you in the right direction, but unless you’re a fairly competent Linux administrator you may struggle with certain aspects. For example, to upgrade the RES AM Linux agent all you need to do is:

1. Stop the currently installed RES Automation Manager Agent for Linux by using the command /etc/init.d/resamad stop.
2. Uninstall the RES Automation Manager Agent for Linux.
3. Install the new version of the RES Automation Manager Agent for Linux.
4. Start the new RES Automation Manager Agent for Linux.

So there you have it – simple!

I’ll actually take you through the individual steps to upgrade the Linux agent installed in a RES HyperDrive appliance. These steps are equally applicable to any Linux installation but this will no doubt be a common scenario. As an overview the steps required are:

  1. Find installed RES Automation Manager Agent for Linux version;
  2. Uninstall existing RES Automation Manager Agent for Linux;
  3. Copy new RES Automation Manager Agent for Linux;
  4. Extract RES Automation Manager Agent for Linux;
  5. Install RES Automation Manager Agent for Linux;
  6. Configure RES Automation Manager Agent for Linux;
  7. Start the RES Automation Manager Agent for Linux.

Connecting

Firstly you’ll need to connect to the RES HyperDrive virtual appliance via SSH (see Remotely Administering RES HyperDrive) or connect to the console session.

Uninstall Existing Version

To uninstall the existing RES Automation Manager Agent for Linux you’ll need to find the currently installed version before you can actually remove it. To find the existing version run:

[code]rpm -qa | grep -i res-am[/code]
This will display the current version. Make a note of it as you’ll need it in a minute or two!


To uninstall the agent run:

[code]rpm -e <res-am-agent-version>[/code]

The <res-am-agent-version> is the value listed by the first command, for example res-am-agent-6.5-0.102354. If successful, the agent service will be stopped and the agent uninstalled.

Note: I have seen multiple agents listed in both the RC2 and GA releases. It looks like an oversight and the 6.4-2 version is not actually installed. If you want to remove both entries, the second rpm -e command may give you an error, but the entry will be removed from the list.

Copy Agent Files

You will need to download the latest Linux agent version from the RES support portal as they’re not included in the management console like the Windows clients. Once you’ve downloaded the tarball, copy it to the RES HyperDrive appliance (see Transferring Files to RES HyperDrive) into the /home/hyperdrive directory.

From your SSH/console session run:

[code]mv /home/hyperdrive/res-am-agent-<version>.tgz /tmp[/code]

This will move the file into the /tmp directory. Note: if you don’t have permission to do this, run the 'su -' command first, enter the root password and try again.

Extracting the Agent Installer

As the RES Automation Manager Agent for Linux is compressed it needs extracting before it can be installed. Change the working directory and extract the archive by running the tar command:

[code]cd /tmp
tar xvzf ./res-am-agent-<version>.tgz[/code]

This expands the files into the /tmp/AIX, /tmp/RedHat and /tmp/Suse directories. As CentOS is based on RedHat 5, we need the RedHat agent. Install the new agent version by running:

[code]rpm -i /tmp/RedHat/Release5/x86_64/res-am-agent-<version>.x86_64.rpm[/code]

Configuring the Agent

To connect the RES Automation Manager Agent for Linux, we either need to enable auto discovery or specify a Dispatcher list. If you wish to enable auto discovery, you can configure the agent with the following command:

[code]/usr/local/bin/resamad -d m[/code]

If you wish to specify a Dispatcher run this instead:

[code]/usr/local/bin/resamad -dd<Dispatcher>[/code]

For example, if your Dispatcher was called RESAMDISP01 (with an IP address of 192.168.0.100) you could either run

[code]/usr/local/bin/resamad -ddRESAMDISP01[/code]

or

[code]/usr/local/bin/resamad -dd192.168.0.100[/code]

Starting/Stopping the Agent

After the upgrade you’ll probably need to start the agent. To do this you can simply run:

[code]service resamad start[/code]

If you check the RES Automation Manager console you should see your agent online – version 6.00.111676 is the RES Automation Manager 2012 SR1 Agent for Linux.


If you need to restart the RES Automation Manager Agent for Linux run service resamad stop and then service resamad start. Why there is no service resamad restart command I don’t know! If I wasn’t lazy I’d create the required script but as the appliance is supposed to be “rip and replace” I don’t think I’ll bother 🙂
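Until then, a restart is simply a stop followed by a start:

[code]service resamad stop
service resamad start[/code]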

Phew – hopefully someone finds this useful? Iain

RES Automation Manager Emergency Patch Management

I previously covered the reasons why you probably wouldn’t use RES Automation Manager for patch management (see here). Max Ranzau (AKA @RESguru) made a great point that you can certainly use Automation Manager to push out individual patches easily. With the release of the Microsoft RDP critical patch MS12-020, and an exploit apparently in the wild, RES Automation Manager certainly still has its place in your patch management strategy.

Assuming that you haven’t exposed port 3389 directly to the internet, you may feel that you’re somewhat “safe.” I actually think that the greater risk comes from worms running inside the corporate network firewall. All it takes is for one machine to be compromised… How many desktops and servers do you have inside the corporate network with RDP access enabled?

Microsoft provides some workarounds that will give you time to test the patch prior to deployment. Fortunately, RES Automation Manager gives you the following options for dealing with this exploit using the built-in Automation Manager tasks/tasklets (see the sketch after this list for options #2 and #4):

    1. Deploy the patch within minutes and/or
    2. Disable RDP connections completely and/or
    3. Enable/modify the Windows firewall rules to block RDP connections and/or
    4. Enable Network Level Authentication for RDP connections.
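As an illustration of options #2 and #4, both boil down to single registry values that are trivial to push out with an Automation Manager task (a sketch – test before deploying, as requiring NLA will block older RDP clients):

[code]REM Option 2: disable RDP connections completely
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 1 /f

REM Option 4: require Network Level Authentication for RDP connections
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v UserAuthentication /t REG_DWORD /d 1 /f[/code]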

One thing is for certain: you need to be acting on and mitigating this risk now. I think it’s only a matter of time before things get interesting. Who remembers Slammer?! I know people who are still mentally scarred by its long-lasting effects!

GPOs could help you with some of this, but nothing is going to deploy any of (or a mixture of) the above workarounds within minutes. How will you be sure that your workarounds are in place on all machines? RES Automation Manager will give you near instant feedback on which tasks failed and provide you with the data to target those computers. Remember, if you use RDP/Remote Assistance for support then you’re probably limited to option #1 (or maybe #4).

If you don’t have RES Automation Manager today, you probably wish you did! You’ve been warned…

Iain

PVS Image Management

There has been a bit of banter on the Twittersphere about how people manage and document their PVS images. It was suggested (by more than just me!) that RES Automation Manager could be utilised for this task. This post is not a best practice guide on how to create, update or document your images, but rather a use case on how and why we use/recommend RES Automation Manager. Heaven forbid, you might even decide to do away with Provisioning Services as a result. Either way, RES Automation Manager will play very nicely with or without PVS – but I couldn’t fit that into 140 characters!

Provisioning Services Private Image Management

Maintaining the gold or private mode PVS image can be a complex task for a number of reasons. Simplifying any of these potential hurdles can only be a good thing, right?

  1. A certain level of skill is required to both create and maintain images. There are numerous tasks that need to be completed and in some cases, performed in a particular order. As a result, this task is typically left or assigned to the senior administrators.
  2. Application upgrades can taint or stain the master image. Some applications require an uninstallation of the old version and installation of the new product MSI. I don’t need to tell you that uninstallers aren’t always reliable, nor do they always clean everything out when run!
  3. How are changes to the gold image documented to ensure that they’re incorporated into all other PVS images? It is typical that there will be more than one image for deployment. For example, hardware differences will typically require separate images.
  4. Ad-hoc and emergency changes can wreak havoc with your PVS images. How quick and easy is it to push an update out to 100 XenApp servers streamed from a central image? If we make changes whilst the servers are running, they’ll be lost when the write cache is erased, meaning we either have to reapply the change after every reboot or update our gold image pronto! This gets a lot more interesting if the servers are rebooted on a nightly basis and the write cache cleaned!

RES Automation Manager

If you know me by now, you probably know that I’m going to say that RES Automation Manager is the answer to all your prayers! Now whilst it can certainly address the above “issues” (and I would recommend it in conjunction with Provisioning Services any day of the week), there are other processes and solutions that may address one or more of them, and deploying RES Automation Manager won’t automagically fix them. A good example of this is documentation: if your internal processes mandate that all changes are documented and you bypass this process, there is nothing to stop you bypassing it even if Automation Manager is installed!

What RES Automation Manager Won’t Do

Thought I’d better get this bit out of the way before you get all the way to the end and are disappointed! RES Automation Manager is a Run Book Automation tool and not an imaging/deployment tool. This means that we cannot (directly) deploy an Operating System from RES Automation Manager. Fortunately for us, there are many technologies out there that can, e.g. Windows Deployment Services/Microsoft Deployment Toolkit, which BTW can be combined with RES Automation Manager – take a look at this White Paper. Why reinvent the wheel?!

What RES Automation Manager Will Do

So once we have our Operating System deployed and the RES Automation Manager agent installed (we can do this with WDS/MDT as mentioned earlier) what benefits will this give us? Well, at a simplistic level, RES Automation Manager can automate the entire server configuration and application deployment process. This process can also include installing XenApp and XenApp Prep as well as any other applications. This obviously takes some additional time but gives us a clean, repeatable process for deploying a XenApp server from scratch. It’s a strategic decision and not a tactical one!

Why is this important? Typically it comes down to the issues listed above, so let’s take them one at a time…

Issue #1

RES Automation Manager can reduce this complexity by removing Provisioning Services altogether. I’m not suggesting that you remove this from your infrastructure – not even for one minute. However, if you don’t need to have a clean image after every reboot, getting shot of PVS may be an option. We have automated the complete server deployment and can typically provision a new server in a few hours from start to finish: Operating System, XenApp and applications. OK, it’s a few hours of time, but there is no user interaction required. I’m guessing that it’s probably not that often that you need to add a new server within 30 minutes?

This benefits the typical IT department as these are now regular servers: they’re supported in the same way as other servers and they have a proper OS install etc. There are downsides too. Now we need to patch and maintain multiple OS instances and not just one master image – isn’t that part of the reason you deployed Provisioning Services in the first place?

Issue #2

By having a repeatable process for building our XenApp server(s) from scratch, we can avoid tainting our image. If we need to cut a new image, we can deploy a completely clean server and deploy the required applications as needed. We don’t need to uninstall and reinstall or upgrade applications. I’m not advocating this as a best practice, but I know lots of admins who are a lot happier with this process. It doesn’t need to be performed for all updates, but you now have a choice as to whether you update the master image or cut a new one. If you haven’t automated the entire deployment and configuration process, recreating a new image from scratch probably doesn’t make you feel warm and fluffy inside!

When you finally get run over by a bus (it’s going to happen one day, as everyone keeps saying), pretty much anyone with an ounce of intelligence can deploy a new server or reverse engineer the Modules and Tasks in the Run Book to discover how things are tied together.

Issue #3

By virtue of automating the entire configuration and deployment process with RES Automation Manager, you have actually documented every step in the process. RES Automation Manager includes the ability to create an Instant Report of any or all Run Books, Projects and/or Modules. These reports are very detailed (small example here) and typically run to 1,000+ pages. For us consultants, this feature alone is worth its weight in gold. Did I mention that it’s available in RES Workspace Manager too?

Issue #4

Finding out when, and by whom, changes were made. Whether they’re changes to the gold image or ad-hoc emergency fixes, the audit trail is vitally important, especially where change management processes are concerned. Would you be surprised to hear that RES Automation Manager has an in-built Audit Trail which allows you to view all actions performed in RES Automation Manager? How handy is that when a witch hunt is on (oh, that never happens now, does it?!).

Issue #5

As usual I’ve saved the best until last – and you didn’t see number 5 coming! The “pièce de résistance”, if you like. This might get a bit confusing, so strap yourselves in…

Emergency changes to running PVS instances are a pain. Depending on your configuration, changes may be lost after a reboot; and depending on your requirements, you may reboot nightly or even weekly. If there is a configuration change that needs to be made, then ultimately we need to update the master image. We can implement the change on the running instances, but it will be lost at some point when the write cache is cleared. Until the master image is updated, we will need to reimplement the change, potentially after every reboot.

Because RES Automation Manager is a Run Book Automation tool, we can implement this change across all running instances within minutes. “WAIT!”, I hear you cry, “these changes will be lost after a reboot!” Correct. But now we have achieved two things: we’ve documented the change, and we can automate the update to the master image at some point in the future.

Why did I say “at some point in the future”? Fortunately for us there is a hidden gem within RES Automation Manager called Snapshot Intelligence. With a name like that, it had better be good, right?!

As the RES Automation Manager database has a record of all jobs that have executed on a given agent, it can detect a snapshot. Whether it’s a virtual snapshot or a backup restoration makes no odds. In our PVS world, if RES Automation Manager jobs have been run on a machine and the PVS instance is reset back to our master image state (write cache cleared), RES Automation Manager will detect this as a snapshot. You with me so far?!

Once a snapshot is detected, RES Automation Manager can automatically reapply the job history (I’ll pause whilst you take this in and wait for the penny to drop!).

So if we automate all the emergency or ad-hoc updates with Automation Manager, we can automatically reapply them after every reboot? Yes. No need to update the master image for every change? Yes.

In fact, it gets better than that. When we update our master image, we can run the exact same job history (automatically if you wish) to update the gold image. If you want to cut a new image from scratch, we’ve got that covered too. Above all, if everything is automated with RES Automation Manager, it’s automatically documented too. Needless to say, you get all the usual audit logging and change history.

Summary

So, in summary, using RES Automation Manager in combination with Citrix Provisioning Services has huge benefits, but there’s obviously a cost associated. Would I recommend it? Absolutely – for all of the above reasons! Is it worth it? Unfortunately, I can’t tell you that, as only you know your environment.

Can RES Automation Manager replace Provisioning Services? Not entirely, as you’ll still need WDS/MDT (or equivalent) to deploy the OS. It also depends on your reasons for deploying PVS in the first place. If it’s for near instant deployment, removing local disks, reducing the storage footprint or a clean image on every reboot, you’ll probably be using it for a long while yet. If your reasons are purely “single image” management, you could potentially replace PVS in favour of a “traditional” deployment. Would I recommend this? It depends!

I know we’ve been focused on Provisioning Services in this article, but RES Automation Manager will help you with the rest of your infrastructure automation: desktops, laptops, servers, Exchange and Active Directory etc. You may have XenDesktop, Quest vWorkspace or VMware View for your virtual desktops – the same principle applies, and you may even be using PVS in combination with these. Anyway, I don’t need to preach to the converted!

I will say that it should be a strategic decision to deploy RES Automation Manager. Don’t underestimate the amount of time it takes to automate and test. But I guess you already spend a lot of time testing your images?

You can find some video overviews/introductions on RES Automation Manager on Citrix TV and RES Tutorials. If you don’t want to take the time to download, install and configure RES Automation Manager but want to take a quick look, you can always request access to the RES Showcase. Some background and example videos on the Showcase platform can be also found here.

I’ll get off my soapbox now and crawl back to whence I came! Please feel free to comment – I’d love to hear your thoughts. Iain

RES Automation Manager 2012 Global Variables

Unfortunately, this post is a mixture of both good and bad news. In my humble opinion, RES have missed a trick with their implementation of Global Variables in RES Automation Manager (AM) 2012, and here’s why.

In all the furore surrounding the RES AM 2012 release, Global Variables are supposed to herald the completion of multi-tenancy implementations. For example, multiple departments and/or customers can be co-located on the same database and share the platform without any visibility, or potentially any knowledge, of who else is utilising the infrastructure. If you’re after an introduction to RES AM Global Variables, I suggest you take a look at Rob Aarts’s article on RESguru or watch Grant Tiller’s demonstration on REStutorials.

Resources and Global Variables

It was my assumption (obviously incorrect) that we would be able to use Global Variables with file server resources. In a multi-tenant implementation, I wouldn’t necessarily want all administrators uploading file resources to the database and bloating the tables with BLOBs. When we add files stored on a file share to the RES Automation Manager database, the UNC path is stored along with the entry in the database. This isn’t necessarily a problem, assuming that all RES Automation Manager agents can resolve the path. Unfortunately, in a multi-tenant environment this may not be the case.

Enter Global Variables. Wouldn’t it be a great idea if we could use a Global Variable in the UNC path of a file resource?! As long as we make sure the folder structure is the same for each “customer” site, we could set the Global Variable to the customer’s file server at the Team or, if needed, Agent level. Even within a single organisation, Global Variables would enable us to use local file servers without having to implement DFS-R etc.

Being RES Consultancy Partners, we could also use this process when designing our Building Blocks. For example, we could upload the required resources for a XenApp build to a file server, import the RES Automation Manager Building Blocks and change the Global Variable(s) to point to the customer’s file server instead. No longer would we need to either perform a mass “find and replace” within the Building Block files or upload 5GB of data into a database. Happy days!

As you’ve probably guessed, this doesn’t work. DOH! When we attempt to insert the Global Variable by right-clicking the file path, we’re not given the option.


Manually entering the Global Variable placeholder, e.g. ^[GlobalVariable], doesn’t work either. There is, however, a workaround.

Resources, Global and Environment Variables

Now that we know we can’t use Global Variables at the resource level, I do know that we can use Environment Variables. If we just so happen to use an environment variable and that environment variable just so happens to be set to a Global Variable’s value, it just might work…

Firstly we need to pick a variable to use; in this example I’ll use ‘RESAMRESOURCES’ as it’s unlikely to clash with any other environment variables. We define the Global Variable and set its value to our file server’s share (you can always override this at a Team/Agent level, or when importing Building Blocks where needed).


Next, when adding a file resource, we can browse to the target file, override the UNC path and enter an environment variable instead. In this example I’ll use %RESAMRESOURCES% to point to the required file server.

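For example, the overridden path for the (hypothetical) Foxit Reader setup resource used later in this post would look like this, rather than a hard-coded \\server\share path:

[code]%RESAMRESOURCES%\FoxitReader\FoxitReaderSetup.exe[/code]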

All that’s left to do is assign the environment variable before any module that uses this resource. Fortunately, RES Automation Manager has a task to do just this. In my example I’ve created a job-based environment variable; we could always set this as a persistent machine-based variable via AM too.


Once we’re done, the completed module simply contains the environment variable task followed by the task that uses the resource. Note: the job-based environment variable needs to be set before we execute any task that references the file server resources – in our case, the Unattended Installation of Foxit Reader task.


When we export our Module as a Building Block, we now have a fully portable module that can be imported into any environment without storing the resource(s) in the database! All we need to do now is use Global Variables to define the credentials used to connect to the file server…

Resources, Global Variables and Credentials

This is where the house of cards falls down around us… We’ve managed to trick RES AM into using file resources with Global Variables. However, as the RES Automation Manager service runs under the Local System account, it has no access to file resources located on file servers. To overcome this issue, we need to embed the credentials with the resources. Again, you would assume that you could use the Credentials type of Global Variable to achieve this.


I’ve tried unsuccessfully to get this to work, even by manually specifying the ^[GlobalVariable] placeholder. Perhaps I’m the only one, but what about password changes? If we embed the credentials with the resource, using a Global Variable would make perfect sense. Currently, we don’t change the password associated with the RES Automation Manager resources, as this requires us to update each individual resource. If they were based on a Global Variable we’d have a simple way to update the password, maintain security and pass an audit with flying colours!

I can only assume that this is either technically difficult to implement or an oversight. As a result, we’re still left having to either do a mass “find and replace” in our Building Block files when implementing RES Automation Manager at customer sites, or upload large binaries into the database. Other than this, I think Global Variables are a brilliant addition, and hopefully they will be coming to RES Workspace Manager too.

Many thanks for reading. Iain