Multi-Homing RES HyperDrive

In certain situations you may wish to install two network adapters into a RES HyperDrive appliance. For example, you may not want to route internal traffic via the same gateway interface as external traffic. In this scenario there are some things that you need to be aware of. The RES HyperDrive documentation intimates that the primary NIC is the internet facing interface.

This isn’t necessarily the case and either NIC can be used. What you do need to be aware of, though, is that if you configure a default gateway on all NICs, CentOS 5.3 will use the gateway of the highest-numbered interface as the default route. Therefore, if you specify a default gateway on both NICs in a multi-homed deployment, the eth1 gateway will be used for the default route. If you look closely at the example above you will notice (there are 2 x 10.0.1.1 firewalls!) that the eth1 interface has no gateway specified. I recommend that you leave the default gateway on the internet-facing NIC, but whether this is eth0 or eth1 is up to you.

If you wish to manually alter the IP addressing information, you can find the configuration scripts in the /etc/sysconfig/network-scripts/ directory. There will be an ifcfg-eth0, ifcfg-eth1 and so on for each attached NIC. Use your favourite text editor to update the appropriate file.
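For illustration, here’s a minimal sketch of the two files, assuming the addressing from the example above (the .10 host addresses are hypothetical). Note that only the internet-facing interface carries a GATEWAY entry:

[code]# /etc/sysconfig/network-scripts/ifcfg-eth0 (internet facing - has the default gateway)
DEVICE=eth0
BOOTPROTO=static
IPADDR=10.0.1.10
NETMASK=255.255.255.0
GATEWAY=10.0.1.1
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1 (internal - note: no GATEWAY entry)
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes[/code]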

Once you have configured the correct IP address(es) and gateway you will need to add static routes to the “internal” network(s). The RES HyperDrive CentOS installation stores static routes in the route-<interface> file in the /etc/sysconfig/network-scripts/ directory. As an example, if our internal networks were 172.16.0.0/255.255.0.0 and 172.17.0.0/255.255.0.0 we would create the following entries in the /etc/sysconfig/network-scripts/route-eth1 file (assuming the internal gateway is actually 192.168.1.1 and not 10.0.1.1!):

[code]172.16.0.0/16 via 192.168.1.1 dev eth1
172.17.0.0/16 via 192.168.1.1 dev eth1[/code]

Once you’ve made all your changes you can restart the networking stack by running service network restart, or simply reboot the appliance. If you want to view the routing table, just run the route command. Simples!
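For reference, the two commands together (the -n switch displays numeric addresses and avoids any slow DNS lookups):

[code]service network restart
route -n[/code]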

FOR THE LOVE OF GOD…

I promised myself it would never come to this and I’m writing this against my better judgement. However, when my independence, professionalism and credibility are called into question I feel that it warrants a response. The root of this seems to stem from some tweets I sent last night as can be seen here:

[image: the tweets in question]

In particular the “I think you’ll find it’s more one sided (as usual)” comment hit a nerve or two. I awoke this morning and nearly spat my coffee everywhere after reading this Direct Message (identity purposely removed):

[image: the Direct Message]

I would not normally air this in public but I could not send a DM response back as they chose to “unfollow” me after sending the message. Therefore, I can only make this public response.

To set the record straight, I’m not on anyone’s side. Virtual Engine as a company do not sell licenses of any product. We deliver consultancy and implementation services of various products. We’re here to ensure that our customers get the right solution for their requirements and do recommend AppSense, Liquidware Labs, RES or any other product that fits. Each product suite has its strengths and weaknesses. Period.

Believe me, I am highly critical of the RES suite of products (just ask anyone attending one of my training courses or a member of Product Management!). The simple fact that they OEM’d some of the technology for HyperDrive and led everyone to believe differently doesn’t sit well. I don’t understand the reasoning, and surely they knew that this was going to be uncovered at some point? That is what the above tweets say (just not in so many words!).

What my “one sided” comment was referring to is the seemingly non-stop “bashing” of the competition from the boys in green. These “one-upmanship” playground antics are tiresome.

I don’t understand what the purpose of this is and if anyone can enlighten me, I’ll gladly listen.

I can only perceive that it’s for one of two reasons: 1) increasing sales, or 2) attempting to throw so much mud that it sticks and forces the company out of business. Now, I hope that it’s not the second option as competition is good for everyone; the end users and the vendors. It will probably never work and, even if it did, I wouldn’t want that on my conscience.

If the purpose is to increase sales then I think this approach is ultimately flawed too. Constantly being negative will eventually turn the customers and the channel off. Sure, keep a very close eye on your competitors. However, don’t constantly criticise their approach or their ill-informed decisions. Use these perceived misadventures to your advantage and outmanoeuvre them with a better solution! That’s what successful businesses are all about.

So How Can AppSense Fix This?

In my opinion it’s very simple: people would like to know why they should be buying AppSense’s products. What are the differentiators between their offerings and the competition? Some might call this good old-fashioned marketing?! For example, to the majority it doesn’t matter that a product has OEM’d components/technologies or is written in native code etc. What people want is a product that works and does what they need it to do.

Now I will go on the record and state again that the AppSense suite of products are great and they have some fantastic technology. There are new technologies coming down the line that our existing and potential customers can leverage so please do bang the drum (and very loudly too) about how great DataNow and the other products are. Do tell us why we should be buying them! Just please, please, for the Love of God, focus on the marketing of products and not spreading FUD.

Rant over! If you feel offended then that’s not my intention, and I’m happy to discuss any of this with anyone if you feel it’s off the mark or factually incorrect. You can contact me via the usual channels or leave a comment. Now let’s start afresh and move on. Iain

RES Automation Manager Emergency Patch Management

I previously covered the reasons why you probably wouldn’t use RES Automation Manager for patch management (see here). Max Ranzau (AKA @RESguru) made a great point that you can certainly use Automation Manager to push out individual patches easily. With the release of the critical Microsoft RDP patch MS12-020, and an exploit apparently in the wild, RES Automation Manager certainly still has its place in your patch management strategy.

Assuming that you haven’t exposed port 3389 directly to the internet you may feel that you’re somewhat “safe.” I actually think that the greater risk comes from worms that will be run from within the corporate network firewalls. All it takes is for one machine to be compromised… How many desktops and servers do you have inside the corporate network that have RDP access enabled?

Microsoft provides some workarounds that will give you time to test the patch prior to deployment. Fortunately, RES Automation Manager gives you the following options in dealing with this exploit using the built-in Automation Manager tasks/tasklets:

    1. Deploy the patch within minutes and/or
    2. Disable RDP connections completely and/or
    3. Enable/modify the Windows firewall rules to block RDP connections and/or
    4. Enable Network Level Authentication for RDP connections.
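To give a flavour of what options 2 to 4 might look like under the hood, here is a rough sketch of commands an Automation Manager task could execute. The registry values are Microsoft’s documented MS12-020 workarounds; the netsh line assumes Windows Vista/Server 2008 or later and the English-language “remote desktop” rule group name:

[code]:: 2. Disable RDP connections completely
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 1 /f

:: 3. Disable the built-in Remote Desktop firewall allow rules (blocks inbound RDP)
netsh advfirewall firewall set rule group="remote desktop" new enable=no

:: 4. Require Network Level Authentication for RDP connections
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v UserAuthentication /t REG_DWORD /d 1 /f[/code]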

One thing is certain: you need to act and mitigate this risk now. I think it’s only a matter of time before things get interesting. Who remembers Slammer?! I know people who are still mentally scarred by its long-lasting effects!

GPOs could help you with some of this, but nothing is going to be able to deploy any of (or a mixture of) the above workarounds within minutes. How will you be sure that your workarounds are in place on all machines? RES Automation Manager will give you near instant feedback on what tasks failed and provide you with the data to target those computers. Remember, if you use RDP/Remote Assistance for support then you’re probably limited to option #1 (or maybe #4).

If you don’t have RES Automation Manager today, you probably wish you did! You’ve been warned :-P

Iain

PVS Image Management

There has been a bit of banter on the Twittersphere about how people manage and document their PVS images. It was suggested (by more than just me :-)) that RES Automation Manager could be utilised for this task. This post is not a best practice guide on how to create, update or document your images, but rather a use case on how and why we use/recommend RES Automation Manager. Heaven forbid, you might even decide to do away with Provisioning Services as a result. Either way, RES Automation Manager will play very nicely with or without PVS, but I couldn’t fit that into 140 characters!

Provisioning Services Private Image Management

Maintaining the gold or private mode PVS image can be a complex task for a number of reasons. Simplifying any of these potential hurdles can only be a good thing, right?

  1. A certain level of skill is required to both create and maintain images. There are numerous tasks that need to be completed and in some cases, performed in a particular order. As a result, this task is typically left or assigned to the senior administrators.
  2. Application upgrades can taint or stain the master image. Some applications require an uninstallation of the old version and installation of the new product MSI. I don’t need to tell you that uninstallers are not always reliable or clean everything out when run!
  3. How are changes to the gold image documented to ensure that they’re incorporated into all other PVS images? It is typical that there will be more than one image for deployment. For example, hardware differences will typically require separate images.
  4. Ad-hoc and emergency changes can wreak havoc with your PVS images. How quick and easy is it to push an update out to 100 XenApp servers streamed from a central image? If we make changes whilst the servers are running, they’ll be lost when the write cache is erased, meaning we either have to reapply the change after every reboot or update our gold image pronto! This gets a lot more interesting if the servers are rebooted on a nightly basis and the write cache cleaned!

RES Automation Manager

If you know me by now, you probably know that I’m going to say that RES Automation Manager is the answer to all your prayers! Now, whilst it can certainly address the above “issues” (and I would recommend it in conjunction with Provisioning Services any day of the week), there are other processes and solutions that may address one or more of the above, and deploying RES Automation Manager won’t automagically fix them. A good example of this is documentation. If your internal processes mandate that all changes are documented and you bypass this process, there is nothing to stop you bypassing it even with Automation Manager installed!

What RES Automation Manager Won’t Do

Thought I’d better get this bit out of the way before you get all the way to the end and are disappointed! RES Automation Manager is a Run Book Automation tool and not an imaging/deployment tool. This means that we cannot (directly) deploy an Operating System from RES Automation Manager. Fortunately for us there are many technologies out there that can, e.g. Windows Deployment Services/Microsoft Deployment Toolkit, which, BTW, can be combined with RES Automation Manager – take a look at this White Paper. Why reinvent the wheel?!

What RES Automation Manager Will Do

So once we have our Operating System deployed and the RES Automation Manager agent installed (we can do this with WDS/MDT as mentioned earlier) what benefits will this give us? Well, at a simplistic level, RES Automation Manager can automate the entire server configuration and application deployment process. This process can also include installing XenApp and XenApp Prep as well as any other applications. This obviously takes some additional time but gives us a clean, repeatable process for deploying a XenApp server from scratch. It’s a strategic decision and not a tactical one!

Why is this important? Typically it comes down to the issues listed above, so let’s take them one at a time…

Issue #1

RES Automation Manager can reduce this complexity by removing Provisioning Services altogether. I’m not suggesting that you remove this from your infrastructure. Not even for one minute. However, if you don’t need to have a clean image after every reboot, getting shot of PVS may be an option? We have automated the complete server deployment and can typically provision a new server in a few hours from start to finish; Operating System, XenApp and applications. OK, it’s a few hours of time, but there is no user interaction required. I’m guessing that it’s probably not that often that you need to add a new server within 30 minutes?

This benefits the typical IT department as these are now regular servers. They’re supported in the same way as other servers and they have a proper OS install etc. There are downsides too. Now we need to patch and maintain multiple OS instances and not just one master image. Isn’t this part of the reason you deployed Provisioning Services in the first place?

Issue #2

By having a repeatable process for building our XenApp server(s) from scratch we can avoid tainting our image. If we need to cut a new image then we can deploy a completely clean server and install the required applications. We don’t need to uninstall and reinstall or upgrade applications. I’m not advocating this as a best practice, but I know lots of admins that are a lot happier with this process. It doesn’t need to be performed for all updates, but you now have an option as to whether you update the master image or cut a new one. If you haven’t automated the entire deployment and configuration process, recreating a new image from scratch probably doesn’t make you feel warm and fluffy inside!

When you finally get run over by a bus (it’s going to happen one day as everyone keeps saying it), pretty much anyone with an ounce of intelligence can deploy a new server or reverse engineer the Modules and Tasks in the Run Book to discover how things are tied together.

Issue #3

By virtue of automating the entire configuration and deployment process with RES Automation Manager, you have actually documented every step in the process. RES Automation Manager includes the ability to create an Instant Report of any or all Run Books, Projects and/or Modules. These reports are very detailed (small example here) and typically run to 1,000+ pages. For us consultants, this feature alone is worth its weight in gold. Did I mention that it’s available in RES Workspace Manager too? ;-)

Issue #4

Issue #4 is all about finding out when changes were made and by whom. Whether they’re changes to the gold image or ad-hoc emergency fixes, the audit trail is vitally important, especially where change management processes are concerned. You won’t be surprised to hear that RES Automation Manager has a built-in Audit Trail which allows you to view all actions performed in RES Automation Manager – how handy is that when a witch hunt is on (oh, that never happens now, does it?!).

Issue #5

As usual I’ve saved the best until last, and you didn’t see number 5 coming! The “pièce de résistance”, if you like. This might get a bit confusing, so strap yourselves in and get ready…

Emergency changes to running PVS instances are a pain. Depending on your configuration, changes may be lost after a reboot; and depending on your requirements, you may reboot nightly or even weekly. If there is a configuration change that needs to be made then ultimately we need to update the master image. We can implement the change on the running instances, but it will be lost at some point when the write cache is cleared. Until the master image is updated we will need to implement the change, potentially after every reboot.

Because RES Automation Manager is a Run Book Automation tool we can implement this change across all running instances within minutes. “WAIT!”, I hear you cry, “These changes will be lost after a reboot!” Correct. But now we have achieved two things: we have documented the change, and we can automate the update to the master image at some point in the future.

Why did I say “at some point in the future”? Fortunately for us there is a hidden gem within RES Automation Manager called Snapshot Intelligence. With a name like that, it had better be good, right!?

As the RES Automation Manager database has a record of all jobs that have executed on a given agent it can detect a snapshot. Whether this is a virtual snapshot or a backup restoration, it makes no odds. In our PVS world, if RES Automation Manager jobs have been run on a machine and the PVS instance is reset back to our master image state (write cache cleared), RES Automation Manager will detect this as a snapshot. You with me so far..?!

Once a snapshot is detected, RES Automation Manager can automatically reapply the job history (I’ll pause whilst you take this in and wait for the penny to drop!).

So if we automate all the emergency or ad-hoc updates with Automation Manager we can automatically reapply these after every reboot? Yes. No need to update the master image for every change? Yes.

In fact it gets better than that. When we update our master image we can run the exact same job history (automatically if you wish) to update the gold image. If you want to cut a new image from scratch we’ve got that covered too. Above all, if everything is automated with RES Automation Manager it’s automatically documented too. Needless to say, you get all the usual audit logging and change history.

Summary

So, in summary, using RES Automation Manager in combination with Citrix Provisioning Services has huge benefits, but there’s obviously a cost associated. Would I recommend it? Absolutely! For all of the above reasons. Is it worth it? Unfortunately, I can’t tell you that as only you know your environment.

Can RES Automation Manager replace Provisioning Services? Not entirely, as you’ll still need WDS/MDT (or equivalent) to deploy the OS. It also depends on your reasons for deploying PVS in the first place. If it’s for near-instant deployment, removing local disks, reducing the storage footprint or getting a clean image on every reboot, you’ll probably be using it for a long while yet. If your reasons are purely for “single image” management then you could potentially replace PVS in favour of a “traditional” deployment. Would I recommend this? It depends!

I know we’ve been focused on Provisioning Services in this article, but RES Automation Manager will help you with the rest of your infrastructure automation. Desktops, laptops, servers; Exchange and Active Directory etc. You may have XenDesktop, Quest vWorkspace or VMware View for your virtual desktops. The same principle applies, and you may even be using PVS in combination with these. Anyway, I don’t need to preach to the converted!

I will say that it should be a strategic decision to deploy RES Automation Manager. Don’t underestimate the amount of time it takes to automate and test. But I guess you already spend a lot of time testing your images?

You can find some video overviews/introductions on RES Automation Manager on Citrix TV and RES Tutorials. If you don’t want to take the time to download, install and configure RES Automation Manager but want to take a quick look, you can always request access to the RES Showcase. Some background and example videos on the Showcase platform can be also found here.

I’ll get off my soapbox now and crawl back to whence I came! Please feel free to comment and I’d love to hear your thoughts. Iain

Virtual Engine Releases the Free RES Showcase

Virtual Engine announces the launch of the free RES Showcase, a cloud-hosted demonstration environment that gives RES resellers the ability to show customers and prospects the latest innovations in dynamic desktops, desktop management and IT automation. The new service will assist partners in the development of sales opportunities, as well as making it simple for Virtual Engine to deliver training on RES products.

Steve Jackson, Vice President Channels and Strategic Alliances, RES Software commented, “Virtual Engine has an extremely strong track record in helping our partners and customers to get the most from their RES Software implementations, both around management of the desktop and automation of IT processes. This new portal will provide our partners with an easy way to help customers evaluate how they can make use of dynamic desktop solutions within their own environments to reduce costs and improve the management of users across multiple desktop delivery platforms.”

The free RES Showcase from Virtual Engine is designed to allow you to evaluate the products and perform Proofs of Concept (PoCs) in a pre-installed and pre-configured demonstration environment. Included in the RES Showcase are RES Workspace Manager, RES Automation Manager, RES Service Orchestration, RES VDX with Subscriber, Citrix XenApp, Microsoft App-V and Exchange 2010.

RES Workspace Manager VDX Options Explained

When delivering RES Workspace Manager training, there can be some confusion over some of the settings available when integrating it with the RES Virtual Desktop Extender (VDX). The purpose of this post is to attempt to clarify which option does what!

[image: the numbered VDX integration settings]

  1. This is the global option that will enable or disable the RES Workspace Manager and VDX integration, i.e. the ability to run applications as a “Workspace Extension”. Note: if this option is disabled and the RES VDX Engine is installed, there is potentially nothing stopping the engine from running; you just won’t get the RES WM integration, i.e. licensing etc.
  2. If this option is enabled then the VDX Engine process will be started. By disabling this option you will effectively be turning on the legacy RES Subscriber/Workspace Extender functionality only.
  3. If you have the VDX Engine installed, then this option will enforce the VDX settings to take precedence over the RES Subscriber/Workspace Extender, e.g. options 7 – 13 and the Z-ordering improvements. Don’t disable this option if deploying RES VDX!
  4. Depending on the setting chosen the following taskbar behaviour will occur:
    1. Autodetect: The behaviour as defined in the Display Settings section will be honoured, i.e. the configuration of the remote session display.
    2. Yes: The taskbar of the local session will always be hidden regardless of the remote session display configuration.
    3. No: The taskbar of the local session will never be hidden. Note: with the RES Subscriber/VDX Shell, the taskbar is hidden by default.
  5. Depending on the setting chosen, the following client application pass-through behaviour will occur:
    1. Autodetect: If a running client application window will be obscured by the remote session, it will automatically be displayed in the remote session.
    2. Yes: All running client applications will be automatically displayed in the remote session.
    3. No: No existing client applications/processes will be displayed in the remote session when it’s first started. Note: this does not prevent reverse seamless applications from being launched from the remote session!
  6. If selected, when a user logs off a remote session, all locally running client applications will be closed (or attempted to be closed!) before the remote session is ended.
  7. Enable this option if you wish users to be able to access the system tray icon and enumerate the applications present in the user’s local client Start Menu.
  8. Enable this option if you wish users to be able to access the system tray icon and enumerate the applications present in the user’s local client Desktop.
  9. Enable this option if you wish users to be able to access the system tray icon and enumerate the applications present in the local client System Tray.
  10. Client applications can be excluded from the VDX seamless window integration by entering the process name, e.g. pnagent.exe (see the example after this list).
    1. Multiple processes should be separated with semicolons.
    2. Processes can still be launched, but will not be displayed in the taskbar of the remote session and Z-ordering will not be implemented.
  11. If you wish client applications to be unavailable via the System Tray integration, enter the process(es) here.
    1. Multiple processes should be separated with semicolons.
    2. If a client process is excluded, it will not be displayed in the System Tray Start Menu or Desktop folders.
  12. Allows you to override the default VDX pop-up balloon title (a).
  13. Allows you to override the default VDX pop-up balloon text (b).
    1. This text is also displayed at the top of the VDX system tray Client Start Menu/Desktop window (c).
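As a quick illustration of the semicolon-separated format used in options 10 and 11 (pnagent.exe is taken from the example above; wfica32.exe is just a hypothetical second process):

[code]pnagent.exe;wfica32.exe[/code]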

[image: the VDX pop-up balloon]

[image: the VDX system tray Client Start Menu/Desktop window]

Display Settings

The following information has been taken from the RES VDX User Guide and is used when configuring the Local Client Taskbar integration in option #4 above:

VDX supports several display scenarios. Each scenario may require specific settings. For an overview of these settings, see Setting up the Behavior of the RES VDX Engine (on page 10). Some examples of supported scenarios are:

A single display

In this scenario, both the local desktop and the remote desktop are run on the same display. The remote desktop is the visible desktop, showing both remote applications and local applications. The local taskbar is obscured if the remote session is maximized. With the remote session at a smaller size, the client taskbar is shown twice, once in its original position on the client and once integrated in the remote session.

A dual display setup with the remote desktop spanning both displays

In this scenario, both the local desktop and the remote desktop span both displays. The remote desktop is the visible desktop, showing both remote applications and local applications. The local taskbar is obscured if the remote session is maximized on the first monitor or if both sessions are maximized. 

A dual display setup with the local desktop running on the primary display and the remote desktop running on the secondary display

Both taskbars are displayed. 

A dual display setup with one remote desktop running on the primary display and another remote desktop running on the secondary display

Both remote desktop taskbars are displayed. Client windows can be moved from desktop to desktop.

I hope this helps clarify things for someone! Iain

Replacing the default XenServer WSS Certificate

Something a little bit different from the normal RES related posts this time. During the deployment of the Demo Showcase platform we needed to replace the SSL certificate used by the XenServer Web Self Service (WSS) servers. Reviewing the WSS documentation revealed very little about how to achieve this. As you can see the user and installation guides offer very little guidance!

[image: excerpt from the WSS user and installation guides]

Much to my surprise, I couldn’t locate a web resource that details how to do this, i.e. generate the required ssl.crt and ssl.key files. There are lots of snippets of information, but no single post that details either the requirements or the steps to perform. This is my attempt to rectify that situation!

Pre-requisites

Before you begin there is the assumption that you have the following prepared/installed:

  1. The required SSL certificate has been exported into .PFX format (and you know the private key password!);
  2. OpenSSL is installed;
  3. WinSCP (or other SCP client) is installed.

Converting the Certificate to a .CRT and .KEY Pair

The WSS appliance expects the certificate and private key to be provided as two separate files, rather than combined in a single .PFX (or .PEM) file. We can generate the correct files by utilising the OpenSSL tools. The secret to this part is to ensure that the generated .KEY file is not protected with a password. If it is, you’ll receive an error when attempting to start the web service on the WSS appliance.

To export the certificate (.CRT) component from the .PFX file run the following OpenSSL command: openssl pkcs12 -in <ssl-certificate.pfx> -clcerts -nokeys -out <ssl.crt>

To export the private key (.KEY) without a password, run the following OpenSSL command: openssl pkcs12 -in <ssl-certificate.pfx> -nodes -nocerts -out <ssl.key>
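Putting those two commands together with a hypothetical wss-cert.pfx, plus a couple of optional sanity checks that the certificate subject is correct and the key really is unencrypted (an encrypted key would prompt for a password here):

[code]openssl pkcs12 -in wss-cert.pfx -clcerts -nokeys -out ssl.crt
openssl pkcs12 -in wss-cert.pfx -nodes -nocerts -out ssl.key

# optional sanity checks
openssl x509 -in ssl.crt -noout -subject
openssl rsa -in ssl.key -check -noout[/code]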

Transferring the Certificate Files to the WSS Appliance

Once you have the required .CRT/.KEY file pair, you’ll need to copy them to the Web Self Service appliance. This is a fairly straightforward process but requires enabling the SSH daemon on the appliance first. To do this you’ll need to connect to the WSS appliance console via XenCenter. Once you’ve logged onto the console, issue the following command: service sshd start

You’ll also want to stop the Web Self Service process by running the following command: service webss stop

After the SSH service has started and WSS services are stopped, you can now copy the .CRT and .KEY files to the /root/sse/conf directory via WinSCP (or your tool of choice). Note: you might want to rename the original .CRT and .KEY files before copying the replacements in!

Restart the WSS services by executing: service webss start
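To recap, the whole sequence on the appliance console looks something like this (renaming the original files to .orig is just a suggested safety net):

[code]service sshd start    # enable SSH for the file copy
service webss stop    # stop Web Self Service

cd /root/sse/conf
mv ssl.crt ssl.crt.orig
mv ssl.key ssl.key.orig

# ...copy the new ssl.crt and ssl.key into /root/sse/conf via WinSCP...

service webss start   # restart Web Self Service[/code]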

All being well, you should receive no errors and when browsing to the WSS homepage you should not be warned about the SSL certificate! Here’s an example using a certificate with the Common Name set as the default sse-https-server.

[image: the WSS homepage showing the trusted sse-https-server certificate]

Simples! I hope someone finds this useful one day! Iain

RES Automation Manager 2012 Global Variables

Unfortunately, this post is a mixture of both good and bad news. In my humble opinion, RES have missed a trick with their implementation of Global Variables in RES Automation Manager (AM) 2012, and here’s why.

In all the furore surrounding the RES AM 2012 release, Global Variables are supposed to herald the completion of multi-tenancy implementations. For example, multiple departments and/or customers can be co-located on the same database and share the platform without any visibility or potentially any knowledge of who else is utilising the infrastructure. If you’re after an introduction into the RES AM Global Variables I suggest you take a look at Rob Aarts’s article on RESguru or watch Grant Tiller’s demonstration on REStutorials.

Resources and Global Variables

It was my assumption (incorrect, as it turns out) that we would be able to use Global Variables with file server resources. In a multi-tenant implementation, I wouldn’t necessarily want all administrators uploading file resources to the database and bloating the tables with BLOBs. When we add files stored on a file share to the RES Automation Manager database, the UNC path is stored along with the entry in the database. This isn’t necessarily a problem, assuming that all RES Automation Manager agents can resolve this path. Unfortunately, in a multi-tenant environment this may not be the case.

Enter Global Variables. Wouldn’t it be a great idea if we could use a Global Variable in the UNC path of a file resource?! As long as we make sure that the folder structure is the same for each “customer” site, we could set the Global Variable to the customer’s file server at the Team or, if needed, Agent level. Even within a single organisation, Global Variables would enable us to use local file servers without having to implement DFS-R etc.

Being RES Consultancy Partners, we could also use this process when designing our Building Blocks. For example, we could upload the required resources for a XenApp build to a file server, import the RES Automation Manager Building Blocks and change the Global Variable(s) to point to the customer’s file server instead. No longer would we need to either perform a mass “find and replace” within the Building Block files or upload 5GB of data into a database. Happy days :-)

As you’ve probably guessed, this doesn’t work. DOH! When we attempt to insert the Global Variable by right-clicking the file path we’re not given the option:

[image: the right-click menu with no option to insert a Global Variable]

Manually entering the Global Variable placeholder, e.g. ^[GlobalVariable] doesn’t work either. There is, however, a workaround.

Resources, Global and Environment Variables

Now that we know we can’t use Global Variables at the resource level, we do know that we can use Environment Variables. If we just so happen to use an environment variable, and that environment variable just so happens to be set to a Global Variable’s value, it just might work…

Firstly, we need to pick a variable to use; in this example I’ll use ‘RESAMRESOURCES’ as it’s unlikely to clash with any other environment variables. We define the Global Variable and set its value to our file server’s share (you can always override this at a Team/Agent level, or when importing Building Blocks, where needed):

[image: the RESAMRESOURCES Global Variable definition]

Next, when adding a file resource, we can browse to the target file and override the UNC path by entering an environment variable. In this example I’ll use %RESAMRESOURCES% to point to the required file server.
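To make the indirection concrete, here’s a hypothetical example (the server name and share are made up; the Foxit Reader path just mirrors the module used later in this post):

[code]Global Variable:      RESAMRESOURCES = \\fileserver01\AMResources
Resource UNC path:    %RESAMRESOURCES%\FoxitReader\FoxitReader-Setup.exe
Resolves at runtime:  \\fileserver01\AMResources\FoxitReader\FoxitReader-Setup.exe[/code]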

[image: the file resource with the %RESAMRESOURCES% UNC path]

All that’s left to do is set the environment variable before any module that uses this resource. Fortunately, RES Automation Manager has a task to do just this. In my example I’ve created a job-based environment variable. We could always set this as a persistent, machine-based variable via AM too.

[image: the job-based environment variable task]

Once we’re done, our completed module will look a lot like this. Note: the job-based environment variable needs to be set before we execute a task that references the file server resources; in our case, the Unattended Installation of Foxit Reader task.

[image: the completed module’s task list]

When we export our Module as a Building Block we now have a fully portable module that can be imported into any environment without storing the resource(s) in the database! All we need to do now is use Global Variables to define the credentials used to connect to the file server…

Resources, Global Variables and Credentials

This is where the house of cards falls down around us… We’ve managed to trick RES AM into using file resources with Global Variables. However, as the RES Automation Manager service runs under the Local System account, it has no access to file resources located on file servers. To overcome this issue, we need to embed the credentials with the resources. Again, you would assume that you could use the Credentials type of Global Variable to achieve this:

[image: the Credentials type Global Variable]

I’ve tried unsuccessfully to get this to work, even by manually specifying the ^[GlobalVariable] placeholder. Perhaps I’m the only one, but what about password changes? If we embed the credentials with the resource, using a Global Variable for this would make perfect sense. Currently, we don’t change the password associated with the RES Automation Manager resources as this requires us to update each individual resource. If they were based on a Global Variable we’d have a simple way to update the password, maintain security and pass an audit with flying colours!

I can only assume that this is either technically difficult to implement or is an oversight. As a result, we’re still left having to either do a mass “find and replace” in our Building Block files when implementing RES Automation Manager at customer sites, or upload large binaries into the database. Other than this, I think Global Variables are a brilliant addition, and hopefully they will be coming to RES Workspace Manager too :-P

Many thanks for reading. Iain

VET v1.1 Released!

Virtual Engine are pleased to announce the general availability of version 1.1 of the Virtual Engine Toolkit (VET). The latest Windows installer and documentation is available for download now on the Virtual Engine web site.

We’ve put together a short overview video demonstrating each new feature. In the “What’s New” video we cover the following features:

  • Conversion of Group Policy Objects to RES Workspace Manager building blocks;
  • Conversion of Active Directory published printers and site definitions to RES Workspace Manager building blocks;
  • Direct import into the RES Workspace Manager console;
  • Multiple profile updates with the Profile Update Utility (PuU);
  • Ad-hoc registry changes in the Profile Update Utility (PuU).

For more videos on the Virtual Engine Toolkit, please check out our YouTube channel.