Virtual Engine Receives RES Recognition

Virtual Engine are honoured to have been highlighted by RES Software in their five-year milestone of the RES Software Valued Professional programme. Having two RSVP members is in itself a privilege, but to have received recognition for our efforts with the Virtual Engine Toolkit is also important to us.

Needless to say, we are already working on the next major version of VET, which incorporates a complete overhaul of the user interface. Thank you to Bob de K and the RES team for publicly recognising the time and effort we contribute to the community.

The Anywhere Home Drive

I thought I’d put pen to paper to discuss a topic that was raised during the most recent UK Citrix User Group meeting. Discussions in the afternoon session were focused on user data, mobility and possibly the end of the user home drive as we know it. There has been lots of talk about how insecure DropBox is and the desire for businesses to implement an on-premises solution. Obviously there are other commercial and open source services/products in the market today that were not covered, e.g. Box, Google Drive and SkyDrive.

Ultimately we had AppSense, Citrix and RES in the same room demonstrating their respective products: DataNow, ShareFile and HyperDrive.

The ways in which these products are implemented vary significantly, and they’re seemingly targeted at different markets or are trying to solve slightly different problems (some of which I’m not convinced really exist). For example, AppSense take a data aggregation perspective, whereas RES are outwardly targeting DropBox with their current release.

From a UEM perspective, both AppSense and RES obviously want users’ personalisation moved to a solution that also enables ubiquitous access within their respective UEM products. Synchronisation of favourites, signatures and the like, regardless of device, would be a massive boon for corporate users. Microsoft are already doing some of this in Windows 8 via SkyDrive.

My feeling is that, regardless of which solution is implemented, the vendors need to make their products easier to use than the “free” consumer products. I’m sure the majority of corporate users will happily utilise an alternative solution if it’s as easy as, or easier to use than, their existing tool of choice. Key to this process is ensuring that a user does not have to move their data to a new solution and that their existing workflows continue to work. How many linked spreadsheets are entrenched within businesses today?

Out of the solutions demonstrated the other week, AppSense appear to have a head start. DataNow will currently (only) make a user’s existing home drive/directory available via the DataNow client on iOS and Android devices. This will be extended to other SMB shares and SharePoint in the future with the DataNow Enterprise product. You never know, it might actually make using SharePoint bearable!

In contrast to DataNow, the HyperDrive appliance is targeted directly at replacing DropBox with an on-premises alternative. The caveat here is that data still needs to be moved from its existing location to a secured area managed by HyperDrive. Now don’t get me wrong; DropBox users do this already, so this should not be a huge undertaking. However, moving and duplicating data is not a long-term solution.

Citrix announced at Synergy in Barcelona that the “Control Plane”, which federates authentication and management for its ShareFile product, is now available within the EU (Germany). For European organisations that was always going to be a necessary requirement. One can only assume that this will be extended to further geographies in the near future.

The introduction of Storage Zones for ShareFile, enabling on-premises data storage, is something of a (welcome) surprise considering the relationship Citrix have with both AppSense and RES. OK, I’m not really surprised, but it has happened quicker than I thought it would. Unfortunately for Citrix, until the Control Plane is able to run on-site it is likely there will still be some reluctance from enterprises, as you’re still reliant on Citrix data centres for access to your data.

In summary, all three products still need some development before they’re adopted by enterprises en masse, if at all.

  • The DataNow Enterprise product isn’t currently shipping (the Essentials version is), so it’s hard to recommend, but it looks promising.
  • HyperDrive is functional, although a little “rough around the edges” – but that’s to be expected of a v1 product. As a direct replacement for DropBox it is workable.
  • ShareFile has some fantastic opportunities with the Citrix Receiver integration, but you’ll need both ShareFile Enterprise and CloudGateway Enterprise to get the most out of it. Would a smaller “point” solution be cheaper and easier to implement?

So who can deliver first…?

Virtual Engine will be presenting at IP EXPO 2012 – London

Virtual Engine’s founder and co-owner Iain Brighton (@Iainbrighton) will show how RES Software can help transform the lifecycle of your users in this practical, under-the-hood demonstration titled “Transforming the lifecycle of a user – from on-boarding to delivery of services and finally off-boarding”.

Please join us there on Thursday 18th October, 11:30 – 12:15 in the Desktop Lab Theatre.

PVS Image Management

There has been a bit of banter on the Twittersphere about how people manage and document their PVS images. It was suggested (by more than just me!) that RES Automation Manager could be utilised for this task. This post is not a best practice guide as to how to create, update or document your images, rather a use case on how and why we use/recommend RES Automation Manager. Heaven forbid, you might even decide to do away with Provisioning Services as a result. Either way, RES Automation Manager will play very nicely with or without PVS, but I couldn’t fit that into 140 characters!

Provisioning Services Private Image Management

Maintaining the gold or private mode PVS image can be a complex task for a number of reasons. Simplifying any of these potential hurdles can only be a good thing, right?

  1. A certain level of skill is required to both create and maintain images. There are numerous tasks that need to be completed and in some cases, performed in a particular order. As a result, this task is typically left or assigned to the senior administrators.
  2. Application upgrades can taint or stain the master image. Some applications require an uninstallation of the old version and an installation of the new product MSI. I don’t need to tell you that uninstallers aren’t always reliable and don’t always clean everything out when run!
  3. How are changes to the gold image documented to ensure that they’re incorporated into all other PVS images? It is typical that there will be more than one image for deployment. For example, hardware differences will typically require separate images.
  4. Ad-hoc and emergency changes can wreak havoc with your PVS images. How quick and easy is it to push an update out to 100 XenApp servers streamed from a central image? If we make changes whilst the servers are running, they’ll be lost when the write cache is erased, meaning we either have to reapply the change after every reboot or update our gold image pronto! This gets a lot more interesting if the servers are rebooted on a nightly basis and the write cache cleaned!

RES Automation Manager

If you know me by now, you’ll know that I’m going to say that RES Automation Manager is the answer to all your prayers! Now, whilst it can certainly address the above “issues” (and I would recommend it in conjunction with Provisioning Services any day of the week), there are other processes and solutions that may address one or more of the above, and deploying RES Automation Manager won’t automagically fix them. A good example of this is documentation. If your internal processes mandate that all changes are documented and you bypass this process, there is nothing to stop you bypassing it even when Automation Manager is installed!

What RES Automation Manager Won’t Do

Thought I’d better get this bit out of the way before you get all the way to the end and are disappointed! RES Automation Manager is a Run Book Automation tool, not an imaging/deployment tool. This means that we cannot (directly) deploy an Operating System from RES Automation Manager. Fortunately for us there are many technologies out there that can, e.g. Windows Deployment Services/Microsoft Deployment Toolkit, which, by the way, can be combined with RES Automation Manager – take a look at this White Paper. Why reinvent the wheel?!

What RES Automation Manager Will Do

So once we have our Operating System deployed and the RES Automation Manager agent installed (we can do this with WDS/MDT as mentioned earlier), what benefits will this give us? Well, at a simplistic level, RES Automation Manager can automate the entire server configuration and application deployment process. This process can also include installing XenApp and XenApp Prep as well as any other applications. This obviously takes some additional time, but gives us a clean, repeatable process for deploying a XenApp server from scratch. It’s a strategic decision and not a tactical one!
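As a rough sketch of the idea (the task names below are our own illustration, not actual RES Automation Manager Modules or Tasks), the whole build boils down to one ordered, repeatable run book:

```python
# Illustrative sketch only: a repeatable XenApp server build expressed as
# an ordered list of tasks. RES Automation Manager models these as
# Modules/Tasks in a Run Book; the names here are hypothetical.

XENAPP_RUNBOOK = [
    "Apply base OS configuration",
    "Join domain and apply policies",
    "Install XenApp",
    "Install XenApp Prep",
    "Install line-of-business applications",
]

def provision(server: str) -> None:
    """Build a server from scratch, running every task in order."""
    for task in XENAPP_RUNBOOK:
        # In AM, each task would be dispatched to the agent on the server.
        print(f"[{server}] running task: {task}")

provision("XENAPP01")
```

Because every server runs the same ordered list, the result is clean and repeatable rather than hand-crafted.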

Why is this important? It typically comes down to the issues raised above, so let’s take them one at a time…

Issue #1

RES Automation Manager can reduce this complexity by removing Provisioning Services altogether. Now, I’m not suggesting that you remove PVS from your infrastructure – not even for one minute. However, if you don’t need a clean image after every reboot, getting shot of PVS may be an option. We have automated the complete server deployment and can typically provision a new server, from start to finish, in a few hours: Operating System, XenApp and applications. OK, it’s a few hours, but there is no user interaction required. I’m guessing that it’s probably not that often that you need to add a new server within 30 minutes?

This benefits the typical IT department as these are now regular servers. They’re supported in the same way as other servers and they have a proper OS install etc. There are downsides too. Now we need to patch and maintain multiple OS instances and not just one master image. Isn’t this part of the reason you deployed Provisioning Services in the first place?

Issue #2

By having a repeatable process for building our XenApp server(s) from scratch we can avoid tainting our image. If we need to cut a new image, we can deploy a completely clean server and install the required applications. We don’t need to uninstall and reinstall or upgrade applications. I’m not advocating this as a best practice, but I know lots of admins that are a lot happier with this process. It doesn’t need to be performed for all updates, but you now have an option as to whether you update the master image or cut a new one. If you haven’t automated the entire deployment and configuration process, recreating a new image from scratch probably doesn’t make you feel warm and fluffy inside!

When you finally get run over by a bus (it’s going to happen one day, as everyone keeps saying), pretty much anyone with an ounce of intelligence can deploy a new server or reverse engineer the Modules and Tasks in the Run Book to discover how things are tied together.

Issue #3

By virtue of automating the entire configuration and deployment process with RES Automation Manager, you have actually documented every step in the process. RES Automation Manager includes the ability to create an Instant Report of any or all Run Books, Projects and/or Modules. These reports are very detailed (small example here) and typically run to 1,000+ pages. For us consultants, this feature alone is worth its weight in gold. Did I mention that it’s available in RES Workspace Manager too?

Issue #4

This one is about finding out when, and by whom, changes were made. Whether they’re changes to the gold image or ad-hoc emergency fixes, it doesn’t matter: the audit trail of the changes is vitally important, especially where change management processes are concerned. Well, you won’t be surprised to hear that RES Automation Manager has an in-built Audit Trail which allows you to view all actions performed in RES Automation Manager – how handy is that when a witch hunt is on (oh, that never happens now, does it!?).

Issue #5

As usual I’ve saved the best until last, and you didn’t see number 5 coming! The “pièce de résistance”, if you like. This might get a bit confusing, so strap yourselves in…

Emergency changes to running PVS instances are a pain. Depending on your configuration, changes may be lost after a reboot, and depending on your requirements, you may reboot nightly or even weekly. If there is a configuration change that needs to be made, ultimately we need to update the master image. We can implement the change on the running instances, but it will be lost at some point when the write cache is cleared. Until the master image is updated we will need to reapply the change, potentially after every reboot.

Because RES Automation Manager is a Run Book Automation tool, we can implement this change across all running instances within minutes. “WAIT!”, I hear you cry, “these changes will be lost after a reboot!” Correct. But now we have achieved two things: we’ve documented the change, and we can automate the update to the master image at some point in the future.

Why did I say “at some point in the future”? Fortunately for us there is a hidden gem within RES Automation Manager called Snapshot Intelligence. With a name like that it had better be good, right!?

As the RES Automation Manager database has a record of all jobs that have executed on a given agent, it can detect a snapshot. Whether it’s a virtual machine snapshot or a backup restoration makes no odds. In our PVS world, if RES Automation Manager jobs have been run on a machine and the PVS instance is reset back to our master image state (write cache cleared), RES Automation Manager will detect this as a snapshot. You with me so far…?!

Once a snapshot is detected, RES Automation Manager can automatically reapply the job history (I’ll pause whilst you take this in and wait for the penny to drop!).
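If it helps to see the idea in pseudo-form, here’s a minimal sketch – our own illustration, not RES’s actual algorithm – of how comparing the job history recorded in the database against what the agent remembers can detect a snapshot and replay the missing jobs:

```python
# Conceptual sketch of Snapshot Intelligence: if an agent has "forgotten"
# jobs that the central database recorded for it (e.g. the PVS write cache
# was cleared), treat it as a snapshot and reapply the job history in the
# original order. Job names below are made up.

def reconcile(db_history: list, agent_history: list, run_job) -> None:
    if agent_history == db_history:
        return  # agent is current; nothing to do
    applied = set(agent_history)
    missing = [job for job in db_history if job not in applied]
    for job in missing:
        run_job(job)  # reapply in the original execution order

db = ["install-hotfix-123", "update-odbc-dsn", "tweak-registry"]
agent = []  # write cache cleared: instance has reverted to the master image
reconcile(db, agent, run_job=lambda j: print(f"reapplying {j}"))
```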

So if we automate all the emergency or ad-hoc updates with Automation Manager we can automatically reapply these after every reboot? Yes. No need to update the master image for every change? Yes.

In fact it gets better than that. When we update our master image we can run the exact same job history (automatically if you wish) to update the gold image. If you want to cut a new image from scratch we’ve got that covered too. Above all, if everything is automated with RES Automation Manager it’s automatically documented too. Needless to say, you get all the usual audit logging and change history.

Summary

So, in summary, using RES Automation Manager in combination with Citrix Provisioning Services has huge benefits, but there’s obviously an associated cost. Would I recommend it? Absolutely! For all of the above reasons. Is it worth it? Unfortunately, I can’t tell you that, as only you know your environment.

Can RES Automation Manager replace Provisioning Services? Not entirely, as you’ll still need WDS/MDT (or equivalent) to deploy the OS. It also depends on your reasons for deploying PVS in the first place. If it’s for near-instant deployment, removing local disks, reducing the storage footprint or a clean image on every reboot, you’ll probably be using it for a long while yet. If your reasons are purely for “single image” management then you could potentially replace PVS in favour of a “traditional” deployment. Would I recommend this? It depends!

I know we’ve been focused on Provisioning Services in this article, but RES Automation Manager will help you with the rest of your infrastructure automation: desktops, laptops, servers; Exchange and Active Directory etc. You may have XenDesktop, Quest vWorkspace or VMware View for your virtual desktops. The same principle applies, and you may even be using PVS in combination with these. Anyway, I don’t need to preach to the converted!

I will say that it should be a strategic decision to deploy RES Automation Manager. Don’t underestimate the amount of time it takes to automate and test. But I guess you already spend a lot of time testing your images?

You can find some video overviews/introductions on RES Automation Manager on Citrix TV and RES Tutorials. If you don’t want to take the time to download, install and configure RES Automation Manager but want to take a quick look, you can always request access to the RES Showcase. Some background and example videos on the Showcase platform can be also found here.

I’ll get off my soapbox now and crawl back to whence I came! Please feel free to comment and I’d love to hear your thoughts. Iain

RES Automation Manager 2012 Global Variables

Unfortunately, this post is a mixture of good and bad news. In my humble opinion, I feel that RES have missed a trick with their implementation of Global Variables in RES Automation Manager (AM) 2012, and here’s why.

In all the furore surrounding the RES AM 2012 release, Global Variables are supposed to herald the completion of multi-tenancy implementations. For example, multiple departments and/or customers can be co-located on the same database and share the platform without any visibility of, or potentially any knowledge of, who else is utilising the infrastructure. If you’re after an introduction to RES AM Global Variables, I suggest you take a look at Rob Aarts’s article on RESguru or watch Grant Tiller’s demonstration on REStutorials.

Resources and Global Variables

It was my assumption (incorrect, as it turns out) that we would be able to use Global Variables with file server resources. In a multi-tenant implementation, I wouldn’t necessarily want all administrators uploading file resources to the database and bloating the tables with BLOBs. When we add files stored on a file share to the RES Automation Manager database, the UNC path is stored along with the entry in the database. This isn’t necessarily a problem, assuming that all RES Automation Manager agents can resolve this path. Unfortunately, in a multi-tenant environment this may not be the case.

Enter Global Variables. Wouldn’t it be a great idea if we could use a Global Variable in the UNC path of a file resource?! As long as we make sure the folder structure is the same for each “customer” site, we could set the Global Variable to the customer’s file server at the Team or, if needed, Agent level. Even within a single organisation, Global Variables would enable us to use local file servers without having to implement DFS-R etc.

Being RES Consultancy Partners, we could also use this process when designing our Building Blocks. For example, we could upload the required resources for a XenApp build to a file server, import the RES Automation Manager Building Blocks and change the Global Variable(s) to point to the customer’s file server instead. No longer would we need to either perform a mass “find and replace” within the Building Block files or upload 5GB of data into a database. Happy days!
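To illustrate what this would have looked like, here’s a sketch under our own assumptions – ^[FileServer] is a made-up Global Variable name, and this is precisely the substitution AM doesn’t perform today:

```python
# Hypothetical illustration of a Global Variable in a resource UNC path:
# one Building Block, with the file server resolved per Team. The ^[...]
# placeholder mirrors the RES AM Global Variable syntax, but the variable
# name, teams and server paths here are entirely made up.

RESOURCE_PATH = r"^[FileServer]\Resources\FoxitReader.msi"

TEAM_FILE_SERVERS = {
    "CustomerA": r"\\custa-fs01\RESAM$",
    "CustomerB": r"\\custb-fs01\RESAM$",
}

def resolve(path: str, team: str) -> str:
    # Substitute the placeholder with the value set at the Team level.
    return path.replace("^[FileServer]", TEAM_FILE_SERVERS[team])

print(resolve(RESOURCE_PATH, "CustomerA"))
# \\custa-fs01\RESAM$\Resources\FoxitReader.msi
```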

As you’ve probably guessed, this doesn’t work. DOH! When we attempt to insert the Global Variable by right-clicking the file path we’re not given the option:

[Screenshot: right-clicking the file path – no option to insert a Global Variable is offered]

Manually entering the Global Variable placeholder, e.g. ^[GlobalVariable], doesn’t work either. There is, however, a workaround.

Resources, Global and Environment Variables

Now that we know we can’t use Global Variables at the resource level, we do know that we can use environment variables. If we just so happen to use an environment variable, and that environment variable just so happens to be set to a Global Variable’s value, it just might work…

Firstly, we need to pick a variable to use; in this example I’ll use ’RESAMRESOURCES’ as it’s unlikely to clash with any other environment variables. We define the Global Variable and set its value to our file server’s share (you can always override this at a Team/Agent level, or when importing Building Blocks, where needed):

[Screenshot: the RESAMRESOURCES Global Variable defined with the file server share as its value]

Next, when adding a file resource we can browse to the target file, then override the UNC path with an environment variable. In this example I’ll use %RESAMRESOURCES% to point to the required file server.

[Screenshot: the file resource’s UNC path overridden with %RESAMRESOURCES%]

All that’s left to do is assign the environment variable before any Module that uses this resource. Fortunately, RES Automation Manager has a task to do just this. In my example I’ve created a job-based environment variable. We could always set this as a persistent machine-based variable via AM too.

[Screenshot: the task creating the job-based RESAMRESOURCES environment variable]

Once we’re done, our completed Module will look a lot like this. Note: the job-based environment variable needs to be set before we execute any task that references the file server resources – in our case, the Unattended Installation of Foxit Reader task.

[Screenshot: the completed Module, with the environment variable set before the Unattended Installation of Foxit Reader task]

When we export our Module as a Building Block we now have a fully portable Module that can be imported into any environment without storing the resource(s) in the database! All we need to do now is use Global Variables to define the credentials used to connect to the file server…
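To recap the workaround as a whole, here’s a minimal sketch of the chain we’ve just built. The names are ours and, in reality, Windows/the AM agent performs the %VAR% expansion; this simply illustrates the flow from Global Variable to job-based environment variable to resource path:

```python
import re

# Sketch of the workaround chain: Global Variable -> job-based environment
# variable -> %VAR% expansion inside the resource's UNC path. The variable
# name and server path are our own examples.

GLOBAL_VARIABLES = {"RESAMRESOURCES": r"\\fileserver01\RESAM$"}

def set_job_environment(env: dict) -> None:
    # The 'set environment variable' task: copy the Global Variable's
    # value into the job-scoped environment.
    env["RESAMRESOURCES"] = GLOBAL_VARIABLES["RESAMRESOURCES"]

def expand(path: str, env: dict) -> str:
    # Expand %VAR% references the way the Windows agent would.
    return re.sub(r"%(\w+)%", lambda m: env.get(m.group(1), m.group(0)), path)

env = {}
set_job_environment(env)
print(expand(r"%RESAMRESOURCES%\FoxitReader\FoxitReader.msi", env))
# \\fileserver01\RESAM$\FoxitReader\FoxitReader.msi
```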

Resources, Global Variables and Credentials

This is where the house of cards falls down around us… We’ve managed to trick RES AM into using file resources with Global Variables. However, as the RES Automation Manager service runs under the Local System account, it has no access to file resources located on file servers. To overcome this issue, we need to embed the credentials with the resources. Again, you would assume that you could use the Credentials type of Global Variable to achieve this.

[Screenshot: the Credentials Global Variable type]

I’ve tried unsuccessfully to get this to work, even by manually specifying the ^[GlobalVariable] placeholder. Perhaps I’m the only one, but what about password changes? If we embed the credentials with the resource, using a Global Variable for them would make perfect sense. Currently, we don’t change the password associated with the RES Automation Manager resources as this requires us to update each individual resource. If they were based on a Global Variable we’d have a simple way to update the password, maintain security and pass an audit with flying colours!

I can only assume that this is either technically difficult to implement or an oversight. As a result, we’re still left having to either do a mass “find and replace” in our Building Block files when implementing RES Automation Manager at customer sites, or upload large binaries into the database. Other than this, I think Global Variables are a brilliant addition, and hopefully they will be coming to RES Workspace Manager too!

Many thanks for reading. Iain