Thursday, February 7, 2008

Thin Client Computing


A recent article in CIO magazine got me thinking about this subject.  It's a subject that keeps resurfacing and one that, honestly, from a technology management position, somewhat baffles me....

By thin client computing (I may refer to it as TCC), I'm referring to a model where the main computing and the desktop image are hosted on a server, and users have something only slightly more elaborate than a terminal that connects to that server over the network and serves as their primary point of interaction.  Documents are typically accessed over the network, and in some instances they can be accessed via removable media connected to the client terminal.

There are countless applications for thin client desktop computing, particularly where the applications are known and standardized, local peripherals aren't necessarily required, and devices can be deployed in quantity.  The notion first started gaining momentum around the dot-com era of the late 1990s, but the growth was stalled by rapidly dropping desktop PC prices.  It wasn't long before the prices of full-blown PCs were close to the price of the terminal devices, and from an immediate cost perspective the PCs seemed like a much better value.  Of course, that view doesn't consider support and maintenance costs or the life expectancy of the equipment.

Another factor that I've seldom seen argued in favor of thin client computing is convenience.  If I want to access my system/desktop, I can do so from any computer or terminal on the network or from home (assuming remote access is enabled), and I have the exact same desktop everywhere I go.  Icons are in the same place, the mail profile is always the same, everything!  It is what I consider utility computing.

There are drawbacks and other considerations when it comes to running thin client computing.  Even in this day and age, many software vendors have applications that can't deal with more than one instance running simultaneously, and several don't have licensing models for thin client computing.  Some software requires tuning of permissions because it may need root-level write access to a directory in order to run.  Hardware considerations include whether to allow remote printing and the redirection (reverse mapping) of hard drives and/or local storage devices such as portable hard drives or USB thumb drives.  These decisions should involve the customer, but they are decisions that should be made for a full desktop computer as well.
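
To make the printing and drive-mapping decisions concrete, here is a rough sketch of the client-side switches as they appear in a standard Remote Desktop (.rdp) connection file; the server name is a made-up placeholder and the exact option names can vary by client version, so treat it as illustrative only:

    full address:s:ts01.example.edu
    redirectprinters:i:1
    redirectdrives:i:0
    redirectclipboard:i:1

In that sketch, redirectprinters:i:1 maps the user's local printers into the session, while redirectdrives:i:0 keeps local hard drives and USB storage out of it.  The same choices can also be enforced on the server side (in the connection's properties or through Group Policy) so clients can't simply override them.
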
There are also times when it's not the right model, such as when you need serious horsepower to run demanding applications.  For instance, Adobe Acrobat is fine on a terminal server, but Photoshop probably isn't in an enterprise environment.  If users are seldom on the network and do lots of offline work, it's probably not the right fit either.

When I worked for a university, I was met with considerable apprehension for several months when discussing bringing one up.  We had some equipment available, so I put one up myself as a proof of concept.  I made sure the core applications (Office, Acrobat, Firefox, etc.) were working correctly and that the appropriate port was opened on the firewall (more on that later).  I put it up, shared the connection information with a few people, and let them start using it.  Within a week or two I started getting requests for others to have access, and soon after, for more software.  After running with the base system for a few months, we saw that it was rapidly becoming the favored way to connect remotely (as opposed to using OWA for mail and ferrying files around).  In just a few months, with nothing more than word of mouth, we had over 50 users accessing the server on a regular basis, so we proceeded with a formal project and a heavy-duty server.  Some departments ran all of their applications through the terminal server, and their desktop PCs were nothing more than devices for playing CDs.

One of the other neat but often overlooked features of running Microsoft's Terminal Services is that, by default, it encrypts the login information.  Through a couple of Group Policy settings in Active Directory you can require that all session data be encrypted as well, which is what we did.  So by simply opening one TCP port (the default is 3389), you can fairly safely expose it to the world, and anyone with a Mac, Windows, or Linux computer can reach it with an RDP client.  Running Microsoft's Terminal Services at a public university with over 30 Mbps of WAN connectivity, we never suffered any worms, malicious attacks, or the like on this system.  Users loved it because they could get to all of their network files from any location with network connectivity (which also taught them to save things to their network drives instead of local drives).  We found that its performance on a slow connection, even dial-up, was better than using Exchange's OWA interface!
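
For anyone who wants to see what that setup amounts to, here is a rough sketch from the Windows Server 2003 era; the Group Policy path, registry value, and netsh command below are the standard ones as I recall them, but verify them against your own environment rather than taking this as gospel:

    Group Policy:  Computer Configuration > Administrative Templates >
                   Windows Components > Terminal Services >
                   Encryption and Security > "Set client connection encryption level" = High

    Registry equivalent (on the terminal server):
      HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp
      MinEncryptionLevel (DWORD) = 3

    If the built-in Windows Firewall is also running on the server:
      netsh firewall add portopening TCP 3389 "Remote Desktop"

Nothing special is needed on the client side; the stock Remote Desktop Connection client on Windows and the Mac, or rdesktop on Linux, negotiates the encryption on its own.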

I've also deployed TCC in a couple of mid-sized organizations that were reluctant to upgrade PCs (though the cost of the service I'd performed over the years fixing old computers would likely have bought several new ones).  With TCC, you eliminate the reliance on the local computer.

So, getting back to the introduction, why the reluctance, even among technology people, to deploy TCC?  Maybe it's outside their comfort level, or perhaps there is an inherent fear for job security; I'm not sure.  For a resource-limited organization, or one where identical desktops and remote access are desirable, it's hard to beat.  I can see practical application in public-facing labs and terminals, administrative computing, and, if for no other purpose, secure remote access.
