Back in the computing stone age, there used to be something called “time-sharing”. Before the PC, computer neanderthals used dumb terminals to access computers where all the computing was done; the terminals were teletype machines used to enter commands and print out results – no local storage or computing – nada.
OK – not quite but it sure feels like we’re coming full circle:
Google CEO Eric Schmidt:
“It starts with the premise that the data services and architecture should be on servers. We call it cloud computing – they should be in a ‘cloud’ somewhere. And that if you have the right kind of browser or the right kind of access, it doesn’t matter whether you have a PC or a Mac or a mobile phone or a BlackBerry or what have you – or new devices still to be developed – you can get access to the cloud.”
The hardware required to access the “cloud” is pretty simple – basically a stripped-down laptop – you don’t need much when all the power is in the network.
All this is not lost on Microsoft, which will make an intensive effort to be the cloud of choice. However, others have just as much interest in the cloud and are jumping into the fray with cloud products – one comparison on TechCrunch (“A Comparison of Live Hotmail, Gmail and Yahoo Mail”) captures some of the main combatants.
“These new consolidated online services … are widely said to be Microsoft’s answer to compete against fast-growing competitors such as Google. The majority of these services are web apps and this type of technology is thought by many to be the future of computing because the services and user data are available anywhere with web access, without installing an application.”
What’s interesting to me about all this is how these trends impact PC hardware – it seems to me they de-emphasize the PC horsepower race as we trend back to the “dumb terminal – smart network” model. While I’m not suggesting that the need for high-horsepower local computing is a thing of the past, it does suggest that the PC market will see more stripped-down, low-cost laptop-like devices built to access web apps.
It also empowers more Linux-based solutions – for example, HP recently bought a thin-client desktop company called Neoware:
“…as part of HP’s strategy to expand in growth markets and further its leadership in personal computing. HP made a particular point of stating that acquiring Neoware is intended to accelerate the growth of HP’s thin client business by boosting its Linux software, client virtualization and customization capabilities…”
All this suggests to me that computing choices will be much more robust and more widespread than currently, as the cost of computing trends even lower. It also suggests that web-based apps are rapidly emerging as credible alternatives to locally stored programs. Microsoft faces increased competition from deep pocket rivals, although Windows OS dominance is not in any jeopardy.
I enjoyed your article on Cloud and Thin Client computing. It really is an interesting time for what you can do in this model, and though at first glance the current environment sounds all too similar to the Mainframe dinosaurs which predate my existence, there are some things happening that make this pretty exciting.
Say you have 20 CS reps who all need the same OS image and line-of-business applications to do their jobs. Rather than spend 12 grand buying them all desktops, spend 10 grand: set a thin client on each desk and buy great hardware to run a VMware cloud on the backend with 20 identical desktop images and capacity to expand further.
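The arithmetic behind that comparison can be sketched in a few lines. The per-unit prices below are my own illustrative assumptions (the original only gives the $12K and $10K totals), so treat them as placeholders, not quotes:

```python
# Hypothetical cost comparison for the 20-rep scenario above.
# Unit prices are assumed for illustration; only the totals come from the text.
NUM_REPS = 20

DESKTOP_UNIT_COST = 600        # assumed ~$600 per full desktop
THIN_CLIENT_UNIT_COST = 200    # assumed ~$200 per thin client
SERVER_COST = 6_000            # assumed VMware host hardware on the backend

desktop_total = NUM_REPS * DESKTOP_UNIT_COST
thin_total = NUM_REPS * THIN_CLIENT_UNIT_COST + SERVER_COST

print(f"Desktops:     ${desktop_total:,}")                  # $12,000
print(f"Thin clients: ${thin_total:,}")                     # $10,000
print(f"Savings:      ${desktop_total - thin_total:,}")     # $2,000
```

The point isn’t the exact numbers – it’s that the backend server is a one-time cost you amortize across every seat, which is what makes the thin-client side of the ledger scale better as headcount grows.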
The environment allows flexibility down the road – if more CS reps need to be hired, there’s no waiting for procurement to get new machines or for applications to be configured on new workstations – just add another VM into the cloud and put another thin client on the desk. Each thin client would have its own remote VMware node to log in to. Keep an extra thin client or two on hand and your uptime will be great in case of hardware failure. Systems management, centralized on the server, becomes easier by leaps and bounds.
VMware ESX server does this cool thing called memory sharing, and it’s described pretty clearly by the image below:
What makes this so cool is that it can drastically reduce the memory demands on your cloud. If you have 20 identical workstations running the same applications in your VMware cloud, a lot of the items the VMs need from memory are going to be identical. So rather than holding the same resource in machine memory 20 separate times, once per VM, you save the resource to memory once and the ESX server is smart enough to direct each VM node to the same instance in the cloud’s machine memory, as pictured by “Item B”.
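The idea above can be illustrated with a toy model. This is not VMware code – just a sketch of the deduplication principle, where each “page” is a string and VMs with identical content end up pointing at one shared copy:

```python
# Toy model of ESX-style memory sharing: identical pages across VMs
# are stored once in machine memory and referenced by every VM.

def page_counts(vm_pages):
    """vm_pages: list of per-VM page-content lists.
    Returns (naive, shared): pages needed without and with sharing."""
    naive = sum(len(pages) for pages in vm_pages)            # one copy per VM
    shared = len({p for pages in vm_pages for p in pages})   # one copy per unique page
    return naive, shared

# 20 near-identical VMs: OS and app pages are the same everywhere,
# only a per-user data page differs (hypothetical page names).
vms = [["os-kernel", "system-dlls", "office-app", f"user-data-{i}"]
       for i in range(20)]

naive, shared = page_counts(vms)
print(naive, shared)  # 80 23 -> 3 common pages stored once, plus 20 unique ones
```

With sharing, the 60 duplicated OS/application pages collapse to 3, and only the genuinely unique per-user pages still cost a copy each – which is why a cloud of identical desktop images benefits so much.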
I haven’t worked in such an environment, or even talked to VMware about it yet… but the potential is there and from a support side, it sounds like a definite step in the right direction to get away from babysitting desktop users.