No Thin Clients Anytime Soon

Google is quickly coming to realize that they're not in the services business at all. All of their various services -- news, maps, mail, etc. -- are just thinly veiled attempts to collect, store and control data. Of course, even search itself falls under this paradigm. Search is, after all, the final form of managing data. And now, with Google Base, Google has made it very clear that they don't intend to be the world's search engine, they intend to be the world's database. In this endeavor Google is unique only in their ambition. Every other "web 2.0" company operates on this same innate desire to control users' data. This is my primary concern about the entire Web 2.0 phenomenon: all of these companies intend to build walled gardens around various important sets of data and then expose tiny bits of the data through strictly controlled, proprietary APIs. -- Google and the Tyranny of Data

Talk of a return to the Thin Client model has been in the air, although with changing technology the meaning of "thin" has changed. A "Thin Client" used to be merely a graphical terminal that ran everything on a remote server; a high-end 486, a decent graphics card, and a bit of custom hardware could have managed it. The more modern conception of a thin client is something running a web browser; call it a 1GHz machine with at least 512MB of RAM, probably backed by flash storage with no hard drive.

I had started a post about why this is still infeasible because of unavoidable network latency and the like, but the fact is that if you are willing to stipulate downloading moderate amounts of code and data up front, you can avoid most of the round-trip latency even now with just Javascript or Flash. So performance and other technical issues really won't be the show-stoppers.
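To make that concrete, here's a minimal sketch of the approach; the /contacts.json endpoint and the record shape are invented for illustration, not any particular service's API. Pay for one round trip up front, then answer every later lookup from an in-memory cache so the user never waits on the network again.

```typescript
// Sketch: avoid per-interaction round trips by fetching a block of data
// once and answering later lookups from an in-memory cache.

interface Contact {
  id: string;
  name: string;
  email: string;
}

const cache = new Map<string, Contact>();

function prefetchContacts(onReady: () => void): void {
  const req = new XMLHttpRequest();
  req.open("GET", "/contacts.json", true); // the one round trip, paid up front
  req.onreadystatechange = () => {
    if (req.readyState === 4 && req.status === 200) {
      const contacts: Contact[] = JSON.parse(req.responseText);
      for (const c of contacts) {
        cache.set(c.id, c);
      }
      onReady();
    }
  };
  req.send();
}

// Later lookups never touch the network, so the UI stays responsive.
function lookupContact(id: string): Contact | undefined {
  return cache.get(id);
}

// Usage: prefetchContacts(() => console.log(lookupContact("42")));
```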

The killer is that the companies won't be able to resist trying to lock you in, and I don't care how awesomely clever Google is, they can't write an entire software ecosystem from top to bottom. They especially can't write it from top to bottom under the constraint that the app often has to be downloaded at the moment I want to use it. I want my data to be able to flow out, and I want to be able to run whatever programs I want, and both of those are going to be very hard for a thin client vendor to allow.

The promise of the Semantic Web is built upon the ability to seamlessly blend multiple sources of data into something where the whole is worth more than the pieces, but the companies who own the pieces are going to want a cut. Taking that cut is going to impose a lot of friction, probably enough to prevent the transaction from happening at all. And you can't even blame the companies, since providing these services isn't free anyhow, what with the programming and the hosting and the bandwidth.

No matter how awesome a company you are, if you won't let my data out to play, you've started off by stripping my data of the majority of its value, and while it may take some time to work itself out, you'll eventually be beaten by people who don't. And the winner is going to look more like a desktop app than a Web 2.0 walled garden, even if it's actually on the web, in that it's going to store my data locally.

Make it easy to back up over the internet, sure. Offer me whatever extensions make sense in a web browser; go for it. Heck, even treat the remote copy as the primary and the local copy as the backup. But give me a full local copy. My new whiz-bang video editor is going to want to see my photos and music. My new productivity app needs to be able to see my calendar data. My nifty new n-th generation weblog system needs to be able to see my IM status information. I don't need to be nickel-and-dimed to death every time I do something like that, nor do I need to be limited to going through a sub-optimal API, assuming the API is even rich enough to give me the access I need.
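For example, assuming nothing more than a made-up ~/Photos directory that some photo service keeps synced as a full local copy, here's a sketch of what that buys you: any other local program can get at the data straight off the disk, with no API keys, no rate limits, and no vendor in the loop.

```typescript
// Sketch of the "full local copy" model: another application reads my data
// directly instead of going through a vendor's remote API.
// The directory layout and file names here are made up for illustration.

import * as fs from "fs";
import * as path from "path";

// Hypothetical local store that a photo service keeps synced as a full copy.
const PHOTO_DIR = path.join(process.env.HOME ?? ".", "Photos");

// Any local app (a video editor, say) can enumerate the data with no
// per-request negotiation and no sub-optimal API in the way.
function listLocalPhotos(): string[] {
  if (!fs.existsSync(PHOTO_DIR)) {
    return [];
  }
  return fs
    .readdirSync(PHOTO_DIR)
    .filter((name) => /\.(jpe?g|png)$/i.test(name))
    .map((name) => path.join(PHOTO_DIR, name));
}

console.log(listLocalPhotos());
```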

Is there a market for the new Thin Clients? Probably. But I think it's smaller than generally predicted by Pundits™, and I question whether it will be profitable. I don't think the value proposition has gotten that much better since WebTV, and WebTV has not exactly displaced the desktop computer. And what do you do when your user wants to edit a video on their Thin Client?

Unless something radical changes in browserland, I think we're going to see a slight swing back to desktop applications that happen to be more aware of the Internet. We seem to be starting to hit some of the fundamental architectural problems with browser technologies like Javascript: even something relatively simple like Slashdot's AJAX comment browsing is starting to hurt my browser. (In particular, the complete lack of threading is really problematic, and I could go on for a while about some other fundamental flaws. The result is that you really can't build a complicated and responsive app very easily. And the browser is completely unprepared for the multicore world, even more so than ancient C.) Google Earth is the future of applications, blending the best of the resources a local computer can bring to bear (like 3D graphics) with the best of networked computing (on-demand loading from gigantic data sets, seamless updates, etc.).
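As a footnote to the threading complaint: here is roughly the contortion today's single-threaded Javascript forces on something like that comment browser. The chunk size and the renderComment callback are my own illustrative stand-ins, not anyone's actual code; the point is that long jobs have to be sliced up and rescheduled by hand just so the UI can breathe.

```typescript
// Sketch of the workaround a threadless browser forces on you: process a
// big list in small chunks, rescheduling with setTimeout between chunks so
// clicks and scrolling still get handled.

function processInChunks<T>(
  items: T[],
  renderComment: (item: T) => void,
  chunkSize = 50
): void {
  let index = 0;

  function doChunk(): void {
    const end = Math.min(index + chunkSize, items.length);
    for (; index < end; index++) {
      renderComment(items[index]); // e.g. build the DOM for one comment
    }
    if (index < items.length) {
      // Yield to the event loop before taking the next slice of work.
      setTimeout(doChunk, 0);
    }
  }

  doChunk();
}

// Usage (hypothetical): processInChunks(comments, addCommentToPage);
```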