I just watched this interesting interview with Hugo Barra, director of product management at Google (G), talking about the convergence between mobile net devices and cloud computing. He mainly answers questions about G's plans for the next 2-5 years, but a couple of long-term ideas seep through.

First, they are thinking in terms of sensors plus massively redundant cloud data centers, and they treat the two as parts of a constant feedback loop for which low latency is the key. In other words, your phone's camera and microphone talk directly to the G data-cloud with a latency of under 1 second – whatever you film with your camera, you can voice-recall on any device within 1 second flat. The implications are huge, because G is effectively eliminating the need for local data storage.

Second, to get there, they are rolling out real-time voice search by the end of next year. Real-time voice search lets you query the cloud in, well, under 1 second.

Third, they are thinking of this whole process as 'computer vision' – a naming tactic which might seem like mere semantics, but nevertheless reveals a lot. It reveals that G sees stationary computers as blind, that for them mobile computers are first and foremost sensors, and that sensors start truly seeing only when there is low-latency feedback between them and the cloud. How so? The key, of course, is in the content: once storage, processing power and speed are taken care of by the cloud, the clients – that is, us – start operating at a meta-level of content which is quite hard to even fully conceptualize at the moment (Barra admits he has no idea where this will go in 5 years). The possibilities are orders of magnitude beyond what we are currently doing with computers and the net.
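To make that feedback loop concrete, here is a toy sketch of the capture-then-voice-recall cycle – purely my own illustration, not anything Barra describes, and every name in it is hypothetical. A real system would replace the in-memory dictionary with the actual data-cloud; the point is the shape of the loop and the 1-second budget it has to fit inside.

```python
import time

# A toy, in-memory stand-in for the cloud index -- purely illustrative,
# not a real Google API; every name here is hypothetical.
CLOUD_INDEX: dict[str, list[str]] = {}

STOPWORDS = {"show", "me", "the", "a", "my"}

def upload_capture(device_id: str, tags: list[str]) -> None:
    """Phone side: push a capture's metadata straight to the cloud,
    so no durable copy has to live on the device."""
    CLOUD_INDEX.setdefault(device_id, []).extend(tags)

def voice_query(spoken_words: str) -> list[str]:
    """Any device: recall content by voice against the shared cloud index."""
    terms = set(spoken_words.lower().split()) - STOPWORDS
    return [tag
            for tags in CLOUD_INDEX.values()
            for tag in tags
            if terms & set(tag.split())]

# Film something on the phone...
upload_capture("my-phone", ["sunset over the harbour", "kids at the park"])

# ...then recall it by voice from a different device, timing the round trip.
start = time.perf_counter()
hits = voice_query("show me the sunset")
print(hits)                                   # ['sunset over the harbour']
print(f"{time.perf_counter() - start:.4f}s")  # only works if this stays < 1s
```

The sketch trivializes everything except the one thing that matters here: capture and recall happen on different devices with nothing stored locally, and the whole arrangement only makes sense if the round trip stays under that 1-second threshold.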
A related video, though with a more visionary perspective, is this talk by Kevin Kelly on the next 5000 days of the net. I show this to all my media students, though I don’t think any of them truly grasp what all-in-the-cloud implies. The internet of things. More on this tomorrow.