February 12, 2007


Computing in 2047

Yesterday's announcement from chip maker Intel, who have created a single processor sporting a whopping 80 processing cores and capable of a mind-boggling 1 trillion floating point operations a second (that's about 999,999,999,999.9 more than me), gives us reason to believe that Moore's Law will hold for a good few years yet.

At dinner with family last week, I entertained my nephew with a tall tale about how his mobile phone contains more computing power than all the computers in the world put together 30 years ago. If I do the maths, that's obviously not true. But it's kind of in the ballpark. It would almost certainly be true if we went back 40 years.

And Intel's announcement signals the very real possibility that, in another 40-50 years' time - probably less, judging by the current pace of development - I may be able to hold all of today's computing power in the palm of my hand. (Although the ubiquity of today's computers makes that a much grander claim).

That is to say, computers will be millions of times more powerful than they are today. (Here's a back-of-an-envelope calculation if you don't believe me: if processing power doubles every 18 months, then it will double 26 times in 40 years, so computers will be 2 to the power of 26 times faster in 2047 - about 60 million times faster!)
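(For anyone who wants to check my sums, here's the same back-of-an-envelope calculation as a few lines of Python. The 2 GHz starting point is just an assumed typical laptop of today, and clock speed is only a crude stand-in for processing power - and note that 2 to the power of 26 is nearer 67 million, which I'm rounding down to "about 60 million".)

```python
# A rough back-of-an-envelope sketch of the Moore's Law projection above.
# Assumption (mine, for illustration): a typical laptop today runs at about 2 GHz.

YEARS_AHEAD = 40
DOUBLING_PERIOD_YEARS = 1.5      # "doubles every 18 months"
TODAYS_CLOCK_GHZ = 2.0           # assumed typical 2007 laptop clock speed

doublings = int(YEARS_AHEAD / DOUBLING_PERIOD_YEARS)   # 26 full doublings
speedup = 2 ** doublings                                # 67,108,864 - "about 60 million"

projected_ghz = TODAYS_CLOCK_GHZ * speedup
projected_phz = projected_ghz / 1_000_000               # 1 PHz = 1,000,000 GHz
# Rounding the factor down to 60 million gives the 120 PHz figure quoted below.

print(f"Doublings in {YEARS_AHEAD} years: {doublings}")
print(f"Speed-up factor: {speedup:,}")
print(f"Projected clock: {projected_ghz:,.0f} GHz (~{projected_phz:.0f} PHz)")
```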

Imagine a laptop with a 120 petahertz processor (about 120,000,000 GHz!): what would we do with all that processing power? Graphics and gaming are one area that can always benefit from more FLOPS, since reality has seemingly endless levels of detail that can be simulated to create ever more realistic artificial realities. I have little doubt that the virtual 3D realities of 2047 will be - to look at, at least - totally convincing. Our eyes, and our brains, won't be able to tell the difference just by looking.

But what about applications? What will I be using my 120 PHz PC for at the office? Well, there's a strong argument that most of the processing power on today's business computers is wasted. Word processing simply doesn't need it. Neither do spreadsheets. These were the killer office applications of the 1970s and '80s. Tools like PowerPoint, you could also argue, are largely unnecessary. Most presentations given using PowerPoint could be just as effectively communicated with hand-outs and hand-drawn flip charts. Indeed, you could argue that if we didn't have PowerPoint there would be a lot fewer presentations, and then we could all get on with our jobs. (Assuming, of course, that your job isn't to create PowerPoint presentations.)

Office computing has been stuck in a rut for a couple of decades. And I strongly suspect that in 2047 we'll still be writing reports nobody's going to read and creating presentations nobody cares about. I know that I haven't used a new feature in Word since 1995, for example. And yet, I strangely feel the need to upgrade my business laptop every couple of years...

And what about mobile computing? The average smart phone is almost as powerful as a desktop computer these days - well, it's in a similar ballpark, at least. I'm ashamed to admit - because I've consulted for mobile phone software companies - that I actually, genuinely have no use for a smart phone. I want to make and receive telephone calls. I only send texts to people who send me texts. If nobody sent me texts, I wouldn't bother. I certainly don't want or need mobile email or Internet access - yet... And I find the handsets too awkward and fiddly to use for any serious computing needs I might have.

In 2047, smart phones will be 60,000,000 times more powerful than they are now. Will we still make phone calls in 2047? Or will I put my phone on a desk and have it project a 3D virtual workspace around me, so I can collaborate with people from around the world as if we're all in the same room? I must admit, I do rather like the idea that the need for physical co-location to do "knowledge work" like software development will be well and truly a thing of the past in 2047. This may be a necessity, as travel may be about to become prohibitively expensive.

And what about the promise - the perennial promise - from AI researchers that a computer as intelligent as a human being is "just around the corner"? Firstly, I'm very skeptical of these claims. I think the AI community is barking up the wrong tree, for a start. I don't believe our minds are the products of classical computers that use 1's and 0's and ANDs and XORs to do their thinking. I suspect our minds are the products of quantum computation: there does seem to be evidence for this, at least. And in 2047 I suspect Quantum AI will still be in its infancy. Even with 60 million times more processing power, computers will probably still be - compared to us, and monkeys and cats and dogs - relatively dumb. Certainly in the last 40 years, advances in AI have been - let's be kind - "modest", to say the least.

Which means that in 2047 we will have 60 million times the computing power we have today, but computers will probably still need to have everything explained to them in explicit logical languages like C# and Java. The pace of language development is very slow, as we can see when we look back over the decades and realise that we're still bumbling along with crappy 3GLs like Java and C++. "Programming with pictures" - which should be no big leap - still hasn't really happened, even though it's a tiny molehill compared to the mountainous challenge of creating computers that can understand natural languages as well as humans do. UML is my proof that we will still be doing pretty much what we're doing now in 2047, just as today we do pretty much what Dijkstra, Hoare and Knuth were doing back in 1967.
Posted on February 12, 2007