February 23, 2007
Teaching Robots Tests Our Real Understanding

In an article I wrote for the now defunct satire site objectmonkey.com called Naughty Robot, I described what we do for a living as the act of teaching robots to do useful tricks. Our customers don't speak the language of the robots, who are very, very dumb indeed and have only a very basic vocabulary relating to very low-level tasks. Our job - in a nutshell - is to translate the customer's demands into robot-speak.
In that sense, what we do for a living could be thought of as manual natural language processing. We take the fuzzy, hand-wavy, touchy-feely stuff the customer asks for and create formal, executable instructions that a computer will know how to perform.
Teaching robots to walk tests our true understanding of walking. Teaching computers to program will test our understanding of software development.
Imagine employing a total idiot - someone so stupid that George W. Bush would have to talk down to them. Imagine you've hired this total moron to build you a house. There's no point in telling him "I'd like something modern, but classical, and big enough for a family of five with two dogs and space for friends to come stay". He has no idea what you're talking about. If you tell him to "put that brick on top of that other brick" or to "dip your paint brush in that pot of paint and apply it to that wall until the wall is completely covered in the paint" then he knows exactly what you mean. Otherwise he just stares blankly back at you.
Well that, folks, is what it's like working with computers. You want your computer to tell you where to get the best ice cream, but your computer really only knows how to manipulate numbers and text. Someone's gonna have to explain to your computer how to find the best ice cream purely in terms of low-level things a computer knows how to do.
And to make matters worse, your computer will do exactly what you tell it to. It can't read between the lines. It can't think "oh, she really means 'go left' here" if the instructions you give it don't explicitly say 'go left'.
It's no wonder computer programming is such an intellectual challenge. It's like explaining to someone how to walk.
To better understand this process - this journey from the creative, touchy-feely right brain to the logical, literal left brain - we could look at how things are progressing in automated natural language processing. I do believe that the acid test of any understanding is whether we can build a machine to do it. I think if we can crack the problem of computers carrying out spoken instructions given in natural, everyday language, then we've cracked a significant chunk of the software development process.
Of course, what it probably doesn't do is answer the question of organising our code. We know what instructions might be needed to complete a task, but where should that code go? What classes should it be in? What packages should those classes be in? How should these organisational units - components, if you like - interact?
Again, here I think we have the beginnings of an understanding of how code should be organised. Design principles give us a set of guidelines and heuristics from which these design decisions could be made. Just as it is with automated natural language processing, perhaps one day the process of deciding how the code should be organised will be automated, too. And that would surely be the acid test of how well we really understand software design.
In an old post, I blogged about the possibility of automating the process of software evolution. We would select a design quality metric - or a suite of metrics - that represents our design goals. We would create some working code - code that does what we want (which we know because it passes the tests we agreed with the customer). Then we instruct the computer to apply random refactorings to the code. After each refactoring it measures design quality using the metrics we selected. If quality goes up, it retains the change. If not, it reverts to the previous version. (With each refactoring, it would, of course, run the tests, too!)
This refactorbot, I propose, would be a test of our understanding of design principles. It will do exactly what we tell it to do. The challenge will be to explain our fuzzy, hand-wavy, touchy-feely understanding of good software design in words a computer can understand.
And so we go full circle :-)