October 26, 2006
Evolutionary Re-Engineering

In my last post I illustrated how an evolutionary approach could be taken to optimising the design of code, provided that the problem is not irreducibly complex. I used the example of package coupling and a metric called the Normalised Distance from the Main Sequence (D') to show how an evolutionary process might work.
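As a reminder of how D' works, here's a minimal sketch in Python. The function names are my own shorthand; the definitions (instability I = Ce / (Ca + Ce), abstractness A, and D' = |A + I - 1|) come from Robert C. Martin's package-design metrics:

```python
def instability(efferent: int, afferent: int) -> float:
    """I = Ce / (Ca + Ce): 0 = maximally stable, 1 = maximally unstable."""
    total = efferent + afferent
    return efferent / total if total else 0.0

def abstractness(abstract_classes: int, total_classes: int) -> float:
    """A = abstract classes / total classes in the package."""
    return abstract_classes / total_classes if total_classes else 0.0

def normalised_distance(a: float, i: float) -> float:
    """D' = |A + I - 1|: distance from the main sequence A + I = 1."""
    return abs(a + i - 1)

# A package with 2 abstract classes out of 10, depended on by 6 packages
# and depending on 2, sits some way off the main sequence:
i = instability(efferent=2, afferent=6)
a = abstractness(2, 10)
print(normalised_distance(a, i))  # roughly 0.55
```

Zero means the package sits on the main sequence; the further from zero, the worse the balance between abstractness and stability.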
And this little example got me thinking: could this approach be applied for real, on real code using everyday programming tools like Eclipse or Visual Studio? And I think the answer could be "yes". It could work something like this:
You are asked to clean up some code. You agree some design quality goals, and select or design some code metrics to measure progress. You then ask your tool - whatever that may be - to come up with a sequence of refactorings that will meet or exceed (or get as close as possible to) your multiple design goals. For each metric - each measure of fitness, if you like - the tool understands what kinds of mutations - in the shape of pre-defined refactorings - will change the value of that metric. For example, it might understand that moving classes between packages can change the value of D'. So it takes the existing code and performs a single refactoring - Move Class. Then it runs the tests. If the tests pass (i.e., behaviour has been preserved), it calculates D'. If D' is better than it was, it keeps the changes it has made to the code. If D' is not improved, it discards the changes and reverts to the last version. It then makes another random refactoring, and the process repeats as many times as it takes to satisfy the design goals, or until it cannot find a better design.
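The loop above is essentially a hill climb, and can be sketched in a few lines of Python. Everything here is a hypothetical stand-in: a real tool would supply the code model, the refactoring catalogue, the test runner and the fitness metric, but the control flow is the point:

```python
import copy
import random

def hill_climb(model, refactorings, tests_pass, fitness,
               max_steps=1000, goal=0.0):
    """Repeatedly apply a random refactoring; keep it only if the tests
    still pass and the fitness metric (e.g. mean D') improves."""
    best = fitness(model)
    for _ in range(max_steps):
        if best <= goal:                        # design goals satisfied
            break
        candidate = copy.deepcopy(model)        # work on a local copy
        random.choice(refactorings)(candidate)  # e.g. Move Class
        if not tests_pass(candidate):           # behaviour must be preserved
            continue                            # discard, revert to last version
        score = fitness(candidate)
        if score < best:                        # lower D' is better
            model, best = candidate, score      # keep the changes
    return model, best

# Toy usage: the "model" is just a list of package D' values, and the only
# "refactoring" nudges one value towards the main sequence.
def nudge(m):
    m[random.randrange(len(m))] = max(0.0, m[random.randrange(len(m))] - 0.1)

random.seed(1)
result, score = hill_climb([0.9, 0.5, 0.3], [nudge],
                           tests_pass=lambda m: True,
                           fitness=lambda m: sum(m))
```

The `continue` and the fresh `deepcopy` on each step do the "revert to the last version" part: a rejected mutation simply never becomes the working model.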
The danger with this approach is that, without human intervention, the resulting code might be difficult to maintain, since nobody was involved in redesigning it. It might be better for the tool to generate a list of refactorings that a human could review, accepting or rejecting the refactoring strategy. The original code would be kept, and the tool would work on a local copy to try out different strategies and come up with the optimum sequence of refactorings.
Could it be done? I'm not 100% sure. But I think there's a very tantalising possibility here. If it can be done, then it could prove to be an immensely valuable technology. At the very least, just imagining how it could work could give us some very useful insights into evolutionary design and also our practical understanding of design principles. The design of the metrics, and the refactorings, would - without human intervention in the re-engineering process - test our understanding of design principles, in much the same way that designing totally autonomous robots tests our understanding of human intelligence.