March 22, 2017
Digital Strategy: Why Are Software Developers Excluded From The Conversation?

I'm running another Twitter poll - yes, me and my polls! - spurred on by a conversation I had with a non-technical digital expert (not sure how you can be a non-technical digital expert, but bear with me) in which I asked if there were any software developers attending the government's TechNation report launch today.
As a software developer, do you feel your work and status as a professional is valued?— Codemanship (@codemanship) March 22, 2017
This has been a running bugbear for me; the apparent unwillingness to include the people who create the tech in discussions about creating tech. Imagine a 'HealthNation' event where doctors weren't invited...
The response spoke volumes about how our profession is viewed by the managers, marketers, recruiters, politicians and other folk sticking their oars into our industry. Basically, strategy is none of our beeswax. We just write the code. Other geniuses come up with the ideas and make all the important stuff happen.
I don't think we're alone in this predicament. Teachers tell me that there are many events about education strategy where you'll struggle to find an actual educator. Doctors tell me likewise that they're often excluded from the discussion about healthcare. Professionals are just there to do as we're told, it seems. Often by people with no in-depth understanding of what it is they're telling us to do.
This particular person asserted that software development isn't really that important to the economy. This flies in the face of newspaper story after newspaper story about just how critical it is to the functioning of our modern economy. The very same people who see no reason to include developers in the debate also tell us that in decades to come, all businesses will be digital businesses.
So which is it? Is digital technology - that's computers and software to you and me - now so central to modern life that our profession is as critical today as masons were in medieval times? Or can any fool 'code' software, and what's really needed is better PR people?
What's clear is that highly skilled professions like software development - and teaching, and medicine - have had their status undermined to the point where nobody cares what the practitioner thinks any more. Except other practitioners, perhaps.
This runs hand-in-hand with a gradual erosion in real terms of earnings. A developer today, on average, earns 25% less than she did 10 years ago. This trend goes against the grain of the "skills crisis" narrative, and more accurately portrays the real picture, where developers are seen as cogs in the machine.
If we're not careful, and this trend continues, software development could become such a low-status profession in the UK that the smartest people will avoid it. This, coupled with a likely brain drain after Brexit, will have the reverse effect to what initiatives like TechNation are aiming for. Despite managers and marketers seeing no great value in software developers, where the rubber meets the road, and some software has to be created, we need good ones. Banks need them. Supermarkets need them. The NHS needs them. The Ministry of Defence needs them. Society needs them.
But I'm not sure they're going the right way about getting them. Certainly, ignoring the 400,000 people in the UK who currently do it doesn't send the best message to smart young people who might be considering it. They could be more attracted to working in tech marketing, or for a VC firm, or in the Dept of Trade & Industry formulating tech policy.
But if we lack the people to actually make the technology work, what will they be marketing, funding and formulating policy about? They'd be a record industry without any music.
Now, that's not to say that marketing and funding and government policy don't matter. It's all important. But if it's all important, why are the people who really make the technology excluded from the conversation?
March 12, 2017
Why Products Don't Matter To Tech Investors, But Hype Does
They say 'you get what you measure', and in the world of tech start-ups this is especially true.
Having lived and worked through several tech bubbles, I've seen how the promise of winning mind-bogglingly big ("boggly"?) has totally skewed business models beyond the point where many tech start-ups are even recognisable as businesses.
The old notion of a business - where the aim is to make more money than you spend - seems hopelessly old-fashioned now. Get with it Granddad! It's not about profit any more, it's about share prices and exit strategies.
If your tech start-up looks valuable, then it's valuable. It matters not a jot that you might be losing money hand-over-fist. As long as investors believe it's worth a gazillion dollars, then it's worth a gazillion dollars.
For those of us who create the technologies that these start-ups are built on, this translates into a very distorted idea of what constitutes a 'successful product'. I have, alas, heard many founders state quite openly that it doesn't much matter if the technology really works, or if the underlying code has any long-term future. What matters is that, when it comes time to sell, someone buys it.
Ethically, this is very problematic. It's like a property developer saying 'it doesn't matter if the house is still standing 10 years from now, just as long as it's standing when someone buys it'.
Incredibly, very few tech start-up investors do any technical due diligence at all. Again, I suspect this is because you get what you measure. They're not buying it because it works, or because the technology has a future. They're buying it so that they, too, can sell it on for an inflated price. It could be a brick with "tech" painted on it, and - as long as the valuation keeps going up - they'll buy it.
You see, investors want returns. And that's pretty much all they want. They want returns from tech businesses. They don't care if the tech works, or if the business is viable in the long term. Just like they want returns from luxury waterside apartments that no ordinary person could afford to live in. They don't care if nobody lives in them, just as long as they keep going up in value.
Just like the property bubble, the tech bubble runs chiefly on hype. A £1,000,000 2-bed apartment is worth £1,000,000 because an estate agent or someone in a glossy colour supplement said so. An expectation of value is set. Similarly, Facebook is worth twelvety gazillions because the financial papers say it is. That is the expectation for tech start-ups. They can be valued at billions of dollars with a turnover of peanuts by comparison.
What makes tech businesses worth so much is their potential for expansion. Expanding your chain of coffeeshops requires you to overcome real physical barriers like finding premises and staff and buying more coffee machines and so on. Cost-wise, for a software business, there's not a huge difference in outlay between 1,000 users and 1,000,000 users. If only Costa could host their shops on the cloud!
Tech start-ups are just lottery tickets, where the jackpot can be truly astronomical. And what matters most in this equation is that the technology sounds good.
Exhibit A: the recent Artificial Intelligence bubble. News journalists have latched on to this idea - carefully crafted and lovingly packaged by tech industry PR - that real AI, the kind we saw in the movies, is just around the corner. What, again?
The true promise of AI has been 30 years away for the last 50 years. In reality, chatbot sales support sucks, websites designed by AI are crummy, and recommendations made by ad targeting engines are as asinine as they always were ("we see you bought a lawnmower; would you like to buy another lawnmower?") And as for self-driving cars...
But it doesn't matter that AI isn't as far advanced as they'd have us believe, just as long as we do believe it - for long enough to dip our hands in our pockets and buy some shares in the companies hyping it. And we'll do it, because there's a good chance those shares will go up in value and we'll be able to sell them for a profit. This is why few people are willing to say that the Emperor has no clothes. Too many own shares in the company that manufactured those clothes.
As a software developer, though, I struggle to find job satisfaction or to find any professional pride in making invisible clothes for silly Emperors. Yes, there are those of us - sadly, too plentiful - who'll apparently do anything as long as the money's right. But I'd like to think that's a different profession to the one I'm in.
July 16, 2016
Taking Agile To The Next Level

As with all of my Codemanship training workshops, there's a little twist in the tail of the Agile Software Development course.
Teams learn all about the Agile principles, and the Agile manifesto, Extreme Programming, and Scrum, as you'd expect from an Agile Software Development course.
But they also learn why all of that ain't worth a hill of beans in reality. The problem with Agile, in its most popular incarnations, is that teams iterate towards the wrong thing.
Software doesn't exist in a vacuum, but XP, Scrum, Lean and so forth barely pay lip-service to that fact. What's missing from the manifesto, and from the implementations of the manifesto, is end goals.
In his 1989 book, Principles of Software Engineering Management, Tom Gilb introduced us to the notion of an evolutionary approach to development that iterates towards testable goals.
It had a big influence on me, and I devoted a lot of time in the late 90s and early 00s to exploring and refining these ideas.
On the course, I ask teams to define their goals last, after they've designed and started building a solution. Invariably, more than 50% of them discover they're building the wrong thing.
Going beyond the essential idea that software should have testable goals - based on my own experiences trying to do that - I soon learned that not all goals are created equal. It became very clear that, when it comes to designing goals and ways of testing them (measures), we need to be careful what we wish for.
Today, the state of the art in this area - still relatively unexplored in our industry - is a rather naïve and one-dimensional view of defining goals and associated tests.
Typically, goals are just financial, and a wider set of perspectives isn't taken into account (e.g., we can reduce the cost of manufacture, but will that impact product quality or customer satisfaction?)
Typically, goals are not caveated by obligations on the stakeholder that benefits (e.g., the solution should reduce the cost of sales, but only if every sales person gets adequate training in the software).
Typically, the tests ask the wrong questions (e.g., the airline who measured speed of baggage handling without noticing the increase in lost or damaged property and insurance claims, and then mandated that every baggage handling team at every airport copy how the original team hit their targets.)
Now, don't get me wrong: a development team with testable goals is a big improvement on the vast majority of teams who still work without any goals other than "build this".
But that's just a foundation on which we have to build. Setting the wrong goals, implemented unrealistically and tested misleadingly, can do just as much damage as having no goals at all. Ask any developer who's worked under a regime of management metrics.
Going beyond Gilb's books, I explored the current thinking from business management on goals and measures.
First, we need to identify goals from multiple stakeholder perspectives. It's not just what the bean counters care about. How often have we seen companies ruined by an exclusive focus on financial numbers, at the expense of retaining the best employees, keeping customers happy, being kind to the environment, and so on? We're really bad at considering wider perspectives. The law of unintended consequences can be greatly magnified by the unparalleled scalability of software, and there may always be side-effects. But we could at least try to envisage some of the most obvious ones.
Conceptual tools like the Balanced Scorecard and the Performance Prism can help us to do this.
Back in the early 00s, I worked with people like Mike Bourne, Professor of Business Performance Innovation, to explore how these ideas could be applied to software development. The results were highly compatible, but still - more than a decade later - before their time, evidently.
If business goals are post-conditions, then we - above all others - should recognise that many of them will have pre-conditions that constrain the situations in which our strategy or solution will work. A distributed patient record solution for hospitals cannot reduce treatment errors (e.g., giving penicillin to an unconscious patient who is allergic) if the computers they're using can't run our software.
For every goal, we must consider "when would this not be possible?" and clearly caveat for that. Otherwise we can easily end up with unworkable solutions.
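The pre-condition/post-condition framing above can be sketched in code. This is a minimal, hypothetical Python illustration - the names (`Hospital`, `goal_is_achievable`, `goal_is_met`) are invented for this example, not from any real system:

```python
# A hedged sketch: a business goal as a post-condition, guarded by a
# pre-condition that caveats when the goal even applies.
# All names here are invented for illustration.

class Hospital:
    def __init__(self, computers_can_run_software, treatment_error_rate):
        self.computers_can_run_software = computers_can_run_software
        self.treatment_error_rate = treatment_error_rate

def goal_is_achievable(hospital):
    """Pre-condition: the goal only applies where the solution can run."""
    return hospital.computers_can_run_software

def goal_is_met(hospital, baseline_error_rate):
    """Post-condition: treatment errors reduced against a baseline."""
    return hospital.treatment_error_rate < baseline_error_rate

ward = Hospital(computers_can_run_software=False, treatment_error_rate=0.02)
# If the pre-condition fails, the goal doesn't apply here - which is a
# different situation from the goal having been tried and failed.
applicable = goal_is_achievable(ward)
```

The useful distinction the sketch makes is between "goal not met" and "goal not applicable": conflating the two is exactly how unworkable solutions get declared failures (or, worse, successes).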
Just as with software or system acceptance tests, to completely clarify what is meant by a goal (e.g., improve customer satisfaction) we need to use examples, which can be worked into executable performance tests. Precise English (or French or Chinese or etc) just isn't precise enough.
Let's run with my example of "improve customer satisfaction"; what does that mean, exactly? How can we know that customer satisfaction has improved?
Imagine we're running a chain of restaurants. Perhaps we could ask customers to leave reviews, and grade their dining experience out of 10, with 1 being "very poor" and 10 being "perfect".
Such things exist, of course. Diners can go online and leave reviews for restaurants they've eaten at. As can the people who own the restaurant. As can online "reputation management" firms who employ armies of paid reviewers to make sure you get a great average rating. So you could be a very highly rated restaurant with very low customer satisfaction, and the illusion of meeting your goal is thus created.
Relying solely on online reviews could actively hurt your business if they invited a false sense of achievement. Why would service improve if it's already "great"?
If diners were really genuinely satisfied, what would the real signs be? They'd come back. Often. They'd recommend you to friends and family. They'd leave good tips. They'd eat all the food you served them.
What has all this got to do with software? Let's imagine a tech example: an online new music discovery platform. The goal is to amplify good bands posting good music, giving them more exposure. Let's call it "Soundclown", just for jolly.
On Soundclown, listeners can give tracks a thumbs-up if they like them, and a thumbs-down if they really don't. Tracks with more Likes get promoted higher in the site's "billboards" for each genre of music.
But here's the question: just because a track gets more Likes, does that mean more listeners really liked it? Not necessarily. As a user of many such sites, I see how mechanisms for users interacting with music and musicians get "gamed" for various purposes.
First and foremost, most sites identify to the musician who the listener that Liked their track is. This becomes a conduit for unsolicited advertising. Your track may have a lot of Likes, but that could just be because a lot of users would like to sell you promotional services. In many cases, it's evident that they haven't even listened to the track that they're Liking (when you notice it has more Likes than it's had plays.)
If I were designing Soundclown, I'd want to be sure that the music being promoted was genuinely liked. So I might measure how many times a listener plays the track all the way through, for example. The musical equivalent of "but did they eat it all?" and "did they come back for more?"
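That "played all the way through" measure might look something like this minimal Python sketch (the `Play` record and `genuine_engagement` function are invented names for illustration, not a real API):

```python
# A hedged sketch of "did they listen to the end?" as a measure of
# genuine liking, instead of counting easily-gamed Likes.
from dataclasses import dataclass

@dataclass
class Play:
    seconds_listened: float
    track_length: float

def completed(play, threshold=0.9):
    """Count a play as a 'full listen' if most of the track was heard."""
    return play.seconds_listened >= threshold * play.track_length

def genuine_engagement(plays):
    """Fraction of plays that were listened (nearly) to the end."""
    if not plays:
        return 0.0
    return sum(completed(p) for p in plays) / len(plays)

plays = [Play(180, 200), Play(10, 200), Play(200, 200)]
score = genuine_engagement(plays)  # 2 of the 3 plays were full listens
```

A track with a thousand Likes but a near-zero engagement score would be exactly the "more Likes than plays" red flag described above.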
We might also ask for a bit more proof than clicking a thumbs-up icon. One website I use keeps a list of my "fans", but are they really fanatical about my music? Judging by the tumbleweed when I alert my "fans" to new music, the answer is evidently "nope". Again, we could learn from the restaurant, and allow listeners to "tip" artists, conveying some kind of reward that costs the listener something somehow.
Finally, we might consider removing all unintended backwards marketing channels. Liking my music shouldn't be an opportunity for you to try and sell me something. That very much lies at the heart of the corruption of most social networks. "I'm really interested in you, Now buy my stuff!"
This is Design. So, ITERATE!
The lesson I learned early is that, no matter how smart we think we've been about setting goals and defining tests, we always need to revisit them - probably many times. This is a design process, and should be approached the same way we design software.
We can make good headway using a workshop format I designed many years ago.
Maintaining The Essence
Finally, but most important of all, our goals need to be expressed in a way that doesn't commit us to any solution. If our goal is to promote the best musicians, then that's our goal. We must always keep our eyes on that prize. It takes a lot of hard work and diligence not to lose sight of our end goals when we're bogged down in technical solution details. Most teams fail in that respect, and let the technical details become the goal.
April 23, 2016
Does Your Tech Idea Pass The Future Dystopia Test?

One thing that at times fascinates and at times appals me is the social effect that web applications can have on us.
Human beings learn fast, but evolve slowly. Hence we can learn to program a video recorder, but living a life that revolves around video recorders can be toxic to us. For all our high-tech savvy, we are still basically hominids, adapted to run from predators and pick fleas off of each other, but not adapted for Facebook or Instagram or Soundcloud.
But the effects of online socialisation are now felt in the Real World - you know, the one we used to live in? People who, just 3-4 years ago, were confined to expressing their opinions on YouTube are now expressing them on my television and making pots of real money.
Tweets are building (and ending) careers. Soundcloud tracks are selling out tours. Facebook viral posts are winning elections. MySpace users are... well, okay, maybe not MySpace users.
For decades, architects and planners obsessed over the design of the physical spaces we live and work in. The design of a school building, they theorise, can make a difference to the life chances of the students who learn in it. The design of a public park can increase or decrease the chances of being attacked in it. Pedestrianisation of a high street can breathe new life into local shops, and an out-of-town shopping mall can suck the life out of a town centre.
Architects must actively consider the impact of buildings on residents, on surrounding communities, on businesses, on the environment, when they create and test their designs. Be it for a 1-bed starter home, or for a giant office complex, they have to think about these things. It's the law.
What thought, then, do software developers give to the social, economic and environmental impact of their application designs?
Having worked on "Web 2.0" sites of all shapes and sizes, I have yet to see teams and management go out of their way to consider such things. Indeed, I've seen many occasions when management have proposed features of such breath-taking insensitivity to wider issues, that it's easy to believe that we don't really think much about it at all. That is, until it all goes wrong, and the media are baying for our blood, and we're forced to change to keep our share price from crashing.
This is about more than reliability (though reliability would be a start).
Half-jokingly, I've suggested that teams put feature requests through a Future Dystopia Test; can we imagine a dark, dystopian, Philip K Dick-style future in which our feature has caused immense harm to society? Indeed, whole start-up premises fail this test sometimes. Just hearing some elevator pitches conjures up Blade Runner-esque and Logan's Run-ish images.
I do think, though, that we might all benefit from devoting a little time to considering the potential negative effects of what we're creating before we create it, as well as closely monitoring those effects once it's out there. Don't wait for that hysterical headline "AcmeChat Ate My Hamster" to appear before asking yourself if the fun hamster-swallowing feature the product owner suggested might not be such a good thing after all.
This blog post is gluten free and was not tested on animals
September 22, 2015
Why Tech Entrepreneurs Need To Be Programmers

I worry - I really do - about the trend for the X Factor-ization of software development that start-up culture has encouraged.
Especially troubling is how the media is now attempting to woo impressionable young people into the industry with promises of untold riches, often creating the impression that "no coding is required". No need for skillz: we can use the VC money to pay some code monkey to do all that trivial stuff like actually making ideas happen.
Far from empowering them to take their destinies into their own hands with "tech", this pushes them in the exact opposite direction. If you cannot write the code for your idea, you have to pay someone to do it for you, and that means you have to raise a lot of money just to get to the most basic working prototype.
This means that you will need to go far into the red just to figure out if your idea is feasible. And here's the thing about start-ups: you'll probably need to try many, many ideas before you find The OneTM. If trying one idea comes at such a high cost... well, you do the maths.
Whereas a young person - or an older person, let's not forget - with the right programming skillz, a cheap laptop and a half-decent Internet connection could do it themselves in their own time, as thousands upon thousands of software developers are doing right now.
What makes software so special is that the economic and social barriers to entry are so low. You don't need hundreds of thousands of pounds and special connections from that posh school you went to to start writing programs. Whereas, all the evidence suggests, you do need to be in a privileged position to get funding to pay someone else to do it. Take a long hard look at our most successful tech entrepreneurs; not many kids from the wrong side of the tracks among that demographic.
Of course, programming is hard. It takes a long time to learn to do it well enough to realise even moderately ambitious ideas. Which is why such a small proportion of the people who try it end up doing it for a living. What makes software so special is that the personal and intellectual barriers to entry are so high. No wonder kids are relieved to hear you can be a tech rockstar without all that fiddly business of learning to play guitar.
But anyone with a programmable computer and Internet access can create applications and have them hosted for a few dollars on cloud services that - should they get lucky - can scale pretty effortlessly with demand. Anyone with a programmable computer can create applications that could end up listed on app stores and let computing giants take care of the retail side at any scale imaginable. It's not like selling widgets: a million orders for a new widget could put a small business in real cashflow trouble. A million sales of your iOS app needs neither extra manufacturing nor storage capacity. That's the beauty of digital.
The reality, though, is that the majority of software start-ups never reach such dizzying heights of commercial success. Which is why the vast majority of professional programmers work for someone else. And in my humble opinion, someone who learns how to make a good living writing point-of-sale systems for a supermarket is every bit as empowered as someone who's trying to get their POS-as-a-Service start-up off the ground.
And that's why I'm worried; for every successful tech entrepreneur, there are thousands more doing just fine thank you very much writing software for a living, and having as much fun and being as much fulfilled by it. By all means, reach for the stars, but let's not lose sight of the much wider set of opportunities young people might be missing if their gaze is 100% focused on The Big Shiny Success.
And even if you harbour such grand ambitions - and someone has to, at the end of the day - your odds of getting an idea off the ground can be massively improved if you can create that all-important working prototype yourself. Because, in software, a working prototype is the real thing. Making shit real is a very undervalued skill in tech.
April 15, 2015
Reality-Driven Development - Do We Need To Train Ourselves To Tune Into Real-World Problems?

The world is full of problems.
That is to say, the world is full of people who are hindered in their efforts to achieve their goals, no matter how small and everyday those goals might be.
I'm no different to anyone else in that respect; every day I hit barriers, overcome obstacles, and occasionally give up trying. It might be something as prosaic as being unable to find an item in the shops when I'm in town, or wondering if there's a café or a pub nearby that has seating available at that specific moment in time. Or it could be something as astronomically important as wondering if aliens have ever visited our solar system, and have they left any hardware behind.
Problems, problems, problems. The real world's full of interesting problems.
And so it comes as some surprise that when my apprentice and I take a moment to think about ideas for a software project, we struggle to think of any real-world problems that we could have a go at solving.
It struck me in that moment that maybe, as a species, we software bods are a bit crap at noticing problems that have nothing to do with developing software. Typically, when I ask developers to think about ideas for a project, the suggestions have a distinctly technical bent. Maybe a new testing tool, or a plug-in for Visual Studio, or Yet Another MVC Framework, or something that does something to do with builds (and so on.)
A lifetime of minor frustrations, some of which software might be able to help with, pops clean out of our heads. And I ask: are we just not tuned in to real-world problems?
There are people who walk into a room, and notice that the scatter cushions complement the carpet. I don't. I'm just not tuned in to that sort of thing. And if you give them some money, and say "improve this room", they'll have all sorts of ideas about furniture and bookshelves and paintings and curtains and pot plants, while I would buy a Massive F***ing TVTM and a Playstation 4. I'm tuned into technology in ways they're not, and they're tuned into home furnishings in ways I'm not.
Do we need to mindfully practice noticing everyday problems where software might help? Do we need to train ourselves to tune in to real-world problems when we're thinking about code and what we can potentially do with it? Do we need to work at noticing the colour of the scatter cushions when we walk into a room?
I've suggested that Will and I keep notes over the next two weeks about problems we came up against, so when we next pair, there might be some inspiration for us to draw from.
Which gives me an idea for an app....
January 21, 2015
My Solution To The Dev Skills Crisis: Much Smaller Teams

Putting my Iconoclast hat on temporarily, I just wanted to share a thought that I've harboured almost my entire career: why aren't very small teams (1-2 developers) the default model in our industry?
I think back to products I've used that were written and maintained by a single person, like the guy who writes the guitar amp and cabinet simulator Recabinet, or my brother, who wrote a 100,000 line XBox game by himself in a year, as well as doing all the sound, music and graphic design for it.
I've seen teams of 4-6 developers achieve less with more time, and teams of 10-20 and more achieve a lot less in the same timeframe.
We can even measure it somewhat objectively: my Team Dojo, for example, when run as a one day exercise seems to be do-able for an individual but almost impossible for a team. I can do it in about 4 hours alone, but I've watched teams of very technically strong developers fail to get even half-way in 6 hours.
People may well counter: "Ah, but what about very large software products, with millions of lines of code?" But when we look closer, large software products tend to be interconnected networks of smaller software products presenting a unified user interface.
The trick to a team completing the Team Dojo, for example, is to break the problem down at the start and do a high-level design where interfaces and contracts between key functional components are agreed and then people go off and get their bit to fulfil its contracts.
Hence, we don't need to know how the spellcheck in our word processor works, we just need to know what the inputs and expected outputs will be. We could sketch it out on paper (e.g., with CRC cards), or we could sketch it out in code with high-level interfaces, using mock objects to defer the implementation design.
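Sketching the contract in code might look like this minimal Python illustration (the `SpellChecker` interface and its mock are invented names for the example, not a real library):

```python
# A hedged sketch of agreeing a contract in code before implementing it:
# the word processor team codes against the interface, while the
# spellcheck implementation is designed elsewhere.
from abc import ABC, abstractmethod

class SpellChecker(ABC):
    """The agreed contract: inputs and expected outputs, no implementation."""
    @abstractmethod
    def misspellings(self, text):
        ...

class MockSpellChecker(SpellChecker):
    """A stand-in with canned answers, so integration can proceed
    before the real component exists."""
    def __init__(self, canned):
        self.canned = canned
    def misspellings(self, text):
        return self.canned

def highlight(document, checker):
    # The word processor depends only on the interface, not the detail.
    return checker.misspellings(document)

result = highlight("teh cat sat", MockSpellChecker(["teh"]))
```

When the real spellchecker arrives, it implements the same `SpellChecker` contract and slots in without the calling code changing - which is the whole point of agreeing the interfaces up front.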
There'll still be much need for collaboration, though. It's especially important to integrate your code frequently in these situations, because there's many a slip 'twixt cup and microservice.
As with multithreading (see previous blog post), we can aim to limit the "touch points" in component-based/service-oriented/microservice architectures so that - as much as possible - each component is self-contained, presents a simple interface and can be treated as a black box by everyone who isn't working on its implementation.
Here's the thing, though: what we tend to find with teams who are trying to be all hifalutin and service-oriented and enterprisey-wisey is that, in reality, what they're working on is a small application that would probably be finished quicker and better by 1-2 developers (1 on her own, or 2 pair programming).
You only get an economy of scale with hiding details behind clean interfaces when the detail is sufficiently complex that it makes sense to have people working on it in parallel.
Do you remember from school biology class (or physics, if you covered this under thermodynamics) the lesson about why small mammals lose heat faster than large mammals?
It's all about the surface area-to-volume ratio: a teeny tiny mouse presents a large surface area proportional to the volume of its little body, so more of its insides are close to the surface and therefore it loses heat through its skin faster than, say, an elephant, who has a massive internal volume proportional to its surface area, and so most of its insides are away from the surface.
It may be stretching the metaphor to breaking point, but think of interfaces as the surface of a component, and the code behind the interfaces as the internal volume. When a component is teeny-tiny, like a wee mouse, the overhead in management, communication, testing and all that jazz in splitting off developers to try to work on it in parallel makes it counterproductive to do that. Not enough of the internals are hidden to justify it. And so much development effort is lost through that interface as "heat" (wasted energy).
Conversely, if designed right, a much larger component can still hide all the detail behind relatively simple interfaces. The "black box-iness" of such components is much higher, in so much as the overhead for the team in terms of communication and management isn't much larger than for the teeny-tiny component, but you get a lot more bang for your buck hidden behind the interfaces (e.g., a clever spelling and grammar checker vs. a component that formats dates).
And this, I think, is why trying to parallelise development on the majority of projects (average size of business code base is ~100,000 lines of code) is on a hiding to nowhere. Sure, if you're creating an OS, with a kernel, and a graphics subsystem, and a networking subsystem, etc etc, it makes sense to a point. But when we look at OS architectures, like Linux for example, we see networks of "black-boxy", weakly-interacting components hidden behind simple interfaces, each of which does rather a lot.
For probably 9 out of 10 projects I've come into contact with, it would in practice have been quicker and cheaper to put one or two strong developers on it.
And this is my solution to the software development skills crisis.
July 9, 2014
What Problem Does This Solve?
It seems that, every year, the process of getting started on a new application development project becomes more and more complicated and requires an ever steeper learning curve.
The root of this appears to be the heterogeneity of our development tools, which grows exponentially as more and more developers - fuelled by caffeine-rich energy drinks and filled with the kind of hubris that only a programmer seems to be capable of - flex their muscles by doing what are effectively nothing more than "cover versions" of technologies that already exist and are usually completely adequate at solving the problem they set out to solve.
Take, for example, Yet Another Test Automation Tool (YATAT). The need for frameworks that remove the donkey work of wiring together automated tests suites and running all the tests is self-evident. Doing it the old-fashioned way, in the days before xUnit, often involved introducing abstractions that look very xUnit-ish and having to remember to write the code to execute each new test.
Tools like JUnit - which apply convention over that kind of manual configuration - make adding and running new tests a doddle. Handy user-friendly test-runner GUIs are the icing on the cake. Job done now.
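The "convention over configuration" trick is simple enough to sketch in a few lines. Here's a toy runner (not JUnit's actual implementation, just an illustration of the idea): any method whose name starts with `test_` is discovered by reflection and run, with no manual wiring of a suite.

```python
# Toy xUnit-style runner: discover and run test methods by naming convention.
class MiniRunner:
    @staticmethod
    def run(test_case_class):
        results = {}
        for name in dir(test_case_class):
            if name.startswith("test_"):       # the convention
                instance = test_case_class()   # fresh fixture per test
                try:
                    getattr(instance, name)()
                    results[name] = "pass"
                except AssertionError:
                    results[name] = "fail"
        return results

class ArithmeticTests:
    def test_addition(self):
        assert 1 + 1 == 2

    def test_broken(self):
        assert 1 + 1 == 3

print(MiniRunner.run(ArithmeticTests))
# prints {'test_addition': 'pass', 'test_broken': 'fail'}
```

Adding a new test is just adding a new `test_` method - no registration code to remember, which is the whole point.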
For a bit of extra customer-centric mustard, add on the ability to suck test data for parameterised tests out of natural language descriptions of tests written by our customers. We cracked that one many moons ago, when heap big Turbo C++ compilers roamed the earth and programmer kill many buffalo etc. Ah yes, the old "merge the example data with the parameterised test" routine...
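That "merge the example data with the parameterised test" routine is also a few lines of code. A minimal sketch (the interest-calculation example and all the names in it are made up for illustration): each customer-supplied row of example data is fed into a single parameterised test function.

```python
# Run one test function once per row of example data; collect failing rows.
def parameterised_test(test_fn, examples):
    failures = []
    for row in examples:
        try:
            test_fn(**row)
        except AssertionError:
            failures.append(row)
    return failures

# Example rows, as might be extracted from a customer's plain-English
# table of "amount, rate, expected interest" cases.
examples = [
    {"amount": 100, "rate": 0.25, "expected": 25.0},
    {"amount": 80, "rate": 0.5, "expected": 40.0},
]

def interest_is_calculated_correctly(amount, rate, expected):
    assert amount * rate == expected

print(parameterised_test(interest_is_calculated_correctly, examples))  # → []
```

One test, as many examples as the customer cares to supply - which is exactly the wheel that keeps getting reinvented.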
Given that the problem's solved, and many times over, what need, asks I, to solve it again, and again? And then solve it again again?
The answer is simple: because we can. Kent Beck learns new programming languages by knocking up a quick xUnit implementation in it. Pretty much any programmer beyond a certain rudimentary ability can do it. And they do. xUnit implementations are the Stairway To Heaven of programming solutions.
Likewise, MVC frameworks. They demonstrate a rudimentary command of a programming language and its associated UI frameworks. Just as many rock guitarists, a few weeks into learning the instrument, have mastered "The Boys Are Back In Town", many developers with an ounce of technical ability have gone "Look, Ma! I done made an MVC!" ("That's nice, dear. Now run outside and rig up an IoC container with your nice friends.")
But most cover versions of Stairway To Heaven (and The Boys Are Back In Town) are not as good as the originals. And even if they were, what value do they add?
Unless you're imbuing your xUnit implementation with something genuinely new, and genuinely useful, surely it's little more than masturbation to do another one?
Now, don't get me wrong: masturbation has a serious evolutionary purpose, no doubt. It's practice for the real thing, it keeps the equipment in good working order, and it's also enjoyable in its own right. But what it's not any good for is making babies. (Unless it's immediately followed by some kind of turkey baster-type arrangement.)
It's actually quite satisfying to put together something like an xUnit implementation, or an MVC framework, or a Version Control System, or a new object-oriented programming language that's suspiciously like C++.
The problems start when some other developers say "Oh, look, a new shiny thing. Let's ditch the old one and start using this one that does exactly the same thing and no better, so we shall."
Now, anyone looking to work with that team has got to drop X and start learning X', so they can achieve exactly what they were achieving before. ("But... it's got monads...")
And thusly we find ourselves climbing a perpetually steepening learning curve, but one that doesn't take us any higher. I shudder to think just how much time we're spending learning "new" technologies just to stand still.
And, yes, I know that we need an xUnit implementation for x=Java and x=C# and x=Object Pascal and so on, but aren't these in themselves self-fulfilling prophecies? A proliferation of sort-of-similar programming languages giving rise to the need for a proliferation of Yet Another 3rd Generation Programming Language xUnit ports?
Genuinely new and genuinely useful technologies come along relatively rarely. And while there are no doubt tweaks and improvements that could be made to make them friendlier, faster, and quite possibly more purple, for the most part the pay-off comes at the start, when developers find we can do things we were never able to do before.
And so I respectfully request that, before you inflict Yet Another Thing That's Like The Old Thing Only Exactly The Same (YATTLTOTOETS - pronounced "yattle-toe-totes") on us, you stop and ask yourself "What problem does this solve? How does this make things better?" and pause for a while to consider whether the learning curve you're about to subject us to is going to be worth the extra effort. Maybe it's really not worth the effort, and the time you spend making it and the cumulative time we all spend learning it would be better spent doing something like - just off the top of my head - talking to our customers. (Given that lack of customer involvement is the primary cause of software development failure. Unless you've invented a tool that can improve that. And, before anybody says anything, I refer you back to the "sucking customers' test data into parameterised tests" bit earlier. Been there. Done that. Got a new idea?)
Brought to you by Yet Another Blog Management System Written In PHP That's Not Quite As Good As The Others
January 20, 2014
Rain Dances For Start-ups
I'm reading an email from one of those "entrepreneurs" that they have nowadays. In it, she explains how her business idea will succeed because she has successfully identified the key factors that made other start-ups wildly successful.
I LOL, of course. A business has many, many factors, and it's easy with the benefit of hindsight to pick out details that you believe made a difference.
The bottom line, though, is that nobody knows why a small (well, tiny) proportion of start-ups perform so much better than the majority. Entrepreneurs rarely duplicate a success, even with all the contacts, expertise, advertising and influence their money can buy them. Even when you're Apple or Microsoft, new products are a lottery.
There are those, though, who claim to know. Tellingly, they do not come with a portfolio of successful start-ups to back up their claims. But, to our shame, as an industry we still too easily buy into them.
Like the medicine men of old, they teach us the dance that will make the rains come and our crops flourish. Like I said, with the benefit of hindsight, it's easy to cherry-pick from the infinite complexities of a technology start-up a handful of factors you claim made the rains come, or a handful of key mistakes you claim angered the gods and killed the harvest.
Sure, if the CEO takes all the money in the bank and spends it on marzipan and Pokemon cards, failure is likely to follow. But most start-ups aren't run so badly as to rule out success, or so well as to guarantee it.
The reality, borne out by mountains of statistics on tech start-ups, is that it really is a crapshoot. There are important things you need to do to stay in the game long enough, but nothing you can do that guarantees you success like a Facebook or a Twitter.
The enlightened farmer doesn't rely on rain dances. The enlightened farmer knows that the rain may never come, and makes other plans.
January 4, 2014
What is Nonovation?
One of the more pernicious problems that dog the software community is that of continual - and accelerating - technology churn.
Examine closely many of the "new" ideas that appear on a daily basis, and you'll notice that most of them - almost all, in fact - are not new ideas at all. As I get older, I recognise it more and more. I've seen several major trends in the way we develop software over my 21-year career, but also hundreds of minor fashions and fads come and go.
Most "innovations" we see emerging in software are little more than cosmetic tweakings of existing ideas. No real value is added, no new problems are solved. It's the technology equivalent of painting a chair a different colour and claiming you've invented the "seatier". (Cue hordes of trendy young tech hipsters rolling their eyes and exclaiming "God, are you still using chairs? That's just so last year!")
I call this process of appropriating old ideas and dressing them up as new ideas nonovation. It's innovation that isn't.
It's a concept that's possibly best summarised by the late great (and genuinely innovative) Douglas Adams in The Hitchhiker's Guide To The Galaxy. To set the scene, Arthur Dent and Ford Prefect have hitched a ride aboard a giant space "ark" carrying the useless third of the population of the planet Golgafrincham - all the marketing executives and middle managers and telephone sanitisers - which is programmed to crash into its destination.
After the crash, Arthur and Ford explore the planet they've arrived on and eventually return to the main camp of the surviving crew of the B-Ark, who have been busy "inventing things".
Yes, and, and, and the wheel. What about this wheel thingy? Sounds a terribly interesting project to me.
Er, yeah, well we’re having a little, er, difficulty here…
Difficulty?! It’s the single simplest machine in the entire universe!
Well alright mister wise guy, if you’re so clever you tell us what colour it should be!
This is nonovation in a nutshell. By focusing on the purely cosmetic (e.g., "It's, like, MVC, but for Ruby, so it's, like, completely new...") we can create the illusion of meaningful invention and innovation.
My own theory is that, as the long tail of history grows ever shorter, and our collective memory becomes fixated on stuff that happened today and whatever's trending on Twitter, we easily forget what's gone before, and fail to recognise old ideas when we see them.
I also believe we're growing increasingly shallow in our attempts to understand things, and rarely bother to scratch beneath the surface to examine ideas below their cosmetic outer appearance.
This is evidenced by a recent clear-out I had of my office. I've been a prodigious book-buyer over the years, amassing hundreds of books on all aspects of software development. After rifling through my collection, throwing out the books that had become obsolete - I'm pretty sure I won't be doing any Delphi 2.0 development any time soon - I found myself with one bookshelf of titles that are still relevant. 90% of my books were about fashions and fads, little more than documentation for 20+ years of technology churn. The books that remain are largely computer science (data structures and algorithms, discrete maths, languages and compilers, etc.), and what I consider to be "classic" works on software development, like Extreme Programming Explained and the Catalysis book.
I haven't bought a book about a specific programming language, framework or tool in years. I find that stuff online when I need it, and it changes so often that I've probably subconsciously decided it's not worth investing in a hard copy.
I'm well aware that there is rarely anything genuinely new in any "new" programming language, framework or tool. I'm just relearning how to sit in a chair that's now called a "seatier".
(And, yes, I have just made up a new word for "technology churn"...)