Quality

Thursday 25 April 2013 at 20:11 BST

Update: I've answered a few of the comments I received in another post, entitled Quality, Revisited. Please check it out after you've read this one.


"Tests take too long. We don't have time for that."

Good one, Bob.

Let's talk about TIM Group, a company for which I used to work.

Imagine a few different teams within the IT department of your local investment bank. The developers at TIM Group are kind of like those guys, except without all of the bureaucracy. The "technology" department at TIM Group is the biggest part of the organisation, and they have complete control of the technical portion of the products. Product teams figure out what needs to be done, but the developers build the product and the infrastructure team are responsible for deploying it as they see fit. What this means is that new products (classified as "anything built after we heard about continuous deployment") are deployed immediately, as soon as all the tests pass, and old, feature-laden products are maintainable and generally quite stable. There is the odd regression or bug, but it's usually something minor.

Continuous delivery or deployment is the norm, backed by a large cluster of Jenkins and Selenium nodes that ensure the software is always functional. As soon as a test case fails, its name flips from green to red on the massive TV screen, in view of every developer, as well as on pretty much every dev's screen. Tom Denley, who works at TIM Group, built CI-Eye for exactly this purpose: making sure that when something's broken, everyone knows it.

[Screenshot: CI-Eye in action]

So how does the development process work? Basically, they use a modified version of scrum with elements of lean (and quite a few practices invented in house, tailored to the developers there). There's a daily standup, either first thing in the morning or right after lunch. They use a universally detested (but fairly functional) online kanban board to list the work for the sprint (usually one or two weeks) and allocate it in pairs. Pair programming is highly encouraged, to the point that you need a good reason not to pair.

The developers are split across the London and Boston offices. Teams tend to have approximately five developers and a business analyst, but one or two teams are divided between the offices, which presents some interesting challenges. When I left, our "Atlantic" team was pairing by sharing screens and by using Saros, an Eclipse plugin that lets you share editors. This was also the technique used by our two remote developers in France.

So how do they afford to put so much effort into pairing, remote development, testing and all of that agile mumbo-jumbo? Doesn't it cost a fortune? It turns out, not so much. Central to the development practices at TIM Group is the belief that the whole "price, speed, quality: pick two" trade-off is bullshit. Quality is the priority, and speed will follow, driving down the cost of development. In their case, quality means end-to-end acceptance tests, unit tests, pairing so you have four eyes on everything, and tasks broken down so that each individual piece of work takes no more than a day or two for a pair. There is a server farm dedicated to running the acceptance tests. Maintaining all of this is a shitload of work, but it results in a product with which customers are very happy.
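To give you an idea of what one of those end-to-end acceptance tests might look like, here's a minimal sketch of my own using JUnit and Selenium WebDriver. This is not actual TIM Group code; the URL and element IDs are made up for illustration.

    import static org.junit.Assert.assertTrue;

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    // A sketch of an end-to-end acceptance test. The URL and element IDs
    // are hypothetical; a real suite would run against a deployed instance
    // of the product on the acceptance-test server farm.
    public class LoginAcceptanceTest {
        private WebDriver driver;

        @Before
        public void openBrowser() {
            driver = new FirefoxDriver();
        }

        @Test
        public void userCanLogInAndSeeTheirDashboard() {
            driver.get("https://acceptance.example.com/login");
            driver.findElement(By.id("username")).sendKeys("test.user");
            driver.findElement(By.id("password")).sendKeys("not-a-real-password");
            driver.findElement(By.id("log-in")).click();
            assertTrue(driver.getPageSource().contains("Dashboard"));
        }

        @After
        public void closeBrowser() {
            driver.quit();
        }
    }

A test like this is slow next to a unit test, which is exactly why the acceptance suite gets its own server farm and why a red result goes straight up on the big screen.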

Of course, it's very easy to do a great job on entirely the wrong thing. Steve Freeman talks about "doing the right thing" vs. "doing the thing right". Both are important, but you should be asking whether you're doing the first one before you worry about the second. Talking about the product vision is way out of the scope of this article, but there's also a lot the developers can do, in the form of retrospectives. This involves everyone huddling up in a room at the end of the iteration (basically, every one to two weeks) and talking about what went well, what went badly, and how things can get better. The important thing here is to come out with actions, which allow you to actually fix things, rather than whining about them week after week.

In the long run, the hundredth feature added to a project costs less with all of this baggage than it does without it. Without it, you have to test manually that you didn't break the other ninety-nine. Chances are fairly high that it's going to take forever, you'll get sloppy, and you'll miss a regression that should have been caught. Research suggests that with each phase of development, the cost of fixing a bug increases by an order of magnitude. A bug caught during development costs a lot more to fix than one caught during the design phase (which could just be thinking, discussing and whiteboarding). If you find it during manual regression testing, it's much more than that. The cost of your customers finding a bug in production isn't just development time, test time and release time; it's your reputation. When something that easy to lose goes on the line, the numbers tell you you're much better off taking the time up front.
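The exact multipliers vary from study to study, but as a purely illustrative back-of-the-envelope, the order-of-magnitude rule works out to something like this:

    cost to fix ~     1x at the whiteboard
                     10x during development
                    100x during manual regression testing
                  1,000x once a customer finds it in production

Even if the real ratios are half that, the automated suite has paid for itself long before feature one hundred.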

With practices so important to the continued success of the teams, recruitment becomes very important. Developers can't just accept feature requests and pump out code. They need to be heavily involved with the business, rejecting features and suggesting alternatives if necessary to avoid a slow, dysfunctional, hard-to-use application. This means they need to understand the business requirements as well as how to write code in the latest, greatest programming languages. In addition, commitment to good practices is necessary. A single person avoiding test cases will destroy confidence in the infrastructure surrounding the project—if new things aren't covered, we have to resort to manual testing. And then, finally, they have to be competent at their job—to write code that is functional, understandable, flexible, and maintainable.

The important thing is not the methodology, it's the mindset. Figure out what your team's priorities are and build your processes around them. How you do client meetings, design, development, hiring or anything else isn't as important as whether it fits in well with the goals of your team and the future of your product or service. Your goal is the most important thing, and everything else should be constructed in aid of it. If your goal is to deliver something tomorrow, hack it out. But if it's to deliver a great product that makes people enjoy handing over their wallets, then maybe it's time to start doing things right, as well as doing the right thing.


Thanks to Edward Wong for asking me this question via email, thereby prompting this blog post. Thanks doubly for the criticism after I'd written it once. Sorry it took so long to finish, Ed.


If you enjoyed this post, you can subscribe to this blog using Atom.

Maybe you have something to say. You can email me or toot at me. I love feedback. I also love gigantic compliments, so please send those too.

Please feel free to share this on any and all good social networks.

This article is licensed under the Creative Commons Attribution 4.0 International Public License (CC-BY-4.0).