How many development shops do you know that complain about having too much time on their hands? Man, if only we had more to do. Then we wouldn’t feel bored between completing the perfect design and shipping to production … said no software shop, ever. Software proliferates far too quickly for that attitude ever to take root.
This happens in all sorts of ways. Commonly, the business or the market exerts pressure to ship. When you fall behind, your competitors step in. Other times, you have the careers and reputations of managers, directors, and executives on the line. They’ve promised something to someone and they rely on the team to deliver. Or perhaps the software developers apply this drive and pressure themselves. They get into a rhythm and want to deliver new features and capabilities at a frantic pace.
Whatever the exact mechanism, software tends to balloon outward at a breakneck pace. And then quality scrambles to keep up.
Software Grows via Predictable Mechanisms
While the motivation for growth may remain nebulous, the mechanisms for that growth do not. Let’s take a look at how a codebase accumulates change, ordered roughly by the pace at which each mechanism adds code.
- Pure maintenance mode, in SDLC parlance.
- Feature addition to existing products.
- Major development initiatives going as planned.
- Crunches (death marches).
- Copy/paste programming.
- Code generation.
Of course, you could offer variants on these themes, and they aren’t mutually exclusive. But nevertheless, the idea remains. Loosely speaking, you add code sparingly to legacy codebases in support mode. From there, the pace increases until you get so fast that you literally write programs to write your programs.
The Quality Conundrum
Now, think of this in another way. As you go through the list above, consider what quality control measures tend to look like. Specifically, they tend to vary inversely with the speed.
Even in a legacy codebase, fixes tend to involve a good bit of testing for fear of breaking production customers. We treat things in production carefully. But during major or greenfield projects, we might let that slip a little, in the throes of productivity. Don’t worry — we’ll totally do it later.
But during a death march? Pff. Forget it. When you slog along like that, tons of defects in production qualify as a good problem to have. Hey, you’re in production!
And it gets even worse with the last two items on my bulleted list. I’ve observed that the sorts of shops and devs that value copy/paste programming don’t tend to worry a lot about verification and quality. Does it compile? Ship it. And by the time you get to code generation, the problem becomes simply daunting. You’ll assume that the tool knows what it’s doing and move on to other things.
As we go faster, we tend to spare fewer thoughts for quality. Usually this happens because of time pressure. So ironically, when software grows the fastest, we tend to check it the least.
The Quality Tools at Our Disposal
So far, I’ve painted somewhat of a bleak picture. Of course, some of you may call that reality, particularly if you’ve worked in shops perpetually in “firefighting” mode. When you need to go faster, you tend to get sloppier.
While that’s true, it’s not as though the industry has simply resigned itself to this fate. We do a lot of things to help with quality, and we try to do them as efficiently as possible. This blogger cites the Steve McConnell book Code Complete, which details defect reduction strategies. These include, non-exhaustively, the following:
- Unit tests
- Regression tests
- Integration tests
- Beta tests
- Code reviews
- Design reviews
Let’s put these into a couple of buckets. First, we have activities that examine the runtime behavior of software. Second, we have activities that look at the form of the source code itself. Interestingly enough, we heavily emphasize automating the former. With the latter? Not so much. But I’ll come back to that.
The Value of Runtime and Build Time Quality Strategies
As Kevin Burke points out in his take on McConnell’s book, no single approach to quality will prove adequate. You need to mix and match. So you have both unit tests and integration tests. On top of that, you may have various code review workflows. You might even adopt pair programming, which we could consider continuous, informal code inspection.
Whatever the details, you probably attack the quality problem from several angles. And as you do this, you probably cover both runtime and build time concerns. In other words, you consider both the behavior of the software and the source code itself.
And so should you. (If you don’t do both, you should start). Evaluating runtime behavior jumps out at anyone as obvious. If nothing else, you should probably see what your application does before you ship it. This means some nominal form of quality assurance. Of course, you should probably do a lot more than that — exploratory testing, regression testing, unit testing, smoke testing, etc.
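To make the runtime bucket concrete, here is a minimal sketch of an automated unit test in Python. The `calculate_discount` function and its business rule are entirely hypothetical, invented for illustration; the point is only that the test exercises what the code *does* at runtime.

```python
import unittest


def calculate_discount(subtotal, is_member):
    """Hypothetical business rule: members get 10% off orders over $100."""
    if is_member and subtotal > 100:
        return round(subtotal * 0.10, 2)
    return 0.0


class CalculateDiscountTest(unittest.TestCase):
    """Verifies runtime behavior, not the form of the source code."""

    def test_member_over_threshold_gets_discount(self):
        self.assertEqual(calculate_discount(200, is_member=True), 20.0)

    def test_non_member_gets_no_discount(self):
        self.assertEqual(calculate_discount(200, is_member=False), 0.0)

    def test_member_under_threshold_gets_no_discount(self):
        self.assertEqual(calculate_discount(50, is_member=True), 0.0)


if __name__ == "__main__":
    unittest.main()
```

Once checks like these exist, they run in milliseconds, every build, without asking anyone for their time.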
But you shouldn’t ignore build time concerns: looking at your code itself. In the first place, this sort of inspection can help you catch many defects, both existing and potential. Secondly, it addresses the maintainability of the code. Will your team have an easy time maintaining and modifying this code? Or do you have a confusing bramble bush of code that developers modify at their own peril (and at the peril of the software)?
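A build time check, by contrast, never executes the code under scrutiny at all; it examines the source. As a minimal sketch of the idea, here is a Python script that uses the standard library’s `ast` module to flag overly long functions. The 25-line threshold is an arbitrary assumption for illustration, not any kind of standard, and real tools (linters, static analyzers) do far more than this.

```python
import ast

MAX_FUNCTION_LINES = 25  # arbitrary threshold, purely for illustration


def find_long_functions(source: str):
    """Return (name, line_count) pairs for functions exceeding the threshold.

    Parses the source into an AST and inspects it; the code under
    inspection is never actually run.
    """
    tree = ast.parse(source)
    offenders = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > MAX_FUNCTION_LINES:
                offenders.append((node.name, length))
    return offenders


# A tiny function passes; a 31-line sprawl gets flagged.
sample = "def tiny():\n    return 1\n\ndef sprawling():\n" + "    x = 0\n" * 30
print(find_long_functions(sample))  # [('sprawling', 31)]
```

Nothing about this requires a human in the loop, which is exactly the property that makes it survive a crunch.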
I don’t know exactly how automated test suites got their start. I imagine it went something like this: “Hey, this testing we’re doing is great, but it’s pretty repetitive, and I bet we could automate it.” Except this happened across years, industries, and thousands of teams. As software developers, we learned that we could automate simple functional tests and free up quality assurance pros for exploratory tests and other more exotic things.
Indeed, the industry has come to place great emphasis on automated testing. If you look at McConnell’s chart, you probably picture automated tests for everything he lists, with the exception of beta testing. But what about the build time concerns? Informal code review? Personal desk checking of code? Informal design reviews? Formal code reviews/design reviews? Do you picture automation for any of that stuff? I sure don’t, as described.
As an industry, we’ve endlessly chased test automation while sort of ignoring code inspection automation.
The Peril of Manual-Only Code Review
When you look more deeply at the prominence of test automation, you’ll see the itch that gets scratched. Move from the occasional patch to death march situations, and quality concerns go out the window. At least, the ones that consume human time do. But the beauty of an automated test suite lies partially in the fact that it executes quickly and without consuming human time. Invest in the test suite when you do have time, and it serves you well when you don’t.
But we don’t, as an industry, apply this reasoning to code inspection. When the pressure hits and teams scramble, pair programmers break into individuals and code review goes out the window. The faster we build software, the less time we have to offer feedback to others on how they build software. And that represents a huge gap.
Don’t look for the pace of software growth to slow anytime soon. Software is still eating the world. To keep up, you can’t afford to rely on manual processes only. Just as you do with your testing strategy, you need to supplement your manual code reviews with automated ones.