- Continuous deployment.
- Tell a good change from a bad change quickly
- Revert a bad change quickly
- Work in small batches (at IMVU, a large batch = 3 days' worth of work)
- Break large projects down into small batches
- Have a cluster immune system
- Run tests locally. Everyone gets a complete sandbox
- Continuous integration server – tests to ensure all features that worked before still work
- Incremental deploy – reject changes that move metrics out of bounds (see the first sketch after this list)
- Alerting and predictive monitoring – wake somebody up if a metric goes out of bounds. Use historical trends to predict acceptable bounds.
- Conduct rapid split tests: A/B testing is key to validating hypotheses (see the second sketch after this list)
- Follow the AAAs of metrics: actionable, accessible and auditable
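To make the "cluster immune system" and incremental-deploy bullets concrete, here is a minimal sketch in Python. It is purely illustrative: the metric names, the mean-plus-three-sigma bounds, and the `immune_system_check` helper are stand-ins for whatever a real deploy pipeline would use. The shape of the idea is to derive acceptable bounds from historical data, check post-deploy metrics against them, and reject or revert the change (and wake somebody up) if anything drifts out of bounds.

```python
import statistics

def acceptable_bounds(history, k=3.0):
    """Derive acceptable bounds from historical samples of a metric
    (mean +/- k standard deviations). Purely illustrative."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return mean - k * stdev, mean + k * stdev

def immune_system_check(current_metrics, historical_metrics):
    """Return the metrics that have moved out of bounds after a deploy.
    An empty list means the change looks healthy; anything else should
    trigger an automatic revert and an alert."""
    violations = []
    for name, value in current_metrics.items():
        low, high = acceptable_bounds(historical_metrics[name])
        if not (low <= value <= high):
            violations.append((name, value, (low, high)))
    return violations

# Hypothetical usage during an incremental deploy; the numbers are made up.
history = {"signup_rate": [102, 98, 105, 99, 101, 97, 103],
           "error_rate": [0.4, 0.5, 0.3, 0.6, 0.5, 0.4, 0.5]}
current = {"signup_rate": 96, "error_rate": 2.8}

bad = immune_system_check(current, history)
if bad:
    print("Reject/revert the change:", bad)   # and wake somebody up
else:
    print("Change accepted; continue the rollout")
```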
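And to ground the split-test bullet, here is a sketch of the arithmetic behind a simple A/B comparison, assuming a two-proportion z-test on conversion counts. The bucket sizes and conversion numbers below are invented for illustration, not taken from any real experiment.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates between
    control (A) and variant (B). Returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 10,000 users in each bucket.
z, p = two_proportion_z_test(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
print(f"z={z:.2f}, p={p:.4f}")  # ship the variant only if p clears the chosen threshold
```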
But for heaven's sake! Nothing on this list is particularly insightful or new. All these things were true in 1980. Have we learned nothing? Is the schooling around software development so weak that these are news to people? This list ignores the much more fundamental question of when those rules get broken. Hint: about half your time will be spent in that state. But what's key is knowing when that is a good witch vs. a bad witch. The only customer/user input or feedback loop in that list is A/B testing. That is particularly bogus!
Feeling cranky. The old joke from the 1970s about the software industry: “I am blessed to have stood on the toes of giants!”
I once introduced Ralph Johnson to the head of Motorola Research. This was around 1985, back when OO programming was getting to be a serious fad. The head guy said that he’d been doing OO in assembler 20 years earlier. I think that was technically true – OO is nothing but switch statements operating on type tags – but practically false. There is such a qualitative difference between coding in assembler and using the Smalltalk browser that it amounts to a quantitative difference.
The same is true here, I think. Consider the first two bullet points. The incredible cheapness of machines these days changes thinking about deployment. Staging servers? – Trivial to cons up a jillion of them. The speed and capacity of individual machines allow some people to run their microtests continuously. When they get to a place where they want reassurance, they just look at the possibly-already-finished previous run. I’m old-fashioned in that I pause every couple of minutes to explicitly hit F4 and wait five seconds for confirmation. Both of those feel supremely – quantitatively – different from my unit testing of N years ago.
Around 1999, I was thinking the same as you: that software development wasn’t materially different from when I started in 1981 (environment: Unix, C, vi and then, shortly after, emacs). Today I think it is.