- Continuous deployment.
- Tell a good change from a bad change quickly
- Revert a bad change quickly
- Work in small batches (at IMVU, a large batch = 3 days' worth of work)
- Break large projects down into small batches
- Have a cluster immune system
- Run tests locally. Everyone gets a complete sandbox
- Continuous integration server – runs tests to ensure all features that worked before still work
- Incremental deploy – reject changes that move metrics out of bounds
- Alerting and predictive monitoring – wake somebody up if metric goes out of bounds. Use historical trends to predict acceptable bounds.
- Conduct rapid split tests: A/B testing is key to validating hypotheses
- Follow the three A's of metrics: actionable, accessible, and auditable
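The "cluster immune system" and incremental-deploy bullets above amount to a metric guard: deploy, compare key metrics against bounds derived from historical trends, and reject (revert) the change if anything moves out of bounds. A minimal sketch, with all metric names, sample data, and the 3-sigma threshold being illustrative assumptions rather than anything from the original:

```python
# Hypothetical "cluster immune system" guard: after deploying a change,
# compare each key metric against bounds derived from its history and
# signal a revert if any metric falls outside them.
from statistics import mean, stdev

def historical_bounds(samples, k=3.0):
    """Acceptable range: mean +/- k standard deviations of past samples."""
    m, s = mean(samples), stdev(samples)
    return (m - k * s, m + k * s)

def should_revert(history, current):
    """Return True if any current metric is outside its historical bounds."""
    for name, samples in history.items():
        lo, hi = historical_bounds(samples)
        if not (lo <= current[name] <= hi):
            return True
    return False

# Illustrative historical data for two made-up metrics.
history = {
    "signup_rate": [0.051, 0.049, 0.050, 0.052, 0.048],
    "error_rate":  [0.010, 0.012, 0.011, 0.009, 0.011],
}

# A change that spikes the error rate is rejected automatically.
print(should_revert(history, {"signup_rate": 0.050, "error_rate": 0.080}))  # True
print(should_revert(history, {"signup_rate": 0.050, "error_rate": 0.011}))  # False
```

The same check, run continuously against live metrics rather than at deploy time, is the "alerting and predictive monitoring" bullet: instead of returning a revert decision, it wakes somebody up.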
But for heaven's sake! Nothing on this list is particularly insightful or new. All of these things were true in 1980. Have we learned nothing? Is the schooling around software development so weak that these are news to people? This list ignores the much more fundamental question of when those rules get broken. Hint: about half your time will be spent in that state. What's key is knowing when that is a good witch vs. a bad witch. The only customer/user input or feedback loop in that list is A/B testing. That is particularly bogus!
Feeling cranky. The old joke from the 1970s about the software industry: "I am blessed to have stood on the toes of giants!"