Another fun item from Chandler Howell’s blog about how people manage risk. People try to get what they perceive to be the right amount of risk into their lives, but they do this on really, really lousy data. So there are all kinds of breakdowns.
For example, you get unfortunate scenarios where actors suit up in safety equipment; this makes them feel safer, so they take more risks, and after all is said and done the accident rates go up. Bummer!
I’ve written about how Jane Jacobs offers a model for why Toronto overtook Montreal as the largest city in Canada. After the Second World War, Toronto was young and naive with a large appetite for risk, while Montreal was more mature and wise. To Toronto’s benefit and Montreal’s distress, the decades after the war were a particularly good time to take risks and a bad time to be risk averse.
I’ve also written about how limited liability is a delightful scheme to shape risk so that corporations will take more of it. All based on a social/political calculation that risk taking is a public good we ought to encourage.
What I hadn’t appreciated previously is how this kind of thinking is entirely scale free. Consider the fetish for testing in many of the fads about software development. The tests are like safety equipment; they encourage greater risk taking. Who knows if the total result is a safer development process?
Was it about risk? Or was it about the availability of cheap, flat, easily-buildable land for suburban tract development? Post-WWII was all about suburbia.
I mean, was Long Island less “risk averse” than Manhattan? Or, did it just have more farmland available to be turned into Levittowns?
While there is certainly a good story to tell about suburbs and risk, the story Jane Jacobs tells is about commerce. In financial terms, the Montreal investors had a higher hurdle rate than Toronto’s.
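To make “hurdle rate” concrete – and every figure here is invented purely for illustration – here is a throwaway sketch of the same project evaluated against two required rates of return:

```python
# Hypothetical illustration of hurdle rates; all numbers are made up.
# The same stream of cash flows clears a low hurdle rate and fails a high one.

def npv(cash_flows, rate):
    """Net present value: cash_flows[t] discounted at `rate`, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

project = [-1000] + [290] * 5  # invest 1000 now, get 290 a year for 5 years

print(round(npv(project, 0.08)))  # low hurdle (Toronto-ish):   +158, funded
print(round(npv(project, 0.15)))  # high hurdle (Montreal-ish):  -28, passed over
```

A higher hurdle rate simply means fewer projects look worth doing – the difference in risk appetite compressed into one number.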
The suburbs have a strong risk story that fits nicely with the perversity insight above. You can only respond to perceived risk, which isn’t actual risk. The developers built out from the cities in part because that let them avoid the regulatory tangles in the city. They then packaged the product as low risk, raising that perception. Ironically, they encumbered their products with restrictive rules that would put an urban zoning board to shame. Even more ironically, the regulations the developers were fleeing were the urban society’s response to actual risks.
At the same time, a reason people move to cities is to get the diversified pool of options that lowers their risk. Suburbs don’t deliver that.
As usual, I can’t tell if my difference of opinion (DOO?) is due to semantic hair splitting or an actual difference.
> Who knows if the total result is a safer development process?
Literal ‘safety’ is more of a physical world concern, maybe a bit slippery to map conceptually to software development.
Taking ‘if it goes bad, we get hurt’ as an operational definition in both worlds (hurt with respect to business concerns in software development, hurt with respect to physical/financial/etc. concerns in the real world), I can see the analogy 100% – who knows if the tests really will flag breakage with 100% fidelity? Who knows if the policy really does cover all those injuries I consciously or unconsciously fear?
So by this definition, determining whether the result ‘is a safer development process’ comes down to evaluating before/after (were there no tests before? what test coverage exists after?) in light of not just the pain but the surprise of the pain (‘holy crap, we broke *that*?’). Even more so, what does the delta *enable*? Tests afford the perception of not just risk-averse but also cost-effective change. The more effective the testing, the more fit the perception to reality.

With no tests, possibly the perception is that we must do all this up-front code analysis of the change, discuss the results, see if the ROI is worth it, etc. With effective tests, we can make the change (with the attendant lighter-weight local analysis afforded by confidence in the tests) and empirically see if there’s pain. Ahh, now I see where I’m going. Done reasonably, tests take cost and time out of the equation.
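(A minimal sketch of that point, with an invented pricing function – nothing here comes from any real codebase. The tests pin down today’s behavior, so trying a change costs one test run rather than an analysis meeting:)

```python
# "Tests as safety equipment": a toy pricing rule plus tests that pin down
# its current behavior, so a refactor can be tried cheaply and any pain
# shows up immediately. The function and its rule are hypothetical.
import unittest

def order_total_cents(price_cents, quantity):
    """Toy rule: 10% off orders of 100 units or more (money kept in cents)."""
    total = price_cents * quantity
    if quantity >= 100:
        total -= total // 10  # the discount a refactor must not silently break
    return total

class OrderTotalTest(unittest.TestCase):
    def test_no_discount_below_threshold(self):
        self.assertEqual(order_total_cents(200, 99), 19800)

    def test_discount_at_threshold(self):
        self.assertEqual(order_total_cents(200, 100), 18000)

if __name__ == "__main__":
    unittest.main()
```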
I was going to say that it comes down to the quality of the safety equipment and the degree to which the equipment is relied on versus whether it actually should be – not whether safety equipment in general makes things safer.
In which case the answer to your query, for me, is a long way of getting around to the truth we already knew: ‘it depends’.
‘Yes’ for the canonical reasonable folk, ‘No’ for the pointy-haired archetypes, and ‘who really knows, but somewhere in between’ for the carbon-based remainder.
Turns out my DOO was about ‘does this question illuminate anything new?’, or does it *merely* offer a fresh, thought-provoking perspective on something familiar? And now that I’m here, of course, I realize it doesn’t matter.
But now I see the answer, for “reasonable folk”, is yes.
Argggh! As usual, you tricked me into thinking. Thanks a lot, Mr. Hyde.
One perspective: there are situations where you want the development team to take more risk. A gloss of testing might give them just the false sense of security you need to achieve that.
Hmph. That was disappointing. Let me try again.
When I read your comment ‘fetish for testing’, I inferred some variant of TDD, rather than just testing in general.
TDD (or general testing, for that matter) is just another weapon in the software development toolkit, along with all the rest – big tools and small, ancient techniques as well as the new new things.
It seems like you’re pointing out, with respect to testing and the issue of risk, a specific instance of the general truism – like any technique or design approach, it can be performed adequately for the task at hand, or not so much. It depends on the skill of the practitioners and the context.
So far, so what.
(Besides coming at it from a unique and interesting perspective as you usually do, I mean.)
I chimed in because I felt there was a ‘general good’ to TDD ‘properly performed’ (like Mr. Stewart’s definition of pornography, ‘we know it when we see it’) above and beyond risk mitigation that was being ignored – namely productivity (time to market, total cost of ownership). The same rules apply as above – it needs to be done ‘appropriately’, which for human endeavors depends more on context than anything else. But there are some other major business-value freebies in there besides risk mitigation that are often missed when discussing it.
> One perspective: there are situation where you want the development team to take more risk.
I’m guessing the ‘you’ is the customer or perhaps manager of the development team.
I doubt that’s how that would be phrased – perhaps you’d want them to get more done in the same amount of time for the same amount of money, and would be willing to accept more risk in the process.
If they have to take risk to do that, then hopefully you are having the whole conversation and not just focusing on ‘are we being risky enough?’
Summed up: you can get ‘a false sense of security’ with pretty much any SDLC technique or tool.
‘Appropriate’ TDD (assuming that is at least close to what you were describing) pays off not just in risk mitigation but also orthogonally in time and cost.