Over the last few years I have noticed a growing tendency among software developers to constrain their thinking to static models.
Static thinking is a great way of envisioning a system, and we have many tools to support modelling systems in this way - class, deployment and network diagrams all fall into this category. But all of these models capture a single state of the system - either historical, real (current) or imaginary (future).
However, many projects involve more difficult thinking. Typically these difficulties involve moving from one state to another, and the most complex of these transitions is migrating from one version of a system to another.
It is at this point that I struggle to find an effective model that captures the transition. How do we model and capture the upgrade of a database, an application and its infrastructure requirements? Further, these transitions occur at different times and are typically not instantaneous, which can mean that the service is not available. How do we model this sequence of events so we can understand the effect of our evolving software design on rolling out the new software?
Quite often these rollout plans are written down (impact analyses, rollout network models, sequence diagrams, flow charts) or simply held in the minds of the project team. But these models are not easily tested. Organisations with a significant investment in their live environment often struggle to replicate that environment, so the model and the plans cannot be exercised before they hit production.
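One way to make such a plan executable rather than purely mental is to express the rollout as an explicit dependency graph and let code derive the sequence and flag the downtime window. The sketch below is a minimal illustration, not a complete tool - the `Step` structure and all step names are assumptions for the example:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Step:
    """One transition in a rollout plan (fields are illustrative)."""
    name: str
    requires: List[str] = field(default_factory=list)
    downtime: bool = False  # True if the service is unavailable during this step

def order_steps(steps: List[Step]) -> List[str]:
    """Topologically sort steps so every prerequisite runs first.

    Raises ValueError on a cycle, so an impossible plan fails fast -
    long before it reaches the production environment.
    """
    by_name = {s.name: s for s in steps}
    ordered, done, visiting = [], set(), set()

    def visit(name: str) -> None:
        if name in done:
            return
        if name in visiting:
            raise ValueError(f"cycle involving {name!r}")
        visiting.add(name)
        for dep in by_name[name].requires:
            visit(dep)
        visiting.discard(name)
        done.add(name)
        ordered.append(name)

    for step in steps:
        visit(step.name)
    return ordered

# A hypothetical upgrade: schema migration, then app deploy, then traffic switch.
plan = [
    Step("switch-traffic", requires=["deploy-app-v2"]),
    Step("deploy-app-v2", requires=["migrate-schema"], downtime=True),
    Step("migrate-schema"),
]

sequence = order_steps(plan)
by_name = {s.name: s for s in plan}
outage = [name for name in sequence if by_name[name].downtime]
print(sequence)  # ['migrate-schema', 'deploy-app-v2', 'switch-traffic']
print(outage)    # ['deploy-app-v2']
```

Even this toy version buys something the documents and diagrams do not: the plan can be run, asserted against and broken deliberately in a test suite, without touching the live environment.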
I think finding an effective tool to model and execute software updates will be one of the key challenges of this decade - as it has been for the last two.