Evolutionary Database Design is a slightly counterintuitive approach to managing a database. On the one hand, we tend to think of our databases as towers we must defend from any old change, deploying to them only with great care. On the other hand, if we can get very good at deploying incremental changes to a database, and at keeping track of them, then we can reasonably keep a production environment in sync with our latest database structure, and we can use the change log to create a fresh database for any test environment we wish to spin up.
And that’s what we did.
And they all lived happily ever after.
Notes On How
Here are the technologies I used along the way:
- Liquibase – the community edition, which runs on the JVM and does a very good job
- MySQL – in particular MySQL running in Docker – the live databases were AWS Aurora (MySQL dialect), running in RDS
- Some shell scripts
- YAML format for all the Liquibase files
- AWS CodeBuild to execute test builds of the latest database scripts
- AWS ECR to host the Docker images
- AWS CodePipeline to deploy the database changes
- AWS CloudFormation to construct the RDS resources managed by the scripts
Changing the Database
Let’s assume the above is all in place and walk through a typical database change.
- Modify the YAML files in the DB repository to add the changes
- Run the database build script to test this new change
- The script would build a fresh database out of the sum of the YML files
- The script would then pull the Docker image of the last known good version of the database and attempt to use this build’s version of Liquibase to upgrade that previous database
- Assuming everything was successful, package a Docker image which serves both as a MySQL server with the latest schema baked in, AND as a tool which can deploy that schema to any other database of the same family
- Push a branch containing the change
- Get a green build on that branch (running the same scripts, but in CodeBuild, independent of a developer machine)
- Merge the branch to trunk
- Have everything built and published
- Trigger the pipeline to use the Liquibase tool in each environment to upgrade the database
- Not forgetting the test steps in that pipeline to ensure the app still functions
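The first step above – adding the change to the YAML files – looks something like this in Liquibase’s YAML changelog format. The file, table, and column names here are invented for illustration, not taken from the project:

```yaml
# changelog/002-add-customer-email.yml — a hypothetical changeset file
databaseChangeLog:
  - changeSet:
      id: add-customer-email
      author: dev
      changes:
        - addColumn:
            tableName: customer
            columns:
              - column:
                  name: email
                  type: VARCHAR(255)
                  # nullable, so existing rows and the current app keep working
```

Each changeset is recorded by Liquibase in its tracking table once applied, which is what lets the same files both build a fresh database from scratch and upgrade an existing one.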
Don’t Make Breaking Changes
If you make a breaking change with this approach, it’s costly. It’ll bring down your live system, lose data, or put you into a loop where you can’t fix forwards OR backwards. It’s awful.
So we avoid breaking changes.
Breaking changes are usually caused by adding mandatory things before the app’s ready, or removing things the app depends on.
Find a way to sympathetically add things so the app isn’t broken. Step the app up to use the new pathway, and then remove the old. Or make the app go first, with the database following when it’s ready, depending on the change.
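That “add sympathetically now, remove later” sequence can be sketched as two changesets in Liquibase YAML – the second shipped only once the app has moved over. Table and column names are hypothetical:

```yaml
databaseChangeLog:
  # Step 1 (ship now): add the new column as nullable,
  # so the current app and existing rows are unaffected
  - changeSet:
      id: add-order-status-v2
      author: dev
      changes:
        - addColumn:
            tableName: orders
            columns:
              - column:
                  name: status_v2
                  type: VARCHAR(32)

  # Step 2 (ship only after the app reads and writes status_v2):
  # remove the old column the app no longer depends on
  - changeSet:
      id: drop-order-status-v1
      author: dev
      changes:
        - dropColumn:
            tableName: orders
            columnName: status
```

Between the two steps, the app is stepped up to use the new column; neither deployment ever breaks the version of the app running at the time.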
Treat it Like it’s Hot
With this approach, you need to tread more carefully with your changes… but that doesn’t necessarily make things bad.
There should also be a way of running any previous DB version locally, to see the impact of a change before pushing it to the source code repository.
It’s also worth noting that making small, thoughtful changes to the database – the key success factor for this process – is a successful strategy ANYWAY.
We’ve been using this on a couple of projects recently and it’s been an unqualified success. The tools are simple and basic, yet the process works powerfully well.
Owing to the way Liquibase uses a master file to govern the changes, there’s good isolation of concurrent change when you use trunk-based development – essentially, you sort it out at the merge-conflict end.
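The master file in question is just an ordered list of includes, so concurrent branches each append a line to it and parallel work surfaces as an ordinary merge conflict. A hypothetical example, with invented file names:

```yaml
# master-changelog.yml — the single entry point Liquibase reads
databaseChangeLog:
  - include:
      file: changelog/001-create-customer.yml
  - include:
      file: changelog/002-add-customer-email.yml
  # each branch appends its own include here; two branches touching
  # this list collide visibly at merge time, which is exactly the point
```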
Similarly, with the cycle of build, upgrade test, deploy, and real-world test, issues are revealed earlier. And the production of a MySQL image that can be used for local testing, as a vital output of the process, is really handy too.
I recommend this approach.