At GuideSmiths we’ve faced the challenge of migrating our customers away from their legacy platforms many times. On every project we’ve completed, we’ve ended up relying on the same key ally: the very legacy system we were trying to replace.
Keeping the old system running and providing value whilst the new platform is being built from the ground up is essential for a few reasons:
- The first and most obvious one is that you don’t want to halt regular business operations just because the old system needs a rebuild. Keep things working as they are until they can get better; big-bang changes create a lot of anxiety and operational overhead.
- You will also have a running, operational backup system as a fallback strategy should the new system fail to work as expected. What we try to avoid at all costs is having to plan a painful backwards-compatible migration to the old system if the new platform fails for some reason. Keeping the old system running is typically far more cost-effective, less complex, and therefore easier to reason about.
- Another reason to keep the legacy system running is that it gives you a reference point for what you are actually replacing. If I stopped the legacy system from processing our latest data, how could I ensure the new data model preserves its old meaning, with nothing to compare it against?
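That reference point can be made concrete with a reconciliation check: pull the same records from both systems, normalise them into a canonical form, and compare checksums. The sketch below is a minimal illustration of the idea; the field names (`id`, `email`, `balance`) and normalisation rules are hypothetical, and a real migration would need per-field translation rules agreed with the business.

```python
import hashlib
import json

def normalize(record):
    # Hypothetical canonical form: real migrations need explicit,
    # per-field mapping rules between the old and new data models.
    return {
        "id": str(record["id"]),
        "email": record["email"].strip().lower(),
        "balance_cents": int(round(float(record["balance"]) * 100)),
    }

def checksum(record):
    # Deterministic fingerprint of the normalised record.
    canonical = json.dumps(normalize(record), sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def reconcile(legacy_rows, new_rows):
    # Compare the two datastores and report three kinds of drift.
    legacy = {str(r["id"]): checksum(r) for r in legacy_rows}
    new = {str(r["id"]): checksum(r) for r in new_rows}
    return {
        "missing": sorted(set(legacy) - set(new)),        # in legacy only
        "unexpected": sorted(set(new) - set(legacy)),     # in new only
        "mismatched": sorted(                             # present in both, different content
            k for k in legacy.keys() & new.keys() if legacy[k] != new[k]
        ),
    }

legacy_rows = [{"id": 1, "email": "Ana@Example.com ", "balance": "10.50"}]
migrated_rows = [{"id": 1, "email": "ana@example.com", "balance": 10.5}]
print(reconcile(legacy_rows, migrated_rows))
# → {'missing': [], 'unexpected': [], 'mismatched': []}
```

Run nightly against a sample of records, a report like this tells you immediately whether the new model still means what the old one did, while the legacy system is still there to answer the question.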
Generally speaking, I think legacy migrations are really fulfilling. When you build an entirely new product for the market (one you don’t know will be used or not), you need to be prepared for quick turnarounds, potentially big changes in the data model, and a lot of throw-away code. By contrast, I find legacy migrations rather satisfying, since you know a few things upfront about the old system:
- Business logic
- Number of users
- Amount of data
- Structure of the data
- Amount of transactions to support
- Current limitations and bottlenecks (poor performance, lack of tests, dead features, manual deployments, non-responsive version, really bad UX…)
- Users’ demands for new features
So provided we can preserve the existing data (most likely with a simpler and better data model), implement the same business behaviour, and support the current volume of requests, we are in an excellent position to add huge value to the rebuild by:
- Removing any technical limitations to let the system scale up vertically
- Making automated deployment and live releases on a daily/weekly basis to ship value fast and often
- Creating automated tests and QA checks to ensure sustainable growth
- Improving visual appearance by creating a better UX, a responsive version, or whatever was needed
- Implementing any new features and dropping unused ones
- Improving security and data management
However, as I said at the start, all of this would be a much harder mission without the legacy system as your ally. Build it into your strategy and your odds of success increase dramatically.
We’ve recently interviewed a number of technical leaders and produced this report, which explains the most common issues teams face in legacy migration projects and offers recommendations on how to navigate them.