Articles
Migration to Microservices: Lessons Learned, Part 3
May 2, 2018

This is the third post in our series on lessons learned while migrating to microservices. Today, we'll discuss data migration. If you missed the previous installments, start with Parts 1 and 2.

Think About Data Migration

Are you migrating an existing monolithic solution to microservices?
If so, there are only a few data migration options you should consider.
Let’s review each of them in more detail.

Option 1

The new solution starts with new domain models, new data stores, and no data. To be honest, this is a relatively rare case in practice; if it's yours, you are pretty lucky.

Option 2

The new solution starts with new domain models and data stores, and all data is migrated from the monolithic solution before release. Afterward, no data is written via the old system, and the new one becomes the single source of truth. This is the preferable scenario, but usually still not the most common one.

If you follow this path, you will be building a big-bang migration tool that must execute within a downtime window. At first glance, this might sound trivial, so engineers tend to neglect its importance and postpone it to a later stage, when all services and infrastructure are in place. In reality, though, you need to migrate all data from one big monolithic store (usually an RDBMS) to a set of independently managed stores (SQL, NoSQL, or even binary files).
That can lead to technical complexities such as these (a minimal sketch of such a tool follows the list):

  • The domain models may differ significantly, so it is no longer a simple field-to-field mapping. As a result, the migration tool has business logic of its own that must be specified, verified, and tested.
  • Validation rules may vary significantly, which leads to corner cases and shortcuts that should be discussed one by one with product managers.
  • Statements that used to form a single transaction in the old system are now fully distributed, so you need to think about how to maintain data consistency across the stores.
  • Execution time matters. Since the downtime window is usually short, you will spend a lot of time getting from the initial version of the migration tool (which is merely correct) to a version that is also fast.
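
To make this concrete, here is a minimal sketch of one step of such a migration tool in Java. The specifics are hypothetical: a legacy_orders table migrating into an order service's own store behind an invented OrderStore interface, with the mapping logic standing in for the real business rules. The batched keyset reads address the performance concern above.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

// Hypothetical target document for the new service's own store.
record OrderDocument(long legacyId, String customerEmail, String status) {}

// Abstraction over the new service's storage (SQL, NoSQL, even files).
interface OrderStore {
    void saveAll(List<OrderDocument> batch); // idempotent upsert by legacyId
}

public class BigBangOrderMigration {

    private static final int BATCH_SIZE = 1000; // tune against the downtime window

    public static void migrate(Connection legacyDb, OrderStore newStore) throws Exception {
        long lastId = 0; // keyset pagination keeps reads fast on big legacy tables
        try (PreparedStatement ps = legacyDb.prepareStatement(
                "SELECT id, customer_email, state FROM legacy_orders "
                + "WHERE id > ? ORDER BY id LIMIT " + BATCH_SIZE)) {
            while (true) {
                ps.setLong(1, lastId);
                List<OrderDocument> batch = new ArrayList<>(BATCH_SIZE);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        lastId = rs.getLong("id");
                        batch.add(map(rs));
                    }
                }
                if (batch.isEmpty()) break;
                newStore.saveAll(batch); // idempotent, so a failed run can be restarted
            }
        }
    }

    // The tool's own business logic lives here: the legacy model does not
    // map field-to-field onto the new one.
    private static OrderDocument map(ResultSet rs) throws Exception {
        // Example rule: the legacy system stored numeric state codes.
        String status = switch (rs.getString("state")) {
            case "0" -> "DRAFT";
            case "1", "2" -> "PAID"; // two legacy states collapse into one
            default -> "CANCELLED";  // a corner case to confirm with product managers
        };
        String email = rs.getString("customer_email");
        if (email == null || email.isBlank()) {
            // Validation rules differ between the systems; decide per corner
            // case whether to skip, fix up, or fail the whole run.
            throw new IllegalStateException("Order " + rs.getLong("id") + " has no email");
        }
        return new OrderDocument(rs.getLong("id"), email.trim().toLowerCase(), status);
    }
}
```

Making saveAll an idempotent upsert keyed by the legacy ID is what lets you rerun the tool after a partial failure instead of restoring from backup, which matters a great deal inside a short downtime window.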

Option 3

Two systems might co-exist. Two scenarios are possible here:

  1. The two systems keep their data fully separate, so new functionality (no longer supported in the old solution) uses the new domain model.
  2. The two systems keep their data separately but synchronize it to some extent (which is obviously more complex). This scenario is quite common, since the old solution often has a dependent process (such as reporting) that works with the old model only, so data must eventually reside in the old data store somehow.

The first scenario is just a smaller version of the big-bang migration, so everything said above applies here, too.
The second scenario can get even more complex. Imagine that every change made via a new service (e.g., data inserted through it) must be propagated in near real time to the old database used by the monolithic application. That leads to even more complex implications (a sketch of such a data loader follows the list):

  • A service, let's call it a data loader, has to be implemented as a mediator between the two solutions. The data loader can interact with the monolithic database directly or use an API exposed by the monolithic application (the latter is preferable, of course, as it lets us reuse validation rules, transaction support, etc.).
  • Communication between the new services and the data loader should be asynchronous (for instance, via events), which leads to infrastructure changes as well: we need to add a persistent pub/sub mechanism such as Apache Kafka or an alternative.
  • The data loader must take care of concurrent execution to guarantee that changes made in a particular order by the microservices are propagated to the old system in the same order.
  • One transaction in a microservice might turn into multiple calls from the data loader to the old system, which raises the question of eventual data consistency once again.
  • The data loader should handle all corner cases and raise an alert if anything goes wrong while migrating data.
  • The data loader should be fault tolerant, i.e., able to resume processing unprocessed events after a crash.
  • Comprehensive logging, auditing, and monitoring should be implemented to guarantee data consistency and prevent data loss between the two systems.
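
To make this more tangible, here is a minimal sketch of such a data loader built on Apache Kafka's Java consumer. Everything specific in it is an assumption for illustration: the order-changes topic, the broker address, and the LegacyApiClient interface standing in for the monolith's API. The producers are assumed to use the entity ID as the message key, so all changes to a given entity land in the same partition and arrive in order, and offsets are committed only after the old system has accepted the changes, which is what makes recovery after a crash possible.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Hypothetical client wrapping the API exposed by the monolith; going through
// that API (rather than its database) reuses its validation and transactions.
interface LegacyApiClient {
    void apply(String entityId, String changeJson); // must be idempotent
}

public class DataLoader {

    public static void run(LegacyApiClient legacyApi) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");
        props.put("group.id", "data-loader");
        props.put("enable.auto.commit", "false"); // commit only after success
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Producers key every event by entity ID, so all changes to one
            // entity land in one partition and are consumed in publish order.
            consumer.subscribe(List.of("order-changes"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    try {
                        // One microservice transaction may expand into several
                        // calls to the old system; consistency is eventual.
                        legacyApi.apply(record.key(), record.value());
                    } catch (RuntimeException e) {
                        // Log, alert, and fail fast rather than silently
                        // skipping an event and losing data.
                        System.err.printf("Failed to propagate %s at offset %d: %s%n",
                                record.key(), record.offset(), e.getMessage());
                        throw e;
                    }
                }
                // Offsets are committed only after the monolith accepted the
                // batch; after a crash, uncommitted events are simply replayed.
                consumer.commitSync();
            }
        }
    }
}
```

Since replay after a crash means the same event can reach the monolith twice, apply must be idempotent. Combined with per-entity ordering, this gives at-least-once delivery with no lost or reordered changes, which is usually the right trade-off for this kind of synchronization.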

As you can see, data migration is quite a complicated part of the overall delivery. Agree on the option you are choosing as early as possible (with all its technical implications in mind), and start implementing the data migration tool or the data loader service once you have a few business services in place, not when you are close to a release.