Migrating and modernizing legacy applications using Migrate for Anthos
Most enterprises these days have already moved, or are planning to move, their workloads to the public cloud, and many want to containerize their applications to take advantage of everything Kubernetes offers. But what if you have an extensive collection of applications built on legacy frameworks? Containerizing them by hand is a tedious and time-consuming task.
Migrate for Anthos provides a way to migrate and modernize these legacy applications into containers on Google Kubernetes Engine (GKE). I will not go deep into the benefits of Migrate for Anthos here; instead, take a look at the excellent blog written by Brent Ashley here, which describes in detail how Migrate for Anthos can save you both time and effort when migrating these workloads.
If your applications run inside virtual machines on VMware (on-premises) or on public cloud platforms such as AWS, Azure, or Google Compute Engine, you can use Migrate for Anthos to move them into containers on GKE. To containerize a virtual machine, Migrate for Anthos creates a wrapper container image from the source VM: the OS is replaced with one supported by GKE, the filesystem is analyzed by parsing fstab, and a Persistent Volume is mounted using the Migrate for Anthos Container Storage Interface (CSI) driver, which streams the data from the source VM. The diagram below shows the architecture of Migrate for Anthos.
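To make the storage streaming concrete, here is a minimal sketch of what a PersistentVolumeClaim backed by such a CSI driver could look like. The StorageClass name, claim name, and size are illustrative placeholders, not the actual objects Migrate for Anthos generates.

```yaml
# Hypothetical sketch only: names are illustrative, not the actual
# resources produced by Migrate for Anthos.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: legacy-app-streamed-disk
spec:
  accessModes:
    - ReadWriteOnce
  # Assumed StorageClass backed by the Migrate for Anthos CSI driver,
  # which streams blocks from the source VM on demand.
  storageClassName: m4a-csi-streaming
  resources:
    requests:
      storage: 100Gi
```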
First, you will need IP connectivity between your source platform and Google Cloud; you can use either Cloud VPN for secure communication or Interconnect. Migrate for Anthos (M4A) depends on Migrate for Compute Engine (M4CE), so as a prerequisite you have to set up Migrate for Compute Engine as well. On Google Cloud Platform (GCP), you deploy the Migrate for Compute Engine Manager, which is available through the Google Marketplace. Using the web interface of the M4CE Manager, you create credentials for the source cloud platform in use and define your source cloud, which can be VMware, Azure, or AWS. Then, again using the M4CE Manager, you create a cloud extension, which acts as a channel for VM storage between two cloud platforms, such as:
- On-prem and GCP
- AWS and GCP
- Azure and GCP
A cloud extension uses a dual-node (active/passive) configuration for high availability, whereby each node serves its own workloads while providing backup for the other. If VMware is your source platform, you also install the Migrate for Compute Engine backend and the Migrate for Compute Engine vCenter plugin. For public cloud platforms like AWS and Azure, no components need to be installed in those environments.
Migrate for Anthos currently supports most common Linux distributions for the workloads (see the complete list here). Support for Windows-based workloads is under development and will be available soon.
Migration Journey to GKE
There are several phases involved in the migration of a workload to GKE, so let’s discuss the details of each phase.
Phase I - Discovery
In the first phase, you gather the information needed for a migration by understanding your application and its dependencies. This includes an inventory of:
- Your virtual machines
- Your application’s required network ports and connections
- Dependencies across application tiers
- Service name resolution or DNS configuration
Phase II - Migration and planning
During the second phase, applications are divided into services, and the information collected during the discovery phase is translated into the Kubernetes model. The application environment, topologies, namespaces, and policies are captured in Kubernetes YAML configuration files.
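As an illustration of this translation, a tier discovered in Phase I might be expressed as a Kubernetes Service. The names, namespace, and port below are hypothetical examples, not output generated by Migrate for Anthos.

```yaml
# Illustrative only: how a discovered application tier and its
# network port might be modeled in Kubernetes.
apiVersion: v1
kind: Service
metadata:
  name: billing-backend    # hypothetical service carved out of the legacy app
  namespace: legacy-apps   # namespace chosen during planning
spec:
  selector:
    app: billing-backend
  ports:
    - port: 8080           # port identified during discovery
      targetPort: 8080
```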
Phase III - Landing Zone Setup
In this phase, the GKE environment is prepared to receive the migrated workloads. This involves:
- Creating a GKE cluster to host the migrated workloads
- Deploying Migrate for Anthos from the Google Marketplace to the cluster
- Creating VPC network rules and policies for the application
- Applying Kubernetes service definitions
- Making load-balancing choices
- Configuring DNS
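The network rules and policies step can also be expressed at the Kubernetes layer. The sketch below, with assumed namespace, labels, and port, shows a NetworkPolicy that restricts ingress to a migrated backend tier so that only the frontend tier can reach it.

```yaml
# Hedged sketch: a NetworkPolicy for a migrated tier.
# Namespace, labels, and port are illustrative assumptions.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: legacy-apps
spec:
  podSelector:
    matchLabels:
      app: billing-backend
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: billing-frontend
      ports:
        - protocol: TCP
          port: 8080
```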
Phase IV - Migration and validation
After the GKE environment is set up and ready to host the workloads, the application configuration YAMLs are applied, which creates the Persistent Volumes and StatefulSets for the application. GKE then starts the workload as the container starts up. The Migrate for Anthos CSI driver streams the storage from the source platform, which lets your workloads start in minutes instead of waiting for all of the storage to be transferred. You can then quickly test and verify your workload in GKE.
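To give a sense of the shape of those applied YAMLs, here is a hedged sketch of a StatefulSet whose volume claim template references a streaming-backed StorageClass. All names, the image tag, and the StorageClass are placeholders, not the actual manifests Migrate for Anthos produces.

```yaml
# Illustrative shape only: names, image, and storage class are
# assumptions, not Migrate for Anthos output.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: billing-backend
  namespace: legacy-apps
spec:
  serviceName: billing-backend
  replicas: 1
  selector:
    matchLabels:
      app: billing-backend
  template:
    metadata:
      labels:
        app: billing-backend
    spec:
      containers:
        - name: app
          # Wrapper container image built from the source VM (hypothetical tag)
          image: gcr.io/my-project/billing-backend:migrated
          volumeMounts:
            - name: streamed-disk
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: streamed-disk
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: m4a-csi-streaming   # assumed streaming StorageClass
        resources:
          requests:
            storage: 100Gi
```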
Once you are satisfied with the workload, you can finalize the migration by exporting the storage from streaming persistent volumes to persistent disks using the Storage Exporter. This creates a single persistent volume from all of your disks and removes the dependency on the source storage and the Migrate for Anthos CSI driver. The persistent volume can be backed by any supported GKE storage provider.
Phase V - Operate & Optimize
Finally, you can leverage the tools provided by Anthos and the larger Kubernetes ecosystem to operate and optimize your application. You can add policies, encryption, and authentication using Istio, and monitoring and logging using the Google Cloud Operations suite, all by changing configuration rather than rebuilding your application.
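As one example of such a configuration-only change, Istio can enforce mutual TLS for all workloads in a namespace with a single resource. The namespace below is an assumed example; the resource kind and field names follow Istio's published API.

```yaml
# Example of a configuration-only security change: enabling strict
# mutual TLS for the migrated namespace with Istio, with no
# application rebuild. The namespace is an illustrative assumption.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: legacy-apps
spec:
  mtls:
    mode: STRICT
```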
I hope this blog has given you a good idea of the technical aspects of Migrate for Anthos. You can also take a look at the cool video blogs by Kyle Basset, giving an overview of Migrate for Anthos and a live demo of migrating an application from VMware (on-prem) to GKE.