It’s tough to browse today’s microservices landscape without running into a service mesh. Istio brings a service mesh, service discovery, and visibility to microservices architectures, which of course includes Kubernetes. During a recent event I built a demo showcasing an Istio-based service mesh that stretches across two different environments, leveraging nothing but Istio Ingress Gateway services in GKE (Google Kubernetes Engine) and GKE On-Prem (Google’s new on-premises offering).
To showcase the setup and test connectivity across the clusters, I chose the following demo repository from Google. The application is an artificial online store made up of multiple individual microservices, intended to showcase deploying and monitoring applications in Kubernetes.
Istio in a Shared Control Plane Across GKE and GKE On-Prem Clusters
The demo environment is based on the Shared control plane deployment published by the Istio team. The TL;DR of this deployment model is that the multicluster topology is built via gateways only; Istio can then route traffic to the appropriate endpoint via locality-aware service routing.
In this deployment the cluster on the left will be running on GKE On-Prem, while the second cluster, on the right, will be running on GKE. Following this deployment, the GKE On-Prem cluster will have the full Istio stack deployed, whereas the remote GKE cluster will only have an Ingress gateway, Citadel, and the Sidecar injector implemented. From an Istio perspective, the GKE cluster will be managed and controlled entirely by the full deployment running On-Prem.
Let’s jump right into the deployment! First, the prerequisites:
At least 2 Kubernetes clusters (GKE and GKE On-Prem in our case)
Admin access to the clusters to deploy Istio
kubectl installed locally for management
IP address for the Load Balancer service for Istio (only needed for GKE On-Prem clusters)
IP address on the primary cluster accessible to the secondary cluster
IP address on the secondary cluster accessible to the primary cluster
(Optional) A front-end certificate for the Istio Ingress gateway on the primary cluster (cert-manager can be used if desired)
An external sub-domain with the appropriate DNS entries in place (this will vary based on your deployment)
In our demo environment we have a sub-domain with a wildcard DNS entry that resolves to a routable IP address hosting the Istio Ingress Gateway in the primary cluster. This can be customized accordingly.
Install and Prep Istio in GKE On-Prem (Local Cluster)
Perform the Istio deployment against the GKE On-Prem cluster (Cluster 1 in the diagram) as follows:
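As a rough sketch, the primary-cluster install follows Istio’s standard Helm-based flow. The exact chart paths and values vary by Istio release, so treat the flags below as assumptions to be checked against the version you install:

```shell
# Sketch only: deploy the full Istio control plane on the GKE On-Prem cluster.
kubectl create namespace istio-system

# Render the control-plane manifest from Istio's bundled Helm charts,
# enabling control-plane security so the remote cluster can connect.
helm template install/kubernetes/helm/istio \
  --name istio --namespace istio-system \
  --set global.controlPlaneSecurityEnabled=true \
  > istio-primary.yaml

kubectl apply -f istio-primary.yaml

# Wait for the control-plane pods (Pilot, Citadel, the ingress gateway)
# to become ready before moving on.
kubectl -n istio-system get pods -w
```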
Install and Prep Istio in GKE (Remote Cluster)
Perform the Istio deployment against the GKE cluster (Cluster 2 in the diagram) as follows:
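The remote cluster gets only the minimal footprint described above (Ingress gateway, Citadel, Sidecar injector). A hedged sketch using Istio’s remote values file follows; the file name and flag are assumptions to verify against your Istio release, and `<pilot-address>` stands in for the primary-cluster IP that is reachable from this cluster:

```shell
# Sketch only: install the minimal Istio footprint on the GKE cluster.
kubectl create namespace istio-system

helm template install/kubernetes/helm/istio \
  --name istio-remote --namespace istio-system \
  --values install/kubernetes/helm/istio/values-istio-remote.yaml \
  --set global.remotePilotAddress=<pilot-address> \
  > istio-remote.yaml

kubectl apply -f istio-remote.yaml
```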
Joining the Two Clusters
At this point Istio is installed in both clusters, but in Cluster 2 (our GKE cluster) the Istio Ingress Gateway pods will not become ready. This is because the primary cluster is not yet configured to accept requests from the secondary cluster. In this section we will configure the primary cluster to accept connections and join the two configurations into a single mesh.
Perform these steps against the GKE On-Prem cluster (Cluster 1 in the diagram):
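The key piece on the primary side is a cross-network gateway that passes mTLS traffic from the remote cluster straight through to in-mesh services. A sketch adapted from Istio’s multicluster samples (names and ports may differ in your release):

```yaml
# Cross-network gateway on the primary cluster. AUTO_PASSTHROUGH
# forwards mTLS traffic from the remote cluster without terminating it.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cluster-aware-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: tls
      protocol: TLS
    tls:
      mode: AUTO_PASSTHROUGH
    hosts:
    - "*.local"
```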
At this point the Gateway pods in the secondary cluster should become ready as they are able to join the mesh.
Deploying the Hipster-Store Demo Application
Below is a video demonstrating the full deployment of the demo application on the GKE On-Prem cluster, with only the frontend web service running in the remote cluster. This showcases the frontend connecting back to the primary cluster for every service it depends on.
Deploy the demo app in the primary cluster
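As a sketch, deploying the full demo to the primary cluster amounts to enabling sidecar injection and applying the demo manifests. The file names follow the layout of Google’s demo repository and may differ in your copy:

```shell
# Sketch: enable sidecar injection, then deploy the full demo
# application to the primary (GKE On-Prem) cluster.
kubectl label namespace default istio-injection=enabled

kubectl apply -f ./release/kubernetes-manifests.yaml
kubectl apply -f ./release/istio-manifests.yaml
```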
Deploy the frontend to the secondary cluster
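On the remote cluster, only the frontend is deployed; everything else resolves back through the mesh to the primary cluster. A sketch, where `frontend-only.yaml` is a hypothetical manifest containing just the frontend Deployment and Service extracted from the demo manifests:

```shell
# Sketch: enable sidecar injection on the remote (GKE) cluster and
# deploy only the frontend; all other services are reached through
# the mesh on the primary cluster.
kubectl label namespace default istio-injection=enabled

kubectl apply -f frontend-only.yaml  # hypothetical frontend-only manifest
```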
Here is a demo video showing the environment built along with the traffic routing between GKE and GKE On-Prem through the service mesh:
Reach out to learn more about Istio, GKE, Anthos, and more.