The concept of a service mesh has become increasingly popular over the last year or so as we shift to a more containerized approach for our application workloads. With this, I’ve acquired a keen interest in trying out a number of them to see what each brings to the table. In previous blogs, I covered configuring mTLS with Istio as well as HashiCorp’s Consul Ingress Gateway feature. In this blog, as the title suggests, I want to show how to initially set up Kuma, a control plane that offers service mesh for both Kubernetes and VM-based workloads. On top of this, I will also be deploying Kong, an ingress controller that will allow us to get external traffic to our applications.
I’ve personally just started working with Kuma and Kong, but so far the experience has been very good, especially when working with non-containerized workloads that sit on virtual machines. Kuma has done a great job at being platform-agnostic and can be deployed on any Linux-based machine.
Prerequisites
For this blog, I am going to use a three-node Google Kubernetes Engine (GKE) cluster, but you can use any type of Kubernetes cluster you prefer. Below is a capture of the cluster I am going to be using.
$ kubectl get nodes
NAME                                         STATUS   ROLES    AGE   VERSION
gke-arctiqjacob-default-pool-5a0b2c40-4c26   Ready    <none>   87s   v1.15.12-gke.6
gke-arctiqjacob-default-pool-5a0b2c40-rwf2   Ready    <none>   87s   v1.15.12-gke.6
gke-arctiqjacob-default-pool-5a0b2c40-vv9x   Ready    <none>   87s   v1.15.12-gke.6
Deploying Kuma
First, we are going to deploy Kuma so that the service mesh is in place before we deploy our application and the Kong ingress gateway. Deploying Kuma on Kubernetes is straightforward and can be done easily with kumactl, as shown below.
# install Kuma
$ curl -L https://kuma.io/installer.sh | sh -
# change into the bin directory
$ cd kuma-0.6.0/bin
# use kumactl to install the Kuma control-plane
$ ./kumactl install control-plane | kubectl apply -f -
# verify the Kuma control-plane pod and service comes up
$ kubectl get pods,svc -n kuma-system
NAME                                     READY   STATUS    RESTARTS   AGE
pod/kuma-control-plane-7b4cd4dc8-wxrcj   1/1     Running   0          72s

NAME                         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                 AGE
service/kuma-control-plane   ClusterIP   10.7.255.138   <none>        5681/TCP,5683/TCP,...   73s
# port-forward the control-plane service
$ kubectl port-forward svc/kuma-control-plane -n kuma-system 5683:5683
Kuma is now deployed, and we have port-forwarded the control-plane service so we can access it through a browser and see what we have just deployed. The UI is very user-friendly and gives plenty of help to get users started with the tool. Out of the box, Kuma creates a mesh called default, and this is the one we will use for this blog. Kuma allows you to deploy multiple meshes on the same Kuma cluster, which makes it possible to give each application, or each team, its own mesh if needed.
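If you do want to experiment with multiple meshes, creating one is just another Kubernetes resource. Here is a minimal sketch; the mesh name team-a is purely illustrative:

```shell
# create an additional, isolated mesh alongside the default one
# (the name "team-a" is an example, not something this blog relies on)
$ kubectl apply -f - <<EOF
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: team-a
EOF
```

Workloads are then on-boarded to a specific mesh via the `kuma.io/mesh` annotation on their pods, keeping each team's traffic and policies separate.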
In the UI, you will also notice there are no data planes yet; this is because we have not on-boarded any applications onto a Kuma mesh. Once we deploy our application, a data plane resource will appear and give us more insight into the application.
Before we go any further, let’s deploy our application so that we have something to work with.
Deploying our Sample Application
We’ll leave this section short and sweet as I want to keep the focus on Kuma and Kong. To give us an application to work with, I’m going to deploy my Random Sports Team application as shown below.
# create a namespace for the application
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: random-sports-team
  labels:
    kuma.io/sidecar-injection: enabled
EOF
# deploy the application
$ kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: random-sports-team
  namespace: random-sports-team
  labels:
    app: random-sports-team
spec:
  selector:
    matchLabels:
      app: random-sports-team
  template:
    metadata:
      labels:
        app: random-sports-team
    spec:
      containers:
      - name: random-sports-team
        image: jacobmammoliti/random-sports-team:latest
        env:
        - name: PORT
          value: "8080"
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 3
EOF
# create a service for our application
# the application pod will CrashLoopBackOff until this Service is created
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: random-sports-team
  namespace: random-sports-team
spec:
  selector:
    app: random-sports-team
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    name: http
  type: ClusterIP
EOF
# verify the pod and service came up
# you will notice the pod has 2 containers, one being the application and the other being the data-plane proxy
$ kubectl get pods,svc -n random-sports-team
NAME                                     READY   STATUS    RESTARTS   AGE
pod/random-sports-team-db7f4d797-xz4zt   2/2     Running   1          6s

NAME                         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/random-sports-team   ClusterIP   10.7.252.222   <none>        8080/TCP   6s
Optionally, we can port-forward the control-plane service again, and we should now see a new data plane with our application in it.
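If you prefer the command line over the GUI, the control plane also exposes an HTTP API that can list the data planes in a mesh. A rough sketch, assuming the API is served on port 5681 as it is in this Kuma version (worth verifying against your deployment):

```shell
# port-forward the control-plane HTTP API (port 5681 here; check your version)
$ kubectl port-forward svc/kuma-control-plane -n kuma-system 5681:5681 &

# list the data planes registered in the default mesh
$ curl http://localhost:5681/meshes/default/dataplanes
```

The response should include an entry for the random-sports-team pod, along with the tags Kuma assigned to it.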
Deploying Kong
Now that we have the service mesh and our application deployed, it’s time to allow outside traffic to reach the application. For this, we will use Kong, an ingress controller created by the same team that built Kuma. Kong can be deployed through Helm 2 or 3, but for this blog I’m going to install it with the Kubernetes manifests and kubectl, as shown below.
# deploy the Kong requirements such as namespace, CRDs, deployment, etc.
$ kubectl create -f https://bit.ly/k4k8s
# label the Kong namespace so that Kuma will inject the ingress controller with a data-plane proxy
$ kubectl label namespace kong kuma.io/sidecar-injection=enabled
# delete the current ingress pod so that it can be injected with the data-plane proxy
$ kubectl delete pods --all -n kong
# verify the ingress pod and services have come up successfully
$ kubectl get pods,svc -n kong
NAME                                READY   STATUS    RESTARTS   AGE
pod/ingress-kong-67d54d4df6-2rm8w   3/3     Running   0          30s

NAME                              TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
service/kong-proxy                LoadBalancer   10.7.249.75   <redacted>    80:32151/TCP,443:31233/TCP   6m29s
service/kong-validation-webhook   ClusterIP      10.7.249.63   <none>        443/TCP                      6m29s
After this, Kong has been successfully deployed and we can return to our application. We need to create a Kubernetes Ingress object, which I’ll do below, and annotate our random-sports-team service so that Kuma handles the load balancing instead of Kong.
# create an ingress object in the random-sports-team namespace
$ kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: random-sports-team-ingress
  namespace: random-sports-team
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: random-sports-team
          servicePort: 8080
EOF
# add the upstream annotation
$ kubectl annotate svc/random-sports-team -n random-sports-team ingress.kubernetes.io/service-upstream="true"
You should now be able to visit your application at the public IP assigned to the kong-proxy service.
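As a quick sanity check from the terminal, something like the following should return a response from the application. The jsonpath expression assumes a standard LoadBalancer service that reports an IP (rather than a hostname) in its status:

```shell
# grab the external IP assigned to the Kong proxy service
$ PROXY_IP=$(kubectl get svc kong-proxy -n kong \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# request the application through the ingress; expect an HTTP 200
$ curl -i http://$PROXY_IP/
```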
Securing Communications with mTLS
The final topic I want to cover in this blog is mutual TLS. A big benefit of adopting a service mesh like Kuma is the ability to ensure all services talk over a secure connection. Enabling this in Kuma is easy, and the feature is highly configurable. For the purposes of this blog, we will simply enable mTLS across the default mesh and then allow all services to communicate with each other securely. In this example we are using a built-in CA to sign our certificates, but Kuma does allow you to provide your own if required.
$ kubectl apply -f - <<EOF
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    enabledBackend: ca-1
    backends:
    - name: ca-1
      type: builtin
EOF
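For reference, bringing your own CA instead of the built-in one uses a provided backend. A rough sketch is below; the secret names are placeholders, and the exact schema is worth double-checking against the documentation for your Kuma version:

```shell
# switch the mesh to a user-provided CA
# (my-ca-cert and my-ca-key are placeholder Kuma secret names)
$ kubectl apply -f - <<EOF
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    enabledBackend: ca-2
    backends:
    - name: ca-2
      type: provided
      conf:
        cert:
          secret: my-ca-cert
        key:
          secret: my-ca-key
EOF
```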
$ kubectl apply -f - <<EOF
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  namespace: default
  name: all-traffic-allowed
spec:
  sources:
  - match:
      service: '*'
  destinations:
  - match:
      service: '*'
EOF
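The wildcard policy above is fine for a demo, but once mTLS is enabled, traffic is denied unless a TrafficPermission allows it, so a production setup would typically replace the blanket rule with scoped ones. A sketch of a policy allowing only the Kong proxy to reach our application is below; the service tag values are illustrative, since the exact tags Kuma assigns to Kubernetes workloads depend on the service name, namespace, and port, and should be confirmed in the UI's Dataplanes view:

```shell
# allow only the Kong proxy to reach the random-sports-team service
# (tag values are examples; confirm the real ones in the Dataplanes view)
$ kubectl apply -f - <<EOF
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  namespace: default
  name: kong-to-random-sports-team
spec:
  sources:
  - match:
      service: kong-proxy_kong_svc_80
  destinations:
  - match:
      service: random-sports-team_random-sports-team_svc_8080
EOF
```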
And that’s it! The kong-proxy service is now talking to the random-sports-team service over the Kuma service mesh, secured with mTLS. To have a look at the mTLS settings in the UI, we can port-forward the control-plane service again, head over to Meshes → Overview, and verify the mesh is using the built-in CA we specified.
Extending This
In this blog, we deployed the Kuma control plane, which gave us the grounds to stand up the default service mesh and deploy our application within it. We also used the Kong ingress controller to allow outside traffic to reach the random-sports-team application. There are plenty of directions to go from here, but the features I want to cover in a future blog focus on getting this production-ready: adding traffic shaping, issuing custom TLS certificates to Kong through cert-manager, and exploring what other features Kuma can provide for us. If you are interested in learning more about Kuma, Kong, or service mesh in general, let me know; I’d love to discuss it with you.