Author: Jacob Mammoliti

Running Consul’s Ingress Gateway in Kubernetes


With the release of Consul 1.8, HashiCorp added features that help users connect services inside and outside of the Connect service mesh. The two features that stood out to me in this release were the additions of the Ingress and Terminating Gateways. In this blog, I am going to focus on the Ingress Gateway and demonstrate how it works inside of a Kubernetes cluster.


I will be using Google Cloud Platform and its Google Kubernetes Engine (GKE) offering for my Kubernetes cluster. To follow along, you will need:

  • Access to a GCP project with the Kubernetes Engine Admin and Kubernetes Engine Cluster Admin IAM roles
  • Consul 1.8.0+ binary installed locally

Deploying Consul on Kubernetes

The first item to take care of is deploying the Kubernetes cluster with Consul. To make this process easier, I have written a Terraform module that handles the deployment of a GKE cluster and then takes care of the Consul deployment via the Helm chart provided by HashiCorp. The repository can be found here. To easily follow along, clone the repository, as I reference a few files in that repo locally.

A lot of the initial configuration steps can be found in the repository, so I will skip through them here and start by configuring my tfvars file. At a high level, this file dictates how I want the Kubernetes and Consul clusters to look. I’m setting the region to deploy into and enabling the Consul Ingress Gateway, along with a few other Consul-related attributes. In a production environment, I would highly recommend enabling Gossip encryption, ACLs, and TLS. Below is the tfvars file I am using for my environment.

cluster_name                     = "consul-east"
region                           = "us-east1-b"
project_id                       = <REDACTED>
initial_node_count               = 3
consul_datacenter                = "dc-east"
consul_image_tag                 = "consul:latest"
consul_ingress_gateway_enabled   = true
consul_connect_enabled           = true
preemptible                      = true

With the variable file configured, the environment can be deployed by running a Terraform apply.

# Build the GKE cluster with Consul deployed afterwards
$ terraform apply -var-file=dc-east.tfvars
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Outputs:

connect = gcloud container clusters get-credentials consul-east --zone us-east1-b --project <REDACTED>

# Connect to the GKE cluster
$ gcloud container clusters get-credentials consul-east --zone us-east1-b --project <REDACTED>
Fetching cluster endpoint and auth data.
kubeconfig entry generated for consul-east.

# Verify the Consul cluster is running
$ kubectl get pods -n consul
NAME                                                          READY   STATUS    RESTARTS   AGE
consul-9g685                                                  1/1     Running   0          3m24s
consul-connect-injector-webhook-deployment-757d6f68c4-2ksmp   1/1     Running   0          3m23s
consul-controller-8c9f5695-rvgdn                              1/1     Running   0          3m23s
consul-gmh6w                                                  1/1     Running   0          3m24s
consul-ingress-service-6f8cccf5f5-f4xkw                       2/2     Running   0          3m23s
consul-ingress-service-6f8cccf5f5-vrjcw                       2/2     Running   0          3m23s
consul-mht4k                                                  1/1     Running   0          3m24s
consul-server-0                                               1/1     Running   0          3m23s
consul-server-1                                               1/1     Running   0          3m23s
consul-server-2                                               1/1     Running   0          3m22s
consul-webhook-cert-manager-5f7b4df6b4-tp6f4                  1/1     Running   0          3m23s

# Verify the Consul Ingress Gateway service is up and has an External IP
$ kubectl get svc consul-ingress-service -n consul
NAME                     TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)                         AGE
consul-ingress-service   LoadBalancer   <REDACTED>   <REDACTED>       8080:32704/TCP,8443:32365/TCP   5m56s

Deploying a Sample Application

Now that Consul is running, I’m going to deploy a simple application, which will also be onboarded to the Consul Connect service mesh, to test out the Ingress Gateway. The application is a simple web server from HashiCorp that returns the text defined as an argument passed in the YAML spec. The full YAML can be found in the repository I linked to above. To deploy the application, run the following command:

# Deploy the sample application
$ kubectl apply -f kubernetes/hello-world.yaml
serviceaccount/static-server created
pod/static-server created

# Ensure the application successfully deployed
$ kubectl get pods
NAME            READY   STATUS    RESTARTS   AGE
static-server   3/3     Running   0          22s
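For context, the core of hello-world.yaml looks roughly like the sketch below. This is a hedged reconstruction, not the exact manifest from the repository: the image and arguments are assumptions based on HashiCorp’s http-echo demo server, and the key piece is the connect-inject annotation that onboards the pod to the mesh (which is why the pod reports 3/3 containers).

```yaml
# Hedged sketch of the sample application. The image/args are
# assumed; the repository's hello-world.yaml is the source of truth.
# It also creates the static-server ServiceAccount referenced below.
apiVersion: v1
kind: Pod
metadata:
  name: static-server
  annotations:
    # Tells the Consul injector to add the Connect sidecar proxy
    'consul.hashicorp.com/connect-inject': 'true'
spec:
  serviceAccountName: static-server
  containers:
    - name: static-server
      image: hashicorp/http-echo:latest
      args:
        - -text="Hello World!"   # the text returned by the server
        - -listen=:8080
      ports:
        - containerPort: 8080
```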

Configuring the Ingress Gateway

At the time of writing this, there is not a CRD (Custom Resource Definition) for the Ingress Gateway config entry. For now, the gateway can be configured by communicating with Consul directly. To do this in a lab environment, I typically port-forward the consul-ui service and then apply changes as shown below.

# Define the protocol for the static-server application as HTTP
$ kubectl apply -f - <<EOF
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: static-server
spec:
  protocol: http
EOF

# Port-forward the consul-ui service
$ kubectl port-forward svc/consul-ui 8500:80 -n consul
Forwarding from 127.0.0.1:8500 -> 80

# In a separate terminal, export the CONSUL_HTTP_ADDR environment variable
$ export CONSUL_HTTP_ADDR=http://localhost:8500

# In the same separate terminal, configure the Ingress Gateway
$ consul config write - <<EOF
Kind = "ingress-gateway"
Name = "ingress-service"

Listeners = [
  {
    Port     = 8080
    Protocol = "http"
    Services = [
      {
        Name = "static-server"
      }
    ]
  }
]
EOF
Config entry written: ingress-gateway/ingress-service

Verifying the Ingress Gateway

To verify the Ingress Gateway is working as expected, I can curl the gateway’s external endpoint and confirm I’m hitting my application.

Note: Consul routes service traffic based on Host headers. If you don’t specify one in the Ingress Gateway config, it takes the format <service_name>.ingress.<datacenter_name>.consul.

$ curl -H 'Host: static-server.ingress.dc-east.consul' <EXTERNAL-IP>:8080
"Hello World!"

Great, that works as expected! In most cases, however, you are unlikely to tell your clients to reach your application through an IP while passing an internal Consul name as the Host header. To fix this, I can update the Ingress Gateway config entry to define a custom host as shown below. For this example I am going to use a wildcard DNS service, which resolves a hostname that embeds an IP address back to that IP.

$ consul config write - <<EOF
Kind = "ingress-gateway"
Name = "ingress-service"

Listeners = [
  {
    Port     = 8080
    Protocol = "http"
    Services = [
      {
        Name  = "static-server"
        Hosts = ["<CUSTOM_HOSTNAME>"]
      }
    ]
  }
]
EOF
Config entry written: ingress-gateway/ingress-service

Now try to curl the service with the new hostname.

$ curl http://<CUSTOM_HOSTNAME>:8080
"Hello World!"

Even better, the service is now reachable at a more client-friendly endpoint. To see what this looks like in the Consul UI, ensure the port-forward is still active and access Consul at http://localhost:8500. If I click on the ingress-service service and then Upstreams, I can see the application that is being exposed by the Ingress Gateway.


Expanding on this Exercise

This blog has been an introduction to using the Consul Ingress Gateway with Kubernetes. There are many additional features that could be added to get this to a more “production ready” state, but I wanted to keep it focused on the Ingress Gateway for this exercise. Some areas of expansion include:

  • Tightening up service mesh communication with Intentions
  • Enabling Consul ACLs
  • Deploying a second version of the application and enabling service splitting
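As a taste of that last item, service splitting is just another Consul config entry, written the same way as the Ingress Gateway entry above. The sketch below is illustrative: the v1/v2 subset names are hypothetical and would need a matching service-resolver config entry defining those subsets.

```hcl
# Hedged sketch of a service-splitter config entry. Assumes a
# service-resolver for static-server already defines "v1" and "v2"
# subsets (e.g. keyed on a version tag or metadata).
Kind = "service-splitter"
Name = "static-server"

Splits = [
  {
    Weight        = 90     # send 90% of traffic to v1
    ServiceSubset = "v1"
  },
  {
    Weight        = 10     # canary 10% of traffic to v2
    ServiceSubset = "v2"
  }
]
```

This would be applied with `consul config write`, just like the gateway entry earlier.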

If you are interested in learning more about Consul on Kubernetes or Consul in general, let me know in the comments below and I’d love to have a conversation about it!


