
How to secure the Kubernetes API behind a VPN

Lefteris Nikoltsios, Lead Software Engineer


Earlier this week, the first major vulnerability (CVE-2018-1002105) was discovered in Kubernetes, the container management platform taking the DevOps world by storm. The vuln, on a default install, allows an attacker with access to the Kubernetes API to gain full administrator access to the cluster and everything running on it. In cyber security terms, it doesn’t get much worse.

Fortunately, public cloud platforms were quick to patch the vulnerability, but for those who care about security it was a reminder that a single layer of defence is rarely enough. What if someone was exploiting this before it was publicly disclosed? What if there’s another vuln we don’t yet know about? What if you can’t upgrade your private cluster that quickly? If you’re running anything sensitive on Kubernetes clusters, these questions should matter to you.

Many Kubernetes implementations leave the API server exposed to the internet, and in Google Cloud Platform’s native “public” implementation you can’t even add a firewall. Even if you could, security by IP whitelisting is rarely ideal: it prevents flexible working locations, and it means an attacker who compromises any device on your office network has direct access to production systems. A VPN is a flexible and secure solution to this problem.

This blog describes a secure architecture for a Kubernetes cluster that hides the API server behind a VPN, while still allowing the containers to be accessible from the public internet as normal.

In this case we used the Kubernetes service native to Google Cloud Platform, but the proposed architecture could easily be applied to any other cloud or self-hosted infrastructure.

Secure Kubernetes Architecture

The following image shows our target architecture:


[Image: Secure Kubernetes Architecture]

To get started, let’s first create our Kubernetes cluster in its own network. In Google Cloud you can do this by installing Kubernetes in private mode. With this option selected, the cluster’s worker nodes use only private (non-publicly-routable) IP addresses. In this case, we created a VPC network with one subnet:

  • 10.50.40.0/26

[Image: VPC network details]

and two secondary IP ranges, which will be used during the Kubernetes setup (this isn’t mandatory, but we’re trying to stay true to the general Kubernetes setup guides; the equivalent gcloud commands are sketched after the list):

  • kubernetes-services: 10.0.32.0/20
  • kubernetes-pods: 10.4.0.0/14

[Image: VPC secondary ranges for Kubernetes services and pods]
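
If you prefer the command line, the network setup above looks roughly like this with gcloud (a sketch: the network and subnet names and the region are our own choices, so substitute yours):

# Create a custom-mode VPC so we control the subnets ourselves
gcloud compute networks create kubernetes-vpc --subnet-mode=custom

# Create the node subnet, plus the two secondary ranges for services and pods
gcloud compute networks subnets create kubernetes-subnet \
    --network=kubernetes-vpc \
    --region=europe-west1 \
    --range=10.50.40.0/26 \
    --secondary-range=kubernetes-services=10.0.32.0/20,kubernetes-pods=10.4.0.0/14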

Even in “private mode”, Google Cloud by default still exposes the Kubernetes API to the internet, so we also have to configure the master with a private endpoint. Now the API can only be accessed from our node subnet, 10.50.40.0/26. The following diagram shows how our cluster is now set up:


[Image: Kubernetes setup configuration]
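
For reference, a private cluster with a private master endpoint can be created in a single gcloud command, roughly as follows (a sketch: the cluster name and zone are placeholders, and the network names match the sketch above):

# Worker nodes get private IPs only, and the API server is reachable
# only on its private endpoint inside the peered 172.16.0.16/28 range
gcloud container clusters create secure-cluster \
    --zone=europe-west1-b \
    --network=kubernetes-vpc \
    --subnetwork=kubernetes-subnet \
    --enable-ip-alias \
    --cluster-secondary-range-name=kubernetes-pods \
    --services-secondary-range-name=kubernetes-services \
    --enable-private-nodes \
    --enable-private-endpoint \
    --enable-master-authorized-networks \
    --master-ipv4-cidr=172.16.0.16/28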

Now you have your Kubernetes cluster securely installed inside your VPC, so it’s only accessible from inside the cloud, but your DevOps team still needs to access the API to control the cluster. This is where the VPN comes in.

For this guide, we installed an OpenVPN access server (from the Google Marketplace) which gives access to the above private subnet. To make it work, we need two network interfaces:

  • nic0, for the External IP address
  • nic1, for the created VPC network

The OpenVPN access server from the Google Marketplace comes with only one network interface, and Google Cloud only lets you attach interfaces when a VM is created. So to add the second, we created a new virtual machine using the original as a template, this time with two interfaces:


[Image: Network interfaces of the OpenVPN access server]
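
In gcloud terms, the replacement VM looks roughly like this (a sketch: we assume a disk image called openvpn-as-image was captured from the original Marketplace instance, and the instance name and zone are placeholders):

# nic0: on the default network, with an ephemeral external IP for VPN clients
# nic1: on the Kubernetes VPC, internal only (no external address)
gcloud compute instances create openvpn-as \
    --zone=europe-west1-b \
    --image=openvpn-as-image \
    --network-interface=subnet=default \
    --network-interface=subnet=kubernetes-subnet,no-address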

Next we configure the access server to allow VPN users access to our cluster subnets. Add two lines to the “Specify the private subnets to which all clients should be given access (one per line)” setting as follows:

  • 10.50.40.0/26 (This allows access to all the Kubernetes nodes and in general to all the machines on the created VPC network)
  • 172.16.0.16/28 (This allows access to the Kubernetes master API server on Google Cloud’s own private network)

[Image: OpenVPN access server VPN settings]

So far, so good. Except the OpenVPN access server currently doesn’t know how to route traffic to the 172.16.0.16/28 network where our Kubernetes API server lives. To let the server route this traffic, we added a new route:

sudo ip route add 172.16.0.16/28 via 10.50.40.1 dev ens5

This routes traffic from any VPN users on the VPN subnet to the API server, via the interface we placed on the VPC network (nic1, which in our case the operating system has named ens5).
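
Bear in mind that a route added with ip route does not survive a reboot. One way to persist it (a sketch, assuming the server image is systemd-based, as the Ubuntu image behind the Marketplace server is) is a small oneshot service:

# Write a oneshot unit that re-adds the route at boot
sudo tee /etc/systemd/system/k8s-master-route.service >/dev/null <<'EOF'
[Unit]
Description=Route to the private Kubernetes master range
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/sbin/ip route add 172.16.0.16/28 via 10.50.40.1 dev ens5
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl enable k8s-master-route.service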

Now that the cloud side is fully configured, we need to install VPN clients on any workstations or laptops that need access to the Kubernetes API server, so the kubectl management tool can connect to the cluster.
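
With the VPN up, fetching credentials and talking to the cluster works as usual (assuming the cluster name and zone from the sketches above):

# Fetch kubeconfig credentials for the private cluster
gcloud container clusters get-credentials secure-cluster --zone=europe-west1-b

# This now reaches the private API endpoint over the VPN
kubectl get nodes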

Now that everything is secure, our final step is to install an ingress into the cluster, to allow the public to access the apps and services running on it. This provisions a LoadBalancer, and the services can be accessed via the LoadBalancer’s external IP.
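
As a minimal sketch, an Ingress that exposes a hypothetical Service named web on port 80 could be applied like this (on GKE the ingress controller provisions the load balancer automatically; the backing Service needs to be of type NodePort or NEG-enabled):

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: public-ingress
spec:
  defaultBackend:
    service:
      name: web   # hypothetical Service already running in the cluster
      port:
        number: 80
EOF

Once the load balancer is ready, kubectl get ingress public-ingress shows the external IP your users connect to.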

Hey presto, you now have a secure Kubernetes architecture operating inside a VPC, with public services fully accessible and the private ones nicely tucked up behind a VPN.

We hope this helps the DevOps community continue to move fast and break things, without compromising on security. Any feedback, comments or suggestions are welcome - would you have implemented it differently?

Thanks to Chris Wallis.
