Kubernetes Security: Fundamentals, Tools & Best Practices

Published July 22, 2024

Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It provides a robust framework for managing distributed systems and microservices, allowing developers to deploy applications seamlessly across clusters of virtual or physical machines. 

With its rich ecosystem of plugins and tools, Kubernetes enables efficient management of containerized workloads at scale, fostering agility, reliability, and portability in modern cloud-native environments.

Kubernetes is designed to manage containerized applications in any environment. It’s prized for this flexibility, but that same flexibility comes with a unique set of security challenges.

Core concepts of Kubernetes security

At its core, Kubernetes networking enables communication between pods, services, and external clients, ensuring seamless connectivity and efficient traffic routing.

Pods, which are the basic units of deployment in Kubernetes, each have their own unique IP address. Containers within the same pod share this IP address and can communicate with each other directly over the localhost interface. This simplifies inter-container communication within the same pod.

Cluster networking in Kubernetes involves ensuring communication between pods across different nodes in the cluster. Various networking solutions or plugins, such as Calico, Flannel, or Weave, facilitate this by providing network overlays, routing capabilities, and policies. These solutions ensure that regardless of which node a pod is deployed on, it can communicate with other pods within the same cluster.

Services in Kubernetes provide a way to abstract and expose a set of pods as a network service. Each service is assigned a stable IP address and a DNS name that other pods can use to access it. Services use selectors to dynamically discover and load balance traffic to pods that match certain criteria, such as labels.
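
For example, a minimal Service might look like the sketch below; the name, labels, and ports are illustrative, assuming pods labeled `app: web` that listen on port 8080:

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080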

Ingress controllers extend the capabilities of services by managing external access to services within the cluster. They define rules for routing incoming traffic based on hostnames or paths to the appropriate services, enabling external clients to communicate with applications running in the cluster.
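
As an illustration, an Ingress rule that routes traffic for a hostname to that Service could look like this (the hostname and service name are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: shop.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80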

Network policies provide a way to define and enforce rules for communication between pods and services. These policies allow administrators to specify which pods can communicate with each other based on criteria such as pod labels, namespaces, or IP ranges. This enhances security within the cluster by restricting unauthorized access and controlling traffic flow.

Overall, Kubernetes networking is a sophisticated system that ensures reliable, secure, and scalable communication between applications and services deployed within a Kubernetes cluster. Understanding these core concepts is crucial for configuring, managing, and troubleshooting networking issues effectively in Kubernetes environments.

How to secure the different components of a Kubernetes cluster

Nodes

The node is where your pods run, so it’s critical to lock it down. One of the first things to do is to ensure that only authorized personnel have SSH access to the nodes. On the Kubernetes side, set up role-based access control (RBAC) policies so that API access to node resources is likewise restricted based on roles within the team.

Next, focus on keeping the node OS and all software up to date. This means regular patching and updates. Automated tools like AWS Systems Manager can help with this. They can push updates across all nodes in a cluster, ensuring that you are not exposed to known vulnerabilities.

Then, there's network security. Set up network policies to control the traffic between pods and between nodes. Kubernetes Network Policies let you define which pods can communicate with each other and with outside services. For instance, you can create a policy that only allows traffic from a web pod to a database pod, effectively limiting exposure.

Another critical area is runtime security. You can use tools like Falco to monitor the behavior of the containers running on the nodes. Falco can detect suspicious behavior such as a shell being launched inside a container or unexpected file access. This helps you catch and respond to potential security incidents quickly.
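
As a rough sketch, a custom Falco rule for the shell-in-container case might look like the following. Falco already ships a similar default rule, and the exact macros available (such as `spawned_process` and `container`) depend on your Falco version and rule set:

- rule: Shell spawned in container
  desc: Detect an interactive shell launched inside a container
  condition: spawned_process and container and proc.name in (bash, sh, zsh)
  output: "Shell in container (user=%user.name container=%container.name command=%proc.cmdline)"
  priority: WARNING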

You can also isolate workloads using namespaces. By separating different teams’ workloads, you can ensure that even if one namespace is compromised, it doesn’t affect others. For added isolation, use node labels and node selectors to schedule sensitive workloads on dedicated nodes.
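
For instance, after labeling a dedicated node with something like `kubectl label nodes node-3 workload=sensitive` (the node name and label are hypothetical), you can pin a workload to it with a nodeSelector:

apiVersion: v1
kind: Pod
metadata:
  name: payments-worker
  namespace: payments
spec:
  nodeSelector:
    workload: sensitive
  containers:
  - name: app
    image: registry.example.com/payments:1.4.2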

Finally, disk encryption is crucial. Using Kubernetes Secrets to manage credentials and encrypting data at rest with tools like dm-crypt ensures that even if someone gains physical access to a node, they can’t easily read the data.
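
On the node itself, encrypting a data volume with dm-crypt/LUKS comes down to a few cryptsetup commands; the device path and mount point below are examples, and many managed Kubernetes services can handle disk encryption for you instead:

cryptsetup luksFormat /dev/nvme1n1
cryptsetup open /dev/nvme1n1 encrypted-data
mkfs.ext4 /dev/mapper/encrypted-data
mount /dev/mapper/encrypted-data /var/lib/data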

So, securing the nodes involves a mix of access control, network policies, runtime security, workload isolation, and encryption. Each step helps in building a robust defense against a myriad of potential threats.

Pods

In Kubernetes, pods are the smallest deployable units in the system. Each pod can contain one or more containers, which share the same network namespace. This setup means they can communicate with each other using `localhost`. But with great power comes great responsibility. Securing Pods is essential to keep business networks safe.

One critical aspect is controlling what each pod can do. By default, Kubernetes gives pods a lot of freedom, which isn't always a good thing. For example, if a pod is compromised, an attacker can gain extensive access. 

To mitigate that risk, you can use Security Contexts to define what actions a pod can and cannot perform. Setting a pod’s `runAsUser` to a non-root user is a simple yet effective measure. In practice, this means configuring the Pod's YAML file to include `runAsUser: 1000` (or another non-root user ID). By doing this, you ensure that the container doesn’t have unnecessary root privileges, reducing the risk if it’s breached.
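
A minimal example of such a security context, with illustrative values and image, looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  securityContext:
    runAsUser: 1000
    runAsNonRoot: true
  containers:
  - name: app
    image: nginx:1.27
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true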

Network policies are another area to look at when securing pods. They let you define which pods can communicate with each other. Consider an e-commerce application. You have front-end pods that need to talk to back-end pods, but they shouldn't be able to talk to the database pods directly. 

You can create a network policy that restricts access, ensuring only the back-end pods can communicate with the database. In a YAML file, this might look like specifying that the `podSelector` for the database only allows traffic from pods labeled as `role: backend`.
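
A sketch of such a policy, assuming the database pods are labeled `role: database`, might look like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-backend-only
spec:
  podSelector:
    matchLabels:
      role: database
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: backend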

Secrets management is another area of interest for pods’ security. Storing sensitive data like API keys, passwords, and certificates directly inside pods is risky. Instead, Kubernetes provides a way to manage this data securely using Secrets. 

For example, you can create a secret to store a database password and then reference this Secret in your pod’s configuration. This keeps sensitive information out of your codebase and away from prying eyes.
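
For example, assuming a Secret named `db-credentials` with a `password` key, the pod spec can reference it as an environment variable:

apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
  - name: app
    image: registry.example.com/backend:2.1.0
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password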

Pod-level admission policies add another layer of security. Pod Security Policies (PSPs) used to fill this role, but they were deprecated in Kubernetes v1.21 and removed in v1.25. Their replacement, the built-in Pod Security Admission controller, enforces the Pod Security Standards (`privileged`, `baseline`, and `restricted`) on a per-namespace basis.

For instance, the `restricted` standard blocks privileged containers and requires hardening settings such as running as a non-root user and disallowing privilege escalation. These policies act as gatekeepers, ensuring only compliant pods get the green light.
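
For example, you can enforce the `restricted` standard on a namespace (a hypothetical `payments` namespace here) with a single label:

kubectl label namespace payments pod-security.kubernetes.io/enforce=restricted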

Finally, regular updates and patches are vital for maintaining security. Outdated images can have vulnerabilities that are easily exploited. By regularly updating container images and applying security patches, you can protect against known threats. Using tools like image scanners can help you identify vulnerabilities in the images you plan to deploy.

Namespaces

Namespaces are your best friend for enforcing isolation and access control. Think of them as separate compartments within your cluster. Each application or team can have its own namespace, acting like a sandbox. This not only helps in organizing resources but also enhances security.

For instance, imagine you have two teams: one working on a customer-facing web app and another on internal tools. By creating separate namespaces, you can ensure that team A’s resources won’t interfere with team B’s. Here’s how you can create these namespaces:

kubectl create namespace team-a
kubectl create namespace team-b

Once you have your namespaces, you can get really detailed with permissions using role-based access control. With RBAC, you define who can do what within each namespace. For example, you might allow developers in team A to deploy applications in the `team-a` namespace but restrict them from accessing resources in `team-b`.

Here’s a simple YAML file to create a role that allows reading pods in the `team-a` namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

You then bind this role to a user with a RoleBinding:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
- kind: User
  name: developerA
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

This method ensures that developerA can only read pods in the `team-a` namespace and has no access to `team-b`. 

Network policies are another layer of security in namespaces. They control the traffic flow between pods. NetworkPolicy rules are allow-lists, so there is no explicit "deny traffic from `team-b`" rule. Instead, you get the same effect by allowing ingress into `team-a` only from `team-a` itself, which implicitly denies every other namespace, including `team-b` (the `kubernetes.io/metadata.name` label is set automatically on every namespace):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  namespace: team-a
  name: allow-only-team-a
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: team-a

This setup makes sure that none of the pods in `team-b` (or any other namespace) can talk to any pod in `team-a`. Using namespaces this way helps you keep things tidy and secure.

Namespaces are essential for multi-tenancy and resource management as well. You can allocate a dedicated quota to each namespace to prevent any single team or application from hogging all the resources. Here’s a quick example of a resource quota for `team-a` namespace:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: "8Gi"
    limits.cpu: "5"
    limits.memory: "10Gi"

This quota limits `team-a` to 10 pods, 4 CPUs and 8Gi of memory in requests, and 5 CPUs and 10Gi in limits, ensuring fair usage across the cluster.

By strategically using namespaces, you can isolate teams, enforce security policies, and manage resource allocation effectively. This helps in maintaining a secure and well-organized Kubernetes environment in a business network.

Kubernetes security best practices

Enable Role-Based Access Control (RBAC)

RBAC allows you to define who can do what within a cluster. For example, by assigning roles, you can ensure that only certain users have access to view or edit specific resources. This minimizes the chances of unauthorized access and potential breaches.

Use namespaces to segregate workloads

Imagine namespaces as different sections in a library. By isolating applications within their own namespaces, you can limit the blast radius if something goes wrong. If one application gets compromised, it's much harder for an attacker to reach into another namespace. It’s like having multiple vaults, each with its own combination.

Scan container images for vulnerabilities

Make it a point to scan container images for vulnerabilities before deploying them. Tools like Clair or Trivy can scan images and detect known vulnerabilities. This way, you can catch issues before they make it into your production environment.
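
For example, a Trivy scan is a one-liner that can run locally or in CI; the image reference below is a placeholder:

trivy image --severity HIGH,CRITICAL registry.example.com/backend:2.1.0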

Set up stringent network policies

Kubernetes allows you to define which pods can talk to which services. By setting up stringent network policies, you can prevent unwanted communication within the cluster. 

For example, you can ensure that a frontend service can talk to a backend service but not directly to the database. It’s like installing fire doors within a building to prevent a fire from spreading.

Secure sensitive information

Never store sensitive information like API keys or passwords directly in the container images or in plain text within configuration files. Instead, use Kubernetes Secrets. This way, sensitive data is stored securely and can be accessed by the containers that need it.

Enable logging and monitoring

Tools like Prometheus, Grafana, and Elasticsearch can help track what’s happening inside the cluster. By keeping an eye on logs and performance metrics, you can quickly detect suspicious activity and respond before it escalates.

Kubernetes security tools and techniques

Principle of Least Privilege

The principle of least privilege means granting users and applications only the minimum level of access they need to perform their tasks. This reduces the risk of accidental or malicious damage if credentials are compromised.

In Kubernetes, you can implement this principle using Role-Based Access Control (RBAC). With RBAC, you define roles that carry specific permissions and then assign those roles to users or service accounts.

For example, if a developer only needs to deploy applications and not manage cluster-wide settings, you create a role that includes permissions only for deployment tasks, and assign it to them. This way, they can't accidentally change crucial settings or access resources they shouldn't.
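
A sketch of such a role, scoped to a hypothetical `dev` namespace, might look like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: deployment-manager
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]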

Service accounts are another vital part. Applications running on the cluster use these accounts to interact with the Kubernetes API. By default, Kubernetes gives these service accounts a lot of power, which isn't always necessary. 

Instead, you should create custom service accounts with limited permissions tied to the specific needs of the application. For instance, if an application only needs to read data from a specific namespace, the associated service account should only have read permissions for that namespace, nothing more.

Namespaces themselves are useful for isolating resources and managing access. By creating separate namespaces for different teams or projects, you can enforce policies and roles specific to those namespaces. This keeps different parts of the organization from stepping on each other’s toes and limits the scope of any security breach. If an attacker gains access to one namespace, they won't automatically have access to others.

Audit logs are another essential security tool. By regularly reviewing these logs, you can monitor who accessed what and when. Any unusual activity, like a user accessing resources they normally wouldn’t, can be a red flag. Regular audits help ensure that the principle of least privilege is being adhered to and allow you to fine-tune permissions as needed.
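
Audit logging is enabled on the API server with the `--audit-policy-file` and `--audit-log-path` flags. A minimal policy that records request metadata for every call, as a starting point, could look like this:

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata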

By actively applying the principle of least privilege in Kubernetes, you minimize the chances of internal errors and external attacks causing significant harm. The goal is to be deliberate with permissions and judicious with access.

Network segmentation

Managing Kubernetes with multiple microservices and tenants requires a firm handle on network traffic. By default, Kubernetes networking is flat, meaning any workload can communicate with another without restriction. This poses a risk. 

Attackers exploiting a running workload can use that openness to explore the internal network, move laterally to other containers, or access private APIs.

In Kubernetes, you can isolate traffic on several levels, such as between pods, namespaces, or labels. The goal of network segmentation is to minimize the blast radius if a container is compromised and to prevent lateral movement while ensuring valid traffic flows correctly. 

One way to enforce network isolation is by using separate clusters. This approach adds complexity, especially with tightly coupled microservices, but it’s viable for separating tenants based on risk.

Network policies are another powerful security tool in Kubernetes networking. They are akin to firewall rules and control pod-to-pod communication. Without them, any pod can interact with any other pod.

Hence, defining network policies that allow only the necessary communication while denying everything else is essential. For instance, consider a policy that restricts egress from backend-tier pods in the "default" namespace so they can only reach other backend-tier pods:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-backend-egress
  namespace: default
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          tier: backend

Service meshes like Istio, Linkerd, or Hashicorp Consul offer other ways to segment network traffic. These technologies come with pros and cons depending on the use case. Here’s an Istio AuthorizationPolicy example:

apiVersion: "security.istio.io/v1beta1"
kind: "AuthorizationPolicy"
metadata:
  name: "shoes-writer"
  namespace: default
spec:
  selector:
    matchLabels:
      app: shoes
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/inventory-sa"]
    to:
    - operation:
        methods: ["POST"]

In this case, the policy applies to all workloads labeled with `app: shoes`. Only pods running as the `inventory-sa` service account can access `shoes`, and only via the POST HTTP method.

Container Network Interface (CNI) plugins are another option. The CNI specification itself only defines how pod network interfaces are configured, but plugins built on it, such as Project Calico or Cilium, provide the mechanisms that actually enforce network isolation. A policy-capable CNI plugin is required if you want Kubernetes Network Policies to be enforced at all.

Choosing a CNI requires understanding the desired feature set from a security perspective and accounting for the resource overhead and maintenance involved.

Imagine a WordPress pod gets compromised on a cluster with no network segmentation. An attacker could use built-in utilities like `dig` or `curl` to explore the network. Discovering an internal service on port `6379`, commonly Redis, they can interact with it directly, leading to data theft or modification. A locked-down network policy or service mesh would have blocked this connectivity, preventing the attack.

Another scenario involves a compromised web application on an unsegmented network. An attacker could query the cloud provider’s instance metadata endpoint to grab `kube-env` files containing certificate keys used in the node bootstrap process. They could then register as a node and escalate by stealing secrets. A simple NetworkPolicy could block such metadata calls:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-1
spec:
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32
  podSelector: {}
  policyTypes:
  - Egress

Role-Based Access Control (RBAC)

Role-based access control (RBAC) restricts access to systems and data based on a person's role within an organization. It’s one of the main methods for advanced access control. The roles in RBAC refer to the levels of access that employees have.

Employees are only allowed to access the information necessary to effectively perform their job duties. Access can be based on several factors, such as authority, responsibility, and job competency. 

In addition to limiting access to network resources, RBAC can also restrict what specific tasks users can perform, such as viewing, creating, or modifying files. 

Lower-level employees usually do not have access to sensitive data if they do not need it to fulfill their responsibilities. So, a customer service representative might only have read-only access to customer data, while a manager could have full access to modify this data.

Secret management

Managing secrets properly ensures that things like passwords, API keys, and other sensitive data are kept safe from prying eyes. Kubernetes offers a built-in object called `Secret` to handle this sensitive data. 

It’s crucial to note that while Kubernetes stores these secrets base64-encoded, base64 is not encryption. It is just a way to represent binary data in an ASCII string format, which means anyone with access to the encoded data can easily decode it. So, it’s your responsibility to ensure access to these secrets is tightly controlled.

To create a secret, you might use a command like: `kubectl create secret generic my-secret --from-literal=username=myUser --from-literal=password=myPass`. This command generates a `Secret` object storing your `username` and `password`.

Using `kubectl get secret my-secret -o yaml`, you can see the encoded values, but that means anyone else with access can too. So, you must implement Role-Based Access Control (RBAC) policies to limit who can view or modify these secrets.
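
A sketch of such a restriction, using the `default` namespace and the `my-secret` name from above, could limit read access to that single named secret:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: read-my-secret
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["my-secret"]
  verbs: ["get"]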

Another key security practice is to use encryption at rest for secrets stored in etcd, Kubernetes' underlying data store. Starting with Kubernetes v1.13, encryption at rest is supported. It requires setting up an encryption configuration file and specifying it in the API server's command options.

For example, you’d add a section to your file like this:

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-aes-key>
      - identity: {}

This snippet sets up AES-CBC encryption for secrets, ensuring they’re encrypted before being written to disk.
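
The API server is then pointed at this file with its `--encryption-provider-config` flag; the file path below is just an example:

--encryption-provider-config=/etc/kubernetes/enc/encryption-config.yaml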

To further enhance security, consider using an external secrets management tool like HashiCorp Vault or AWS Secrets Manager. These tools integrate well with Kubernetes and offer additional layers of encryption and access control. 

For example, with Vault, you can use the Vault Agent Injector to automatically inject secrets into your pods. You just need to annotate your pod spec, like so:

annotations:
  vault.hashicorp.com/agent-inject: 'true'
  vault.hashicorp.com/role: 'my-app-role'
  vault.hashicorp.com/agent-inject-secret-db-creds: 'database/creds/my-role'

This setup pulls secrets from Vault and injects them directly into your containers, eliminating the need to store them in the Kubernetes API.

Note that secrets should never be hardcoded in your application code or stored in version control. Use Kubernetes secrets and external secret management tools to keep them safe. 

Always follow the principle of least privilege, ensuring only those who absolutely need access to the secrets can get to them. Using these strategies effectively will help keep your business networks secure and your sensitive data protected.
