Kubernetes Hardening Tutorial Part 3: Logging and Auditing

Learn how to set up an AWS EKS cluster with Terraform and leverage best practices to configure roles, service accounts, logging, and auditing with useful tools.

By Tiexin Guo · Mar. 25, 22 · Tutorial

In the first two parts of this tutorial, we discussed:

  • How to enhance your Pod security in your Kubernetes cluster.
  • How to harden your Kubernetes network security.

In this third and final part of the tutorial, we will go over authentication, authorization, logging, and auditing of a Kubernetes cluster. Specifically, we will demonstrate some of the best practices in AWS EKS. After reading this tutorial, you will be able to:

  • Create an AWS EKS cluster with Infrastructure as Code (IaC) using Terraform.
  • Understand the best practices of creating an EKS cluster.
  • Get a deeper understanding of service accounts.
  • Set up IAM-based user authentication.
  • Audit cluster access.

If you are using another public cloud service provider, the terminology might differ, but the principles still apply.

Creating a Cluster Using Infrastructure as Code With Terraform

We have created two Terraform modules that create the following:

  • Networking parts: VPC, subnets, internet gateway, NAT gateway, route tables, etc.
  • Kubernetes cluster.

If you want to give it a try yourself, simply run:

git clone https://github.com/IronCore864/k8s-security-demo.git
git fetch origin pull/12/head
git checkout -b aws_eks FETCH_HEAD
cd k8s-security-demo

# edit the config.tf and update the AWS region accordingly
# configure your aws_access_key_id and aws_secret_access_key

terraform init
terraform apply


To view the code more easily, read the pull request here.
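
Once terraform apply finishes, you can point kubectl at the new cluster. A minimal sketch (the region and cluster name below are examples; use the values from your own config.tf):

# generate/update the local kubeconfig entry for the new cluster
aws eks update-kubeconfig --region ap-southeast-1 --name k8s-security-demo

# sanity check: the worker nodes should show up and become Ready
kubectl get nodes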

AWS EKS Kubernetes Clusters Best Practices

Use a Dedicated IAM Role for Cluster Creation

The IAM user or role used to create the cluster is automatically granted system:masters permissions in the cluster's RBAC configuration:

$ kubectl get clusterrolebinding cluster-admin -o yaml
# ...
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters
# ...
$ kubectl get clusterrole cluster-admin -o yaml
# ...
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'


Basically, this means that the user or role used to create the cluster (i.e., the identity behind the AWS access key you used to run the Terraform scripts) will be a cluster-admin, who can do anything.
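
To double-check which identity you are currently using and confirm it has these rights, you can run a quick sanity check (both commands are standard AWS CLI/kubectl):

# show the IAM user/role behind your current AWS credentials
aws sts get-caller-identity

# ask the API server whether the current identity can do anything on anything
kubectl auth can-i '*' '*'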

Note: This IAM entity doesn't appear in any visible configuration, so make sure to keep track of which IAM entity originally created the cluster.

Therefore, it is a good idea to create the cluster with a dedicated IAM role and regularly audit who can use this role.

Using this role to perform routine Kubernetes maintenance tasks on the cluster would be quite dangerous, since this role can do anything.

As a best practice, use a dedicated role to run your Terraform scripts and nothing else. For example, you could create an IAM role dedicated to your CI system to run the Terraform scripts, and make sure that role is used by your CI system and nobody else.
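
To audit such a dedicated role, one starting point is its trust policy, which lists the principals allowed to assume it (the role name below is hypothetical):

# show who is allowed to assume the cluster-creator role
aws iam get-role --role-name eks-cluster-creator \
  --query 'Role.AssumeRolePolicyDocument'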

To access the cluster using another user or role, see section 4 of this tutorial.

Enable Audit Logging

Kubernetes auditing provides a security-relevant, chronological set of records documenting the sequence of actions in a cluster. The cluster audits the activities generated by users, applications that use the Kubernetes API, and the control plane itself.

AWS EKS control plane logging provides audit and diagnostic logs, which can be directly loaded into AWS CloudWatch Logs. These logs make it easy for you to secure and run your clusters. Enabling these logs is optional, but you should definitely do it. You can refine the exact log types you need.

As a reminder, the following cluster control plane log types are available:

  • Kubernetes API server component logs (API) – Your cluster's API server is the control plane component that exposes the Kubernetes API.
  • Audit (audit) – Kubernetes audit logs provide a record of the individual users, administrators, or system components that have affected your cluster.
  • Authenticator (authenticator) – Authenticator logs are unique to Amazon EKS. These logs represent the control plane component that Amazon EKS uses for Kubernetes Role-Based Access Control (RBAC) authentication using IAM credentials.
  • Controller manager (controllerManager) – The controller manager manages the core control loops that are shipped with Kubernetes.
  • Scheduler (scheduler) – The scheduler component manages when and where to run pods in your cluster.

When we use the Terraform resource aws_eks_cluster to create an AWS EKS cluster (see here), none of those log types are enabled by default. To enable logging, we need to pass the desired log types to the parameter enabled_cluster_log_types (here).

# terraform/modules/eks/cluster.tf
resource "aws_eks_cluster" "cluster" {
  depends_on = [
    aws_iam_role_policy_attachment.AmazonEKSClusterPolicy,
    aws_iam_role_policy_attachment.AmazonEKSServicePolicy,
  ]

  version  = var.k8s_version
  name     = var.cluster_name
  role_arn = aws_iam_role.eks_role.arn

  vpc_config {
    subnet_ids         = var.worker_subnet_ids
    security_group_ids = [aws_security_group.cluster.id]
  }

  enabled_cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
}


The optional enabled_cluster_log_types parameter takes a list of the control plane log types to enable; the most important one is the "audit" log.
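
Once applied, the logs are delivered to a CloudWatch log group named /aws/eks/<cluster-name>/cluster. A quick way to confirm they are flowing (assuming AWS CLI v2 and an example cluster name):

# list the control plane log streams for the cluster
aws logs describe-log-streams \
  --log-group-name /aws/eks/k8s-security-demo/cluster \
  --query 'logStreams[*].logStreamName'

# follow the control plane logs in near real time
aws logs tail /aws/eks/k8s-security-demo/cluster --follow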

Use Private EKS Cluster Endpoint Whenever Possible

By default, when you provision an EKS cluster, the cluster API endpoint is set to public, i.e., it can be accessed from the internet. For testing purposes, the EKS cluster created by the Terraform modules provided in the GitHub link also uses a public endpoint.

However, if you are running a production-grade Kubernetes cluster in a company, it's safer to make the endpoint private, and private only. In this case:

  • All traffic to your cluster API server must come from within your cluster's VPC or a connected network (for example, via a VPN, a jump host/bastion host, a transit gateway, or Direct Connect).
  • There is no public access to your API server from the internet. Any kubectl commands must come from within the VPC or a connected network.

To disable the public endpoint and use the private endpoint only, add a vpc_config section in the resource "aws_eks_cluster":

vpc_config {
  endpoint_private_access = true
  endpoint_public_access  = false
}


While this requires extra setup (so that you are in a connected network and can access the API endpoint), it also drastically reduces the attack surface.

If restricting to a private endpoint isn't really an option for you, don't worry too much, because even if you are using the public endpoint, the traffic is still encrypted, and you are still required to authenticate. We will, however, demonstrate possible dangerous situations in the next section.
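
If you do keep the public endpoint, a reasonable middle ground is to restrict it to known CIDR blocks. A sketch using the AWS CLI (the cluster name and CIDR below are examples; in Terraform, the corresponding vpc_config argument is public_access_cidrs):

# allow public API access only from a known office/VPN range
aws eks update-cluster-config \
  --name k8s-security-demo \
  --resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="203.0.113.0/24",endpointPrivateAccess=true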

Deep Dive Into Service Accounts

Kubernetes has two types of users: service accounts and normal user accounts. Service accounts handle API requests on behalf of Pods, and their authentication is typically managed automatically by Kubernetes using tokens.

Now, let's poke around and see what we can do:

# the pod name for you might differ
$ kubectl get pod aws-node-pdh5b -o yaml | grep -i account
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
  serviceAccount: aws-node
  serviceAccountName: aws-node
      - serviceAccountToken:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount

$ kubectl describe sa aws-node | grep secrets
Image pull secrets:  <none>
Mountable secrets:   aws-node-token-lw6z4

$ kubectl get secret aws-node-token-lw6z4
NAME                   TYPE                                  DATA   AGE
aws-node-token-lw6z4   kubernetes.io/service-account-token   3      3d14h

$ kubectl exec --stdin --tty aws-node-pdh5b -- /bin/bash
bash-4.2 $ ls /var/run/secrets/kubernetes.io/serviceaccount
ca.crt  namespace  token


In this quick demo above, we have seen that:

  • The service account uses a secret, which stores the ca.crt, namespace, and token. If a Kubernetes user has permission to read this secret, they can get its contents.
  • The secret is also mounted as files under the path /var/run/secrets/kubernetes.io/serviceaccount in the pod. Even if a Kubernetes user doesn't have permission to read Kubernetes Secrets, if they can exec into the pod, they can still read the secret's content.

A service account token is a long-lived, static credential. If it is compromised, lost, or stolen, an attacker may be able to perform all the actions associated with that token until the service account is deleted.
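
As a side note, newer Kubernetes versions mitigate this risk with short-lived, automatically expiring bound tokens. On recent clusters (kubectl v1.24+), you can mint a time-limited token for a service account on demand, for example:

# request a token that expires in 10 minutes instead of a long-lived one
kubectl create token aws-node -n kube-system --duration=10m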

So, access to Pod Secrets and the ability to execute commands in a pod should be restricted and only granted on a need-to-know basis (i.e., the principle of least privilege; read about Pod Security Policies in the first tutorial). If the service account information is leaked, it can be used from outside the cluster. Let's demonstrate that:

# Point to the API server; this is only an example
APISERVER="https://8885DF557F2AC3947D28381DBF5B7670.gr7.ap-southeast-1.eks.amazonaws.com"
# serviceaccount path in the pod: /var/run/secrets/kubernetes.io/serviceaccount
NAMESPACE="NAMESPACE_FROM_THE_SERVICEACCOUNT_PATH_HERE"
TOKEN="TOKEN_FROM_THE_SERVICEACCOUNT_PATH_HERE"
# copy the ca.crt to a local file
CACERT=ca.crt
# Explore the API with TOKEN
$ curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "ip-172-16-56-239.ap-southeast-1.compute.internal:443"
    }
  ]
}


As we can see above, as long as we have the information stored in the service account, we can access the cluster from anywhere, at any time.
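
If you suspect such a token has leaked, rotate it. For these legacy auto-generated token Secrets, deleting the Secret causes the token controller to issue a new one (the Secret name below comes from the demo above and will differ in your cluster):

# invalidate the leaked token; a replacement secret is generated automatically
kubectl delete secret aws-node-token-lw6z4 -n kube-system

# optionally restart the workload so it picks up the new token right away
kubectl rollout restart daemonset aws-node -n kube-system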

If you really need to grant access to the K8s API from outside the cluster, for example from an EC2 instance (running your CI/CD pipelines) or from an end user's laptop, it's better to use another authentication method and map that identity to a K8s RBAC role.

Let's see how to do this in AWS EKS.

User Authentication

To grant additional AWS users or roles the ability to interact with your cluster, you must edit the aws-auth ConfigMap within Kubernetes and create a Kubernetes rolebinding or clusterrolebinding with the name of a group that you specify in the aws-auth ConfigMap.

Example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/eksctl-my-cluster-nodegroup-standard-wo-NodeInstanceRole-1WP3NUE3O6UCF
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/admin
      username: admin
      groups:
        - system:masters
    - userarn: arn:aws:iam::444455556666:user/ops-user
      username: ops-user
      groups:
        - eks-console-dashboard-full-access-group


If you apply the aws-auth ConfigMap above:

  • The IAM user with ARN arn:aws:iam::111122223333:user/admin will be mapped to the system:masters group.
  • The IAM user with ARN arn:aws:iam::444455556666:user/ops-user will be mapped to eks-console-dashboard-full-access-group.

Then you can use Kubernetes RBAC to bind the group to a Role/ClusterRole. For more details on Kubernetes RBAC, see the official documentation here.
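
For example, to give members of eks-console-dashboard-full-access-group read-only access across the cluster, you could bind the group to the built-in view ClusterRole (a sketch; choose whichever role matches the access you actually intend to grant):

# bind the mapped group to the built-in, read-only "view" ClusterRole
kubectl create clusterrolebinding eks-console-dashboard-view \
  --clusterrole=view \
  --group=eks-console-dashboard-full-access-group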

It's worth noting that when multiple users need identical access to the cluster, rather than creating an entry for each individual IAM user, you should allow those users to assume an IAM role and map that role to a Kubernetes RBAC group in the mapRoles section of the ConfigMap. This keeps the aws-auth ConfigMap short and simple, and makes it easier to maintain, especially as the number of users that require access grows.

While IAM is AWS's preferred way to authenticate users who need access to an AWS EKS cluster, it is not the only way! It's possible to use an OIDC identity provider such as GitHub using an authentication proxy and Kubernetes impersonation. Due to the length of this article, we will not cover how to do this here; but please do keep in mind that IAM isn't your only choice.

Cluster Access Auditing

It's clear that auditing Kubernetes access is crucial for cluster security. Just as important, who requires access is likely to change over time, so auditing regularly is the best practice: check who has been granted access, the rights they've been assigned, and whether they still need that access.

Here, we introduce two small tools that make auditing easier:

kubectl-who-can

kubectl-who-can is a small tool that shows which subjects have RBAC permissions to VERB [TYPE | TYPE/NAME | NONRESOURCEURL] in Kubernetes.

It's an open-source project by Aqua Security; you might already know them from their other project, Trivy, a scanner for vulnerabilities in container images, file systems, and Git repositories, as well as for configuration issues.

The easiest way to install kubectl-who-can is with Krew, the plugin manager for the kubectl CLI. Assuming you have already installed Krew, you can simply run:

kubectl krew install who-can


For example, if you run:

tiexin@Tiexins-Mac-mini ~ $ kubectl who-can get secrets
ROLEBINDING                         NAMESPACE    SUBJECT           TYPE            SA-NAMESPACE
system:controller:bootstrap-signer  kube-system  bootstrap-signer  ServiceAccount  kube-system
system:controller:token-cleaner     kube-system  token-cleaner     ServiceAccount  kube-system

CLUSTERROLEBINDING                           SUBJECT                         TYPE            SA-NAMESPACE
cluster-admin                                system:masters                  Group
system:controller:expand-controller          expand-controller               ServiceAccount  kube-system
system:controller:generic-garbage-collector  generic-garbage-collector       ServiceAccount  kube-system
system:controller:namespace-controller       namespace-controller            ServiceAccount  kube-system
system:controller:persistent-volume-binder   persistent-volume-binder        ServiceAccount  kube-system
system:kube-controller-manager               system:kube-controller-manager  User


You can easily get all the users/groups/service accounts that can get secrets.
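
Since the aws-auth ConfigMap discussed earlier effectively controls who can access the cluster, it's also worth checking who can modify ConfigMaps in kube-system, for example:

# who can edit the ConfigMap that maps IAM identities to RBAC groups?
kubectl who-can update configmaps -n kube-system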

RBAC Lookup

RBAC Lookup is a CLI that allows you to easily find Kubernetes roles and cluster roles bound to any user, service account, or group name. It helps to provide visibility into Kubernetes auth.

For Mac users, the easiest way to install it is with Homebrew:

brew install FairwindsOps/tap/rbac-lookup


For the simplest use case, rbac-lookup returns any matching user, service account, or group along with the roles it has been given:

$ rbac-lookup aws-node
SUBJECT                 SCOPE          ROLE
kube-system:aws-node    cluster-wide   ClusterRole/aws-node
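
You can also search by partial name and widen the output to include the subject kind, which helps when auditing groups (flags as documented in the project's README; behavior may vary by version):

# list every subject whose name contains "system:", along with its kind
rbac-lookup system: --output wide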


Summary

As the third and last part of the Kubernetes hardening tutorial, this post also concludes our detailed introduction and hands-on guide to NSA/CISA's Kubernetes Hardening Guidance. For additional security hardening guidance, here are some useful links for a deeper dive:

  • Center for Internet Security.
  • Kubernetes benchmarks.
  • Docker Security Technical Implementation Guides.
  • Kubernetes Security Technical Implementation Guides.
  • Cybersecurity and Infrastructure Security Agency (CISA) and analysis report.
  • Kubernetes documentation.
  • Container Security book by Liz Rice.

Published at DZone with permission of Tiexin Guo. See the original article here.
