5 ways to secure your Amazon EKS Control Plane

The EKS control plane is fronted by the Kubernetes API server, the interface AWS provides for interacting with cluster resources. This endpoint is often left publicly accessible to facilitate seamless integration with various tools, such as CI/CD systems and third-party platforms like Lens, Rancher, Port.io, and Backstage. While this accessibility can expedite integrations, it also widens the attack surface, leaving the EKS API susceptible to a variety of attacks. For instance, an exposed API endpoint can lead to unauthorized access and data leaks, which underscores the importance of securing the control plane effectively. In this article, I highlight five ways to secure an EKS control plane and enable secure access to it. Let’s dive in.

1. IAM and EKS tokens are crucial components of EKS security.

IAM (Identity and Access Management) and EKS authentication tokens play a critical role in securing your Amazon EKS (Elastic Kubernetes Service) clusters by governing access control and ensuring that only authorized entities can interact with the Kubernetes API server. IAM defines the identities (users, roles, or federated entities) and their associated permissions, determining who can access your EKS cluster and what actions they are permitted to perform—such as deploying workloads, modifying resources, or viewing configurations. When a user or service wants to interact with the EKS API, they must present a valid authentication token, typically generated using the aws eks get-token command, which retrieves a short-lived token signed using AWS STS (Security Token Service). These tokens act as a bridge between AWS IAM and Kubernetes RBAC (Role-Based Access Control), enabling fine-grained authorization within the cluster. Mismanagement of these tokens—such as leaving them exposed, failing to rotate them, or assigning overly permissive IAM policies—can open your cluster to potential compromise. Therefore, it’s crucial to regularly audit IAM policies, implement least privilege access, monitor token usage, and set appropriate token expiration durations to ensure a robust and secure EKS environment.

Basic Configuration Guide: IAM and EKS Token Authentication

Here’s a step-by-step guide to securely configuring IAM and token authentication for EKS:

a. Create an IAM Role or User for EKS Access

You can either create:

  • An IAM user for individuals (e.g., developers)
  • An IAM role for federated users or automated services (e.g., GitHub Actions, GitOps)

Example IAM policy for EKS access:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:DescribeCluster"
      ],
      "Resource": "*"
    }
  ]
}

b. Update aws-auth ConfigMap in the EKS Cluster

Use kubectl edit configmap aws-auth -n kube-system to map IAM users or roles to Kubernetes RBAC.

Example entry:

mapRoles: |
  - rolearn: arn:aws:iam::<account-id>:role/EKSAdminRole
    username: eks-admin
    groups:
      - system:masters

This gives the IAM role full administrative access to the cluster; since system:masters bypasses Kubernetes RBAC entirely, reserve it for a small number of break-glass admin roles.

c. Generate an Authentication Token

Ensure the AWS CLI is configured with credentials:

aws eks get-token --cluster-name <cluster_name>

This command outputs a signed token valid for 15 minutes, used as a bearer token in kubectl.
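
If you ever need the raw token outside of kubectl’s normal flow (for example, in a quick script), you can extract it from the command’s JSON output. A minimal sketch, assuming jq is installed and your kubeconfig already points at the cluster:

# Pull the short-lived bearer token out of the get-token response (assumes jq)
TOKEN=$(aws eks get-token --cluster-name <cluster_name> | jq -r '.status.token')

# Present it directly as a bearer token against the current context's API server
kubectl --token="$TOKEN" get pods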

d. Update kubeconfig File

Update your kubeconfig context with:

aws eks update-kubeconfig --name <cluster_name>

This configures kubectl to use IAM-based tokens for authentication.
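
Under the hood, update-kubeconfig writes an exec credential plugin entry, so kubectl runs aws eks get-token and fetches a fresh token on every call. The generated user section looks roughly like this (exact arguments vary by AWS CLI version):

users:
  - name: arn:aws:eks:<region>:<account-id>:cluster/<cluster_name>
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: aws
        args:
          - eks
          - get-token
          - --cluster-name
          - <cluster_name>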

e. Assign Kubernetes RBAC Roles

Define RBAC roles in Kubernetes to control access within the cluster:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-only-binding
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: User
    name: eks-user
    apiGroup: rbac.authorization.k8s.io
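
To confirm the binding behaves as intended, you can impersonate the mapped user, assuming your own identity is allowed to impersonate:

kubectl auth can-i list pods --as=eks-user          # expect "yes" with the view role
kubectl auth can-i delete deployments --as=eks-user # expect "no"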

2. Implementing the principle of least privilege on IAM access is vital when granting permissions for your EKS cluster.

Implementing the principle of least privilege (PoLP) is a foundational best practice for securing IAM access to your Amazon EKS cluster. This approach mandates that users, roles, and service accounts are granted only the specific permissions they need to perform their tasks—nothing more, nothing less. Instead of assigning overly broad policies such as eks:* or ec2:*, which can expose critical infrastructure to unnecessary risk, permissions should be fine-tuned to align with exact job responsibilities. For example, a developer responsible for deploying applications might only need access to eks:DescribeCluster, eks:ListFargateProfiles, and basic kubectl actions scoped to a particular namespace using Kubernetes Role-Based Access Control (RBAC). By clearly defining which actions are permitted (Action), on which resources (Resource), and under what conditions (Condition), you significantly reduce your EKS cluster’s attack surface. This minimizes the likelihood of privilege escalation, accidental deletion of production services, or exposure of sensitive configurations. AWS IAM policies should be routinely audited and revised to remove unused privileges, and should be coupled with Kubernetes-native controls such as Role and ClusterRoleBinding to further enforce scope within the cluster itself. Applying least privilege not only strengthens security, but also enhances operational discipline and traceability across your DevOps workflows.
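
To make the Action, Resource, and Condition triad concrete, here is a hedged sketch of a policy that only allows describing clusters tagged for development; the region, account ID, name pattern, and tag key are illustrative placeholders:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "eks:DescribeCluster",
      "Resource": "arn:aws:eks:eu-west-1:<account-id>:cluster/dev-*",
      "Condition": {
        "StringEquals": { "aws:ResourceTag/environment": "dev" }
      }
    }
  ]
}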

Basic Configuration Guide: Applying Least Privilege Access to EKS

Use the following steps to apply least privilege access for IAM users or roles interacting with EKS and Kubernetes:

a. Create a Minimal IAM Policy

Instead of giving full eks:* permissions, define only required actions.

Here’s an example IAM policy for a read-only EKS user:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:DescribeCluster"
      ],
      "Resource": "*"
    }
  ]
}

For a deployment-specific user:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:DescribeCluster",
        "eks:ListFargateProfiles"
      ],
      "Resource": "*"
    }
  ]
}

Attach this policy to an IAM user or role via the AWS Console, CLI, or Terraform.
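
For example, attaching it as an inline policy with the AWS CLI might look like this (the policy name and file path are placeholders):

aws iam put-user-policy \
  --user-name deploy-user \
  --policy-name eks-deploy-minimal \
  --policy-document file://eks-deploy-policy.json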

b. Map the IAM Identity to Kubernetes via aws-auth ConfigMap

Use kubectl to map the IAM identity to a Kubernetes role or group.

Example:

mapUsers: |
  - userarn: arn:aws:iam::<account-id>:user/deploy-user
    username: deploy-user
    groups:
      - dev-deploy-group

You can also use mapRoles for IAM roles.

c. Define Kubernetes RBAC Roles and Bindings

Within the EKS cluster, define RBAC permissions to limit access to certain namespaces or actions.

Example Kubernetes Role (namespace-scoped):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: app-deployer
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "create", "update", "delete"]

Example RoleBinding:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deploy-access
  namespace: dev
subjects:
  - kind: User
    name: deploy-user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: app-deployer
  apiGroup: rbac.authorization.k8s.io

d. Test Access

Use kubectl as the IAM user or role via a configured kubeconfig:

aws eks update-kubeconfig --name <cluster_name> --profile deploy-user
kubectl auth can-i create deployments --namespace=dev

e. Review & Audit Permissions Regularly
  • Use IAM Access Analyzer and AWS CloudTrail for insight into unused permissions (see the CLI sketch after this list).
  • Periodically clean up and tighten overly permissive policies.
  • Rotate credentials or tokens and enforce MFA where possible.
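
As a starting point, IAM’s service-last-accessed API reports when a role last used each service; a hedged CLI sketch with a placeholder ARN and job ID:

# Start an asynchronous report for the role
aws iam generate-service-last-accessed-details \
  --arn arn:aws:iam::<account-id>:role/EKSAdminRole

# Retrieve the results using the JobId returned by the previous command
aws iam get-service-last-accessed-details --job-id <job-id>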

3. Hosting the EKS Control Plane on a private subnet can significantly enhance security.

Hosting the Amazon EKS control plane on private subnets is one of the most effective ways to harden your Kubernetes infrastructure against external threats. By default, when you create an EKS cluster, the Kubernetes API server endpoint is publicly accessible over the internet (unless explicitly configured otherwise). This increases the attack surface, making it susceptible to unauthorized access attempts, misconfigured security groups, or unmonitored public API usage. To mitigate this, you can configure the EKS control plane to use private endpoint access and deploy it within private subnets inside your Virtual Private Cloud (VPC). This ensures that access to the EKS API server is only available from within your VPC (e.g., through a VPN, bastion host, or AWS Direct Connect), while your worker nodes can communicate with the control plane over private IP addresses. With this setup, no public IP is exposed for cluster administration, drastically improving security posture. Moreover, you still retain management flexibility by exposing a limited public endpoint for read-only access (if needed), while enforcing tight controls through security groups and IAM roles. Implementing this architecture using Infrastructure as Code (IaC) tools like Terraform allows you to automate, version, and audit this secure configuration across environments.

Basic Terraform Guide: Deploying EKS with a Private Control Plane

Here’s how to deploy an EKS cluster with its control plane residing in private subnets using Terraform:

a. Define a VPC with Private Subnets

Use the AWS VPC module or define your own. Here’s a snippet using the AWS VPC module:

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  name    = "eks-vpc"
  cidr    = "10.0.0.0/16"

  azs             = ["eu-west-1a", "eu-west-1b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]

  enable_nat_gateway = true
  single_nat_gateway = true

  tags = {
    Name = "eks-vpc"
  }
}

b. Create the EKS Cluster with Private Endpoint Only

Use the terraform-aws-eks module and set the control plane endpoint access to private:

module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = "secure-eks-cluster"
  cluster_version = "1.29"

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = false

  enable_irsa = true

  eks_managed_node_groups = {
    default = {
      desired_size = 2
      max_size     = 3
      min_size     = 1

      instance_types = ["t3.medium"]
      subnet_ids     = module.vpc.private_subnets
    }
  }

  tags = {
    "Environment" = "prod"
    "Terraform"   = "true"
  }
}

cluster_endpoint_private_access = true ensures that only private IPs can reach the EKS API.
cluster_endpoint_public_access = false removes any public API endpoint for this cluster.
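
For an existing cluster, the same endpoint settings can also be flipped with the AWS CLI; note that Terraform will then report drift on its next plan:

aws eks update-cluster-config \
  --name secure-eks-cluster \
  --resources-vpc-config endpointPrivateAccess=true,endpointPublicAccess=false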

c. Add Bastion Host (Optional for Access)

If you need secure administrative access to the cluster, deploy a bastion host in one of the public subnets:

resource "aws_instance" "bastion" {
  ami           = "ami-xxxxxxxx" # Amazon Linux 2
  instance_type = "t2.micro"
  subnet_id     = module.vpc.public_subnets[0]
  key_name      = "my-key"

  associate_public_ip_address = true

  tags = {
    Name = "bastion-host"
  }
}

From this bastion, you can use kubectl to interact with the cluster over the private endpoint.
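
A typical session, assuming the key pair and public IP from the bastion resource above:

ssh -i my-key.pem ec2-user@<bastion-public-ip>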

d. Test Cluster Access from Private Subnet

SSH into the bastion host and use the AWS CLI to test access:

aws eks --region eu-west-1 update-kubeconfig --name secure-eks-cluster
kubectl get nodes

e. Optional: Enable Public Endpoint with Restricted CIDRs

If your DevOps team needs limited internet access, you can enable public endpoint with CIDR-based restrictions:

cluster_endpoint_public_access       = true
cluster_endpoint_public_access_cidrs = ["203.0.113.0/24"] # Office VPN

Additional Best Practices
  • Use AWS PrivateLink to access EKS APIs from VPCs without traversing the internet.
  • Attach restrictive security groups to the control plane and node groups.
  • Pair with IAM policies and Kubernetes RBAC for layered access control.
  • Audit activity with AWS CloudTrail, VPC Flow Logs, and GuardDuty.

By using private subnets for the EKS control plane, you shield your Kubernetes API from public exposure and enhance your infrastructure’s confidentiality and integrity. Terraform makes it repeatable, traceable, and scalable across environments—whether for dev, staging, or production.

4. Utilizing Roles and Single Sign-On (SSO) to access your EKS cluster can simplify management and enhance security.

Utilizing AWS IAM Roles in conjunction with Single Sign-On (SSO) is a powerful and scalable method to simplify access management while enhancing the security posture of your Amazon EKS clusters. In traditional IAM setups, managing access through static IAM users and long-lived access keys can become difficult to audit and scale across teams. By adopting role-based access control (RBAC) via IAM roles, you allow users to assume permissions dynamically based on their job function or group membership. Integrating AWS IAM Identity Center (formerly AWS SSO) enables secure federated access—users log in with their corporate credentials (e.g., from Active Directory, Okta, Google Workspace, or Azure AD), and assume IAM roles mapped to their organizational roles. This eliminates the need for credential management, enforces strong authentication (including MFA), and provides centralized control over who can access your Kubernetes clusters. Furthermore, IAM Identity Center integrates seamlessly with kubectl, allowing users to request EKS tokens on-demand via short-lived, signed sessions, all without managing individual IAM users. This setup provides a seamless developer experience, better auditability through CloudTrail, and tighter security by enforcing least privilege access and identity federation.

Basic Configuration Guide: Accessing EKS with AWS SSO and IAM Roles

Here’s how to securely configure EKS access using AWS Identity Center (SSO) and dynamic IAM roles:

a. Enable AWS IAM Identity Center (SSO)

In the AWS Console:

  • Navigate to IAM Identity Center
  • Choose Enable and select your identity source:
    • AWS Directory Services (AD)
    • External IdP like Okta, Azure AD, or Google Workspace via SAML 2.0

Create user groups (e.g., eks-admins, devs, read-only) and assign users accordingly.
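
Access is then granted through permission sets; a hedged CLI sketch for creating one (the Identity Center instance ARN is a placeholder, and you would still assign the set to an account and group afterwards):

aws sso-admin create-permission-set \
  --instance-arn arn:aws:sso:::instance/ssoins-<id> \
  --name EKSReadOnly \
  --session-duration PT8H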

b. Create IAM Roles for EKS Access

Use aws_iam_role with a trust relationship that allows SSO to assume it.

Example IAM Role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "<IAM Identity Center SSO ARN or Role ARN>"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Attach a policy granting minimal EKS permissions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:DescribeCluster"
      ],
      "Resource": "*"
    }
  ]
}

c. Map the Role in the aws-auth ConfigMap

Update your EKS cluster’s aws-auth ConfigMap:

mapRoles: |
  - rolearn: arn:aws:iam::<account-id>:role/EKSAdminSSORole
    username: eks-admin
    groups:
      - system:masters

This gives SSO users who assume this role Kubernetes admin privileges (system:masters).

Apply it with:

kubectl edit configmap aws-auth -n kube-system

d. Configure AWS CLI for SSO

On a user’s machine:

aws configure sso

Provide:

  • SSO Start URL
  • AWS Region
  • SSO Account ID
  • Role name (e.g., EKSAdminSSORole)

AWS CLI will open a browser window for user authentication via SSO.
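
The wizard persists a profile in ~/.aws/config that looks roughly like this (all values are placeholders):

[profile eks-sso]
sso_start_url  = https://my-org.awsapps.com/start
sso_region     = eu-west-1
sso_account_id = <account-id>
sso_role_name  = EKSAdminSSORole
region         = eu-west-1

You can then pass --profile eks-sso to the update-kubeconfig command in the next step so the kubectl context is bound to this SSO session.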

e. Update kubeconfig for EKS Access

After SSO is authenticated:

aws eks update-kubeconfig --name <cluster-name> --region <region> --alias eks-sso

This links your SSO session to the kubectl context.

Test access:

kubectl get nodes

f. Optional: Set Role-Based Access in Kubernetes

Define Kubernetes RBAC per group:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dev-readonly-access
subjects:
  - kind: User
    name: dev-user@company.com
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io

This grants SSO-authenticated users read-only access.

Benefits of Using AWS SSO for EKS
  • No IAM user keys: Reduced risk from compromised credentials
  • Federated identity: Centralized access with existing IdP (e.g., Okta, Azure AD)
  • Short-lived tokens: Temporary session credentials via AWS STS
  • Audit logging: Every SSO session is traceable in CloudTrail
  • RBAC-ready: Works cleanly with Kubernetes RoleBindings
  • Scalable access: Easily onboard/offboard teams

5. Accessing your EKS cluster via VPN adds another layer of security by ensuring that all traffic to and from the cluster is encrypted.

Accessing your Amazon EKS cluster through a VPN connection introduces an additional and powerful layer of network-level security. While EKS clusters can be configured to restrict access to their Kubernetes API endpoint using private subnets and security groups, integrating a VPN ensures that all traffic between your development environment and the EKS control plane remains encrypted and isolated from the public internet. This approach is especially critical in environments where regulatory compliance, data confidentiality, or corporate security policies prohibit direct access over public IPs. With VPN in place, administrators, CI/CD systems, or developers can securely manage EKS workloads from on-premises environments, laptops, or corporate networks—without exposing the EKS API server to public interfaces. AWS offers multiple solutions for this, including AWS Site-to-Site VPN for hybrid networks, AWS Client VPN for individual access, or custom OpenVPN configurations using EC2 instances. Regardless of the method, the goal is the same: create a secure tunnel between your private network and AWS, restrict access via identity-aware policies and security groups, and ensure your Kubernetes operations never leave a trusted path.

Basic Configuration Guide: Secure EKS Access via VPN

Below is a simplified guide using AWS Client VPN, a fully managed AWS VPN service ideal for individual developer access.

a. Create a Client VPN Endpoint

In the AWS Console or Terraform, define a Client VPN endpoint:

resource "aws_ec2_client_vpn_endpoint" "eks_vpn" {
  description            = "Client VPN for EKS Cluster"
  server_certificate_arn = "arn:aws:acm:region:account:certificate/your-cert-id"

  authentication_options {
    type                       = "certificate-authentication"
    root_certificate_chain_arn = "arn:aws:acm:region:account:certificate/your-root-cert"
  }

  connection_log_options {
    enabled = false
  }

  client_cidr_block  = "10.20.0.0/22"
  split_tunnel       = true
  dns_servers        = ["10.0.0.2"] # Must be IP addresses, e.g., the VPC resolver (VPC CIDR base + 2)
  transport_protocol = "udp"
  vpn_port           = 443
}

Use AWS ACM to provision certificates for server and client authentication.
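
If you generate the server and client certificates yourself (for example, with easy-rsa), they can be imported into ACM; a hedged sketch with placeholder file names:

aws acm import-certificate \
  --certificate fileb://server.crt \
  --private-key fileb://server.key \
  --certificate-chain fileb://ca.crt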

b. Associate VPN with Your VPC and Subnets

resource "aws_ec2_client_vpn_network_association" "vpn_subnet_association" {
  client_vpn_endpoint_id = aws_ec2_client_vpn_endpoint.eks_vpn.id
  subnet_id              = module.vpc.private_subnets[0]
}

This allows the VPN to route traffic inside your VPC, where the EKS cluster resides.

c. Add Authorization Rules

To allow VPN clients to access VPC resources like the EKS control plane:

resource "aws_ec2_client_vpn_authorization_rule" "eks_access" {
  client_vpn_endpoint_id = aws_ec2_client_vpn_endpoint.eks_vpn.id
  target_network_cidr    = "10.0.0.0/16"  # Replace with your VPC CIDR
  authorize_all_groups   = true
}

d. Configure Security Groups and Route Tables
  • Attach a security group to your EKS control plane that allows inbound traffic from your VPN CIDR block
  • Update route tables to ensure proper routing of VPN traffic to private subnets

Example Security Group Rule:

ingress {
  from_port   = 443
  to_port     = 443
  protocol    = "tcp"
  cidr_blocks = ["10.20.0.0/22"] # VPN CIDR
}
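
On the routing side, associating a subnet automatically adds the VPC’s local route; destinations beyond the VPC (such as peered VPCs or on-premises ranges) need an explicit Client VPN route. A sketch using the module layout from earlier steps, with an example peered CIDR:

resource "aws_ec2_client_vpn_route" "to_peered_vpc" {
  client_vpn_endpoint_id = aws_ec2_client_vpn_endpoint.eks_vpn.id
  destination_cidr_block = "10.1.0.0/16" # Example peered VPC CIDR
  target_vpc_subnet_id   = module.vpc.private_subnets[0]
}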

e. Download and Use OpenVPN Client
  • Export the VPN configuration file (.ovpn) from the AWS Console
  • Import into OpenVPN client on your local machine
  • Connect to the VPN

Test with:

aws eks update-kubeconfig --name your-eks-cluster
kubectl get pods --all-namespaces

With the VPN connected, kubectl will reach the EKS API server through the tunnel instead of a public endpoint (the aws eks update-kubeconfig call itself talks to the regional AWS EKS service API, not the cluster endpoint).

Alternate VPN Options
  • AWS Site-to-Site VPN: Best for hybrid cloud with corporate data centers
  • OpenVPN on EC2: Flexible and self-managed for full control
  • AWS Transit Gateway + VPN: Enterprise-scale network aggregation

Benefits of VPN Access to EKS
  • End-to-End Encryption: All traffic between your workstation and the control plane is protected
  • No Public Endpoint Required: Remove public exposure by using private-only API endpoints
  • Identity-based Access: Use VPN policies to limit who can connect
  • Flexible Network Control: Easily pair with IAM, security groups, and RBAC for multi-layered defense

By securing EKS cluster access behind a VPN, you’re ensuring that sensitive Kubernetes operations happen over a trusted, encrypted channel—eliminating one of the most common cloud-native attack vectors. Whether you’re connecting from a CI/CD pipeline, developer machine, or enterprise network, this setup delivers defense-in-depth and peace of mind.

Conclusion

These five strategies will not only safeguard your EKS Control Plane but also promote a secure environment for your applications. By understanding and implementing these measures, you can minimize risks and enhance the overall security posture of your Kubernetes workloads running in EKS. It’s crucial to regularly review and update your security practices to adapt to new threats and vulnerabilities in the ever-evolving landscape of cloud security. Remember, a proactive approach to security will always yield better results in protecting your resources.
