
Secure Alternatives to Long-Lived AWS IAM Credentials: Powerful Ways to Eliminate Access Keys and Secret Keys


One security vulnerability I have consistently encountered in my AWS career is accounts with long-lived credentials: an IAM user that was created and never used, or an IAM user whose access key and secret key have not been rotated for over 600 days, whether the key is still in use or was never used at all. This poses a huge security risk to the AWS account.

According to SEC02-BP05 in the Security Pillar of the AWS Well-Architected Framework, the implementation guidance recommends auditing credentials regularly, rotating IAM credentials, and relying on IAM roles. This article will address various ways of working with AWS without long-lived credentials.

What are Long-Lived Credentials?

According to ChatGPT:

Long-lived credentials refer to authentication tokens or keys that have extended validity periods. They enable users or systems to access resources without needing frequent re-authentication. These credentials can include API keys, OAuth tokens, session cookies, or any other access token that remains valid for a long duration, typically ranging from several hours to several days or even longer.

Within the context of AWS, a long-lived credential refers to an access key and secret key created through IAM. They have no defined lifespan and can be used repeatedly without the need to reauthenticate. Although using credentials that never expire can be convenient, it poses significant security risks to your AWS account: if a malicious actor gains access to those credentials, they have the same privileges and access as you, the original owner. Some of the disadvantages of long-lived credentials are as follows:

  • Extended exposure gives attackers a large window in which to compromise the credentials.
  • Delayed detection and response when a compromise occurs.
  • Persistent access for an unauthorized user or attacker.
  • Management challenges when handling a large number of long-lived credentials.
  • Compliance and regulatory requirements generally do not permit long-lived credentials.
  • Lack of visibility, control, and difficulty in auditing.

Now that we understand what a long-lived credential is and the security risks it poses, let us talk about ways credentials can be configured in AWS without using an access key and secret key, which are long-lived credentials.

Short-lived Credential Options in AWS

You can configure your code and other third-party systems to authenticate with AWS without using an access key and secret key in various ways. The methods discussed in this article all have one thing in common: they are short-lived, meaning that the credentials are generated on the fly and have an expiry time. This expiration is managed by the IAM role involved. When using short-lived credentials in AWS, the IAM role is what sits between the third-party service (application code, GitHub, Terraform Cloud, etc.) and the AWS services being accessed.

Source: https://www.linkedin.com/pulse/aws-iam-roles-guide-granting-permissions-trusted-entities-chukwu-ligpf/

The IAM role is in charge of the trust (who can assume the role) and the permissions (which are granted via IAM policies). The trust is extended to the third-party service, which can take different forms, as shown in the following screenshot.

Trust Entities for IAM Roles

The following is a list of options for short-lived credentials configurations in AWS:

  • AWS IAM Roles
  • AWS IAM Roles Anywhere
  • IAM Roles for Service Accounts (IRSA)
  • EKS Pod Identity
  • OIDC Integration
  • SSO Integration (via AWS IAM Identity Center)

Let us take them one at a time, describing how they are configured and explaining their use cases.

AWS IAM Roles
This is the most basic way to avoid using access keys and secret keys in AWS. IAM roles are generally used within AWS for service-to-service communication and are the recommended way for AWS services to talk to one another. All you need to do is create the role and attach it to the service, ensuring the role has the right policy attached and that the trusted entity is the originating service that needs to access another service.

Sample: An application running in an EC2 instance needs to upload a file to S3 or access files in S3.

Solution:
The proper way to configure this is to create an IAM role with an S3 policy and EC2 as the trusted entity, then attach this role to the EC2 instance. The application running on that instance will not need an access key and secret key to access S3; because the role is attached to the instance, the application is authenticated and authorized through the role. The following guide gives step-by-step details on how to configure that.
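As a rough sketch of what this looks like in practice, the following CloudFormation snippet creates a role trusted by EC2 and wraps it in an instance profile that can be attached to the instance. The resource and role names are hypothetical, and the managed read-only S3 policy is just one possible choice.

# Hypothetical sketch: an IAM role that EC2 instances can assume, plus the
# instance profile that attaches it to an instance. No access keys involved.
Resources:
  S3AccessRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: S3AccessRole                      # assumed name
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com          # trusted entity: EC2
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess   # example policy
  S3AccessInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref S3AccessRole

Once the instance profile is attached, the AWS SDK inside the instance picks up temporary credentials automatically from the instance metadata service.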

The next option is similar to this one: it also involves IAM roles, but in the form of IAM Roles Anywhere, which was launched in July 2022 to expand the scope of IAM roles.

AWS IAM Roles Anywhere
This option is very similar to normal IAM roles; the major difference is the scope. While the previous option is mostly focused on services within AWS, this one focuses on services outside of AWS. Before it existed, the default method for such workloads to access AWS services was an access key and secret key. With this solution, you no longer need them: applications or services running outside of AWS can use this authentication option, which is a safer approach.
It uses a certificate authority (CA) and certificate-based authentication to authenticate external workloads. It strictly uses X.509 certificates issued by a certificate authority that is registered with IAM Roles Anywhere as a trust anchor, establishing trust between your public key infrastructure (PKI) and IAM Roles Anywhere.

IAM Roles Anywhere can be used with on-premises virtual machines and virtual machines in other cloud providers to authorize access to AWS services via IAM policies.

Sample: A blockchain application has been deployed in your on-premises environment as a node. You want the system to notify you when the price of Bitcoin changes, and you decide to use Amazon SNS as your notification service. What is the best way to authorize the application to use SNS?

Solution: The best way to authorize with SNS is to use IAM Roles Anywhere. Not only does it provide short-lived credentials for access, but the credentials are generated on demand. This removes the hassle of managing keys for AWS access.

The following is a guide on how to configure IAM Roles Anywhere.
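To make the trust relationship concrete, here is a minimal, hypothetical CloudFormation sketch of the role the on-premises node would assume. The distinguishing part is that the trusted entity is the IAM Roles Anywhere service principal rather than an AWS service like EC2; the role name and SNS policy are assumptions for this example.

# Hypothetical sketch: a role assumable through IAM Roles Anywhere.
# The trust anchor (your CA) and profile are configured separately in
# IAM Roles Anywhere; the workload then exchanges its X.509 certificate
# for temporary credentials, with no stored access keys required.
Resources:
  OnPremSnsRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: OnPremSnsRole                      # assumed name
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: rolesanywhere.amazonaws.com # trusted entity
            Action:
              - sts:AssumeRole
              - sts:TagSession
              - sts:SetSourceIdentity
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonSNSFullAccess   # example policy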

Let us switch things up a little and see how to configure short-lived credentials for other types of systems.

IAM Roles for Service Accounts (IRSA)
This use case covers EKS specifically. The feature gives Kubernetes pods fine-grained access to AWS services. Without it, pods in Amazon EKS primarily access AWS services via the IAM role attached to the worker nodes (EC2 instances), as explained in the first scenario. While that is a quick option for short-lived access credentials in Kubernetes, IRSA is a native way to integrate an IAM role with a specific set of pods. One disadvantage of the EC2 IAM role on EKS worker nodes is that every pod on the node gets the same privileges, which is not ideal from a security standpoint.

Let us dissect IRSA for a minute and understand what it is. "IR" stands for IAM Roles, which we described earlier. "SA" is an acronym for Service Account: a type of account used by processes running in pods to interact with the Kubernetes API.

In this scenario, the IAM role and the service account are combined. Beyond the normal use of a service account with a ClusterRole and ClusterRoleBinding, it can also be linked to an AWS IAM role. The following is a sample of a Kubernetes service account that uses an IAM role.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-access-sa
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<account_id>:role/S3AccessRole

The code above uses the Role ARN to refer to the specific role that will be assumed. This service account is then used in the pod/deployment configuration as shown in the following example:

apiVersion: v1
kind: Pod
metadata:
  name: s3-access-pod
  namespace: default
spec:
  serviceAccountName: s3-access-sa
  containers:
  - name: my-container
    image: amazonlinux
    command: ["/bin/sh"]
    args: ["-c", "sleep 3600"]

One last component of this setup to take note of is the trusted entity of the IAM role, which is the OIDC provider of the EKS cluster. We shall talk more about OIDC in the next two sections.

Sample: Two deployments have been made to a Kubernetes cluster, each with one pod. Pod one needs to access Amazon SES to send emails, while the other pod needs access to DynamoDB to read and write data. If both pods run on the same worker node, assigning the node a single role carrying both policies would give each pod more access than it needs. What is the best solution?

Solution: The best solution here is IRSA. With IRSA, there can be two roles: one with an IAM policy for Amazon SES access and the other with an IAM policy for DynamoDB access. The trusted entity for each role is the cluster's OIDC provider. Two service accounts are then created, each annotated with the respective role ARN, and configured in the corresponding deployments. This gives each pod access only to the service it needs. The following guide explains how to configure IRSA in Amazon EKS.
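Separately from that guide, here is a hedged sketch of how eksctl could wire this up in a single config file; the cluster name, namespaces, service account names, and the broad managed policies are all assumptions for illustration.

# Hypothetical eksctl config: enables the cluster's OIDC provider and
# creates two IAM roles, each bound to its own Kubernetes service account.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster          # assumed cluster name
  region: us-east-1
iam:
  withOIDC: true              # sets up the OIDC provider used as the trusted entity
  serviceAccounts:
    - metadata:
        name: ses-sender-sa
        namespace: default
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonSESFullAccess      # example policy
    - metadata:
        name: dynamodb-app-sa
        namespace: default
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess # example policy

eksctl can then create the roles and annotate the service accounts with their ARNs from this file, and each deployment simply sets serviceAccountName to the matching service account, as shown earlier.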

The next option is still EKS-related but with a different twist to managing access. Let’s jump right in.

EKS Pod Identity
At AWS re:Invent 2023, EKS Pod Identity was launched. It introduces a new way for a pod running in an EKS cluster to access AWS services. Very similar to IRSA, it was developed to address some of IRSA's limitations, which are:

  • Cluster administrators may not have the IAM administrator permissions needed to create OIDC providers or update IAM trust policies, which are prerequisites for IRSA.
  • With IRSA, service accounts in Kubernetes are mapped to IAM roles, but this mapping is static and can be coarse-grained, making it challenging to enforce fine-grained access control at the pod or container level.
  • With IRSA, creating pod-specific access policies can be complex, requiring additional configuration and maintenance.
  • Ensuring the principle of least privilege with IRSA can be challenging, because service accounts might end up with more permissions than necessary if they are shared among multiple pods with different requirements.

With EKS Pod Identity, IAM roles are associated directly with pods, giving them more granular access configuration than IRSA would. For EKS Pod Identity to work, you first deploy the EKS Pod Identity Agent into the Kubernetes cluster; it runs as an agent within the cluster that delivers credentials to pods and keeps their access in sync with changes to the policies on a role. Because IAM roles are used with EKS Pod Identity, the pods and applications running in the Kubernetes cluster do not need long-lived credentials to gain access to AWS services; temporary credentials are supplied instead. Like IRSA, it works with service accounts, but the service account must be associated with the IAM role (a pod identity association) for it to function properly, and the service account is then configured in the deployment as shown in the IRSA section of this article.

The following guide explains how to set up EKS Pod Identity for your EKS cluster from scratch.

Sample: There are 20 EKS clusters to be configured, and short-lived credential access needs to be granted via IAM roles. What is the best way to configure this without repeatedly updating the trusted entity configuration of the IAM role for each cluster?
Solution: Set up EKS Pod Identity. It does not have IRSA's per-cluster trust limitation, and it still ensures the credentials generated are short-lived.
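To illustrate why this scales, here is a hedged CloudFormation sketch of a role trust policy for EKS Pod Identity; the role name and attached policy are assumptions. The trusted entity is the pods.eks.amazonaws.com service principal, so the same role can be associated with service accounts in any number of clusters without editing the trust policy per cluster.

# Hypothetical sketch: a role assumable via EKS Pod Identity. Unlike IRSA,
# the trust policy does not reference a cluster-specific OIDC provider.
Resources:
  PodIdentityAppRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: PodIdentityAppRole                 # assumed name
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: pods.eks.amazonaws.com      # EKS Pod Identity principal
            Action:
              - sts:AssumeRole
              - sts:TagSession
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonDynamoDBReadOnlyAccess   # example policy

The role is then linked to a namespace and service account in each cluster through a pod identity association.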

OIDC Integration
This is another clean way to integrate third-party services with AWS so they can assume IAM roles and gain access to AWS services via the policies attached to those roles. OIDC is an acronym for OpenID Connect, an identity authentication protocol that extends Open Authorization (OAuth) 2.0 to standardize how users are authenticated and authorized when they sign in to access digital services.

Many systems support the OIDC standard, which makes it applicable to a broad range of use cases. OIDC handles the authentication part of the process, while the IAM policy handles the authorization part. Privileges are not assigned on the OIDC provider's side; instead, they are granted by the IAM policy attached to the IAM role that the OIDC configuration maps to.
The following image explains the OIDC flow and how it enables authorization between two third-party systems.

Source: https://openid.net/developers/how-connect-works/

This is the workflow broken down into steps:

  • A user navigates to the application via the browser.
  • The user clicks on Sign-In/Login.
  • The client sends a request to the OpenID Provider.
  • The OpenID Provider authenticates the user and obtains authorization.
  • The OpenID Provider responds with an identity token (and usually an access token).
  • The client sends a request with the access token to the UserInfo endpoint.
  • The UserInfo endpoint returns claims about the user.

OIDC is supported by a variety of third-party services. The following is a list of some services that support OIDC Integration with AWS IAM:

  • GitHub
  • GitLab
  • Bitbucket
  • JFrog
  • HashiCorp Vault
  • Terraform Cloud
  • Octopus Deploy

Sample: You are a DevOps engineer in an organization, and you notice that the AWS access key and secret key used for GitHub Actions are 500 days old. No one is rotating the keys, and everyone is scared to touch them because doing so could cause a deployment outage. What is the best long-term solution?
Solution: GitHub supports OIDC integration with AWS. The best solution is to configure the OIDC integration, test and confirm that it works, and then delete the access key and secret key.
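As a hedged sketch of what the workflow side could look like once the GitHub OIDC provider and a deployment role exist in IAM (the role name, region, and workflow trigger are assumptions for this example):

# Hypothetical GitHub Actions job that exchanges an OIDC token for
# temporary AWS credentials; no access keys are stored in GitHub secrets.
name: deploy
on: [push]
permissions:
  id-token: write       # required so the job can request an OIDC token
  contents: read
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::<account_id>:role/GithubActionsDeployRole   # assumed role name
          aws-region: us-east-1                                                    # assumed region
      - run: aws sts get-caller-identity   # proves the job holds temporary credentials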

The following is a guide that explains how to configure OIDC integration with GitHub Actions so that long-lived credentials are no longer needed for GitHub Actions. There are also guides for AWS OIDC integration with GitLab and Bitbucket.

Note:

IAM roles support other trusted entity types, such as SAML 2.0 federation and other AWS accounts, which were not covered in this piece. You can read more about them in the AWS documentation on IAM roles.

Conclusion

Infrastructure security is core to the survival of every organization using the AWS Cloud, and it starts with user, application, and system access. This piece has summarized one aspect of managing that access properly: avoiding long-lived credentials, which can pose a huge security risk if not well managed. It has identified several ways to authorize services within and outside AWS without creating an access key and secret key directly from the IAM console, and has pointed to guides on how each of these can be implemented.

References & Further Reading


Share

Leave a Comment

Your email address will not be published. Required fields are marked *

Scroll to Top
×