
AWS Container Orchestration: ECS vs EKS


There was a time when containers were viewed as experimental tools, suitable only for testing and not viable for production. But that perception has changed drastically. Today, companies run large-scale, production-grade systems using containers, spanning thousands of workloads. From powering web applications to handling complex data engineering pipelines and machine learning tasks with platforms like Kubeflow, containers are now an essential part of modern infrastructure. Major players like Netflix rely heavily on containers to run their production environments efficiently. These examples highlight how widely containers are being adopted across various industries.

Despite their popularity, containers still raise important security concerns. Issues such as privilege escalation and vulnerabilities in base images remain real threats. Fortunately, the industry has responded with various container hardening techniques to mitigate these risks. The bottom line: containers are not just a passing trend—they are a foundational technology that’s here to stay.

If you’re just getting started with containers or Docker, it’s a good idea to first check out our Docker Series Part 1 and Part 2. These resources will provide you with the foundational knowledge needed to dive deeper into this topic.

To better understand container use at scale, consider this analogy: In chemistry, a chemist explores concepts and theories, while a chemical engineer focuses on applying and scaling those ideas for industrial use. Similarly, containers may work perfectly on a single machine for development, but scaling them for production introduces new challenges. Factors like availability, security, scalability, fault tolerance, and flexibility all come into play when operating containers at scale.

Container Orchestration to the Rescue

 


This is where container orchestration tools come to the rescue. Container orchestration is designed to solve all of the concerns mentioned earlier, making it easier to scale from a single container on a single machine to hundreds or thousands of containers. The concepts of distributed systems and distributed computing are applied here to maintain these containers in what is logically known as a cluster.

According to Wikipedia, a computer cluster is a set of loosely or tightly connected computers that work together so that, in many respects, they can be viewed as a single system.

When it comes to container orchestration, there are lots of options out there. Some are open-source and some are proprietary. The open-source options can be deployed on-premises (your local server or personal computer) or on the virtual machines of any cloud provider (DigitalOcean, AWS, Azure, Linode). The proprietary alternatives are usually hosted within the provider's environment. An example is Amazon ECS: to use Amazon ECS, you must be within the AWS environment. Of the many options, the most popular is Kubernetes (you can check out our Kubernetes Series Part 1, Part 2, Part 3, Part 4, Part 5, and Part 6).

Phew… that was a lot of stage setting, so let us get back to the difference between ECS and EKS. We will look at each one in turn before we juxtapose them. But first, let us get some basic terms clear; this will help drive the conversation better.

Terms

Cluster: A group of nodes that work together as a single system

Services: An abstraction used to make containers available for consumption, publicly or privately (in both ECS and EKS).

Pod: The smallest unit of deployment in Kubernetes, which runs one or more containers.

Replica: A Kubernetes object (a ReplicaSet) that maintains a specified number of identical copies of a pod.

Task Definition: A declarative configuration of a task and its containers in ECS.

Container: The runtime instance of a Docker image

Task: The smallest unit of deployment in ECS

EKS: Amazon Elastic Kubernetes Service (originally Amazon Elastic Container Service for Kubernetes)

ECS: Amazon Elastic Container Service

Amazon ECS

Amazon ECS stands for Amazon Elastic Container Service. It is a highly scalable container orchestration service that is proprietary to AWS, which means that to use ECS, you must be within the AWS ecosystem. It supports Docker and allows you to run multiple containers from a single Docker image. The major components of ECS are: Cluster, Task Definition, Task, Containers, and Services. ECS can be deployed in two modes: EC2 and Fargate (serverless).

EC2 Model: In this model, the cluster is made up of EC2 instances (virtual machines). The containers are deployed on these EC2 instances and managed through the tasks defined in the task definition. The task then deploys the containers within the EC2 instances created for the cluster. One major advantage of the EC2 model is that you have control over, and choice of, the EC2 instance type you would like to use. For example, if you need to train a machine-learning model with unique GPU requirements, you can choose a GPU-optimized instance for that purpose.
On the flip side, you are in charge of the security patches and network security of the EC2 instances, as stated in the AWS Shared Responsibility Model. You are also responsible for the scalability of the EC2 instances in the cluster. Fortunately, ECS comes with an auto-scaling feature, which can be configured to ensure that the EC2 instances scale up and down as needed.
In terms of cost, you are only charged for the EC2 instances that you run within the cluster and for the VPC networking.

Fargate Model (Serverless): In this model, you do not have to worry about EC2 instances or servers. You select the CPU and memory resources you want, and your containers are deployed there. Unlike EC2, there are no servers to manage. AWS is responsible for the availability and scalability of the containers, but you need to select appropriate CPU and memory values; when these are exhausted, the application may start to become unavailable. In summary, you deliver your Docker image to AWS Fargate, and AWS scales the container based on the configuration you issue to the service. In terms of cost, you are charged based on the CPU and memory you selected: the number of vCPUs and the memory in gigabytes per task determine the cost of running the cluster.
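Since Fargate is billed per vCPU-hour and per GB-hour for each running task, a back-of-the-envelope estimate is easy to script. The rates below are assumed placeholder values, not current AWS pricing; check the AWS Fargate pricing page for your region.

```python
# Illustrative Fargate cost estimate. Rates are ASSUMED, not current pricing.
VCPU_PER_HOUR = 0.04048   # assumed USD per vCPU-hour
GB_PER_HOUR = 0.004445    # assumed USD per GB-hour

def monthly_fargate_cost(vcpus: float, memory_gb: float,
                         tasks: int, hours: float = 730) -> float:
    """Estimate the monthly cost of running `tasks` identical Fargate tasks."""
    per_task = (vcpus * VCPU_PER_HOUR + memory_gb * GB_PER_HOUR) * hours
    return round(per_task * tasks, 2)

# e.g. two tasks with 0.5 vCPU and 1 GB of memory each
print(monthly_fargate_cost(0.5, 1, tasks=2))  # ~36 USD/month at the assumed rates
```

The same shape of calculation applies to EKS on Fargate; only the rates and any control-plane charge differ.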

Whether the EC2 model or the Fargate model is in use, all the components of ECS work together in the same way. Briefly, this is how ECS works:

1. You define the container information in a JSON file called the ECS Task Definition. This can be done using the AWS Console or written with any text editor. The task definition practically defines everything about a container, from the Docker image repository to the launch type (EC2 or Fargate) to the system resources that will be allocated to it. Below is a sample task definition:

{
  "ipcMode": null,
  "executionRoleArn": "arn:aws:iam::123456789876:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "dnsSearchDomains": null,
      "logConfiguration": {
        "logDriver": "awslogs",
        "secretOptions": null,
        "options": {
          "awslogs-group": "/ecs/my-task-def",
          "awslogs-region": "eu-west-2",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "entryPoint": null,
      "portMappings": [
        {
          "hostPort": 80,
          "protocol": "tcp",
          "containerPort": 80
        }
      ],
      "command": null,
      "linuxParameters": null,
      "cpu": 0,
      "environment": [],
      "resourceRequirements": null,
      "ulimits": null,
      "dnsServers": null,
      "mountPoints": [],
      "workingDirectory": null,
      "secrets": null,
      "dockerSecurityOptions": null,
      "memory": null,
      "memoryReservation": null,
      "volumesFrom": [],
      "stopTimeout": null,
      "image": "123456789876.dkr.ecr.eu-west-2.amazonaws.com/my-docker-image:latest",
      "startTimeout": null,
      "firelensConfiguration": null,
      "dependsOn": null,
      "disableNetworking": null,
      "interactive": null,
      "healthCheck": null,
      "essential": true,
      "links": null,
      "hostname": null,
      "extraHosts": null,
      "pseudoTerminal": null,
      "user": null,
      "readonlyRootFilesystem": null,
      "dockerLabels": null,
      "systemControls": null,
      "privileged": null,
      "name": "my-container"
    }
  ],
  "placementConstraints": [],
  "memory": "1024",
  "taskRoleArn": "arn:aws:iam::1234567890987:role/ecsTaskExecutionRole",
  "compatibilities": [
    "EC2",
    "FARGATE"
  ],
  "taskDefinitionArn": "arn:aws:ecs:eu-west-2:1234567890987:task-definition/my-task-def:115",
  "family": "my-task-def",
  "requiresAttributes": [
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
    },
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "ecs.capability.execution-role-awslogs"
    },
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "com.amazonaws.ecs.capability.ecr-auth"
    },
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
    },
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "com.amazonaws.ecs.capability.task-iam-role"
    },
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "ecs.capability.execution-role-ecr-pull"
    },
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
    },
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "ecs.capability.task-eni"
    }
  ],
  "pidMode": null,
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "networkMode": "awsvpc",
  "cpu": "512",
  "revision": 115,
  "status": "ACTIVE",
  "inferenceAccelerators": null,
  "proxyConfiguration": null,
  "volumes": []
}
Sample Task Definition
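Note that the sample above is the output of describe-task-definition: it includes read-only fields (the ARN, revision, status, compatibilities, and so on) that the register-task-definition call will reject. A common trick when cloning or updating a task definition is to strip those fields first. This is a minimal sketch; the field list is based on the ECS API documentation, so verify it against your CLI version.

```python
# Read-only fields returned by describe-task-definition that must be removed
# before the JSON can be passed back to register-task-definition.
READ_ONLY_FIELDS = {
    "taskDefinitionArn", "revision", "status", "compatibilities",
    "requiresAttributes", "registeredAt", "registeredBy", "deregisteredAt",
}

def strip_read_only(task_def: dict) -> dict:
    """Return a copy of a task definition that is safe to re-register."""
    return {k: v for k, v in task_def.items() if k not in READ_ONLY_FIELDS}

cleaned = strip_read_only(
    {"family": "my-task-def", "revision": 115, "status": "ACTIVE", "cpu": "512"}
)
print(cleaned)  # {'family': 'my-task-def', 'cpu': '512'}
```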

2. This task definition is sent over to the ECS service to create one or more tasks. A task is the smallest unit of deployment in ECS. A task practically runs and manages the container: it ensures the container is always up and running and restarts it in the event of a failure. You can have more than one task, which means more than one container, to give your application higher availability.

3. Services are used to allow traffic into the containers. When a container is running in a task, it needs to receive traffic from the internet, and in AWS, one way to make a container receive traffic is via the Elastic Load Balancing service. The service is usually configured during the setup of the task definition, and there is the option to use any of the AWS load balancer types.

Amazon EKS

This is the AWS flavor of the popular Kubernetes technology. AWS has reworked Kubernetes to fit into its infrastructure, such as VPC, Load Balancing, EC2, and NAT Gateways. The components of AWS EKS are the same as the components of Kubernetes, which we referred to at the beginning. The working system of Kubernetes is basically as follows:

1. A declarative file is used to define the Pod/Deployment, which is usually written in YAML. This file defines the Docker image, system resources, and other relevant information needed for the container to run.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myappms
  labels:
    app: myappms
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myappms
  template:
    metadata:
      labels:
        app: myappms
    spec:
      containers:
      - env:
        - name: ConnectionString
          valueFrom:
            secretKeyRef:
              name: interswitchmssecrets
              key: scrConnectionString
        name: myappms
        image: myapp/myappms:staging
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
spec:
  selector:
    app: myappms
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 80
  type: LoadBalancer
Sample Kubernetes File

2. This declarative file is sent over to the Kubernetes API, which eventually starts scheduling pods based on the configuration in the manifest file. Each pod runs the actual container, which is the runtime of the Docker image that was defined in the manifest file.
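The scheduling step is essentially a reconciliation loop: Kubernetes compares the desired replica count in the manifest (replicas: 2 in the sample above) with the pods actually running and starts or stops pods to close the gap. A toy sketch of that idea, not the real controller code:

```python
def reconcile(desired: int, running: int) -> str:
    """Toy model of desired-state reconciliation for a Deployment."""
    if running < desired:
        return f"start {desired - running} pod(s)"
    if running > desired:
        return f"stop {running - desired} pod(s)"
    return "no action"

# One pod has crashed while the manifest asks for two:
print(reconcile(desired=2, running=1))  # start 1 pod(s)
```

ECS behaves analogously: the service compares the desired task count against running tasks and launches replacements when one fails.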

3. In most cases, the Service object that opens traffic to the pods is defined in the manifest file. On EKS, a Service of type LoadBalancer provisions an AWS load balancer to receive traffic into the pods in the cluster.


That breakdown should clearly explain how both technologies operate and the terminology involved. But there are some similarities between them, too, so let's take a moment to highlight those.

Similarities between ECS and EKS

  1. Both Amazon EKS and Amazon ECS (EC2 Model) have Nodes, and these are practically EC2 instances where the containers run.
  2. Both EKS and ECS have a layer of abstraction above containers: Kubernetes calls it a Deployment, while ECS calls it a Service. Looking at their functionality critically, they are quite similar: an ECS service ensures the desired number of tasks is always running based on the task definition, while a Deployment ensures pods are always running to maintain the desired state defined in the manifest file.
  3. Both EKS and ECS support autoscaling for the nodes that back them.
  4. Both EKS and ECS have a holistic abstraction called a Cluster, which is a combination of all working components.
  5. EKS uses manifest files written in YAML; ECS uses task definitions written in JSON. Both define how the containers will run in the cluster.
  6. Both use a load balancer to receive traffic into the containers.
  7. A scheduler in Kubernetes schedules pods, while a task scheduler in ECS schedules tasks.
  8. An ECS task is roughly equivalent to an EKS pod.
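The similarities above boil down to a rough term-for-term mapping. This is a loose analogy for orientation, not an exact equivalence:

```python
# Rough ECS -> EKS vocabulary map, per the similarities listed above.
ECS_TO_EKS = {
    "Task Definition": "Manifest (YAML)",
    "Task": "Pod",
    "Service": "Deployment + Service",
    "Task scheduler": "Scheduler",
    "Cluster": "Cluster",
}

print(ECS_TO_EKS["Task"])  # Pod
```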

CapEx and OpEx Considerations: EKS vs ECS

Both Amazon EKS and ECS involve Capital Expenditures (CapEx), but EKS typically incurs slightly higher upfront costs. This is because EKS includes a fixed monthly charge for the control plane—around $144/month, in addition to the cost of the EC2 instances used for your workloads. ECS, by contrast, doesn’t charge for the control plane; you only pay for the EC2 instances you run.
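The ~$144/month figure corresponds to an assumed control-plane rate of about $0.20 per hour; EKS pricing has changed over time, so check the current EKS pricing page before budgeting:

```python
# Where the ~$144/month control-plane figure comes from, assuming a
# $0.20/hour EKS control-plane rate (verify against current AWS pricing).
hourly_rate = 0.20
print(round(hourly_rate * 24 * 30))  # 144
```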

When it comes to Operational Expenditures (OpEx), both services are similar if you’re using the EC2 launch type. In this setup, you are still responsible for managing the underlying EC2 instances—handling patching, scaling, and monitoring—so the operational overhead remains significant.

However, the equation changes when you use Fargate, the serverless launch type available with both ECS and EKS. With Fargate, AWS manages the infrastructure for you. This drastically reduces OpEx, since you no longer need to manage the underlying servers. Your only cost is the CapEx, based on the CPU and memory allocated to run your containers.

That said, EKS comes with additional operational complexity. Running a production-grade EKS cluster requires in-depth knowledge of Kubernetes components, configurations, and best practices. Tasks like setting up monitoring, networking, RBAC, and security policies all add to the OpEx and require skilled personnel or automation.

Conclusion

So, which option should you go with in the end? It’s easy to feel overwhelmed by the choices, as each platform has its own appeal. Kubernetes (especially with EKS on AWS) has become the go-to solution for container orchestration due to its flexibility and ecosystem. However, it also brings a level of complexity that often isn’t apparent until you’re deep into implementation.

If you’re part of a startup or a small team without a seasoned DevOps engineer familiar with Kubernetes, I recommend starting with ECS Fargate. It offers a much gentler learning curve, simplifies deployment, and can scale automatically with minimal effort. But if your team includes someone highly skilled in Kubernetes, then EKS is a powerful choice that offers greater control and customization for advanced use cases.

