Amazon Elastic Container Service
Amazon Elastic Container Service (Amazon ECS) is a compute service provided by Amazon: a highly scalable, fast container management service that makes it easy to run, stop, and manage Docker containers on a cluster. Amazon ECS lets customers launch and stop container-based applications with simple API calls, allows them to get the state of the cluster from a centralized service, and gives access to many familiar Amazon EC2 features. ECS is a great choice to run containers for several reasons.
AWS customers can run their ECS clusters using AWS Fargate, which is serverless compute for containers. Fargate natively integrates with other services such as Amazon Route 53, Secrets Manager, AWS Identity and Access Management (IAM), and Amazon CloudWatch, providing a familiar experience for deploying and scaling containers.
ECS is used extensively within Amazon to power services such as Amazon SageMaker, AWS Batch, Amazon Lex, and Amazon.com’s recommendation engine, which means ECS is tested extensively for security, reliability, and availability.
- Images are built from a Dockerfile, a plain-text file that specifies all of the components included in the container. These images are then stored in a registry from which they can be downloaded and run on the cluster.
- A Docker container is a standardized unit of software development, containing everything that the client software application needs to run, including code, runtime, system tools, and system libraries. Containers are created from a read-only template called an image.
To deploy applications on Amazon ECS, the application components must be architected to run in containers.
After creating a task definition for the application within Amazon ECS, customers can specify the number of tasks that will run on their cluster. A task is the instantiation of a task definition within a cluster.
- Each task that uses the Fargate launch type has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task.
The Amazon ECS task scheduler is responsible for placing tasks within the cluster.
- Amazon ECS allows customers to define tasks through a declarative JSON template called a Task Definition. Within a Task Definition they can specify one or more containers that are required for the task, including the Docker repository and image, memory and CPU requirements, shared data volumes, and how the containers are linked to each other.
- The API actions provided by ECS allow customers to create and delete clusters, register and deregister tasks, launch and terminate Docker containers, and get detailed information about the state of the cluster and its instances.
Customers can upload a new version of their application task definition, and the Amazon ECS scheduler automatically starts new containers using the updated image and stops containers running the previous version.
- Amazon ECS automatically recovers unhealthy containers to ensure that the desired number of containers is supporting the application.
The task definition is a text file, in JSON format, that describes one or more containers, up to a maximum of ten, that form your application. It can be thought of as a blueprint for your application. Task definitions specify various parameters for your application.
The specific parameters available for the task definition depend on which launch type you are using.
For an application to run on Amazon ECS, customers need to create a task definition.
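As a concrete illustration, a minimal Fargate task definition can be sketched as a JSON document, built here as a Python dict. The family name, image, and sizes are hypothetical placeholders, not values taken from this article.

```python
import json

# A minimal, hypothetical task definition for the Fargate launch type.
# Only "family" and "containerDefinitions" are strictly required;
# Fargate additionally requires awsvpc networking and task-level CPU/memory.
task_definition = {
    "family": "web-app",                      # hypothetical family name
    "networkMode": "awsvpc",                  # required for Fargate
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "256",                             # task-level CPU units
    "memory": "512",                          # task-level memory in MiB
    "containerDefinitions": [
        {
            "name": "web",
            "image": "public.ecr.aws/nginx/nginx:latest",  # example image
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "essential": True,
        }
    ],
}

print(json.dumps(task_definition, indent=2))
```

Registering this document with ECS (for example through the `RegisterTaskDefinition` API) creates a revision of the `web-app` family that the scheduler can then instantiate as tasks.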
An Amazon ECS container instance is an Amazon EC2 instance that runs the Amazon ECS container agent. Amazon ECS downloads the client's container images from a registry that they specify, and runs those images within the cluster. When running tasks using Amazon ECS, users place them on a cluster, which is a logical grouping of resources.
When using the Fargate launch type for tasks within a cluster, Amazon ECS manages the cluster resources.
When using the EC2 launch type, clusters are a group of container instances that customers manage.
Amazon ECS is integrated with AWS Cloud Map, which helps customers discover and connect their containerized services with each other. Cloud Map enables customers to define custom names for application resources, and it maintains the updated location of these dynamically changing resources.
A service mesh makes it easy to build and run complex microservices applications by standardizing how every microservice in the application communicates.
Amazon Elastic Container Service supports Docker networking and integrates with Amazon VPC to provide isolation for containers.
Amazon ECS is integrated with Elastic Load Balancing, allowing customers to distribute traffic across their containers using Application Load Balancers or Network Load Balancers.
Amazon ECS allows clients to specify an IAM role for each ECS task. This allows the Amazon ECS container instances to have a minimal role, with each task receiving only the permissions it needs.
AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task infrastructure. It is a combination of the kernel and container runtime versions.
Fargate Platform Version 1.3.0
- FireLens for Amazon ECS enables customers to use task definition parameters to route logs to an AWS service or AWS Partner Network (APN) destination for log storage and analytics.
- Task recycling for Fargate tasks, which is the process of refreshing tasks that are part of an Amazon ECS service.
- Task definition parameters that enable customers to define a proxy configuration, dependencies for container startup and shutdown, and a per-container start and stop timeout value.
- Support for injecting sensitive data into containers by storing it in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing it in the container definition.
- Support for CloudWatch Container Insights and the Fargate Spot capacity provider.
Fargate Platform Version 1.2.0 enables private registry authentication using AWS Secrets Manager.
Fargate Platform Version 1.1.0 adds the Amazon ECS task metadata endpoint, supports Docker health checks in container definitions, and supports Amazon ECS service discovery.
Clusters are Region-specific
A cluster may contain a mix of tasks using either the Fargate or EC2 launch types. A cluster may contain a mix of both Auto Scaling group capacity providers and Fargate capacity providers; however, when specifying a capacity provider strategy, it may contain one type or the other but not both.
Amazon ECS allows customers to create custom IAM policies to allow or restrict user access to specific clusters.
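As an example of such a restriction, the sketch below builds an IAM policy document (as a Python dict) that scopes task-level actions to a single cluster via the `ecs:cluster` condition key. The account ID, Region, and cluster name are made-up placeholders.

```python
import json

# Hypothetical IAM policy allowing task-level ECS actions only on one
# cluster. The account ID, Region, and cluster name are placeholders.
cluster_arn = "arn:aws:ecs:us-east-1:111122223333:cluster/dev-cluster"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ecs:RunTask", "ecs:StopTask", "ecs:ListTasks"],
            "Resource": "*",
            # The ecs:cluster condition key scopes these actions
            # to the one cluster named above.
            "Condition": {"ArnEquals": {"ecs:cluster": cluster_arn}},
        }
    ],
}

print(json.dumps(policy, indent=2))
```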
AWS Fargate is a technology that is used with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS).
The core task of Fargate is to provision and scale clusters, patch and update each server, handle task placement, and manage the availability of containers. All the user needs to do is define the application’s requirements, select Fargate as the launch type in the console or CLI, and Fargate takes care of the rest.
Each Fargate task has its own isolation boundary, and it does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task.
Amazon ECS EC2 launch type enables customers to manage a cluster of servers and schedule placement of containers on the servers.
In order to take full advantage of Fargate, customers are required to do the following:
Amazon ECS task definitions for Fargate require the network mode to be set to awsvpc. The awsvpc network mode provides each task with its own elastic network interface.
Customers need to specify CPU and memory at the task level. They can also specify CPU and memory at the container level for Fargate tasks, if they desire to do so. Most use cases are satisfied by only specifying these resources at the task level.
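Fargate only accepts certain task-level CPU/memory combinations. The sketch below encodes a representative subset of the documented pairs (CPU units mapped to allowed memory values in MiB) and checks a requested size against them; consult the current AWS documentation for the full table.

```python
# A representative subset of Fargate task-size combinations:
# task CPU units -> allowed task memory values in MiB.
# This is illustrative, not the complete or current table.
FARGATE_SIZES = {
    256: [512, 1024, 2048],
    512: [1024, 2048, 3072, 4096],
    1024: [2048, 3072, 4096, 5120, 6144, 7168, 8192],
}

def is_valid_fargate_size(cpu: int, memory: int) -> bool:
    """Return True if the task-level cpu/memory pair is in the table above."""
    return memory in FARGATE_SIZES.get(cpu, [])

print(is_valid_fargate_size(256, 512))   # True
print(is_valid_fargate_size(256, 4096))  # False
```

A check like this catches an invalid pairing before the task definition is ever registered, rather than at launch time.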
Individual ECS tasks or EKS pods each run in their own dedicated kernel runtime environment and do not share CPU, memory, storage, or network resources with other tasks and pods. This ensures workload isolation and improved security for each task or pod.
Fargate has built-in integrations with other AWS services, including Amazon CloudWatch Container Insights. With these, customers can gather metrics and logs for monitoring their applications, alongside an extensive selection of third-party tools with open interfaces.
The type of instance that clients specify determines the hardware of the host computer used for their instance. Each instance type offers different compute, memory, and storage capabilities, and instance types are grouped into instance families based on these capabilities. Each instance type provides higher or lower minimum performance from a shared resource.
ECS Cluster Auto Scaling
ECS Cluster Auto Scaling (CAS) is a capability provided by AWS that manages the scaling of EC2 Auto Scaling groups (ASGs) on behalf of ECS. With CAS, customers can configure ECS to scale the ASG automatically. Each cluster has one or more capacity providers and an optional default capacity provider strategy.
ECS will ensure the ASG scales in and out as needed with no further intervention required. CAS relies on ECS capacity providers, which provide the link between ECS cluster and the ASGs. Each ASG is associated with a capacity provider, and each such capacity provider has only one ASG, but many capacity providers can be associated with one ECS cluster.
When managed scaling is enabled, Amazon ECS manages the scale-in and scale-out actions of the Auto Scaling group. Amazon ECS creates an AWS Auto Scaling scaling plan with a target tracking scaling policy based on the target capacity value the customer specified.
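Conceptually, managed scaling tracks a CapacityProviderReservation metric (instances needed versus instances running, as a percentage) against the target capacity. The following is a simplified model of that calculation, not the actual ECS implementation; the real service also handles warm-up, step limits, and other details.

```python
import math

def capacity_provider_reservation(needed: int, running: int) -> float:
    """CapacityProviderReservation = needed / running * 100 (simplified)."""
    if running == 0:
        # Simplified special case to force a scale-out when nothing runs.
        return 200.0
    return needed / running * 100

def desired_instances(needed: int, running: int, target_capacity: float) -> int:
    """Instances the ASG should run so the metric lands at target_capacity."""
    return math.ceil(needed * 100 / target_capacity)

# With 10 instances required, 8 running, and a 100% target, the metric
# sits above target, so the ASG scales out to 10 instances.
print(capacity_provider_reservation(10, 8))   # 125.0
print(desired_instances(10, 8, 100))          # 10
```

A target capacity below 100% keeps spare headroom: with a 50% target, the same 10 required instances would drive the ASG toward 20.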
Storage optimized instances are designed for workloads that require high, sequential read and write access to very large data sets on local storage. They are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications.
- H1 and D2 instances feature up to 16 TB and 48 TB of HDD-based local storage respectively; both deliver high disk throughput and a balance of compute and memory. D2 instances offer the lowest price per disk throughput performance on Amazon EC2.
- I3 and I3en instances provide Non-Volatile Memory Express (NVMe) SSD-backed instance storage optimized for low latency and very high random I/O performance. I3 offers high sequential read throughput, while I3en provides high IOPS, high sequential disk throughput, and the lowest price per GB of SSD instance storage on Amazon EC2.
A task definition is the recipe that Amazon ECS uses to run containers in a customer's cluster, and it is written as a JSON document. A task definition is required to run Docker containers in Amazon ECS. Task definitions are split into separate parts: the task family, the IAM task role, the network mode, container definitions, volumes, task placement constraints, and launch types.
The family and container definitions are required in a task definition, while task role, network mode, volumes, task placement constraints, and launch type are optional.
Amazon ECS provides a GPU-optimized AMI that comes ready with pre-configured NVIDIA kernel drivers and a Docker GPU runtime.
Amazon ECS enables you to inject sensitive data into your containers by storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them in your container definition.
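In the container definition, this takes the form of a `secrets` list that maps an environment variable name to a Secrets Manager or Parameter Store ARN. The sketch below shows the shape of that fragment; the image name and both ARNs are made-up placeholders.

```python
import json

# Hypothetical container definition injecting sensitive data as
# environment variables. The image and ARNs are placeholders.
container_definition = {
    "name": "api",
    "image": "my-registry/api:latest",
    "secrets": [
        {
            # Environment variable name inside the container.
            "name": "DB_PASSWORD",
            # A Secrets Manager secret ARN (placeholder).
            "valueFrom": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-pass",
        },
        {
            "name": "API_KEY",
            # A Systems Manager Parameter Store ARN (placeholder).
            "valueFrom": "arn:aws:ssm:us-east-1:111122223333:parameter/api-key",
        },
    ],
}

print(json.dumps(container_definition, indent=2))
```

The secret values themselves never appear in the task definition; ECS resolves the ARNs at container start and exposes the values only inside the running container.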
Capacity Providers manage compute capacity for containers, allowing the application to define its requirements for how it uses the capacity. They can be used to define flexible rules for how containerized workloads run on different types of compute capacity, and to manage the scaling of that capacity. Using Capacity Providers improves the availability, scalability, and cost of running tasks and services on ECS.
A capacity provider is used in association with a cluster to determine the infrastructure that a task runs on. For Amazon ECS on Amazon EC2 users, a capacity provider consists of a name, an Auto Scaling group, and the settings for managed scaling and managed termination protection.
A default capacity provider strategy is associated with each Amazon ECS cluster; it determines the capacity provider strategy the cluster will use if no other capacity provider strategy or launch type is specified when running a task or creating a service.
A capacity provider strategy gives customers control over how their tasks use one or more capacity providers. The capacity provider strategy consists of one or more capacity providers with an optional base and weight specified for each provider.
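A simplified model of how base and weight interact: the base count is satisfied first by its provider, and the remaining tasks are split according to the weights. The sketch below approximates that behavior; the real scheduler also accounts for placement constraints and available capacity, and the provider names here are just the standard Fargate ones used as an example.

```python
def split_tasks(total: int, providers: list[dict]) -> dict:
    """Distribute `total` tasks across capacity providers (simplified).

    Each provider is {"name": str, "base": int, "weight": int}.
    Bases are filled first; the remainder is split proportionally
    by weight, which approximates the real scheduler's behavior.
    """
    counts, remaining = {}, total
    for p in providers:
        take = min(p["base"], remaining)
        counts[p["name"]] = take
        remaining -= take
    weighted = [p for p in providers if p["weight"] > 0]
    total_weight = sum(p["weight"] for p in weighted)
    for p in weighted:
        counts[p["name"]] += remaining * p["weight"] // total_weight
    # Hand any rounding leftover to the highest-weight provider.
    leftover = total - sum(counts.values())
    if leftover and weighted:
        counts[max(weighted, key=lambda p: p["weight"])["name"]] += leftover
    return counts

# e.g. 2 tasks on FARGATE first (base), then a 1:4 split of the rest.
strategy = [
    {"name": "FARGATE", "base": 2, "weight": 1},
    {"name": "FARGATE_SPOT", "base": 0, "weight": 4},
]
print(split_tasks(12, strategy))  # {'FARGATE': 4, 'FARGATE_SPOT': 8}
```

This is how a service can keep a guaranteed floor of on-demand capacity while running the bulk of its tasks on cheaper Spot capacity.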
Capacity Providers work with both EC2 and Fargate, so that customers can create a Capacity Provider associated with an EC2 Auto Scaling Group (ASG).
Splitting running tasks and services across multiple Capacity Providers enables new capabilities such as running a service in a predefined split percentage across Fargate and Fargate Spot, or ensuring that a service runs an equal number of tasks in multiple availability zones without requiring the service to rebalance.
Compute Optimized instances are ideal for compute bound applications that benefit from high performance processors. Instances belonging to this family are well suited for batch processing workloads, media transcoding, high performance web servers, high performance computing (HPC), scientific modeling, dedicated gaming servers and ad server engines, machine learning inference and other compute intensive applications.
- C5n instances are ideal for high compute applications (including High Performance Computing (HPC) workloads, data lakes, and network appliances such as firewalls and routers) that can take advantage of improved network throughput and packet rate performance. C5n instances offer up to 100 Gbps network bandwidth and increased memory over comparable C5 instances.
- C5 instances are optimized for compute-intensive workloads and deliver cost-effective high performance at a low price per compute ratio. C5 instances offer a choice of processors based on the size of the instance.
- C5 instances are ideal for applications where you prioritize raw compute power, such as gaming servers, scientific modeling, high-performance web servers, and media transcoding.
- C4 instances are an earlier generation of Compute-optimized instances, featuring high-performing processors and a low price/compute performance ratio in EC2.