Amazon API Gateway
Amazon API Gateway is a fully managed AWS service for creating, publishing, maintaining, monitoring, and securing REST, HTTP, and WebSocket APIs at any scale. It acts as a “front door” for applications to access data, business logic, or functionality from customers' backend services, such as applications running on Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Container Service (Amazon ECS), or AWS Elastic Beanstalk, code running on AWS Lambda, or any web application.
- API developers can create APIs that access AWS or other web services, as well as data stored in the AWS Cloud. As API Gateway API developers, customers can create APIs for use in their own client applications, or make APIs available to third-party app developers.
- API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, CORS support, authorization and access control, throttling, monitoring, and API version management.
- Amazon API Gateway provides developers with a simple, flexible, fully managed, pay-as-you-go service that handles all aspects of creating and operating robust APIs for application back ends.
API Gateway features
API Gateway has powerful, flexible authentication mechanisms, such as AWS Identity and Access Management policies, Lambda authorizer functions, and Amazon Cognito user pools.
- Using Signature Version 4 authentication, customers can use AWS Identity and Access Management (IAM) and access policies to authorize access to their APIs and all their other AWS resources.
- Customers can use AWS Lambda functions to verify and authorize bearer tokens such as JSON Web Tokens (JWTs) or SAML assertions.
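The Signature Version 4 authentication mentioned above hinges on deriving a signing key from the secret key, the date, the Region, and the service; a minimal sketch using only the Python standard library (the secret key, date, and string to sign below are illustrative, not real credentials):

```python
import hashlib
import hmac

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def derive_sigv4_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the SigV4 signing key: an HMAC chain over date, Region, service, and 'aws4_request'."""
    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

# Sign an illustrative "string to sign" with the derived key.
signing_key = derive_sigv4_key("EXAMPLEKEY", "20240101", "us-east-1", "execute-api")
signature = hmac.new(signing_key, b"example-string-to-sign", hashlib.sha256).hexdigest()
print(len(signature))  # a SHA-256 hex signature is 64 characters
```

The real signing process also canonicalizes the request and builds the string to sign; only the key-derivation step is shown here.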
API Gateway enables customers to manage traffic to their backend systems by letting them set throttling rules based on the number of requests per second for each HTTP method in their APIs.
- API Gateway handles any level of traffic received by an API. Using REST APIs, customers can set up a cache with customizable keys and a time-to-live (TTL) in seconds for the API data, to avoid hitting their backend services for each request.
- API Gateway provides customers with a dashboard to visually monitor calls to the services. The API Gateway console is integrated with Amazon CloudWatch, which means customers get backend performance metrics such as API calls, latency, and error rates.
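The requests-per-second throttling described above is commonly modeled as a token bucket with a steady-state rate and a burst allowance; a toy sketch (the rate and burst values are hypothetical, not API Gateway defaults):

```python
class TokenBucket:
    """Toy model of API Gateway-style throttling: a steady-state rate plus a burst allowance."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill tokens based on elapsed time, capped at the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # this request would receive 429 Too Many Requests

bucket = TokenBucket(rate_per_sec=10, burst=5)
# Six requests arriving at the same instant: five pass on the burst, the sixth is throttled.
results = [bucket.allow(now=0.0) for _ in range(6)]
print(results)  # [True, True, True, True, True, False]
```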
Serverless Developer Portal
Customers can use the Serverless Developer Portal to publish and manage APIs directly from API Gateway. A developer portal is an application that customers use to make their APIs available to their own customers. Once customers publish APIs in a developer portal, their users can:
- Discover which APIs are available.
- Browse the API documentation.
- Register for—and immediately receive—their own API key that can be used to build applications.
- Try out the APIs in the developer portal UI.
- Monitor their own API usage.
Amazon API Gateway regularly publishes updates to the Serverless Developer Portal application in the AWS Serverless Application Repository.
- AWS clients can customize and incorporate it into their build and deployment tools. The front end is written in React and is designed to be fully customizable.
After an API is deployed and in use, API Gateway provides customers with a dashboard to visually monitor calls to the services. The API Gateway console is integrated with Amazon CloudWatch, so that customers can get backend performance metrics such as API calls, latency, and error rates.
- Because API Gateway uses CloudWatch to record monitoring information, AWS clients can set up custom alarms on API Gateway APIs.
- CloudTrail captures all REST API calls for API Gateway as events, including calls from the API Gateway console and from code calls to the API Gateway APIs.
- By creating a trail, customers can enable continuous delivery of CloudTrail events to an Amazon S3 bucket, including events for API Gateway.
- Using the information collected by CloudTrail, customers can determine the request that was made to API Gateway, the IP address from which the request was made, who made the request, when it was made, and more.
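The who/what/where/when analysis described above can be sketched against a pared-down CloudTrail record (the record below is made up for illustration; real events carry many more fields):

```python
import json

# A pared-down CloudTrail record; real events carry many more fields.
record = json.loads("""
{
  "eventSource": "apigateway.amazonaws.com",
  "eventName": "CreateRestApi",
  "eventTime": "2024-01-01T12:00:00Z",
  "sourceIPAddress": "203.0.113.10",
  "userIdentity": {"type": "IAMUser", "userName": "alice"}
}
""")

def summarize(event: dict) -> str:
    """Answer the who/what/where/when questions from a CloudTrail event."""
    return (f"{event['userIdentity'].get('userName', 'unknown')} called "
            f"{event['eventName']} from {event['sourceIPAddress']} "
            f"at {event['eventTime']}")

print(summarize(record))
```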
AWS WAF is a web application firewall that helps protect web applications and APIs from attacks. It enables customers to configure a set of rules (called a web access control list (web ACL)) that allow, block, or count web requests based on customizable web security rules and conditions that they define.
- AWS WAF is a customer's first line of defense against web exploits. When AWS WAF is enabled on an API, AWS WAF rules are evaluated before other access control features, such as resource policies, IAM policies, Lambda authorizers, and Amazon Cognito authorizers.
- Customers can use AWS WAF to protect their API Gateway API from common web exploits, such as SQL injection and cross-site scripting (XSS) attacks.
- Customers do this by creating rules that match a specified string or a regular expression pattern in HTTP headers, the method, the query string, the URI, or the request body (limited to the first 8 KB).
Stateful and stateless APIs
API Gateway supports stateful (WebSocket) and stateless (HTTP and REST) APIs. Using HTTP APIs, customers can build APIs for services behind private ALBs, private NLBs, and IP-based services registered in AWS Cloud Map, such as ECS tasks.
- HTTP API: HTTP APIs are optimized for building APIs that proxy to AWS Lambda functions or HTTP backends, making them ideal for serverless workloads. They do not currently offer API management functionality.
- REST API: REST APIs offer API proxy functionality and API management features in a single solution. REST APIs offer API management features such as usage plans, API keys, publishing, and monetizing APIs.
- WebSocket API: WebSocket APIs maintain a persistent connection between connected clients and API Gateway to enable real-time message communication. With WebSocket APIs in API Gateway, AWS customers can define backend integrations with AWS Lambda functions, Amazon Kinesis, or any HTTP endpoint to be invoked when messages are received from the connected clients.
Using API Gateway, AWS customers can create a custom API to front code running in AWS Lambda and then call the Lambda code from that API. API Gateway can execute AWS Lambda code in the customer's account, start AWS Step Functions state machines, or make calls to AWS Elastic Beanstalk, Amazon EC2, or web services outside of AWS with publicly accessible HTTP endpoints.
- Using the API Gateway console, customers can define the REST API and its associated resources and methods.
- They can also manage their API lifecycle, generate client SDKs, and view API metrics.
Canary release is a software development strategy in which a new version of an API (or other software) is deployed as a canary release for testing purposes, while the base version remains deployed as the production release for normal operations on the same stage.
- In a canary release deployment, total API traffic is separated at random into a production release and a canary release with a pre-configured ratio. On average, the canary release receives a small percentage of the API traffic and the production release takes the rest.
- By keeping canary traffic small and the selection random, it protects most users from potential bugs in the new version.
- With a canary release enabled, customers can use the stage cache to store responses and use cached entries to return results to subsequent canary requests, within a pre-configured time-to-live (TTL) period.
- In a canary release deployment, the production release and canary release of the API can be associated with the same version or with different versions.
- When they are associated with different versions, responses for production and canary requests are cached separately and the stage cache returns corresponding results for production and canary requests.
- When the production release and canary release are associated with the same deployment, the stage cache uses a single cache key for both types of requests and returns the same response for the same requests from the production release and canary release.
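The random traffic split described above can be sketched as follows (the 5% canary share is an illustrative setting, not a default):

```python
import random

def route_request(canary_percent: float, rng: random.Random) -> str:
    """Randomly assign a request to the canary or the production release."""
    return "canary" if rng.random() < canary_percent / 100 else "production"

rng = random.Random(42)  # seeded so the sketch is reproducible
counts = {"canary": 0, "production": 0}
for _ in range(10_000):
    counts[route_request(canary_percent=5.0, rng=rng)] += 1

share = counts["canary"] / 10_000
print(f"canary share: {share:.1%}")  # close to the configured 5%
```

Because the split is random per request, the observed canary share converges on the configured ratio as traffic grows, which is what keeps the blast radius of a bad canary small.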
Using AWS X-Ray, customers can trace and analyze user requests as they travel through their Amazon API Gateway APIs to the underlying services. API Gateway supports X-Ray tracing for all API Gateway endpoint types: Regional, edge-optimized, and private. Customers can use X-Ray with Amazon API Gateway in all AWS Regions where X-Ray is available.
- Because X-Ray gives an end-to-end view of an entire request, AWS clients can analyze latencies in their APIs and their backend services.
- They can use an X-Ray service map to view the latency of an entire request and that of the downstream services that are integrated with X-Ray.
- Customers can also configure sampling rules to tell X-Ray which requests to record, and at what sampling rates, according to criteria that they specify.
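A common sampling scheme, which X-Ray's rules follow, combines a fixed per-second reservoir with a fixed rate applied to the remainder; a simplified sketch (the reservoir size and rate here are illustrative):

```python
import random

def sample_decision(reservoir_remaining: int, fixed_rate: float, rng: random.Random):
    """X-Ray-style sampling: take from the per-second reservoir first,
    then sample the remaining requests at a fixed rate."""
    if reservoir_remaining > 0:
        return True, reservoir_remaining - 1
    return rng.random() < fixed_rate, 0

rng = random.Random(7)  # seeded so the sketch is reproducible
reservoir = 1  # e.g., one guaranteed trace for this one-second window
decisions = []
for _ in range(100):
    sampled, reservoir = sample_decision(reservoir, fixed_rate=0.05, rng=rng)
    decisions.append(sampled)

print(decisions[0], sum(decisions))  # the first request is always traced
```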
API Gateway concepts
API Gateway is an AWS service that supports:
- Creating, deploying, and managing a RESTful application programming interface (API) to expose backend HTTP endpoints, AWS Lambda functions, or other AWS services.
- Creating, deploying, and managing a WebSocket API to expose AWS Lambda functions or other AWS services.
- Invoking exposed API methods through the frontend HTTP and WebSocket endpoints.
The metrics reported by API Gateway provide information that AWS customers can analyze in different ways. The following are some common uses for the metrics:
- Monitor the IntegrationLatency metrics to measure the responsiveness of the backend.
- Monitor the Latency metrics to measure the overall responsiveness of customers API calls.
- Monitor the CacheHitCount and CacheMissCount metrics to optimize cache capacities to achieve a desired performance.
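Two of the quantities above can be derived directly from the raw metric values; a small sketch with illustrative numbers:

```python
def cache_hit_rate(hits: int, misses: int) -> float:
    """CacheHitCount / (CacheHitCount + CacheMissCount)."""
    total = hits + misses
    return hits / total if total else 0.0

def gateway_overhead_ms(latency_ms: float, integration_latency_ms: float) -> float:
    """Latency minus IntegrationLatency approximates API Gateway's own overhead."""
    return latency_ms - integration_latency_ms

print(cache_hit_rate(hits=900, misses=100))       # 0.9
print(gateway_overhead_ms(250.0, 230.0))          # 20.0
```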
API Gateway HTTP
An HTTP API is a collection of routes and methods that are integrated with backend HTTP endpoints or Lambda functions. Customers can deploy this collection in one or more stages. Each route can expose one or more API methods that have unique HTTP verbs supported by API Gateway.
- AWS customers can use API Gateway for critical production applications, ranging from simple HTTP proxies to full API management with request transformation, authentication, and validation.
- HTTP APIs focus on delivering enhanced features, improved performance, and an easier developer experience for customers building with API Gateway.
- There are two API Gateway namespaces for managing API Gateway deployments. The API V1 namespace represents REST APIs and API V2 represents WebSocket APIs and the new HTTP APIs.
API Gateway WebSocket
A collection of WebSocket routes and route keys that are integrated with backend HTTP endpoints, Lambda functions, or other AWS services. Customers can deploy this collection in one or more stages. API methods are invoked through frontend WebSocket connections that customers can associate with a registered custom domain name.
- The WebSocket Protocol enables two-way communication between a client running untrusted code in a controlled environment and a remote host that has opted in to communications from that code.
- The security model used for this is the origin-based security model commonly used by web browsers. The protocol consists of an opening handshake followed by basic message framing, layered over TCP.
API Gateway REST
A REST API in API Gateway is a collection of resources and methods that are integrated with backend HTTP endpoints, Lambda functions, or other AWS services. Customers can use API Gateway features to help them with all aspects of the API lifecycle, from creation through monitoring the production APIs.
- API resources are organized in a resource tree according to the application logic. Each API resource can expose one or more API methods that have unique HTTP verbs supported by API Gateway.
- Representational state transfer (REST) is a software architectural style that defines a set of constraints to be used for creating Web services. Web services that conform to the REST architectural style, called RESTful Web services, provide interoperability between computer systems on the Internet.
- API Gateway REST APIs use a request/response model where a client sends a request to a service and the service responds back synchronously. This kind of model is suitable for many different kinds of applications that depend on synchronous communication.
- Customers can monitor API execution by using CloudWatch, which collects and processes raw data from API Gateway into readable, near-real-time metrics
- RESTful Web services allow the requesting systems to access and manipulate textual representations of Web resources by using a uniform and predefined set of stateless operations.
API Gateway Resources
API Gateway acts as a "front door" for applications to access data, business logic, or functionality from backend services, such as workloads running on Amazon Elastic Compute Cloud (Amazon EC2), code running on AWS Lambda, any web application, or real-time communication applications.
API deployment :- A point-in-time snapshot of your API Gateway API. To be available for clients to use, the deployment must be associated with one or more API stages.
API developer:- The AWS account that owns an API Gateway deployment (for example, a service provider that also supports programmatic access).
API endpoint:- A hostname for an API in API Gateway that is deployed to a specific Region. The following types of API endpoints are supported:
- Edge-optimized API endpoint
- Private API endpoint
- Regional API endpoint
API key:- An alphanumeric string that API Gateway uses to identify an app developer who uses a customer's REST or WebSocket API.
- API Gateway can generate API keys on customers behalf, or they can import them from a CSV file.
- They can use API keys together with Lambda authorizers or usage plans to control access to their APIs.
App developer:- An app creator who may or may not have an AWS account and interacts with the API that customers, the API developer, have deployed.
- App developers are the API developer's customers.
Callback URL:- When a new client connects through a WebSocket connection, AWS clients can call an integration in API Gateway to store the client's callback URL. They can then use that callback URL to send messages to the client from the backend system.
Developer portal:- An application that allows an API developer's customers to register for, discover, and subscribe to API products (API Gateway usage plans), manage their API keys, and view their usage metrics for the APIs.
Proxy integration:- A simplified API Gateway integration configuration. Customers can set up a proxy integration as an HTTP proxy integration or a Lambda proxy integration.
- For HTTP proxy integration, API Gateway passes the entire request and response between the frontend and an HTTP backend.
- For Lambda proxy integration, API Gateway sends the entire request as input to a backend Lambda function. API Gateway then transforms the Lambda function output to a frontend HTTP response.
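For Lambda proxy integration, the function receives the whole request as `event` and must return a response object carrying the status code, headers, and body; a minimal handler sketch (the greeting logic is made up):

```python
import json

def handler(event: dict, context=None) -> dict:
    """Lambda proxy integration: the entire request arrives in `event`,
    and the return value must carry statusCode/headers/body for API Gateway
    to transform into a frontend HTTP response."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

response = handler({"queryStringParameters": {"name": "dev"}})
print(response["statusCode"], response["body"])
```

Note that `body` must be a string (here, serialized JSON), not a nested object.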
Quick create:- Using quick create, customers can create an HTTP API with a simplified workflow. Quick create sets up an API with a Lambda or HTTP integration, a default catch-all route, and a default stage that is configured to automatically deploy changes.
Regional API endpoint:- The host name of an API that is deployed to the specified Region and intended to serve clients, such as EC2 instances, in the same AWS Region. API requests are targeted directly to the Region-specific API Gateway API without going through any CloudFront distribution.
- AWS customers can apply latency-based routing on Regional endpoints to deploy an API to multiple Regions using the same Regional API endpoint configuration, set the same custom domain name for each deployed API, and configure latency-based DNS records in Route 53 to route client requests to the Region that has the lowest latency.
Route:- A WebSocket route in API Gateway is used to direct incoming messages to a specific integration, such as an AWS Lambda function, based on the content of the message. When customers define the WebSocket API, they specify a route key and an integration backend.
- The route key is an attribute in the message body. When the route key is matched in an incoming message, the integration backend is invoked.
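The route-key matching described above can be sketched as a dictionary lookup with a `$default` fallback (the route table and truncated ARNs below are illustrative placeholders):

```python
def select_route(message_body: dict, routes: dict, selection_key: str = "action"):
    """Match the route key attribute in the message body against the defined routes,
    falling back to the $default route when nothing matches."""
    key = message_body.get(selection_key)
    return routes.get(key, routes["$default"])

# Illustrative route table; the ARNs are placeholders, not real resources.
routes = {
    "sendmessage": "arn:aws:lambda:...:function:SendMessage",
    "$default": "arn:aws:lambda:...:function:Default",
}
print(select_route({"action": "sendmessage", "data": "hi"}, routes))
print(select_route({"action": "unknown"}, routes))
```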
Route request:- The public interface of a WebSocket API method in API Gateway that defines the body that an app developer must send in the requests to access the backend through the API.
Route response:- The public interface of a WebSocket API that defines the status codes, headers, and body models that an app developer should expect from API Gateway.
Usage plan:- A usage plan provides selected API clients with access to one or more deployed REST or WebSocket APIs. Customers can use a usage plan to configure throttling and quota limits, which are enforced on individual client API keys.
WebSocket connection:- API Gateway maintains a persistent connection between clients and API Gateway itself. There is no persistent connection between API Gateway and backend integrations such as Lambda functions. Backend services are invoked as needed, based on the content of messages received from clients.
Edge-optimized API endpoint:- The default hostname of an API Gateway API that is deployed to the specified Region while using a CloudFront distribution to facilitate client access typically from across AWS Regions.
- API requests are routed to the nearest CloudFront Point of Presence (POP), which typically improves connection time for geographically diverse clients.
Integration request:- The internal interface of a WebSocket API route or REST API method in API Gateway, in which customers map the body of a route request or the parameters and body of a method request to the formats required by the backend.
Integration response:- The internal interface of a WebSocket API route or REST API method in API Gateway, in which customers map the status codes, headers, and payload that are received from the backend to the response format that is returned to a client app.
Mapping template:- A script in Velocity Template Language (VTL) that transforms a request body from the frontend data format to the backend data format, or that transforms a response body from the backend data format to the frontend data format.
- Mapping templates can be specified in the integration request or in the integration response.
- They can reference data made available at runtime as context and stage variables.
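Mapping templates themselves are written in VTL; the kind of transformation one performs can be sketched in Python (the field names and the stage variable below are made up for illustration):

```python
def map_request(frontend_body: dict, stage_variables: dict) -> dict:
    """Transform a frontend request body into a hypothetical backend format,
    mixing in runtime context the way a template references stage variables."""
    return {
        "customer_id": frontend_body["userId"],              # rename a field
        "items": [i["sku"] for i in frontend_body["cart"]],  # reshape a list
        "backend_env": stage_variables["env"],               # stage-variable lookup
    }

mapped = map_request(
    {"userId": "u-123", "cart": [{"sku": "A1"}, {"sku": "B2"}]},
    {"env": "prod"},
)
print(mapped)
```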
Method request:- The public interface of a REST API method in API Gateway that defines the parameters and body that an app developer must send in requests to access the backend through the API.
Method response:- The public interface of a REST API that defines the status codes, headers, and body models that an app developer should expect in responses from the API.
Mock integration:- In a mock integration, API responses are generated from API Gateway directly, without the need for an integration backend. As an API developer, you decide how API Gateway responds to a mock integration request. For this, you configure the method’s integration request and integration response to associate a response with a given status code.
Model:- A data schema specifying the data structure of a request or response payload. A model is required for generating a strongly typed SDK of an API. It is also used to validate payloads.
- A model is convenient for generating a sample mapping template to initiate creation of a production mapping template.
Private API endpoint:- An API endpoint that is exposed through interface VPC endpoints and allows a client to securely access private API resources inside a VPC. Private APIs are isolated from the public internet, and they can only be accessed using VPC endpoints for API Gateway that have been granted access.
Private integration:- An API Gateway integration type for a client to access resources inside a customer’s VPC through a private REST API endpoint without exposing the resources to the public internet.
Amazon CloudFront is an AWS service that speeds up distribution of a customer's static and dynamic web content, such as .html, .css, .js, and image files, to users. It securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.
- CloudFront speeds up the distribution of the content by routing each user request through the AWS backbone network to the edge location that can best serve the content.
- CloudFront is integrated with AWS – both physical locations that are directly connected to the AWS global infrastructure, as well as other AWS services.
- CloudFront works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code closer to customers’ users and to customize the user experience.
- Using AWS origins, customers can improve performance, reliability, and ease of use as a result of AWS’s ability to track and adjust origin routes, monitor system health, respond quickly when any issues occur, and the integration of Amazon CloudFront with other AWS services.
Amazon CloudFront, AWS Shield, AWS Web Application Firewall (WAF), and Amazon Route 53 work seamlessly together to create a flexible, layered security perimeter against multiple types of attacks including network and application layer DDoS attacks. With Amazon CloudFront, customers can deliver their content, APIs or applications via SSL/TLS, and advanced SSL features are enabled automatically.
- Using AWS Certificate Manager (ACM), customers can create a custom SSL certificate and deploy to their CloudFront distribution for free.
To deliver content to end users with lower latency, Amazon CloudFront uses a global network of 216 Points of Presence with 205 Edge Locations and 11 Regional Edge Caches in 84 cities across 42 countries. Amazon CloudFront Edge locations are located in:
- North America with Regional Edge caches being located in Virginia; Ohio; Oregon.
- Europe with Regional Edge caches being located in Frankfurt, Germany; London, England.
- Asia with Regional Edge caches in Mumbai, India; Singapore; Seoul, South Korea; Tokyo, Japan.
- Australia with Regional Edge caches being in Sydney.
- South America with Regional Edge caches located in São Paulo, Brazil.
- Middle East Edge location located in Dubai; Fujairah; Manama; Tel Aviv.
- Africa Edge locations located in Cape Town, South Africa; Nairobi, Kenya.
- China Edge locations located in Beijing; Shenzhen; Shanghai; Zhongwei.
By using Amazon CloudFront, customers can cache their content in CloudFront’s edge locations worldwide and reduce the workload on the origin by only fetching content from the origin when needed.
- CloudFront also allows customers to set up multiple origins to enable redundancy in their backend architecture.
- Customers can use CloudFront’s native origin failover capability to automatically serve their content from a backup origin when their primary origin is unavailable.
- The origins that customers set up with origin failover can be any combination of AWS origins like EC2 instances, Amazon S3 buckets, or Media Services, or non-AWS origins like an on-premises HTTP server
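The failover behavior above can be sketched as: try the primary origin, and retry against the secondary on an error or a failover status code (the origins here are simulated callables standing in for, e.g., an S3 bucket and a backup HTTP server):

```python
def fetch_with_failover(primary, secondary, failover_codes=(500, 502, 503, 504)):
    """Try the primary origin; on a failover status code or a connection error,
    retry the request against the secondary origin."""
    try:
        status, body = primary()
    except OSError:
        return secondary()
    if status in failover_codes:
        return secondary()
    return status, body

# Simulated origins returning (status, body) pairs.
failing_primary = lambda: (503, "primary unavailable")
healthy_backup = lambda: (200, "served from backup origin")

print(fetch_with_failover(failing_primary, healthy_backup))  # (200, 'served from backup origin')
```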
With Amazon CloudFront, customers can restrict access to their content through a number of capabilities. With Signed URLs and Signed Cookies, they can support Token Authentication to restrict access to only authenticated viewers.
- Through geo-restriction capability, customers can prevent users in specific geographic locations from accessing content that they’re distributing through CloudFront.
- With the Origin Access Identity (OAI) feature, customers can restrict access to an Amazon S3 bucket so that it is only accessible from CloudFront.
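A signed URL with a custom policy starts from a policy document naming the resource and an expiry time, encoded with CloudFront's URL-safe base64 substitutions; a sketch that stops before the RSA signing step (the distribution URL and epoch time are illustrative):

```python
import base64
import json

def cloudfront_policy(url: str, expires_epoch: int) -> str:
    """Build a CloudFront custom policy and encode it with CloudFront's
    URL-safe base64 variant (+ becomes -, = becomes _, / becomes ~).
    The actual signature step (RSA over this policy) is omitted here."""
    policy = json.dumps(
        {"Statement": [{
            "Resource": url,
            "Condition": {"DateLessThan": {"AWS:EpochTime": expires_epoch}},
        }]},
        separators=(",", ":"),
    )
    encoded = base64.b64encode(policy.encode("utf-8")).decode("ascii")
    return encoded.replace("+", "-").replace("=", "_").replace("/", "~")

print(cloudfront_policy("https://d111111abcdef8.cloudfront.net/private/video.mp4", 1767225600))
```

The encoded policy travels in the `Policy` query parameter of the signed URL, alongside the `Signature` and `Key-Pair-Id` parameters.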
Amazon CloudFront is continuously measuring internet connectivity, performance, and computing to find the best way to route requests to its network, taking into account performance, load, operational status, and other factors to deliver the best experience in real time.
- Amazon CloudFront is optimized for both cacheable and dynamic content, providing extensive flexibility for optimizing cache behavior, coupled with network-layer optimizations for latency and throughput.
- CloudFront supports the WebSocket protocol as well as the HTTP protocol with the following HTTP methods: GET, HEAD, POST, PUT, DELETE, OPTIONS, and PATCH.
- The content delivery network (CDN) is architected to keep objects longer in cache and to reduce cache churn.
- Techniques including tiered caching and de-duplication optimization of objects in cache help maximize cache retention.
Amazon CloudFront provides developers with a full-featured API to create, configure and maintain their CloudFront distributions. Developers also have access to a number of tools such as AWS CloudFormation, CodeDeploy, CodeCommit and AWS SDKs to configure and deploy their workloads with Amazon CloudFront.
With built-in device detection, CloudFront can detect the device type, such as desktop, tablet, smart TV, or mobile device, and pass that information in the form of new HTTP headers to the customer's application to easily adapt content variants or other responses.
- Amazon CloudFront can also detect the country-level location of the requesting user for further customization of the response.
- Using Lambda@Edge customers can respond to requests at the lowest latency across AWS locations globally.
- For web or mobile requests, the computation they trigger can be served from locations closer to the users who make them.
CloudFront Content Delivery
How Regional Caches Work
Regional edge caches are CloudFront locations that are deployed globally and located between the customer's origin server and the POPs (global edge locations that serve content directly to viewers).
- Regional edge caches have a larger cache than an individual POP, so objects remain in the cache longer at the nearest regional edge cache location, which keeps more of the customer's content closer to their viewers.
- When a viewer makes a request on the website or through the application, DNS routes the request to the POP that can best serve the user’s request.
- If the files are not in the POP's cache, the request goes to the nearest regional edge cache location, which checks its own cache for the requested files. If the files are in the cache, CloudFront forwards them to the POP that requested them. As soon as the first byte arrives from the regional edge cache location, CloudFront begins to forward the files to the user.
- CloudFront adds the files to the cache in the POP for the next time someone requests those files.
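The two-tier lookup described above can be sketched as a POP check, then a regional edge cache check, then an origin fetch, with the caches populated on the way back (a deliberately simplified model of the behavior):

```python
def get_object(key, pop_cache, regional_cache, origin):
    """Two-tier lookup: POP first, then the regional edge cache, then the origin.
    Each miss populates the caches on the way back (a simplified model)."""
    if key in pop_cache:
        return pop_cache[key], "pop"
    if key in regional_cache:
        pop_cache[key] = regional_cache[key]
        return pop_cache[key], "regional-edge-cache"
    value = origin[key]          # fetch from the origin server
    regional_cache[key] = value  # keep a copy at the regional edge cache
    pop_cache[key] = value       # and at the POP for the next request
    return value, "origin"

origin = {"/img/logo.png": b"...bytes..."}
pop, regional = {}, {}
print(get_object("/img/logo.png", pop, regional, origin)[1])  # origin
print(get_object("/img/logo.png", pop, regional, origin)[1])  # pop
```

A second POP with an empty cache would still find the object at the shared regional edge cache, which is exactly why less-popular objects survive longer there.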
Once AWS customers configure CloudFront to deliver their content, here is what happens when users request their files:
- A user accesses the website or application and requests one or more files, such as an image file and an HTML file.
- DNS routes the request to the CloudFront POP (edge location) that can best serve the request, typically the nearest POP in terms of latency.
- In the POP, CloudFront checks its cache for the requested files. If the files are in the cache, CloudFront returns them to the user.
Regional Edge Caches
CloudFront points of presence (POPs), or edge locations, make sure that popular content can be served quickly to viewers. CloudFront also has regional edge caches that bring more of the content closer to viewers, even when the content is not popular enough to stay at a POP.
- Regional edge caches help with all types of content, particularly content that tends to become less popular over time. Examples include user-generated content, such as video, photos, or artwork; e-commerce assets such as product photos and videos; and news and event-related content that might suddenly find new popularity.
CloudFront Use Cases
Using CloudFront can help you accomplish a variety of goals. This section lists just a few, together with links to more information, to give you an idea of the possibilities.
Serve Video On Demand or Live Streaming Video
CloudFront offers several options for streaming your media to global viewers—both pre-recorded files and live events.
- For video on demand (VOD) streaming, using CloudFront, customers can stream video in common formats such as MPEG DASH, Apple HLS, Microsoft Smooth Streaming, and CMAF, to any device.
- For broadcasting a live stream, they can cache media fragments at the edge, so that multiple requests for the manifest file that delivers the fragments in the right order can be combined, to reduce the load on the origin server.
Customize at the Edge
Running serverless code at the edge opens up a number of possibilities for customizing the content and experience for viewers, at reduced latency.
- AWS clients can return a custom error message when their origin server is down for maintenance, so viewers don’t get a generic HTTP error message.
- They can use a function to help authorize users and control access to their content, before CloudFront forwards a request to their origin.
- Using Lambda@Edge with CloudFront enables a variety of ways to customize the content that CloudFront delivers.
Accelerate Static Website
- A simple approach for storing and delivering static content is to use an Amazon S3 bucket.
- Using S3 together with CloudFront has a number of advantages, including the option to use Origin Access Identity (OAI) to easily restrict access to your S3 content.
Encrypt Specific Fields
When configuring HTTPS with CloudFront, AWS clients already have secure end-to-end connections to origin servers. When they add field-level encryption, they can protect specific data throughout system processing, in addition to HTTPS security, so that only certain applications at the origin can see the data.
- To set up field-level encryption, customers need to add a public key to CloudFront, and then specify the set of fields that they want to be encrypted with the key.
Using Lambda@Edge, customers can configure their CloudFront distribution to serve private content from their own custom origin, as an alternative to using signed URLs or signed cookies.
- Customers can use several techniques to restrict access to their origin exclusively to CloudFront, including allow-listing CloudFront IP ranges in the firewall and using a custom header to carry a shared secret.
AWS Direct Connect
AWS Direct Connect links a customer's internal network to an AWS Direct Connect location over a standard Ethernet fiber-optic cable. With this connection, customers can create virtual interfaces directly to public AWS services or to Amazon VPC, bypassing internet service providers in their network path. Using AWS Direct Connect, AWS clients can establish private connectivity between AWS and their datacenter, office, or colocation environment, which in many cases can reduce network costs, increase bandwidth throughput, and provide a more consistent network experience than internet-based connections.
- Using AWS Direct Connect, data is delivered through a private network connection between AWS and the customer's datacenter or corporate network.
- All AWS services, including Amazon EC2, Amazon VPC, Amazon S3, and Amazon DynamoDB can be used with AWS Direct Connect.
- Each AWS Direct Connect connection can be configured with one or more virtual interfaces. Virtual interfaces can be configured to access AWS services such as Amazon EC2, Amazon EBS, and Amazon S3 using public IP space, or resources in a VPC using private IP space.
- An AWS Direct Connect location provides access to AWS in the Region with which it is associated. Customers can use a single connection in a public Region or AWS GovCloud (US) to access public AWS services in all other public Regions.
Direct Connect Features
AWS Direct Connect reduces customers' network costs into and out of AWS in two ways.
- By transferring data to and from AWS directly, customers can reduce their bandwidth commitment to the Internet service provider.
- All data transferred over a customer's dedicated connection is charged at the reduced AWS Direct Connect data transfer rate rather than Internet data transfer rates.
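The rate difference can be made concrete with a quick calculation. The per-GB rates below are hypothetical placeholders for illustration only, not current AWS pricing.

```python
# Assumed per-GB Data Transfer Out rates (placeholders, not real AWS prices).
INTERNET_RATE_PER_GB = 0.09
DIRECT_CONNECT_RATE_PER_GB = 0.02

def monthly_savings(gb_out: float) -> float:
    """Difference between internet and Direct Connect egress charges."""
    return gb_out * (INTERNET_RATE_PER_GB - DIRECT_CONNECT_RATE_PER_GB)

# For 10 TB/month of egress under these assumed rates:
print(monthly_savings(10_000))  # 700.0
```

Actual savings depend on the Region, the Direct Connect location, and current published rates.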
AWS Direct Connect makes it easy for customers to scale their connections to meet their needs. AWS Direct Connect provides 1 Gbps and 10 Gbps connections, and customers can easily provision multiple connections if they need more capacity.
- AWS customers can use AWS Direct Connect instead of establishing a VPN connection over the Internet to their Amazon VPC.
With AWS Direct Connect, customers can transfer their business-critical data directly between their datacenter, office, or colocation environment and AWS, bypassing their Internet service provider, which removes network congestion.
- AWS Direct Connect's simple pay-as-you-go pricing and lack of minimum commitment mean customers pay only for the network ports they use and the data transferred over the connection.
AWS customers can use AWS Direct Connect to establish a private virtual interface from their on-premises network directly to Amazon VPC, providing a private, high-bandwidth network connection between their network and their VPC.
- With multiple virtual interfaces, customers can even establish private connectivity to multiple VPCs while maintaining network isolation.
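A private virtual interface request might be built programmatically with boto3 as sketched below. The connection ID, virtual gateway ID, VLAN, ASN, and interface name are all hypothetical placeholders, and the actual API call requires AWS credentials.

```python
def private_vif_request(connection_id: str, vgw_id: str, vlan: int, asn: int) -> dict:
    """Build parameters for directconnect.create_private_virtual_interface."""
    return {
        "connectionId": connection_id,
        "newPrivateVirtualInterface": {
            "virtualInterfaceName": "vpc-a-private-vif",  # hypothetical name
            "vlan": vlan,                                 # 802.1Q VLAN tag
            "asn": asn,                                   # customer-side BGP ASN
            "virtualGatewayId": vgw_id,                   # VGW attached to the VPC
        },
    }

def create_vif(params: dict):
    # Sketch only: requires AWS credentials and a provisioned connection.
    import boto3
    return boto3.client("directconnect").create_private_virtual_interface(**params)

params = private_vif_request("dxcon-ffffffff", "vgw-12345678", 101, 65000)
print(params["newPrivateVirtualInterface"]["vlan"])  # 101
```

Repeating this with distinct VLANs and gateway IDs yields the multiple isolated virtual interfaces described above.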
AWS customers can sign up for AWS Direct Connect service quickly and easily using the AWS Management Console.
- The console provides a single view to efficiently manage all of a customer's connections and virtual interfaces.
- Customers can also download customized router templates for their networking equipment after configuring one or more virtual interfaces.
Applications that use real-time data feeds can also benefit from using AWS Direct Connect. Applications like voice and video perform best when network latency remains constant.
- With AWS Direct Connect, customers control how their data is routed, which can provide a more consistent network experience over Internet-based connections.
AWS Direct Connect enables customers to build hybrid environments that satisfy regulatory requirements requiring the use of private connectivity.
- Hybrid environments allow customers to combine the elasticity and economic benefits of AWS with the ability to utilize other infrastructure that they already own.
Direct Connect Resiliency Toolkit
AWS offers its clients the ability to achieve highly resilient network connections between Amazon VPC and their on-premises infrastructure. The Direct Connect Resiliency Toolkit provides a connection wizard with multiple resiliency models. These models help customers order dedicated connections to achieve their SLA objective. Once customers select a resiliency model, the Direct Connect Resiliency Toolkit guides them through the dedicated connection ordering process.
- The resiliency models are designed to ensure that customers have the appropriate number of dedicated connections in multiple locations.
The best practice is to use the Connection wizard in the Direct Connect Resiliency Toolkit to order the dedicated connections that achieve the SLA objective. These resiliency models are available in the AWS Direct Connect Resiliency Toolkit:
- Maximum Resiliency: This model provides customers a way to order dedicated connections to achieve an SLA of 99.99%. It requires them to meet all of the requirements for achieving the SLA that are specified in the AWS Direct Connect Service Level Agreement.
- High Resiliency: This model provides customers a way to order dedicated connections to achieve an SLA of 99.9%. It requires customers to meet all of the requirements for achieving the SLA that are specified in the AWS Direct Connect Service Level Agreement.
- Development and Test: This model provides customers a way to achieve development and test resiliency for non-critical workloads by using separate connections that terminate on separate devices in one location.
- Classic: This model is intended for users that have existing connections and want to add additional connections. This model does not provide an SLA.
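The four models can be summarized as a small lookup table. This sketch assumes the toolkit's standard topologies (two connections at each of two locations for Maximum Resiliency, one connection at each of two locations for High Resiliency, two connections at one location for Development and Test); verify counts against the current AWS documentation.

```python
# Assumed topology per resiliency model (None = not applicable / no SLA).
RESILIENCY_MODELS = {
    "maximum":      {"sla": "99.99%", "locations": 2, "connections_per_location": 2},
    "high":         {"sla": "99.9%",  "locations": 2, "connections_per_location": 1},
    "dev_and_test": {"sla": None,     "locations": 1, "connections_per_location": 2},
    "classic":      {"sla": None,     "locations": None, "connections_per_location": None},
}

def total_connections(model: str):
    """Total dedicated connections the model orders (None for Classic)."""
    m = RESILIENCY_MODELS[model]
    if m["locations"] is None:
        return None  # Classic builds on whatever connections already exist
    return m["locations"] * m["connections_per_location"]

print(total_connections("maximum"))  # 4
print(total_connections("high"))     # 2
```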
The Direct Connect Resiliency Toolkit has the following benefits:
- Provides guidance to help AWS clients determine and then order the appropriate redundant AWS Direct Connect dedicated connections.
- Ensures that the redundant dedicated connections have the same speed.
- Automatically configures the dedicated connection names.
- Automatically approves customers' dedicated connections when they have an existing AWS account and select a known AWS Direct Connect Partner. The Letter of Authorization (LOA) is available for immediate download.
- Automatically creates a support ticket for the dedicated connection approval when the client is new to AWS services.
- Provides an order summary for the customer's dedicated connections with the SLA that they can achieve and the port-hour cost for the ordered dedicated connections.
- Creates link aggregation groups (LAGs), and adds the appropriate number of dedicated connections to the LAGs when customers choose a speed other than 1 Gbps or 10 Gbps.
- Provides a LAG summary with the dedicated connection SLA that customers can achieve, and the total port-hour cost for each ordered dedicated connection as part of the LAG.
- Prevents customers from terminating the dedicated connections on the same AWS Direct Connect device.
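Creating a LAG programmatically might look like the following boto3 sketch. The location code and LAG name are hypothetical placeholders, and the actual call requires AWS credentials.

```python
def lag_request(location: str, count: int, bandwidth: str = "10Gbps") -> dict:
    """Build parameters for directconnect.create_lag; all member
    connections in a LAG share the same bandwidth."""
    return {
        "numberOfConnections": count,
        "location": location,          # Direct Connect location code (placeholder)
        "connectionsBandwidth": bandwidth,
        "lagName": "prod-lag",         # hypothetical name
    }

def create_lag(params: dict):
    # Sketch only: requires AWS credentials.
    import boto3
    return boto3.client("directconnect").create_lag(**params)

print(lag_request("EqDC2", 2)["connectionsBandwidth"])  # 10Gbps
```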
Types of connection
AWS Direct Connect enables customers to establish a dedicated network connection between their network and one of the AWS Direct Connect locations. There are two types of connections: dedicated connections and hosted connections.
After customers have downloaded the Letter of Authorization and Connecting Facility Assignment (LOA-CFA), they need to complete the cross-network connection, also known as a cross connect.
- AWS Direct Connect is available at locations around the world. In some campus settings, AWS Direct Connect is accessible via a standard cross-connect from other data centers operated by the same provider on the same campus.
- With Direct Connect Gateway and global public Virtual Interfaces, customers can access any other AWS Region from their chosen location.
With the introduction of the granular Data Transfer Out allocation feature, the AWS account responsible for the Data Transfer Out will be charged for the Data Transfer Out performed over a transit/private virtual interface. The AWS account responsible for the Data Transfer Out will be determined based on the customer’s use of the private/transit virtual interface as follows:
- A private virtual interface is used to interface with Amazon Virtual Private Cloud(s), with or without Direct Connect gateway(s). In the case of the private virtual interface, the AWS account owning the AWS resources responsible for the Data Transfer Out will be charged.
- A transit virtual interface is used to interface with AWS Transit Gateway(s). In the case of the transit virtual interface, the AWS account owning the Amazon Virtual Private Cloud(s) attached to the AWS Transit Gateway associated with the Direct Connect gateway attached to the transit virtual interface will be charged. Note that all applicable AWS Transit Gateway-specific charges (Data Processing and Attachment) are in addition to the AWS Direct Connect Data Transfer Out.
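The split between Direct Connect and Transit Gateway charges on a transit virtual interface can be illustrated with placeholder rates. The per-GB figures below are hypothetical, not real AWS prices.

```python
# Assumed per-GB rates (placeholders for illustration only).
DX_DTO_RATE = 0.02          # Direct Connect Data Transfer Out
TGW_PROCESSING_RATE = 0.02  # Transit Gateway data processing

def transit_vif_charges(gb: float) -> dict:
    """Charges billed to the account owning the attached VPC, per the
    granular Data Transfer Out allocation described above."""
    return {
        "direct_connect_dto": gb * DX_DTO_RATE,
        "tgw_data_processing": gb * TGW_PROCESSING_RATE,
        "total": gb * (DX_DTO_RATE + TGW_PROCESSING_RATE),
    }

print(transit_vif_charges(1_000)["total"])  # 40.0 under these assumed rates
```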
Dedicated connection
A physical Ethernet connection associated with a single customer. Customers can request a dedicated connection through the AWS Direct Connect console, the CLI, or the API.
- AWS customers can add a dedicated connection to a link aggregation group (LAG), which allows them to treat multiple connections as a single one.
- After customers create a connection, they need to create a virtual interface to connect to public and private AWS resources.
- These are the available operations for a dedicated connection:
- Creating a connection
- Viewing connection details
- Updating a connection
- Deleting connections
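The four operations above might be sketched with boto3 as follows. The location code, connection names, and overall flow are illustrative, and the live calls require AWS credentials.

```python
def connection_request(location: str, bandwidth: str, name: str) -> dict:
    """Build parameters for directconnect.create_connection."""
    return {"location": location, "bandwidth": bandwidth, "connectionName": name}

def connection_lifecycle():
    # Sketch only: requires AWS credentials and a real location code.
    import boto3
    dx = boto3.client("directconnect")
    conn = dx.create_connection(**connection_request("EqDC2", "1Gbps", "prod-dx"))
    dx.describe_connections(connectionId=conn["connectionId"])   # view details
    dx.update_connection(connectionId=conn["connectionId"],
                         connectionName="prod-dx-renamed")       # update
    dx.delete_connection(connectionId=conn["connectionId"])      # delete

print(connection_request("EqDC2", "1Gbps", "prod-dx")["bandwidth"])  # 1Gbps
```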
Hosted connection
A physical Ethernet connection that an AWS Direct Connect Partner provisions on behalf of a customer. Customers request a hosted connection by contacting a partner in the AWS Direct Connect Partner Program, who provisions the connection.
- After receiving a connection request, AWS makes a Letter of Authorization and Connecting Facility Assignment (LOA-CFA) available for download.
- Once AWS customers accept a hosted connection, they need to create a virtual interface in order to connect to public and private AWS resources.
- These are the available operations for a hosted connection:
- Creating a connection
- Viewing connection details
- Updating a connection
- Deleting connections
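The acceptance step for a hosted connection maps to the `confirm_connection` API. A minimal boto3 sketch, with a hypothetical connection ID:

```python
def confirm_request(connection_id: str) -> dict:
    """Build parameters for directconnect.confirm_connection."""
    return {"connectionId": connection_id}

def accept_hosted_connection(connection_id: str):
    # Sketch only: requires AWS credentials; the returned state reflects
    # the connection moving out of the 'ordering' stage once accepted.
    import boto3
    dx = boto3.client("directconnect")
    return dx.confirm_connection(**confirm_request(connection_id))

print(confirm_request("dxcon-ffffffff"))  # {'connectionId': 'dxcon-ffffffff'}
```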