A Guideline to AWS Architecture

One of the most in-demand qualifications for cloud engineers is the AWS Certified Solutions Architect – Associate certification. Although there are a number of players in the public cloud space, including Google Cloud and Microsoft Azure to name a few, Amazon Web Services (AWS) certifications are by far the most sought-after, because AWS holds the largest public cloud market share.

While the focus of this website is how to work with specific Amazon web services, one should also understand the importance of holding a certification on the subject matter, especially if you are not yet a cloud engineer.

For the moment, we at Amazon Bate will focus only on showing you the way we understand AWS, rather than on the AWS Solutions Architect Certification itself. We hope, however, that the process will serve as a good guideline for your certification journey. The services we will cover include Amazon Elastic Compute Cloud (EC2), Virtual Private Cloud (VPC), Identity and Access Management (IAM), Simple Storage Service (S3), and Relational Database Service (RDS). To fully understand Amazon Web Services, the first step is to consult the AWS Certified Solutions Architect Study Guide and the AWS white papers. Here at Amazon Bate, we have worked through all the material that needs to be covered in order to achieve the goals set by the Study Guide.

First things first: understanding Amazon Web Services is not an easy task. You will not master AWS from a single web page like this one of roughly 5,000 words. This page merely explains the most popular AWS services, such as Amazon EC2, S3, the database services, and VPC. In this introductory part of the website, however, we will go through what you need to know about AWS services in order to fully understand and practice cloud computing.

By definition, cloud computing is the on-demand delivery of IT resources over the Internet with pay-as-you-go pricing. This simply means that instead of buying, owning, and maintaining physical data centers and servers, AWS customers can access technology services, such as computing power, storage, and databases, on an as-needed basis. This introductory part also covers key concepts and design considerations specific to AWS. It includes a discussion of how to take advantage of attributes that are specific to the dynamic nature of cloud computing, such as elasticity and infrastructure automation. These patterns provide the context for a more detailed review of choices, operational status, and implementation status as detailed in the AWS Well-Architected Framework.

  • The AWS Cloud includes many design patterns and architectural options that are customer centric. It is important to identify and recognize cloud architectural considerations, such as fundamental components and effective designs. The following are the 10 AWS design principles for building cloud services.

1. Disposable Resources Instead of Fixed Servers

The AWS infrastructure environment is provisioned dynamically. Because of this, servers (instances) and other components are treated as temporary resources: customers can launch as many as they need, and use them only for as long as they need them.

Immutable infrastructure:- Using immutable infrastructure patterns, AWS clients can launch an instance and never update it in place. Instead, when there is a problem or a need for an update, the problem server is replaced with a new server that has the latest configuration.

  • This enables resources to always be in a consistent (and tested) state, and makes rollbacks easier to perform.

Bootstrapping:- After launching an AWS resource such as an EC2 instance or an Amazon RDS DB instance, customers can execute automated bootstrapping actions: scripts that install software or copy data to bring that resource to a certain state.
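As a minimal sketch of bootstrapping (the AMI ID is a placeholder you would replace for your Region), the following launches an EC2 instance with a user-data script that installs a web server at first boot:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Bootstrapping action: a shell script that runs once at first boot.
user_data = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd
"""

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,               # boto3 base64-encodes this for you
)
print(response["Instances"][0]["InstanceId"])
```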

Golden Images:- A golden image is a snapshot of a particular state of a resource, and AWS customers can use it to launch certain AWS resources such as EC2 instances, Amazon RDS DB instances, and Amazon Elastic Block Store volumes.

  • When compared to the bootstrapping approach, a golden image results in faster start times, and it removes dependencies on configuration services or third-party repositories.
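As an illustration, here is a minimal sketch of baking a golden image from an already-configured instance with boto3 (the instance ID and names are placeholders); instances launched from the resulting AMI start with the captured state already in place:

```python
import boto3

ec2 = boto3.client("ec2")

# Capture the instance's current state as a reusable golden image (AMI).
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",  # hypothetical, pre-configured instance
    Name="web-server-golden-2024-01",
    Description="Golden image with web stack pre-installed",
)

# Wait until the AMI is ready before launching instances from it.
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])
print(image["ImageId"])
```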

Docker:- Docker is an open-source technology that allows AWS customers to build and deploy distributed applications inside software containers. It allows customers to package a piece of software in a Docker image, which is a standardized unit for software development containing everything the software needs to run: code, runtime, system tools, system libraries, etc.

  • AWS Elastic Beanstalk, Amazon Elastic Container Service (Amazon ECS), and AWS Fargate let customers deploy and manage multiple containers across a cluster of EC2 instances.

Hybrid:- AWS customers can combine the bootstrapping and golden image approaches to configure their resources: some parts of the configuration are captured in a golden image, while others are configured dynamically through a bootstrapping action.

  • Items that do not change often or that introduce external dependencies will typically be part of your Golden Image.
  • Items that change often can be set up dynamically through bootstrapping actions.

2. Automation 

When deploying AWS resources, customers can use automation to improve both system stability and the efficiency of their organization. The following services help make application architectures more resilient, scalable, and performant.

AWS Elastic Beanstalk:- Using the Elastic Beanstalk service, customers can deploy and scale web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.

  • Once customers upload their application code, the service automatically handles all the details, such as resource provisioning, load balancing, auto scaling, and monitoring.

Amazon EC2 auto recovery:- AWS customers can create an Amazon CloudWatch alarm that monitors an EC2 instance and automatically recovers it if it becomes impaired.

  • A recovered instance is identical to the original instance, including the instance ID, private IP addresses, Elastic IP addresses, and all instance metadata. 
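A minimal sketch of such an alarm, assuming a placeholder instance ID and Region: it watches the StatusCheckFailed_System metric and triggers the built-in EC2 recover action.

```python
import boto3

region = "us-east-1"
instance_id = "i-0123456789abcdef0"  # placeholder instance

cloudwatch = boto3.client("cloudwatch", region_name=region)
cloudwatch.put_metric_alarm(
    AlarmName=f"recover-{instance_id}",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    # Built-in action that recovers the impaired instance on new hardware.
    AlarmActions=[f"arn:aws:automate:{region}:ec2:recover"],
)
```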

AWS Systems Manager:- Using Systems Manager, customers can automatically collect software inventory, apply OS patches, create system images to configure Windows and Linux operating systems, and execute arbitrary commands.

  • Provisioning these AWS services simplifies the operating model and ensures the optimum environment configuration.

Auto Scaling:- Auto Scaling maintains application availability and scales Amazon EC2, Amazon DynamoDB, Amazon ECS, and Amazon Elastic Container Service for Kubernetes capacity up or down automatically according to conditions customers define; a minimal policy sketch follows the bullets below.

  • Customers can use Auto Scaling to help ensure that they are running the desired number of healthy EC2 instances across multiple Availability Zones.
  • Auto Scaling can automatically increase the number of EC2 instances during demand spikes to maintain performance and decrease capacity during less busy periods to optimize costs.
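As a hedged sketch (the Auto Scaling group name is a placeholder), a target tracking policy like the following keeps average CPU near 50%, scaling out on spikes and back in during quiet periods:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Attach a target tracking policy to an existing (hypothetical) group.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",  # placeholder group name
    PolicyName="keep-cpu-at-50-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,  # scale to hold average CPU near 50%
    },
)
```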

Amazon CloudWatch alarms:- By creating a CloudWatch alarm, customers can send an Amazon Simple Notification Service (Amazon SNS) message when a particular metric goes beyond a specified threshold for a specified number of periods.

Those Amazon SNS messages can automatically kick off the execution of a subscribed Lambda function, enqueue a notification message to an Amazon SQS queue, or perform a POST request to an HTTP or HTTPS endpoint.

Amazon CloudWatch Events:- CloudWatch Events delivers a near real-time stream of system events that describe changes in AWS resources.

  • Using simple rules, AWS clients can route each type of event to one or more targets, such as Lambda functions, Kinesis streams, and SNS topics.

AWS Lambda scheduled events:- By creating a Lambda function, customers can configure AWS Lambda to execute the function on a regular schedule.
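To make the pattern concrete, here is a minimal sketch that creates a CloudWatch Events rule on a daily schedule and points it at a hypothetical Lambda function; the function's resource policy must also allow events.amazonaws.com to invoke it, which is done with add_permission:

```python
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

# Placeholder function ARN.
function_arn = "arn:aws:lambda:us-east-1:123456789012:function:nightly-cleanup"

# Rule that fires once a day; event patterns work the same way via EventPattern=.
rule_arn = events.put_rule(
    Name="nightly-cleanup-schedule",
    ScheduleExpression="rate(1 day)",
    State="ENABLED",
)["RuleArn"]

# Route the rule to the Lambda function.
events.put_targets(
    Rule="nightly-cleanup-schedule",
    Targets=[{"Id": "nightly-cleanup", "Arn": function_arn}],
)

# Allow CloudWatch Events to invoke the function.
lambda_client.add_permission(
    FunctionName="nightly-cleanup",
    StatementId="allow-events-invoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn,
)
```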

AWS WAF security automations:- AWS WAF is a web application firewall that enables customers to create custom, application-specific rules that block common attack patterns that can affect application availability, compromise security, or consume excessive resources.

  • Customers can administer AWS WAF completely through APIs, which makes security automation easy by enabling rapid rule propagation and fast incident response.

3. Loose Coupling

As application complexity increases, a desirable attribute of an IT system is that it can be broken into smaller, loosely coupled components. This means that a change or a failure in one component should not cascade to other components.

A way to reduce interdependencies in a system is to allow the various components to interact with each other only through specific, technology-agnostic interfaces, such as RESTful APIs. This way, customers can modify the underlying implementation without affecting other components.

  • One of the resources available to AWS customers is the fully managed Amazon API Gateway service, which enables them to create, publish, maintain, monitor, and secure APIs at any scale. It handles all tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management.

Service Discovery:- By using a load balancer, which gets its own hostname, AWS customers can consume a service through a stable endpoint for any Amazon EC2-hosted service. This can be combined with DNS and private Amazon Route 53 zones, so that a particular load balancer's endpoint can be abstracted and modified at any time.

Asynchronous Integration:- Asynchronous integration is another form of loose coupling between services. This model is suitable for any interaction that does not need an immediate response and where an acknowledgement that a request has been registered will suffice. 

  • It involves one component that generates events and another that consumes them. The two components do not integrate through direct point-to-point interaction but usually through an intermediate durable storage layer, such as an SQS queue or a streaming data platform such as Amazon Kinesis, cascading Lambda events, AWS Step Functions, or Amazon Simple Workflow Service.
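A minimal sketch of this pattern with Amazon SQS (queue name and payload are illustrative): the producer registers the request in the durable queue and returns immediately, while a separate consumer processes it later. A matching consumer that polls this queue appears in the Simple Queue Service section near the end of this page.

```python
import json
import boto3

sqs = boto3.client("sqs")

# Durable intermediate layer between producer and consumer.
queue_url = sqs.create_queue(QueueName="orders")["QueueUrl"]

# Producer: register the request and return immediately.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"order_id": "1234", "action": "ship"}),
)
```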

4. Databases

Relational Databases:- Relational databases (also known as SQL databases) normalize data into well-defined tabular structures known as tables, which consist of rows and columns. They provide a powerful query language, flexible indexing capabilities, strong integrity controls, and the ability to combine data from multiple tables in a fast and efficient manner. Amazon RDS makes it easy to set up, operate, and scale a relational database in the cloud, and it supports many familiar database engines.

NoSQL Databases:- Using NoSQL databases instead of relational databases, users trade some of the query and transaction capabilities of relational databases for a more flexible data model that seamlessly scales horizontally. NoSQL databases use a variety of data models, including graphs, key-value pairs, and JSON documents, and are widely recognized for ease of development, scalable performance, high availability, and resilience.

  • Amazon DynamoDB is a fast and flexible NoSQL database service for applications that need consistent, single-digit millisecond latency at any scale, and it is a fully managed cloud database that supports both document and key-value store models.
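As a brief illustration (the table name and key schema are assumptions), the key-value model looks like this with boto3's resource interface, assuming a table with partition key user_id already exists:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Users")  # hypothetical table with partition key "user_id"

# Write an item; attributes beyond the key are schemaless.
table.put_item(Item={"user_id": "42", "name": "Alice", "plan": "pro"})

# Read it back by key.
item = table.get_item(Key={"user_id": "42"}).get("Item")
print(item)
```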

Data Warehouse:- A data warehouse is a specialized type of relational database, optimized for the analysis and reporting of large amounts of data. It can be used to combine transactional data from disparate sources, such as user behavior in a web application, data from your finance and billing system, or your customer relationship management system.

  • Amazon Redshift is a managed data warehouse service that is designed to operate at less than a tenth the cost of traditional solutions.

Graph Databases:- A graph is defined as consisting of edges (relationships) that directly relate nodes (data entities) in the store, and a graph database uses these structures for queries. The relationships enable data in the store to be linked together directly, which allows for fast retrieval of complex hierarchical structures that would require expensive join operations in relational systems.

  • Graph databases are purposely built to store and navigate relationships and are typically used in use cases like social networking, recommendation engines, and fraud detection, in which AWS customers are able to create relationships between data and quickly query these relationships.

5. Services, Not Servers 

AWS offers a broad set of compute, storage, database, analytics, application, and deployment services that help organizations move faster and lower IT costs. 

  • Managed Services:- AWS managed services provide building blocks for developers; the managed services include databases, machine learning, analytics, queuing, search, email, notifications, and more.
    • These services include Amazon SQS for queuing, Amazon S3 for storage, Amazon CloudFront for content delivery, ELB for load balancing, Amazon DynamoDB for NoSQL databases, Amazon CloudSearch for search workloads, Amazon Elastic Transcoder for video encoding, and Amazon Simple Email Service (Amazon SES) for sending and receiving email.
  • Serverless Architectures:- Serverless architectures can reduce the operational complexity of running applications. AWS clients can build both event-driven and synchronous services for mobile, web, analytics, CDN business logic, and IoT without managing any server infrastructure. This reduces costs because customers don't have to manage or pay for underutilized servers, or provision redundant infrastructure to implement high availability.
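To give a flavor of the serverless model, a complete Lambda function can be as small as the following sketch; AWS runs it only when an event arrives, so there is no server for the customer to manage:

```python
import json

def lambda_handler(event, context):
    """Minimal event-driven handler: echo back the caller's name."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```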

6. Removing Single Points of Failure

A system is highly available when it can withstand the failure of an individual component or multiple components, such as hard disks, servers, and network links. Single points of failure can be removed by introducing redundancy, which means AWS customers have multiple resources for the same task. Redundancy can be implemented in either standby or active mode.

  • Standby redundancy:- When a resource fails, functionality is recovered on a secondary resource through a failover process. The failover typically takes some time to complete, and during this period the resource remains unavailable. The secondary resource can either be launched automatically only when needed (to reduce cost), or it can already be running idle (to accelerate failover and minimize disruption). Standby redundancy is often used for stateful components such as relational databases.
  • Active redundancy:- Requests are distributed to multiple redundant compute resources. When one of them fails, the rest simply absorb a larger share of the workload. Compared to standby redundancy, active redundancy achieves better utilization and affects a smaller population when there is a failure.

7. Optimize for Cost 

Moving existing architectures into the cloud not only reduces capital expenses and drives savings, but by iterating and using more AWS capabilities, customers can realize further opportunities to create cost-optimized cloud architectures.

  • To help customers identify cost-saving opportunities and keep their resources right-sized, AWS provides tools such as AWS Trusted Advisor and AWS Cost Explorer.
    • To make those tools' outcomes easy to interpret, customers should define and implement a tagging policy for their AWS resources.
  • By implementing Auto Scaling for Amazon EC2 workloads, customers can horizontally scale up when needed, and scale down to automatically reduce their spending when capacity is no longer required. AWS customers can also take advantage of the variety of purchasing options that AWS offers:
    • Amazon EC2 On-Demand Instances
    • Reserved Instances
    • Spot Instances

8. Caching 

Caching is a technique that stores previously calculated data for future use. It improves application performance and increases the cost efficiency of an implementation.

  • Application Data Caching:- Applications can be designed to store and retrieve information from fast, fully managed, in-memory caches. Cached information might include the results of I/O-intensive database queries, or the outcome of computationally intensive processing.
    • Amazon ElastiCache is an AWS web service that can be used to deploy, operate, and scale an in-memory cache in the cloud (a cache-aside sketch appears at the end of this section). There are two open-source in-memory caching engines that AWS customers can implement:
      • Memcached
      • Redis
  • Edge Caching:- Using Amazon CloudFront, a CDN with multiple points of presence around the world, AWS customers can cache copies of static content, such as images, CSS files, or streaming pre-recorded video, at edge locations. They can also cache dynamic content such as responsive HTML and live video.
    • Edge caching allows content to be served by infrastructure that is closer to viewers, which lowers latency and provides high, sustained data transfer rates.
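Returning to application data caching, a common pattern with ElastiCache for Redis is cache-aside, sketched below with the redis-py client (the endpoint and the query_database helper are hypothetical): read from the cache first, fall back to the database on a miss, then populate the cache with a TTL.

```python
import json
import redis

# Placeholder ElastiCache endpoint; in practice, read it from configuration.
cache = redis.Redis(host="my-cache.example.amazonaws.com", port=6379)

def get_user(user_id: str) -> dict:
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)            # cache hit
    user = query_database(user_id)           # hypothetical DB lookup on a miss
    cache.setex(key, 300, json.dumps(user))  # populate cache with a 5-minute TTL
    return user
```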

9. Scalability

Scaling Vertically:- AWS clients can scale vertically by increasing the specifications of an individual resource, for example upgrading a server with a larger hard drive or a faster CPU.

Scaling Horizontally:- AWS clients can scale horizontally by increasing the number of resources, for example adding more hard drives to a storage array or more servers to support an application.

Stateless Application:- A stateless application is an application that needs no knowledge of previous interactions and stores no session information (session information being unique per-user data that persists between requests while the user interacts with the application).

  • Since most of the available compute resources such as EC2 instances and AWS Lambda functions can service any request, stateless applications have the ability to scale horizontally.

Distribute Load to Multiple Nodes:- AWS clients can distribute the workload to multiple nodes in their environment, using either a push or a pull model.

  • With a push model, customers can use Elastic Load Balancing (ELB) to distribute a workload; ELB routes incoming application requests across multiple EC2 instances.
  • For asynchronous, event-driven workloads, customers can implement the pull model for tasks that need to be performed or data that needs to be processed, and stored as messages in a queue using Amazon Simple Queue Service (Amazon SQS) or Amazon Kinesis.

Stateless components, stateful components, session affinity, and distributed processing are other concepts that AWS customers can use to scale their resources.

10. Security

AWS provides many features that can help customers build defense-in-depth architectures. These methods include building a VPC topology that isolates parts of the infrastructure through the use of:

  • Subnets 
  • Security groups
  • Routing controls 
  • AWS WAF (web application firewall)

For access control, AWS customers can use IAM to define a granular set of policies and assign them to:

  • Users
  • Groups

The AWS Cloud also offers many options to protect customers' data with encryption, whether it is in transit or at rest.

Shared Security Responsibility with AWS

AWS is responsible for the security of the underlying cloud infrastructure, and its customers are responsible for securing the workloads they deploy in AWS. This helps customers reduce the scope of their responsibility and focus on their core competencies through the use of AWS managed services.

  • Reduce Privileged Access
  • Security as Code
  • Real Time Auditing

Planning and design:- AWS provides the Well-Architected Framework, which is based on five pillars: Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization.

Operational Excellence:- Operational Excellence simply means the ability to run and monitor systems that deliver business value, and continually improve supporting processes and procedures. 

Security:- A Well-Architected workload should be able to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.

Reliability:- A Well-Architected framework should be able to recover from infrastructure or service disruptions, and dynamically acquire computing resources to meet demand. It also should be able to mitigate disruptions such as misconfigurations or transient network issues. 

Performance Efficiency:- A Well-Architected workload should use computing resources efficiently to meet system requirements, and maintain that efficiency as demand changes and technologies evolve.

Cost Optimization:- Finally, a Well-Architected workload needs to run systems that deliver business value at the lowest possible price.

Logging:- Logging is the simplest way of collecting data for measurement, and it plays an important role in any organization: it provides a way to measure the health of hardware devices and software applications alike. Log sources can be network devices, operating systems, applications, or cloud services.

Monitoring:- Using Amazon CloudWatch, AWS clients can monitor their instances; CloudWatch collects and processes raw data from Amazon EC2 into readable, near real-time metrics. These statistics are recorded for a period of 15 months, so that customers can access historical information and gain a better perspective on how their web application or service is performing.

Compute

Elastic Compute Cloud (EC2)

 

Amazon Elastic Compute Cloud (EC2) is a web service that provides secure and resizable compute capacity in the cloud, offering different instance types that allow AWS customers to choose the CPU, memory, storage, and networking capacity they need to run their applications. AWS offers three main purchasing options: On-Demand Instances, Spot Instances, and Reserved Instances. Each instance type offers different compute, memory, and storage capabilities, and instance types are grouped in instance families based on these capabilities.

  • Reserved Instances:- Reserved Instances are not physical instances; rather, they are a billing discount applied to the use of On-Demand Instances in the customer's account. These On-Demand Instances must match certain attributes, such as instance type and Region, in order to benefit from the billing discount.
    • Instance type: For example, m4.large. This is composed of the instance family (m4) and the instance size (large).
    • Region: The Region in which the Reserved Instance is purchased.
    • Tenancy: Whether your instance runs on shared (default) or single-tenant (dedicated) hardware. 
    • Platform: The operating system; for example, Windows or Linux/Unix. 
  • On-Demand:- An On-Demand Instance is an instance that is used as needed. AWS customers have full control over its life cycle: they decide when to launch, stop, hibernate, start, reboot, or terminate it. No long-term commitment is required when customers purchase On-Demand Instances; they pay only for the seconds that the instances are in the running state.
  • Spot:- A Spot Instance is an unused EC2 instance that is available for less than the On-Demand price. Because Spot Instances enable customers to request unused EC2 instances at steep discounts, the hourly price for a Spot Instance is called a Spot price.
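As a minimal sketch (the AMI ID and price cap are placeholders), a Spot Instance can be requested through the same run_instances call by adding market options:

```python
import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "MaxPrice": "0.05",  # cap at $0.05/hour; omit to pay the current Spot price
            "SpotInstanceType": "one-time",
        },
    },
)
print(response["Instances"][0]["InstanceId"])
```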

Amazon Machine Image (AMI)

Amazon Machine Image (AMI) is an AWS template (appliance) that contains a software configuration such as an operating system, an application server, and applications.

  • Configure an Amazon Machine Image (AMI).
  • Operate and extend service management in a hybrid IT architecture.
  • Configure services to support compliance requirements in the cloud.
  • Launch instances across the AWS global infrastructure.
  • Configure IAM policies and best practices.

Architectural trade-off Decisions

  • Understand and identify areas where increasing the performance of a workload will have a positive impact on efficiency or the customer experience.
  • Research and understand the various design patterns and services that help improve workload performance, and identify what needs to be traded off to achieve higher performance.
  • When evaluating performance-related improvements, consider the impact of each choice on customers and on workload efficiency.
  • As changes are made to improve performance, evaluate the metrics and data that were collected to determine the impact the improvement had on the workload. This measurement helps customers understand the gains that result from the tradeoff, and determine whether any negative side effects were introduced.
  • Use strategies like caching data to prevent excessive network or database calls, using read replicas for database engines to improve read rates, and sharding or compressing data where possible to reduce data volumes. In short, when possible, utilize a number of strategies to improve performance.

 

AWS Elastic Beanstalk

AWS Elastic Beanstalk is an AWS service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.

  • Use Case
  • AWS Beanstalk Features 
  • Application
  • Application version
  • Environment
  • Environment tier
  • Platform

AWS Lambda

AWS Lambda is an AWS compute service that lets customers run code without provisioning or managing servers. AWS Lambda executes users' code only when needed and scales automatically, from a few requests per day to thousands per second.

Topics include:

  • AWS Lambda Features
  • Function
  • Runtime
  • Event
  • Concurrency
  • Trigger

AWS Batch

AWS Batch is an Amazon web service that allows developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. AWS Batch dynamically provisions the optimal quantity and type of compute resources, including CPU- and memory-optimized instances, based on the volume and specific resource requirements of the batch jobs submitted.

Topics include:

  • AWS Batch Features 
  • Jobs
  • Job Definitions
  • Job Queues
  • Job Scheduling
  • Compute Environments

Amazon Container Service

Amazon Elastic Container Service (ECS) is a highly scalable, high performance container management service that supports Docker containers and it enables AWS customers to easily run applications on a managed cluster of Amazon EC2 instances.

Topics include:

  • Amazon ECS Features 
  • Docker Basics
  • AWS Fargate
  • Clusters
  • Task Definitions
  • Container Instances
  • Container Agent
  • Scheduling Tasks

Amazon Kubernetes

Amazon Elastic Kubernetes Service (EKS) is a managed Kubernetes service that allows AWS customers to run Kubernetes on AWS without needing to install, operate, and maintain their own Kubernetes control plane.

Topics include:

  • Amazon EKS Features 
  • Cluster endpoint access
  • Deleting a cluster
  • Kubernetes versions
  • Platform versions
  • Worker nodes

Amazon Container Registry

Amazon Elastic Container Registry (ECR) is a fully-managed Docker container registry that enables developers to store, manage, and deploy Docker container images. Docker is a technology that allows you to build, run, test, and deploy distributed applications that are based on Linux containers.

Topics include:

  • Amazon ECR Features 
  • Registry
  • Authorization token
  • Repositories
  • Images

Amazon Lightsail

AWS Lightsail is designed to be the easiest way to launch and manage a virtual private server with AWS. Lightsail plans include everything customers need to jumpstart a project, such as running WordPress websites (including this one, amazonbate.com), in a virtual machine with SSD-based storage, data transfer, DNS management, and a static IP, for a low, predictable price.

Topics include:

  • Amazon Lightsail Features 
  • DNS zone
  • Domain
  • WordPress
  • Bitnami
  • Load balancers
  • Static IP
  • Snapshot

Networking

Network computing is a term that refers to computers or nodes working together over a network. It is sometimes called cloud computing, a kind of Internet-based computing that provides shared processing resources and data to devices on demand, and is closely related to distributed computing.

Amazon Virtual Private Cloud

Amazon Virtual Private Cloud (VPC) is a logically isolated section of the Amazon Web Services (AWS) cloud that functions as an AWS customer's own data center.

VPC Basics:

  • Default vs Custom VPC
  • Private and Public subnet creation with valid CIDR block.
  • Create and Assign Internet Gateway
  • VPC tenancy
  • Routing tables for private and public subnets
  • Launching instances inside VPC
  • Public and Private IPs
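Tying these basics together, here is a hedged sketch that creates a custom VPC with one public subnet: it attaches an Internet gateway and adds a default route so instances in the subnet can reach the Internet (the CIDR blocks are illustrative).

```python
import boto3

ec2 = boto3.client("ec2")

# Custom VPC with an illustrative CIDR block.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# One public subnet carved out of the VPC range.
subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]

# An Internet gateway plus a default route makes the subnet public.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)
```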

Use cases of Elastic IP

  • Requirements for an EC2 instance to connect to the Internet
  • Number of allowed VPCs, IGWs, and EIPs per Region
  • Use cases for Elastic Network Interfaces
  • VPC endpoints for S3

NAT Instance

A NAT instance device forwards traffic from the instances in the private subnet to the internet or other AWS services, and then sends the response back to the instances. When traffic goes to the internet, the source IPv4 address is replaced with the NAT device’s address and similarly, when the response traffic goes to those instances, the NAT device translates the address back to those instances’ private IPv4 addresses.

  • Use case for NAT instance
  • Configure NAT instance with right security group configurations
  • Performance tuning of NAT
  • Use of the Source/Destination Check option
  • Routing table configuration for Private subnet with NAT

Network Access Control List:- A network ACL is an optional layer of security that acts as a firewall for associated subnets, controlling both inbound and outbound traffic at the subnet level.

  • Security groups vs Network Access Control List
  • Stateful vs Stateless rules
  • Rules evaluation order
  • Default rules
  • Association with subnets

Virtual Private Network

AWS Virtual Private Network (AWS VPN) enables customers to establish a secure and private encrypted tunnel from their network or device to the AWS global network. There are two kinds of AWS VPN services: AWS Site-to-Site VPN and AWS Client VPN.

  • VPN Connection:- A VPN connection is the connection between AWS customers' Virtual Private Cloud (VPC) and their own on-premises network.

Topics include:

  • Site-to-Site VPN
  • AWS Client VPN
  • Setting up hardware VPN
  • Components of VPN
  • Customer and Private Gateways
  • Failover scenarios
  • Static vs Dynamic routed VPN
  • Pricing for VPN connections

VPC Peering:- A VPC peering connection is a networking connection between two VPCs that enables customers to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network.

  • Limitations of peering in the context of Regions
  • Cross account Peering
  • IP address range impacts on peering
  • Transitive peering

Elastic Load Balancing

Elastic Load Balancing is an AWS service that distributes incoming application or network traffic across multiple targets, such as Amazon EC2 instances, containers, and IP addresses, in multiple Availability Zones; it also scales customers' load balancers as traffic to their applications changes over time.

Topics include:

  • Application Load Balancers
  • Network Load Balancers
  • Classic Load Balancers
  • Configure ELB with Health Check
  • Use of DNS address vs Static IP
  • Associate a load balancer with an Auto Scaling group
  • Healthy and unhealthy thresholds

Amazon CloudFront:- CloudFront is a web service that speeds up the distribution of customers' static and dynamic web content, such as .html, .css, .js, and image files, to end users. CloudFront delivers the content through a worldwide network of data centers called edge locations.

Topics include:

  • Accelerate Static Website Content Delivery
  • Serve Video On Demand or Live Streaming Video
  • Encrypt Specific Fields Throughout System Processing
  • Customize at the Edge

Amazon Route 53

Amazon Route 53 is an AWS service that provides a highly available and scalable Domain Name System (DNS), domain name registration, and health-checking web services. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications by translating names like amazonbate.com into numeric IP addresses, such as 192.0.2.44, which computers use to connect to each other.

Topics include:

  • Domain name
  • Domain registrar
  • Domain registry
  • Domain reseller
  • Top-level domain (TLD)

Record Types:

  • Different supported DNS record types, including A, CNAME, and ALIAS
  • Use case for ALIAS record and Zone Apex
  • Costs associated with different record types
  • Alias record integration with other AWS services mainly ELB, S3 and CloudFront
  • Policy Records
  • Number of Domains per Account
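As a small, hedged sketch (the hosted zone ID, domain, and IP are placeholders), an A record is created or updated with one UPSERT change batch:

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # placeholder hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",  # create the record, or update it if it exists
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }]
    },
)
```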

Routing Policies:

  • Simple, Weighted, Latency, Failover and Geolocation routing policies and use cases
  • Difference between routing policies
  • DNS failover and its components
 

Direct Connect

AWS Direct Connect enables customers to establish a dedicated connection from an on-premises network to Amazon VPC. Using AWS Direct Connect, clients can establish private connectivity between AWS and their data center, office, or colocation environment.

  • Direct Connect use cases and advantages
  • Pricing and consolidated billing
  • Connection speeds
  • Failover scenarios
  • Direct Connect vs VPN

API Gateway:- Amazon API Gateway is an AWS service that allows developers to create, publish, maintain, monitor, and secure APIs at any scale. HTTP APIs enable customers to create RESTful APIs with lower latency and lower cost than REST APIs, while an API Gateway REST API is made up of resources and methods; a resource is a logical entity that an app can access through a resource path. A minimal sketch follows the topic list below.

  • Working with HTTP APIs
  • Working with REST APIs
  • Working with WebSocket APIs
  • Architecture of API Gateway
  • API Gateway use cases
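As a minimal sketch of an HTTP API (the Lambda ARN is a placeholder), API Gateway's quick-create path proxies every route to one function; the function would also need a resource policy permitting API Gateway to invoke it.

```python
import boto3

apigw = boto3.client("apigatewayv2")

# Quick-create an HTTP API that proxies all routes to one Lambda function.
api = apigw.create_api(
    Name="demo-http-api",
    ProtocolType="HTTP",
    Target="arn:aws:lambda:us-east-1:123456789012:function:demo",  # placeholder
)
print(api["ApiEndpoint"])  # public invoke URL
```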

DHCP:- The Dynamic Host Configuration Protocol (DHCP) is a network management protocol used on Internet Protocol networks, whereby a DHCP server dynamically assigns an IP address and other network configuration parameters to each device on a network so they can communicate with other IP networks.

Security

AWS Identity and Access Management (IAM) is an Amazon web service that enables AWS customers to securely control access to AWS resources. They can use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources such as EC2, databases, and so on.

  • Identities
    • Users, Groups, and Roles
    • Tagging Users and Roles
    • Temporary Security Credentials
    • The Root User
  • Recognizing and implementing secure practices for optimum cloud deployment and maintenance. Content may include the following:
    • AWS shared responsibility model
    • AWS platform compliance
    • AWS security attributes (customer workloads down to physical layer)
    • AWS administration and security services.

IAM Use cases

Identity and Access Management (IAM) enables AWS customers to:

  • Control and manage access to AWS services and resources for their users and groups.
  • Create and manage roles and policy documents.
  • Manage the credentials for their AWS account.
  • Set a password policy and configure Multi-Factor Authentication (MFA).

IAM Users and Groups: While an IAM user is an entity that AWS customers create in AWS to represent a person or an application that interacts with AWS, an IAM group is a collection of IAM users. Groups enable customers to specify permissions for multiple users, which can make it easier to manage the permissions for those users.

  • Fundamentals of IAM and AWS account management
  • Root Account vs Power user
  • Default Permissions for a new user
  • Usage of the Access Key ID and how it differs from account login credentials
  • Policy documents

Roles: An IAM role is an IAM identity that AWS customers can create in their account with specific permissions. An IAM role is similar to an IAM user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. A minimal sketch of creating a role follows the list below.

  • Creation of a role and its relationship with policy documents
  • Difference between trust and permission policies
  • Three types of roles
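The distinction between trust and permission policies shows up directly in the API, as in this hedged sketch (the role name is hypothetical): the trust policy says who may assume the role (here, the EC2 service), while the attached managed policy says what the role may do.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy: WHO can assume the role (the EC2 service, in this sketch).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="demo-ec2-role",  # hypothetical role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Permission policy: WHAT the role can do (read-only S3 here).
iam.attach_role_policy(
    RoleName="demo-ec2-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)
```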

Cloud Storage

Simple Storage Service (S3)

Amazon Simple Storage Service provides secure, durable, and highly scalable object-based storage. AWS customers can use Amazon S3 to store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics.

Storage Tiers and Classes:

  • For performance-sensitive and frequently accessed data, AWS customers can use:
    • S3 Standard
    • Reduced Redundancy
  • The following storage classes are designed for long-lived and infrequently accessed data:
    • S3 Standard-IA 
    • S3 One Zone-IA
  • For objects with both frequently and infrequently accessed data, AWS clients can use the automatically optimizing storage class:
    • S3 Intelligent-Tiering
  • AWS also offers storage classes that are designed for low-cost data archiving:
    • S3 Glacier 
    • S3 Glacier Deep Archive

Hosting a Static Website:- Using Amazon S3, AWS customers can host a static website. On a static website, individual webpages include static content, and they may also contain client-side scripts.

Objects are the fundamental entities stored in Amazon S3. Objects consist of object data and metadata, and the data is stored as objects within resources called “buckets”.

  • Bucket user policies

Versioning and Lifecycle Management:

  • Overview of lifecycle management
  • Protecting an object from accidental deletion using versioning and MFA Delete
  • Object size and transition duration limitations
  • Cross-Region Replication
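A hedged sketch of the first two topics together (the bucket name and prefix are placeholders): versioning protects objects from accidental deletion, and a lifecycle rule transitions older objects to cheaper storage classes.

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # placeholder bucket name

# Keep every version of each object so deletions are recoverable.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Age objects under logs/ into cheaper tiers over time.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 365, "StorageClass": "GLACIER"},
            ],
        }]
    },
)
```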

Amazon Elastic Block Store (EBS)

Amazon Elastic Block Store (EBS) is a high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2), for both throughput-intensive and transaction-intensive workloads at any scale.

Amazon EBS volume types 

  • EBS General Purpose:- Backed by Solid-State Drives (SSDs), General Purpose (SSD) volumes provide the ability to burst to 3,000 IOPS per volume, independent of volume size, to meet the performance needs of most applications and also deliver a consistent baseline of 3 IOPS/GB.
  • General Purpose (SSD) volumes are suitable for a broad range of workloads, including small to medium-sized databases, development and test environments, and boot volumes. 
  • Provisioned IOPS (SSD) volumes offer storage with consistent and low-latency performance, are designed for I/O-intensive applications such as large relational or NoSQL databases, and allow you to choose the level of performance you need. 
  • Magnetic volumes, formerly known as Standard volumes, provide the lowest cost per gigabyte of all Amazon EBS volume types and are ideal for workloads where data is accessed infrequently and applications where the lowest storage cost is important.

Amazon EBS encryption offers a straightforward encryption solution for AWS customers' EBS resources that doesn't require them to build, maintain, and secure their own key management infrastructure.

AWS clients can encrypt both the boot and data volumes of an EC2 instance, and these are the types of data encrypted:

  • Data at rest inside the volume
  • All data moving between the volume and the instance
  • All snapshots created from the volume
  • All volumes created from those snapshots

EBS Snapshots:- Snapshots are incremental backups, which means that only the blocks on the device that have changed after the most recent snapshot are saved.

  • Creating and sharing snapshots between Regions
  • Status of the EC2 instance during snapshot creation
  • Impact of sharing application-consistent snapshots from a RAID array
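As a minimal sketch (the volume ID and Availability Zone are placeholders), a snapshot is created from a volume and can then seed a new, even encrypted, volume:

```python
import boto3

ec2 = boto3.client("ec2")

# Incremental backup of the (hypothetical) volume.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="nightly backup",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])

# Restore: create a new encrypted volume from the snapshot.
ec2.create_volume(
    SnapshotId=snapshot["SnapshotId"],
    AvailabilityZone="us-east-1a",
    Encrypted=True,
)
```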

AWS Snowball

AWS Snowball is an AWS service that uses physical storage devices to transfer large amounts of data between Amazon Simple Storage Service (Amazon S3) and customers' on-premises data storage locations at faster-than-Internet speeds.

  • A Snowball data transfer job costs a flat fee that covers device handling and import and export operations at AWS data centers.
  • Snowball encrypts all data with 256-bit encryption.

Snowball Client:- The Snowball Client is software that customers install on a local host computer and use to efficiently identify, compress, encrypt, and transfer data from the directories they specify to a Snowball.

  • S3 Adapter is an S3-compatible interface that the Snowball Client can  use for reading and writing data on a Snowball.
  • AWS Snowball Edge 
  • Snowball devices Use Case Differences
  • Snowball Hardware Differences
  • Snowball Tool Differences

AWS S3 Glacier

S3 Glacier is an AWS storage service optimized for infrequently used data, or “cold data.” It is an extremely low-cost storage service that provides durable storage with security features for data archiving and backup.

  • Vault:- A vault is a container for storing archives.  AWS customers can store an unlimited number of archives in a vault, based on the business or application needs.
  • Archive:- An archive is any data, such as a photo, video, or document, and is the base unit of storage in S3 Glacier. Each archive has a unique ID and an optional description.
  • Job:- An S3 Glacier job is an AWS resource that can perform:
    • A select query on an archive, 
    • Retrieve an archive, or 
    • Get an inventory of a vault.
  • Notifications configuration:- AWS S3 Glacier enables customers to use the notification mechanism to notify them when a job is complete. 
    • Configure the vault to send notification to an Amazon Simple Notification Service (SNS) topic when jobs complete

Amazon Elastic File System

Amazon Elastic File System is an AWS service that provides a simple, scalable, fully managed elastic NFS file system for use for both AWS Cloud services and on-premises resources.

  • Amazon EFS provides a file system interface, file system access semantics such as strong consistency and file locking, and concurrently-accessible storage for up to thousands of Amazon EC2 instances.
  • DataSync:- AWS DataSync provides a fast and simple way to securely sync existing file systems with Amazon EFS.
  • Provisioned Throughput:- Using this mode, Amazon EFS customers can provision their file system's throughput independent of the amount of data stored, optimizing throughput performance to match their application's needs.
  • Using amazon-efs-utils
  • Managing File Systems
  • Mounting EFS File Systems
  • Monitoring EFS

AWS Storage Gateway

AWS Storage Gateway is a hybrid cloud storage service that gives AWS customers on-premises access to virtually unlimited cloud storage; and it can be used to simplify storage management and reduce costs for key hybrid cloud storage use cases. 

  • File Gateway:- A file gateway enables a file interface into Amazon Simple Storage Service (Amazon S3) and combines a service and a virtual software appliance.
  • Volume Gateway: A volume gateway provides cloud-backed storage volumes that users can mount as Internet Small Computer System Interface (iSCSI) devices from their on-premises application servers.
  • Tape Gateway:- A tape gateway provides cloud-backed virtual tape storage. The tape gateway is deployed into AWS customers' on-premises environments as a virtual machine (VM) running on a VMware ESXi, KVM, or Microsoft Hyper-V hypervisor.

Hosting options:- AWS clients can run Storage Gateway on-premises, as either a VM appliance or a hardware appliance, or in AWS as an Amazon EC2 instance.

Database

A database is an organized collection of data, generally stored and accessed electronically from a computer system. Because databases can be complex, they are often developed using formal design and modeling techniques.

  • The database management system (DBMS) is the software that interacts with end users, applications, and the database itself to capture and analyze the data it stores.

Amazon Redshift

Amazon Redshift is a petabyte-scale data warehouse service in the cloud. A Redshift data warehouse is a collection of computing resources called nodes, which are organized into a group called a cluster.

  • Cluster 
  • Nodes 
  • OLAP vs OLTP
  • Single vs Multi node
  • Overview of columnar data storage, data compression and MPP
  • Encryption using KMS and HSM
  • Availability of Redshift

Amazon DynamoDB

DynamoDB is an AWS NoSQL database service that provides fast and predictable performance with seamless scalability.  Using DynamoDB, AWS clients can create database tables that can store and retrieve any amount of data and serve any level of request traffic.

  • DB format and types of data stored
  • Consistency models for read
  • Overview of pricing
  • Scaling advantage against RDS
  • Read and write capacity units
  • Core Components of Amazon DynamoDB
  • DynamoDB API
  • Naming Rules and Data Types
  • Read Consistency
  • Partitions and Data Distribution

Amazon Relational Database Service

Amazon Relational Database Service (RDS) is a managed service used to set up, operate, and scale a relational database in the AWS cloud. RDS basics include the six different database engines that RDS supports, along with the following topics; a minimal provisioning sketch follows the list.

  • DB Instances
  • DB Instance Classes
  • DB Instance Storage
  • Regions and Availability Zones
  • High Availability (Multi-AZ)
  • Multi AZ deployment
  • RDS Maintenance window and activities performed
  • Impact of Multi AZ on Maintenance activities
  • DB Subnet Groups
  • Replication and Multi-AZ failover with Primary and Standby
  • Use cases for Read Replica and limitations
  • RDS console and available metrics
  • BYOL and license included mode
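As a hedged provisioning sketch (identifier, class, and credentials are placeholders, and the password should come from a secrets store in practice), a Multi-AZ MySQL instance with automated backups looks like this:

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="app-db",        # placeholder identifier
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,                  # GiB
    MasterUsername="admin",
    MasterUserPassword="replace-me-123",  # use Secrets Manager in real deployments
    MultiAZ=True,                         # synchronous standby in another AZ
    BackupRetentionPeriod=7,              # days of automated backups
)
```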

Amazon Aurora

Amazon Aurora is fully managed by the Amazon Relational Database Service (RDS), which automates time-consuming administration tasks like hardware provisioning, database setup, patching, and backups. It provides the security, availability, and reliability of commercial databases at one-tenth the cost.

  • Aurora DB Clusters
  • Regions and Availability Zones
  • Aurora Connection Management
  • DB Instance Classes
  • Aurora Storage and Reliability
  • Aurora Security
  • High Availability for Aurora
  • Aurora Global Database
  • Replication with Aurora

Amazon QLDB

Amazon Quantum Ledger Database (QLDB) is an AWS ledger database service, owned by a central trusted authority, that provides a transparent, immutable, and cryptographically verifiable transaction log of all of AWS customers' application changes.

  • Ledger Structure
  • Write Transactions
  • Querying Data
  • Data Storage
  • Immutable
  • Cryptographically Verifiable
  • SQL Compatible and Document Flexible
  • Open Source Developer Ecosystem
  • Highly Available and Scalable

Amazon Neptune

Amazon Neptune is an AWS graph database service that enables customers to build and run applications that work with highly connected datasets. The core purpose of this database is to optimize storing billions of relationships and querying the graph with millisecond latency.

  • Neptune’s Bulk Loader
  • Querying
  • Managing Neptune
  • Neptune Streams
  • Neptune Full-Text Search
  • Monitoring Neptune
  • Backing Up and Restoring

ElastiCache

Amazon ElastiCache is an AWS web service that allows customers to set up, manage, and scale a distributed in-memory data store or cache environment in the cloud.

  • Redis 
  • Memcached
  • Use cases for ElastiCache
    • Caching application data
    • Session stores
    • Gaming leaderboards
    • Real-time analytics

Backup and Snapshots

  • Creating automated backups and database snapshots
  • Retention period and restore process
  • Backup storage cost
  • Availability of DBs during backup
  • Deletion process of automated backup and DB snapshots

Encryption

The AWS Encryption SDK is a client-side encryption library designed to encrypt and decrypt data using industry standards and best practices.

  • Difference between client vs server side encryption
  • Data keys
  • Master key
  • Cryptographic materials manager
  • Master key provider (Java and Python)
  • Keyring (C and JavaScript)
  • Algorithm suite
  • Encryption context
  • Encrypted message
  • Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
  • Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS)
  • Server-Side Encryption with Customer-Provided Keys (SSE-C)
  • Encryption at rest options
  • Cross-Region Replication
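For the server-side options listed above, encryption is requested per object at upload time; here is a minimal sketch (the bucket, keys, and KMS key alias are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# SSE-S3: Amazon S3 manages the keys (AES-256).
s3.put_object(
    Bucket="my-example-bucket",
    Key="reports/sse-s3.txt",
    Body=b"encrypted with S3-managed keys",
    ServerSideEncryption="AES256",
)

# SSE-KMS: encrypt under a customer master key in AWS KMS.
s3.put_object(
    Bucket="my-example-bucket",
    Key="reports/sse-kms.txt",
    Body=b"encrypted under a KMS CMK",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/my-app-key",  # placeholder key alias
)
```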

Auto Scaling

AWS Auto Scaling monitors customers applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, customers can select one of three predefined optimization strategies designed to optimize performance, optimize costs, or balance the two. 

  • Launch configuration parameters
  • Auto scaling with multi AZs
  • Three types of auto scaling policies: simple, step and scheduled
  • Warmup and cool down period

Simple Queue Service

Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables AWS customers to decouple and scale microservices, distributed systems, and serverless applications. It eliminates the complexity and overhead associated with managing and operating message-oriented middleware, and empowers developers to focus on differentiating work.

  • Overview of SQS queue with use case of decoupling an application
  • Size of SQS message and billing method
  • Integration with Lambda and Auto Scale
  • Support for First-In-First-Out (FIFO) queues and the “at least once delivery” concept
  • Message Visibility Timeout
  • Long poll vs short poll
  • Retention period of SQS messages
  • Concept of “pull”, where clients poll for messages
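Several of the topics above (long polling, visibility timeout, delete-after-processing) appear together in a minimal consumer sketch like this one, matching the producer shown in the Loose Coupling section (the queue URL and the process helper are hypothetical):

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # placeholder

while True:
    # Long poll: wait up to 20 seconds for messages instead of busy-polling.
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,
        VisibilityTimeout=60,  # hide each message for 60s while it is processed
    )
    for message in resp.get("Messages", []):
        process(message["Body"])  # hypothetical processing step
        # Delete only after successful processing ("at least once" delivery).
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```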

Simple Workflow Service

Amazon SWF enables developers to build, run, and scale background jobs that have parallel or sequential steps. Using Amazon SWF to manage workflows within customers' applications is easy. Amazon SWF acts as the coordination hub for all the different components of an application:

  • Maintaining application state
  • Tracking workflow executions and logging their progress
  • Holding and dispatching tasks
  • Controlling which tasks each of your application hosts will be assigned to execute.
  • Overview of SWF with use cases
  • Definition of Domains, Workflows, Tasks, Workers, Deciders, and Starters
  • SWF interaction with humans
  • Retention period
  • Difference between SQS and SWF

Simple Notification Service

Amazon Simple Notification Service (SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables AWS customers to decouple microservices, distributed systems, and serverless applications. The following services are among Amazon SNS's event sources:

  • Compute: Amazon EC2 Auto Scaling, AWS Elastic Beanstalk, AWS Lambda, Elastic Load Balancing
  • Storage: Amazon Elastic File System, Amazon Glacier, Amazon Simple Storage Service, AWS Snowball
  • Database: Amazon DynamoDB, Amazon ElastiCache, Amazon Redshift, Amazon Relational Database Service, AWS Database Migration Service
  • Networking: Amazon Route 53, Amazon VPC, AWS Direct Connect
  • Overview of SNS with use cases
  • Supported protocols
  • Concept of “Push”
  • SNS Message format
  • Size of SNS message and Pricing Model
  • Difference between SQS and SNS
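A minimal publish/subscribe sketch (the topic name and email address are placeholders): every confirmed subscriber to the topic receives each published message.

```python
import boto3

sns = boto3.client("sns")

# Create (or look up) the topic and add an email subscriber.
topic_arn = sns.create_topic(Name="ops-alerts")["TopicArn"]
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="email",            # SQS, Lambda, HTTP/S, and SMS are also supported
    Endpoint="ops@example.com",  # placeholder address; recipient must confirm
)

# Push one message to every subscriber.
sns.publish(TopicArn=topic_arn, Subject="High CPU", Message="web-1 CPU above 90%")
```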

Amazon CloudWatch

Amazon CloudWatch monitors AWS customers' resources and the applications they run on AWS in real time. Customers can use CloudWatch to collect and track metrics, which are variables they can measure for their resources and applications.

  • Easiest way to collect metrics in AWS and on-premises
  • Improve operational performance and resource optimization
  • Collect and aggregate container metrics and logs
  • Application Insights for .NET and SQL Server applications
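Beyond the built-in EC2 metrics, applications can publish their own data points, as in this hedged sketch (the namespace, metric, and dimension are illustrative):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish one custom data point; CloudWatch aggregates these into metrics.
cloudwatch.put_metric_data(
    Namespace="MyApp",  # illustrative custom namespace
    MetricData=[{
        "MetricName": "OrdersQueued",
        "Dimensions": [{"Name": "Service", "Value": "orders"}],
        "Value": 42,
        "Unit": "Count",
    }],
)
```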

AWS CloudTrail

AWS CloudTrail is an AWS service that helps customers enable governance, compliance, and operational and risk auditing of their AWS accounts. Actions taken by a user, a role, or an AWS service are recorded as events in CloudTrail.