AWS Lambda is a serverless compute service that runs AWS customers’ code in response to events and automatically manages the underlying compute resources. It can be used to extend other AWS services with custom logic, or to create back-end services that operate at AWS scale, performance, and security. AWS Lambda can automatically run code in response to multiple events, such as HTTP requests via Amazon API Gateway, modifications to objects in Amazon S3 buckets, table updates in Amazon DynamoDB, and state transitions in AWS Step Functions.
- AWS Lambda executes customers’ code only when needed and scales automatically, from a few requests per day to thousands per second.
- It runs customers’ code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning, automatic scaling, and code monitoring and logging.
- It can be used to build serverless applications composed of functions that are triggered by events, and to deploy them automatically using AWS CodePipeline and AWS CodeBuild.
AWS LAMBDA FEATURES
Customers can use any third-party library, or even native ones. They can package any code (frameworks, SDKs, libraries, and more) as a Lambda layer, then manage and share it easily across multiple functions.
- AWS Lambda natively supports Java, Go, PowerShell, Node.js, C#, Python, and Ruby code, and provides a Runtime API which allows customers to use any additional programming languages to author their functions.
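What a function looks like in one of these runtimes can be illustrated with a minimal Python handler. The handler name and the `name` event field below are illustrative, not part of any AWS API:

```python
import json

def lambda_handler(event, context):
    """Minimal Lambda handler sketch: echoes a field from the event.

    'event' carries the trigger payload (for example, an API Gateway
    request body); 'context' exposes runtime metadata such as the
    remaining execution time.
    """
    name = event.get("name", "world")  # hypothetical event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Lambda invokes this function once per event; returning a dict with `statusCode` and `body` matches the shape API Gateway proxy integrations expect.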
AWS Lambda has built-in fault tolerance and maintains compute capacity across multiple Availability Zones in each Region to help protect customers’ code against individual machine or data center failures. Both AWS Lambda and the functions running on the service provide predictable and reliable operational performance.
- AWS Lambda is designed to provide high availability for both the service itself and for the functions it operates. There are no maintenance windows or scheduled downtime.
Provisioned Concurrency gives customers greater control over the start time and performance of any application that uses AWS Lambda. Customers can configure the amount of concurrency that their application needs.
- Provisioned Concurrency keeps functions initialized and hyper-ready to respond in double-digit milliseconds.
- Customers are able to increase the level of concurrency during times of high demand and lower it, or turn it off completely, when demand decreases.
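Raising or lowering that level can be done through the `PutProvisionedConcurrencyConfig` API. A minimal boto3 sketch, with a hypothetical function name and alias:

```python
# Hypothetical function name and alias; adjust to match your resources.
params = {
    "FunctionName": "my-function",
    "Qualifier": "prod",                     # alias (or version number)
    "ProvisionedConcurrentExecutions": 100,  # instances kept initialized
}

def apply_provisioned_concurrency(request):
    """Send the PutProvisionedConcurrencyConfig request to Lambda."""
    import boto3  # imported lazily; calling this needs AWS credentials
    client = boto3.client("lambda")
    return client.put_provisioned_concurrency_config(**request)
```

Setting `ProvisionedConcurrentExecutions` to 0 (or deleting the config) effectively turns the feature off when demand decreases.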
AWS Step Functions provides serverless orchestration for modern applications. Orchestration centrally manages a workflow by breaking it into multiple steps, adding flow logic, and tracking the inputs and outputs between the steps. Since AWS Lambda is serverless, customers can coordinate multiple Lambda functions for complex or long-running tasks by building workflows with AWS Step Functions.
- Step Functions maintains application state, tracking exactly which workflow step the application is in, and stores an event log of data that is passed between application components. That means that if networks fail or components hang, the application can pick up right where it left off.
AWS Lambda allows customers to add custom logic to AWS resources such as Amazon S3 buckets and Amazon DynamoDB tables, to run code in response to HTTP requests using Amazon API Gateway, or to invoke code using API calls made with the AWS SDKs.
- It can be used to build data processing triggers for AWS services like Amazon S3 and Amazon DynamoDB, process streaming data stored in Amazon Kinesis, or create a back end that operates at AWS scale, performance, and security.
AWS Lambda manages all the infrastructure needed to run customers’ code on highly available, fault-tolerant infrastructure. Customers don’t have to update the underlying OS when a patch is released, or worry about resizing or adding new servers as usage grows.
- AWS Lambda seamlessly deploys code, does all the administration, maintenance, and security patches, and provides built-in logging and monitoring through Amazon CloudWatch.
AWS Lambda invokes customers’ code only when needed and automatically scales to support the rate of incoming requests without requiring them to configure anything. There is no limit to the number of requests the code can handle; it starts running within milliseconds of an event, and since Lambda scales automatically, performance remains consistently high as the frequency of events increases.
- Since the code is stateless, Lambda can start as many instances of it as needed without lengthy deployment and configuration delays.
Lambda@Edge is a feature of Amazon CloudFront that lets AWS customers run code closer to the users of their application, which improves performance and reduces latency. With Lambda@Edge, customers can run code across AWS locations globally in response to Amazon CloudFront events, such as requests for content to or from origin servers and viewers. This makes it easier to deliver richer, more personalized content to end users with lower latency.
- Once customers upload their code to AWS Lambda, Lambda takes care of everything required to run and scale the code with high availability at an AWS location closest to their end users.
Customers can use AWS Identity and Access Management (IAM) to manage access to the Lambda API and resources like functions and layers. For users and applications in their account that use AWS Lambda, customers manage permissions in a permissions policy that can be applied to IAM users, groups, or roles.
- An AWS Lambda function has a policy called an execution role, which grants it permission to access AWS services and resources. At a minimum, the execution role needs access to Amazon CloudWatch Logs for log streaming.
- AWS Lambda also uses the execution role to get permission to read from event sources when customers use an event source mapping to trigger their function.
- Using resource-based policies, AWS customers can give other accounts and AWS services permission to use their Lambda resources. Lambda resources include functions, versions, aliases, and layer versions.
Resource-based policies let customers grant usage permission to other accounts on a per-resource basis. A resource-based policy can also be used to allow an AWS service to invoke a function. Customers can grant an account permission to invoke or manage a function; to grant permissions to another AWS account, they specify the account ID as the principal.
- Resource-based policies let customers grant usage permission to other accounts on a per-resource basis, allowing those accounts to invoke the customer’s function.
- Resource-based policies apply to a single function, version, alias, or layer version. They grant permission to one or more services and accounts.
- The resource-based policy grants permission for the other account to access the function, but doesn’t allow users in that account to exceed their permissions.
- Customers can grant an account permission to invoke or manage a function, and add multiple statements to grant access to multiple accounts, or let any account invoke their function.
- To limit access to a user, group, or role in another account, customers need to specify the full ARN of the identity as the principal.
- Customers can create one or more aliases for their AWS Lambda function. A Lambda alias is like a pointer to a specific Lambda function version. Users can access the function version using the alias ARN.
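A per-resource grant like the ones above maps onto the Lambda `AddPermission` API. A hedged boto3 sketch, where the function name, account ID, and statement ID are all hypothetical placeholders:

```python
def build_add_permission_request(function_name, account_id, statement_id):
    """Build an AddPermission request that adds a resource-based policy
    statement letting another AWS account invoke the function."""
    return {
        "FunctionName": function_name,
        "StatementId": statement_id,       # unique ID for this statement
        "Action": "lambda:InvokeFunction",
        "Principal": account_id,           # account ID or service principal
    }

def grant_cross_account_invoke(function_name, account_id,
                               statement_id="xaccount-invoke"):
    """Apply the statement via the Lambda API."""
    import boto3  # imported lazily; calling this needs AWS credentials
    client = boto3.client("lambda")
    return client.add_permission(
        **build_add_permission_request(function_name, account_id, statement_id))
```

Passing a service principal such as `s3.amazonaws.com` instead of an account ID is how a resource-based policy allows an AWS service to invoke the function.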
An AWS Lambda function’s execution role grants it permission to access AWS services and resources. AWS customers provide this role when they create a function, and Lambda assumes the role when the function is invoked.
- Customers can create an execution role for development that has permission to send logs to Amazon CloudWatch and upload trace data to AWS X-Ray.
- AWS Lambda allows customers to add or remove permissions from a function’s execution role at any time, add permissions for any services that the function calls with the AWS SDK, and add permissions for services that Lambda uses to enable optional features.
- Managed Policies for Lambda Features
- AWSLambdaBasicExecutionRole – Permission to upload logs to CloudWatch.
- AWSLambdaKinesisExecutionRole – Permission to read events from an Amazon Kinesis data stream or consumer.
- AWSLambdaDynamoDBExecutionRole – Permission to read records from an Amazon DynamoDB stream.
- AWSLambdaSQSQueueExecutionRole – Permission to read a message from an Amazon Simple Queue Service (Amazon SQS) queue.
- AWSLambdaVPCAccessExecutionRole – Permission to manage elastic network interfaces to connect a function to a VPC.
- AWSXRayDaemonWriteAccess – Permission to upload trace data to X-Ray.
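Creating an execution role and attaching one of these managed policies can be sketched with boto3's IAM client. The role name below is hypothetical; the trust policy is the standard one that lets the Lambda service assume the role:

```python
import json

# Trust policy that allows the Lambda service to assume the role.
TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

def create_execution_role(role_name="my-lambda-role"):
    """Create an execution role and attach AWSLambdaBasicExecutionRole
    so the function can upload logs to CloudWatch."""
    import boto3  # imported lazily; calling this needs AWS credentials
    iam = boto3.client("iam")
    role = iam.create_role(
        RoleName=role_name,
        AssumeRolePolicyDocument=json.dumps(TRUST_POLICY),
    )
    iam.attach_role_policy(
        RoleName=role_name,
        PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
    )
    return role
```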
RESOURCES AND CONDITIONS
Each API action supports a combination of resource and condition types that varies depending on the behavior of the action. Every IAM policy statement grants permission to an action that’s performed on a resource. When the action doesn’t act on a named resource, or when the policy grants permission to perform the action on all resources, the value of the resource in the policy is a wildcard (*).
- Conditions are an optional policy element that applies additional logic to determine if an action is allowed. For common conditions supported by all actions, Lambda defines condition types that can be used to restrict the values of additional parameters on some actions.
- The Condition element (or Condition block) lets customers specify conditions for when a policy is in effect. The Condition element is optional. In the Condition element, customers build expressions in which they use condition operators (equal, less than, etc.) to match the condition keys and values in the policy against keys and values in the request context.
- Customers can use the Condition element of a JSON policy to test specific conditions against the request context.
- When a request is submitted, AWS evaluates each condition key in the policy and returns a value of true, false, or not present (and occasionally null, an empty data string).
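To make condition evaluation concrete, here is a policy statement with a `StringEquals` condition, plus a toy evaluator for that one operator. The condition key, tag value, and resource ARN are hypothetical; real evaluation is done by AWS, not by client code:

```python
POLICY_STATEMENT = {
    "Effect": "Allow",
    "Action": "lambda:InvokeFunction",
    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
    "Condition": {
        # Allow only principals tagged team=platform (hypothetical key).
        "StringEquals": {"aws:PrincipalTag/team": "platform"},
    },
}

def condition_matches(statement, request_context):
    """Toy evaluator for StringEquals only: every condition key must be
    present in the request context with an exactly equal value."""
    conds = statement.get("Condition", {}).get("StringEquals", {})
    return all(request_context.get(k) == v for k, v in conds.items())
```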
AWS Lambda provides managed policies that grant access to AWS Lambda API actions and, in some cases, access to other services used to develop and manage Lambda resources. Lambda updates the managed policies as needed, to ensure that AWS client users have access to new features when they’re released. Customers can use identity-based policies, that apply to users directly, or to groups and roles that are associated with a user, in order to grant users in their account access to Lambda. They can also grant users in another account permission to assume a role in the account and access the Lambda resources.
- AWSLambdaFullAccess – Grants full access to AWS Lambda actions and other services used to develop and maintain Lambda resources.
- AWSLambdaReadOnlyAccess – Grants read-only access to AWS Lambda resources.
- AWSLambdaRole – Grants permissions to invoke Lambda functions.
AWS customers can use cross-account roles to give accounts that they trust access to Lambda actions and resources. Using resource-based policies is a better option to grant permission to invoke a function or use a layer.
When an application is created in the AWS Lambda console, Lambda applies a permissions boundary to the application’s IAM roles. The permissions boundary limits the scope of the execution role that the application’s template creates for each of its functions, and of any roles that the customer adds to the template.
- The permissions boundary prevents users with write access to the application’s Git repository from escalating the application’s permissions beyond the scope of its own resources.
- The application templates in the AWS Lambda console include a global property that applies a permissions boundary to all functions that they create.
- The role that AWS CloudFormation assumes to deploy the application enforces the use of the permissions boundary. That role only has permission to create and pass roles that have the application’s permissions boundary attached.
- An application’s permissions boundary enables functions to perform actions on the resources in the application.
- To access other resources or API actions, customers need to expand the permissions boundary to include those resources.
- Permissions boundary – Extend the application’s permissions boundary when customers add resources to the application, or when the execution role needs access to more actions.
- Execution role – Extend a function’s execution role when it needs to use additional actions.
- Deployment role – Extend the application’s deployment role when it needs additional permissions to create or configure resources.
AWS LAMBDA CONCURRENCY
When a function is invoked, Lambda allocates an instance of it to process the event. When the function code finishes running, it can handle another request. If the function is invoked again while a request is still being processed, another instance will be allocated, which increases the function’s concurrency. Concurrency is the number of requests that your function is serving at any given time.
- Basic function settings include the description, role, and runtime that customers specify when they create a function in the Lambda console.
- Concurrency is subject to a Regional limit that is shared by all functions in a Region. To ensure that a function can always reach a certain level of concurrency, the customer needs to configure the function with reserved concurrency.
- When a function has reserved concurrency, no other function can use that concurrency.
- When Lambda allocates an instance of a function, the runtime loads the function’s code and runs any initialization code defined outside of the handler.
- Lambda integrates with Application Auto Scaling, so customers can configure Application Auto Scaling to manage provisioned concurrency on a schedule or based on utilization.
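Reserving concurrency for a function maps onto the `PutFunctionConcurrency` API. A minimal boto3 sketch, with a hypothetical function name:

```python
def build_reserved_concurrency_request(function_name, reserved):
    """Build a PutFunctionConcurrency request: carve 'reserved' concurrent
    executions out of the shared Regional pool for this function alone."""
    return {
        "FunctionName": function_name,
        "ReservedConcurrentExecutions": reserved,
    }

def set_reserved_concurrency(function_name, reserved):
    """Apply the reservation via the Lambda API."""
    import boto3  # imported lazily; calling this needs AWS credentials
    return boto3.client("lambda").put_function_concurrency(
        **build_reserved_concurrency_request(function_name, reserved))
```

Because the reservation comes out of the Regional limit, other functions can no longer use that portion of the pool, which is exactly the trade-off the bullet points above describe.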
An AWS Lambda alias is a pointer to a specific Lambda function version. Customers can create one or more aliases for their Lambda function, and users can access the function version using the alias ARN.
- Each alias has a unique ARN. An alias can only point to a function version, not to another alias.
- Event sources like Amazon S3 invoke the Lambda function. These event sources maintain a mapping that identifies the function to invoke when events occur.
- When using a resource-based policy to give a service, resource, or account access to a function, the scope of that permission depends on whether customers applied it to an alias, to a version, or to the function.
Using routing configuration on an alias, AWS customers can send a portion of traffic to a second function version.
- By configuring the alias to send most of the traffic to the existing version, and only a small percentage of traffic to the new version, customers can reduce the risk of deploying a new version.
When traffic weights are configured between two function versions, there are two ways to determine which Lambda function version has been invoked:
- CloudWatch Logs – Lambda automatically emits a START log entry that contains the invoked version ID to CloudWatch Logs for every function invocation.
- Response payload (synchronous invocations) – Responses to synchronous function invocations include an x-amz-executed-version header to indicate which function version has been invoked.
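The weighted-alias setup maps onto the `UpdateAlias` API's `RoutingConfig`. A hedged boto3 sketch, where the function name, alias name, and version numbers are placeholders:

```python
def build_weighted_alias_request(function_name, alias,
                                 stable_version, canary_version,
                                 canary_weight=0.1):
    """Build an UpdateAlias request that sends most traffic to the stable
    version and 'canary_weight' (a fraction) to the newer version."""
    return {
        "FunctionName": function_name,
        "Name": alias,
        "FunctionVersion": stable_version,  # gets 1 - canary_weight of traffic
        "RoutingConfig": {
            "AdditionalVersionWeights": {canary_version: canary_weight},
        },
    }

def shift_traffic(function_name, alias, stable_version, canary_version,
                  canary_weight=0.1):
    """Apply the routing configuration via the Lambda API."""
    import boto3  # imported lazily; calling this needs AWS credentials
    return boto3.client("lambda").update_alias(
        **build_weighted_alias_request(
            function_name, alias, stable_version, canary_version, canary_weight))
```

Gradually raising `canary_weight` toward 1.0 (and then repointing `FunctionVersion`) is the usual pattern for a low-risk rollout.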
AWS LAMBDA NETWORK
Using Amazon Virtual Private Cloud (Amazon VPC), customers can create a private network for resources such as databases, cache instances, or internal services. They can configure a function to connect to private subnets in a virtual private cloud (VPC) in their account.
- When connecting a function to a VPC, Lambda creates an elastic network interface for each combination of security group and subnet in the function’s VPC configuration.
- Multiple functions connected to the same subnets share network interfaces, so connecting additional functions to a subnet that already has a Lambda-managed network interface is much quicker.
- If a function is not active for a long period of time, Lambda reclaims its network interfaces and the function becomes idle. Invoking an idle function reactivates it.
Internet access from a private subnet requires network address translation (NAT). To give a function access to the internet, route outbound traffic to a NAT gateway in a public subnet. The NAT gateway has a public IP address and can connect to the internet through the VPC’s internet gateway.
LAMBDA ENVIRONMENT VARIABLES
Environment variables can be used to store secrets securely and to adjust a function’s behavior without updating code. An environment variable is a pair of strings stored in a function’s version-specific configuration.
- The AWS Lambda runtime makes environment variables available to customers’ code and sets additional environment variables that contain information about the function and the invocation request.
- By specifying a key and value, AWS customers can set environment variables on the unpublished version of their function. When they publish a version, the environment variables are locked for that version along with other version-specific configuration.
- Lambda stores environment variables securely by encrypting them at rest. Customers can configure Lambda to use a different encryption key, encrypt environment variable values on the client side, or set environment variables in an AWS CloudFormation template with AWS Secrets Manager.
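Inside the function, environment variables are read with the standard library. A small sketch: `DB_HOST` is a hypothetical variable set in the function configuration, while `AWS_REGION` is one of the variables the Lambda runtime sets automatically:

```python
import os

def lambda_handler(event, context):
    """Read behavior from environment variables instead of hard-coding it."""
    db_host = os.environ.get("DB_HOST", "localhost")   # set by the customer
    region = os.environ.get("AWS_REGION", "unknown")   # set by the runtime
    return {"db_host": db_host, "region": region}
```

Changing `DB_HOST` in the function configuration redirects the function without a code deployment, which is the behavior described above.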
Lambda function versions are used to manage the deployment of AWS Lambda functions. Customers can publish a new version of a function for beta testing without affecting users of the stable production version. Lambda creates a new version of a function each time the customer publishes it; the new version is a copy of the unpublished version of the function. The function version includes:
- The function code and all associated dependencies.
- The Lambda runtime that executes the function.
- All of the function settings, including the environment variables.
- A unique Amazon Resource Name (ARN) to identify this version of the function.
When a version is published, the code and most of the settings are locked to ensure a consistent experience for users of that version. An alias can point to a maximum of two Lambda function versions, and those versions must meet the following criteria:
- Both versions must have the same IAM execution role.
- Both versions must have the same dead-letter queue configuration, or no dead-letter queue configuration.
- Both versions must be published. The alias cannot point to $LATEST.
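The publish-then-alias flow can be sketched with boto3; the function and alias names are hypothetical. The helper also shows the ARN format under which users invoke a specific alias:

```python
def alias_arn(region, account_id, function_name, alias):
    """ARN format used to invoke a specific alias of a function."""
    return (f"arn:aws:lambda:{region}:{account_id}:"
            f"function:{function_name}:{alias}")

def release_new_version(function_name, alias_name="prod"):
    """Publish the current unpublished ($LATEST) code as a new numbered
    version, then repoint the alias at it."""
    import boto3  # imported lazily; calling this needs AWS credentials
    client = boto3.client("lambda")
    version = client.publish_version(FunctionName=function_name)["Version"]
    return client.update_alias(
        FunctionName=function_name,
        Name=alias_name,
        FunctionVersion=version,
    )
```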
AWS LAMBDA LAYERS
A layer is a ZIP archive that contains libraries, a custom runtime, or other dependencies. Customers can configure a Lambda function to pull in additional code and content in the form of layers. Layers allow customers to keep their deployment package small, which makes development easier.
- For Node.js, Python, and Ruby functions, customers can develop their function code in the AWS Lambda console as long as they keep their deployment package under 3 MB.
- A function can use up to 5 layers at a time. The total unzipped size of the function and all layers may not exceed the unzipped deployment package size limit of 250 MB.
- Customers can create their own layers, or use layers published by AWS and other AWS customers. Layers support resource-based policies for granting layer usage permissions to specific AWS accounts, AWS Organizations, or all accounts.
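Building a layer archive for a Python runtime can be sketched with the standard library. The key detail is the directory layout: libraries must sit under the `python/` prefix so Lambda adds them to the import path. The file names below are hypothetical:

```python
import io
import zipfile

def build_layer_zip(files):
    """Build an in-memory layer archive from {relative_path: source} pairs.

    For Python runtimes, content under the 'python/' prefix is placed on
    sys.path when the layer is attached to a function.
    """
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for path, content in files.items():
            zf.writestr(f"python/{path}", content)
    buf.seek(0)
    return buf.read()
```

The resulting bytes would then be passed to the `PublishLayerVersion` API (for example, as the `ZipFile` field of its `Content` parameter) to create a new layer version.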
AWS LAMBDA DATABASE ACCESS
Amazon RDS Proxy allows applications to pool and share database connections, improves scalability, and makes databases more resilient to failures by automatically connecting to a standby DB instance while preserving application connections.
- A database proxy manages a pool of database connections and relays queries from a function. This enables a function to reach high concurrency levels without exhausting database connections.
- Using the Lambda console, customers can create an Amazon Relational Database Service (Amazon RDS) database proxy for their function.
- RDS Proxy also allows customers to enforce AWS IAM (Identity and Access Management) authentication to databases, and securely store credentials in Secrets Manager. RDS Proxy is fully compatible with MySQL and can be enabled for most applications with no code change.
Job queues are generally mapped to one or more compute environments. The compute environments contain the Amazon ECS container instances that are used to run containerized batch jobs. Within a job queue, the associated compute environments each have an order that is used by the scheduler to determine where to place jobs that are ready to be executed.
- If the first compute environment has free resources, then the job is scheduled to a container instance within that compute environment.
- If the compute environment is unable to provide a suitable compute resource, the scheduler attempts to run the job on the next compute environment.
UNMANAGED COMPUTE ENVIRONMENTS
In an unmanaged compute environment, customers are responsible for managing their own compute resources.
- Customers need to make sure that the AMI in use for their compute resources meets the Amazon ECS container instance AMI specification.
- Once the unmanaged compute environment is created, customers can use the DescribeComputeEnvironments API operation to view the compute environment details.
- Find the Amazon ECS cluster that is associated with the environment, and then manually launch container instances into that Amazon ECS cluster.
MANAGED COMPUTE ENVIRONMENTS
Managed compute environments allow customers to describe their business requirements. In a managed compute environment, AWS Batch manages the capacity and instance types of the compute resources within the environment, based on the compute resource specification that they define when they create the compute environment.
- AWS customers have two choices to use Amazon EC2: On-Demand Instances or Spot Instances.
- Managed compute environments launch Amazon ECS container instances into the VPC and subnets that customers specify when they create the compute environment.
AWS ELASTIC BEANSTALK
AWS Elastic Beanstalk is an orchestration service offered by Amazon Web Services for deploying applications; it orchestrates various AWS services, including EC2, S3, Simple Notification Service, CloudWatch, Auto Scaling, and Elastic Load Balancing. An Elastic Beanstalk application is a logical collection of Elastic Beanstalk components, including environments, versions, and environment configurations. In Elastic Beanstalk, an application is conceptually similar to a folder.
- Elastic Beanstalk enables customers to quickly deploy and manage applications in the AWS Cloud. Once customers upload an application, Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring.
- It supports web applications written in many popular languages and platforms: Java, .NET, Node.js, PHP, Ruby, Python, Go, and Docker.
- Customers can interact with Elastic Beanstalk using three different methods: the Elastic Beanstalk console, the AWS Command Line Interface (AWS CLI), or eb, a high-level CLI designed specifically for Elastic Beanstalk.