AWS Lambda is a serverless compute service that runs customers' code in response to events and automatically manages the underlying compute resources. It can be used to extend other AWS services with custom logic, or to create back-end services that operate at AWS scale, performance, and security. AWS Lambda can automatically run code in response to multiple events, such as HTTP requests via Amazon API Gateway, modifications to objects in Amazon S3 buckets, table updates in Amazon DynamoDB, and state transitions in AWS Step Functions.
- AWS Lambda executes customers' code only when needed and scales automatically, from a few requests per day to thousands per second.
- It runs customers' code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, and code monitoring and logging.
- It can be used to build serverless applications composed of functions that are triggered by events, and to deploy them automatically using AWS CodePipeline and AWS CodeBuild.
Customers can use any third-party library, even native ones. They can package any code (frameworks, SDKs, libraries, and more) as a Lambda layer and manage and share it easily across multiple functions.
- Lambda natively supports Java, Go, PowerShell, Node.js, C#, Python, and Ruby code, and provides a Runtime API which allows customers to use any additional programming languages to author their functions.
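The unit of code that Lambda runs is a handler function that receives an event payload and a context object. The sketch below shows a minimal Python handler; the handler name, event shape, and response format (an API Gateway-style proxy response) are illustrative assumptions, not a prescribed interface for every trigger.

```python
import json

def handler(event, context):
    # The runtime passes the event payload (here assumed to carry a
    # "name" key) and a context object describing the invocation.
    name = event.get("name", "world")
    # Return an API Gateway-style proxy response.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Because the handler is a plain function, it can be exercised locally before deployment by calling it with a sample event.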
Lambda has built-in fault tolerance, and maintains compute capacity across multiple Availability Zones in each Region to help protect customers' code against individual machine or data center facility failures. Both AWS Lambda and the functions running on the service provide predictable and reliable operational performance.
- AWS Lambda is designed to provide high availability for both the service itself and for the functions it operates. There are no maintenance windows or scheduled downtime.
Provisioned Concurrency gives customers greater control over function start time for any application that uses AWS Lambda. Customers can configure the amount of concurrency that their application needs, gaining greater control over the performance of their serverless applications.
- Provisioned Concurrency keeps functions initialized and hyper-ready to respond in double-digit milliseconds.
- Customers are able to increase the level of concurrency during times of high demand and lower it, or turn it off completely, when demand decreases.
AWS Step Functions provides serverless orchestration for modern applications. Orchestration centrally manages a workflow by breaking it into multiple steps, adding flow logic, and tracking the inputs and outputs between the steps. Because AWS Lambda is serverless, customers can coordinate multiple Lambda functions for complex or long-running tasks by building workflows with AWS Step Functions.
- Step Functions maintains application state, tracking exactly which workflow step the application is in, and stores an event log of data that is passed between application components. That means that if networks fail or components hang, the application can pick up right where it left off.
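A Step Functions workflow is described in Amazon States Language (ASL), a JSON document of states and transitions. The sketch below coordinates two Lambda functions; the function ARNs are placeholders, and the retry policy is shown only to illustrate how Step Functions handles component failures.

```python
import json

# A two-step workflow sketch in Amazon States Language. The output of
# ExtractData becomes the input of TransformData; if a task fails,
# Step Functions retries it per the Retry policy.
state_machine = {
    "Comment": "Coordinate two Lambda functions",
    "StartAt": "ExtractData",
    "States": {
        "ExtractData": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:extract",
            "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 2}],
            "Next": "TransformData",
        },
        "TransformData": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:transform",
            "End": True,
        },
    },
}

definition = json.dumps(state_machine)
```

The `definition` string is what would be passed when creating the state machine.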
AWS Lambda allows customers to add custom logic to AWS resources such as Amazon S3 buckets and Amazon DynamoDB tables, to run code in response to HTTP requests using Amazon API Gateway, or to invoke code using API calls made with AWS SDKs.
- It can be used to build data processing triggers for AWS services like Amazon S3 and Amazon DynamoDB, process streaming data stored in Amazon Kinesis, or create back ends that operate at AWS scale, performance, and security.
AWS Lambda manages everything needed to run customers' code on highly available, fault-tolerant infrastructure. Customers don't have to update the underlying OS when a patch is released, or worry about resizing or adding new servers as usage grows.
- AWS Lambda seamlessly deploys code, does all the administration, maintenance, and security patches, and provides built-in logging and monitoring through Amazon CloudWatch.
AWS Lambda invokes customers' code only when needed and automatically scales to support the rate of incoming requests without requiring any configuration. There is no limit to the number of requests the code can handle; it starts running within milliseconds of an event, and since Lambda scales automatically, performance remains consistently high as the frequency of events increases.
- Since the code is stateless, Lambda can start as many instances of it as needed without lengthy deployment and configuration delays.
Lambda@Edge is a feature of Amazon CloudFront that lets AWS customers run code closer to the users of their application, which improves performance and reduces latency. With Lambda@Edge, customers can run their code across AWS locations globally in response to Amazon CloudFront events, such as requests for content to or from origin servers and viewers. This makes it easier to deliver richer, more personalized content to end users with lower latency.
- Once customers upload their code to AWS Lambda, Lambda takes care of everything required to run and scale the code with high availability at an AWS location closest to their end users.
Customers can use AWS Identity and Access Management (IAM) to manage access to the Lambda API and resources like functions and layers. For users and applications in their account that use Lambda, customers manage permissions in a permissions policy that they can apply to IAM users, groups, or roles.
- A Lambda function has a policy called an execution role, which grants it permission to access AWS services and resources. At a minimum, the execution role needs access to Amazon CloudWatch Logs for log streaming.
- Lambda also uses the execution role to get permission to read from event sources when customers use an event source mapping to trigger their function.
- Using resource-based policies, AWS customers can give other accounts and AWS services permission to use their Lambda resources. Lambda resources include functions, versions, aliases, and layer versions.
Resource-based policies let customers grant usage permission to other accounts on a per-resource basis. A resource-based policy can also be used to allow an AWS service to invoke a function. Customers can grant an account permission to invoke or manage a function; to grant permissions to another AWS account, they specify the account ID as the principal.
- Resource-based policies let customers grant usage permission to other accounts on a per-resource basis, or allow an AWS service to invoke their function.
- Resource-based policies apply to a single function, version, alias, or layer version. They grant permission to one or more services and accounts.
- The resource-based policy grants permission for the other account to access the function, but doesn’t allow users in that account to exceed their permissions.
- Customers can grant an account permission to invoke or manage a function, and add multiple statements to grant access to multiple accounts, or let any account invoke their function.
- To limit access to a user, group, or role in another account, customers need to specify the full ARN of the identity as the principal.
- Customers are able to create one or more aliases for their AWS Lambda function. A Lambda alias is like a pointer to a specific Lambda function version. Users can access the function version using the alias ARN.
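A resource-based policy statement for cross-account access might look like the sketch below. The account ID, role name, and function ARN are placeholder values; the statement grants a specific role in another account (identified by its full ARN, as described above) permission to invoke the function.

```python
# Illustrative resource-based policy statement. The ARNs and IDs are
# placeholders, not real accounts or functions.
statement = {
    "Sid": "CrossAccountInvoke",
    "Effect": "Allow",
    # Full ARN of the identity in the other account, used as principal
    # to limit access to that one role rather than the whole account.
    "Principal": {"AWS": "arn:aws:iam::210987654321:role/deploy-role"},
    "Action": "lambda:InvokeFunction",
    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
}
```

Using the whole account ID (`210987654321`) as the principal instead would let any identity in that account with matching IAM permissions invoke the function.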
An AWS Lambda function’s execution role grants it permission to access AWS services and resources. AWS customers provide this role when they create a function, and Lambda assumes the role when the function is invoked.
- Customers can create an execution role for development that has permission to send logs to Amazon CloudWatch and upload trace data to AWS X-Ray.
- AWS Lambda allows customers to add or remove permissions from a function's execution role at any time, or add permissions for any services that the function calls with the AWS SDK, and for services that Lambda uses to enable optional features.
- Managed Policies for Lambda Features
  - AWSLambdaBasicExecutionRole – Permission to upload logs to CloudWatch.
  - AWSLambdaKinesisExecutionRole – Permission to read events from an Amazon Kinesis data stream or consumer.
  - AWSLambdaDynamoDBExecutionRole – Permission to read records from an Amazon DynamoDB stream.
  - AWSLambdaSQSQueueExecutionRole – Permission to read a message from an Amazon Simple Queue Service (Amazon SQS) queue.
  - AWSLambdaVPCAccessExecutionRole – Permission to manage elastic network interfaces to connect a function to a VPC.
  - AWSXRayDaemonWriteAccess – Permission to upload trace data to X-Ray.
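What lets Lambda assume an execution role at invocation time is the role's trust policy, sketched below; one of the managed policies above is then attached to the role to grant the actual permissions. The document structure is standard IAM JSON.

```python
# Trust policy allowing the Lambda service to assume the execution
# role. Attaching e.g. AWSLambdaBasicExecutionRole to the role then
# grants the permissions the function actually uses.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}
```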
Resources and Conditions
Each API action supports a combination of resource and condition types that varies depending on the behavior of the action. Every IAM policy statement grants permission to an action that's performed on a resource. When the action doesn't act on a named resource, or when permission is granted to perform the action on all resources, the value of the resource in the policy is a wildcard (*).
- Conditions are an optional policy element that applies additional logic to determine if an action is allowed. For common conditions supported by all actions, Lambda defines condition types that can be used to restrict the values of additional parameters on some actions.
- The Condition element (or Condition block) lets customers specify conditions for when a policy is in effect. The Condition element is optional. In the Condition element, customers build expressions in which they use condition operators (equal, less than, etc.) to match the condition keys and values in the policy against keys and values in the request context.
- Customers can use the Condition element of a JSON policy to test specific conditions against the request context.
- When a request is submitted, AWS evaluates each condition key in the policy, which returns a value of true, false, not present, or occasionally null (an empty data string).
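The sketch below shows a Condition element in an identity-based policy. The `aws:SecureTransport` global condition key is a standard IAM key; restricting `lambda:InvokeFunction` to TLS requests is an illustrative choice, not a Lambda-specific requirement.

```python
# Illustrative policy whose Condition element only allows the action
# when the request arrived over TLS (aws:SecureTransport is true).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "lambda:InvokeFunction",
            "Resource": "*",
            "Condition": {
                # Condition operator ("Bool") matched against the
                # request context at evaluation time.
                "Bool": {"aws:SecureTransport": "true"},
            },
        }
    ],
}
```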
Lambda provides managed policies that grant access to Lambda API actions and, in some cases, access to other services used to develop and manage Lambda resources. Lambda updates the managed policies as needed, to ensure that customers have access to new features when they're released. To grant users in their account access to Lambda, customers can use identity-based policies, which apply to users directly, or to groups and roles that are associated with a user. They can also grant users in another account permission to assume a role in the account and access their Lambda resources.
- AWSLambdaFullAccess – Grants full access to AWS Lambda actions and other services used to develop and maintain Lambda resources.
- AWSLambdaReadOnlyAccess – Grants read-only access to AWS Lambda resources.
- AWSLambdaRole – Grants permissions to invoke Lambda functions.
AWS customers can use cross-account roles to give accounts that they trust access to Lambda actions and resources. Using resource-based policies is a better option to grant permission to invoke a function or use a layer.
When an application is created in the AWS Lambda console, Lambda applies a permissions boundary to the application's IAM roles. The permissions boundary limits the scope of the execution role that the application's template creates for each of its functions, and of any roles that the customer adds to the template.
- The permissions boundary prevents users with write access to the application’s Git repository from escalating the application’s permissions beyond the scope of its own resources.
- The application templates in the Lambda console include a global property that applies a permissions boundary to all functions that they create.
- The role that AWS CloudFormation assumes to deploy the application enforces the use of the permissions boundary. That role only has permission to create and pass roles that have the application’s permissions boundary attached.
- An application’s permissions boundary enables functions to perform actions on the resources in the application.
- To access other resources or API actions, customers need to expand the permissions boundary to include those resources.
- Permissions boundary – Extend the application's permissions boundary when resources are added to the application, or when the execution role needs access to more actions.
- Execution role – Extend a function’s execution role when it needs to use additional actions.
- Deployment role – Extend the application’s deployment role when it needs additional permissions to create or configure resources.
AWS Batch can be integrated with commercial and open-source workflow engines and languages such as Pegasus WMS, Luigi, Nextflow, Metaflow, Apache Airflow, and AWS Step Functions.
When a function is invoked, Lambda allocates an instance of it to process the event. When the function code finishes running, that instance can handle another request. If the function is invoked again while a request is still being processed, another instance is allocated, which increases the function's concurrency. Concurrency is the number of requests that a function is serving at any given time.
- Basic function settings include the description, role, and runtime that customers specify when they create a function in the Lambda console.
- Concurrency is subject to a Regional limit that is shared by all functions in a Region. To ensure that a function can always reach a certain level of concurrency, the customer needs to configure the function with reserved concurrency.
- When a function has reserved concurrency, no other function can use that concurrency.
- When Lambda allocates an instance of a function, the runtime loads the function's code and runs initialization code that customers define outside of the handler.
- Lambda integrates with Application Auto Scaling, so customers are able to configure Application Auto Scaling to manage provisioned concurrency on a schedule or based on utilization.
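The arithmetic behind reserved concurrency can be sketched as follows. The default Regional account limit of 1,000 concurrent executions is a documented starting value (it can be raised via a quota increase); the reservation names and amounts are illustrative.

```python
# Concurrency is shared Region-wide; reserving concurrency for one
# function removes it from the pool available to all other functions.
REGIONAL_LIMIT = 1000  # default account limit, adjustable via quota increase

def unreserved_concurrency(reservations):
    """Concurrency remaining for functions with no reservation.

    reservations maps function name -> reserved concurrency.
    """
    return REGIONAL_LIMIT - sum(reservations.values())

# Hypothetical reservations for two functions:
pool = unreserved_concurrency({"critical-fn": 200, "batch-fn": 100})
```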
A Lambda alias is a pointer to a specific Lambda function version. Customers can create one or more aliases for their Lambda function, and users can access the function version using the alias ARN.
- Each alias has a unique ARN. An alias can only point to a function version, not to another alias.
- Event sources like Amazon S3 invoke the Lambda function. These event sources maintain a mapping that identifies the function to invoke when events occur.
- When using a resource-based policy to give a service, resource, or account access to a function, the scope of that permission depends on whether it was applied to an alias, to a version, or to the function.
Using a routing configuration on an alias, AWS customers can send a portion of traffic to a second function version.
- By configuring the alias to send most of the traffic to the existing version, and only a small percentage of traffic to the new version, customers can reduce the risk of deploying a new version.
When configuring traffic weights between two function versions, there are two ways to determine which Lambda function version has been invoked:
- CloudWatch Logs – Lambda automatically emits a START log entry that contains the invoked version ID to CloudWatch Logs for every function invocation.
- Response payload (synchronous invocations) – Responses to synchronous function invocations include an x-amz-executed-version header to indicate which function version has been invoked.
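Weighted alias routing can be reasoned about with the small sketch below. Lambda's actual selection mechanism is internal; here `roll` stands in for a random draw per invocation, and the version labels and 10% canary weight are illustrative.

```python
def routed_version(stable, canary, canary_weight, roll):
    """Pick a version for one invocation.

    canary_weight is the fraction of traffic sent to the new version;
    roll is a stand-in for a per-invocation random draw in [0, 1).
    """
    return canary if roll < canary_weight else stable

# With a 10% weight on version "2", roughly 1 in 10 invocations
# lands on the canary and the rest stay on the stable version "1".
v = routed_version(stable="1", canary="2", canary_weight=0.1, roll=0.05)
```

Raising `canary_weight` gradually toward 1.0, then repointing the alias, is the usual shape of a weighted rollout.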
Using Amazon Virtual Private Cloud (Amazon VPC), customers can create a private network for resources such as databases, cache instances, or internal services. They can configure a function to connect to private subnets in a virtual private cloud (VPC) in their account.
- When a function is connected to a VPC, Lambda creates an elastic network interface for each combination of security group and subnet in the function's VPC configuration.
- Multiple functions connected to the same subnets share network interfaces, so connecting additional functions to a subnet that already has a Lambda-managed network interface is much quicker.
- If a function stays inactive for a long period of time, Lambda reclaims its network interfaces and the function becomes idle. Invoking an idle function reactivates it.
Internet access from a private subnet requires network address translation (NAT). To give a function access to the internet, route its outbound traffic to a NAT gateway in a public subnet. The NAT gateway has a public IP address and can connect to the internet through the VPC's internet gateway.
Lambda Environment Variables
Environment variables can be used to store secrets securely and adjust a function's behavior without updating code. An environment variable is a pair of strings that is stored in a function's version-specific configuration.
- The Lambda runtime makes environment variables available to customers' code and sets additional environment variables that contain information about the function and invocation request.
- By specifying a key and value, AWS customers can set environment variables on the unpublished version of their function. When they publish a version, the environment variables are locked for that version along with other version-specific configurations.
- Lambda stores environment variables securely by encrypting them at rest. Customers can configure Lambda to use a different encryption key, encrypt environment variable values on the client side, or set environment variables in an AWS CloudFormation template with AWS Secrets Manager.
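Inside the function, environment variables are read through the ordinary process environment. In the sketch below, `TABLE_NAME` is an assumed variable set in the function's configuration; the fallback value exists only to support running the handler locally.

```python
import os

def handler(event, context):
    # TABLE_NAME is assumed to be set in the function configuration;
    # the fallback makes local testing possible without any setup.
    table = os.environ.get("TABLE_NAME", "dev-table")
    return {"table": table}
```

Changing the variable in the function's configuration redirects the handler (for example, from a dev table to a production one) without any code change or redeployment.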
Lambda function versions are used to manage the deployment of AWS Lambda functions. Customers can publish a new version of a function for beta testing without affecting users of the stable production version. Lambda creates a new version of a function each time the customer publishes it. The new version is a copy of the unpublished version of the function. A function version includes:
- The function code and all associated dependencies.
- The Lambda runtime that executes the function.
- All of the function settings, including the environment variables.
- A unique Amazon Resource Name (ARN) to identify this version of the function.
When a version is published, the code and most of the settings are locked to ensure a consistent experience for users of that version. An alias can point to a maximum of two Lambda function versions, and those versions need to meet the following criteria:
- Both versions must have the same IAM execution role.
- Both versions must have the same dead-letter queue configuration, or no dead-letter queue configuration.
- Both versions must be published. The alias cannot point to $LATEST.
A layer is a ZIP archive that contains libraries, a custom runtime, or other dependencies. Customers can configure a Lambda function to pull in additional code and content in the form of layers. Layers allow customers to keep their deployment package small, which makes development easier.
- For Node.js, Python, and Ruby functions, customers can develop their function code in the Lambda console as long as they keep their deployment package under 3 MB.
- A function can use up to 5 layers at a time. The total unzipped size of the function and all layers may not exceed the unzipped deployment package size limit of 250 MB.
- Customers can create their own layers, or use layers published by AWS and other AWS customers. Layers support resource-based policies for granting layer usage permissions to specific AWS accounts, AWS Organizations, or all accounts.
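The documented layer limits (at most 5 layers per function, 250 MB total unzipped size for the function plus all layers) can be checked with a small pre-flight sketch; the function name and sizes below are illustrative.

```python
# Documented Lambda limits: up to 5 layers per function, and a 250 MB
# cap on the total unzipped size of the function plus all its layers.
MAX_LAYERS = 5
MAX_UNZIPPED_MB = 250

def within_layer_limits(function_mb, layer_mbs):
    """Return True if a deployment respects both layer limits."""
    if len(layer_mbs) > MAX_LAYERS:
        return False
    return function_mb + sum(layer_mbs) <= MAX_UNZIPPED_MB

# A 50 MB function with three 40 MB layers fits comfortably:
ok = within_layer_limits(50, [40, 40, 40])
```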
Lambda Database Access
Amazon RDS Proxy helps applications pool and share database connections, improves scalability, and makes databases more resilient to failures by automatically connecting to a standby DB instance while preserving application connections.
- A database proxy manages a pool of database connections and relays queries from a function. This enables a function to reach high concurrency levels without exhausting database connections.
- Using the Lambda console, customers can create an Amazon Relational Database Service (Amazon RDS) database proxy for their function.
- RDS Proxy also allows customers to enforce AWS IAM (Identity and Access Management) authentication to databases, and securely store credentials in Secrets Manager. RDS Proxy is fully compatible with MySQL and can be enabled for most applications with no code change.
Job queues are generally mapped to one or more compute environments. The compute environments contain the Amazon ECS container instances that are used to run containerized batch jobs. Within a job queue, the associated compute environments each have an order that is used by the scheduler to determine where to place jobs that are ready to be executed.
- If the first compute environment has free resources, then the job is scheduled to a container instance within that compute environment.
- If the compute environment is unable to provide a suitable compute resource, the scheduler attempts to run the job on the next compute environment.
Unmanaged Compute Environments
In an unmanaged compute environment, customers are responsible for managing their own compute resources.
- Customers need to make sure that the AMI in use for their compute resources meets the Amazon ECS container instance AMI specification.
- Once the unmanaged compute environment is created, customers can use the DescribeComputeEnvironments API operation to view the compute environment details.
- Find the Amazon ECS cluster that is associated with the environment, and then manually launch container instances into that Amazon ECS cluster.
Managed Compute Environments
Managed compute environments allow customers to describe their business requirements. In a managed compute environment, AWS Batch manages the capacity and instance types of the compute resources within the environment, based on the compute resource specification that they define when they create the compute environment.
- AWS customers have two choices for Amazon EC2 capacity: On-Demand Instances or Spot Instances.
- Managed compute environments launch Amazon ECS container instances into the VPC and subnets that customers specify when they create the compute environment.
AWS Elastic Beanstalk
AWS Elastic Beanstalk is an orchestration service offered by Amazon Web Services for deploying applications. It orchestrates various AWS services, including Amazon EC2, Amazon S3, Amazon Simple Notification Service, Amazon CloudWatch, Auto Scaling, and Elastic Load Balancing. An Elastic Beanstalk application is a logical collection of Elastic Beanstalk components, including environments, versions, and environment configurations. In Elastic Beanstalk, an application is conceptually similar to a folder.
- Elastic Beanstalk enables customers to quickly deploy and manage applications in the AWS Cloud. Once customers upload their application, Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring.
- It supports web applications written in many popular languages and frameworks: Java, .NET, Node.js, PHP, Ruby, Python, Go, and Docker are the deployment options that customers have.
- Customers can interact with Elastic Beanstalk using three different methods: the Elastic Beanstalk console, the AWS Command Line Interface (AWS CLI), or eb, a high-level CLI designed specifically for Elastic Beanstalk.
AWS Elastic Beanstalk allows customers to deploy their code through the AWS Management Console, Elastic Beanstalk Command Line Interface, Visual Studio, and Eclipse.
- The deployment policy they choose lets them balance deployment speed against safety while reducing the administrative burden.
AWS Elastic Beanstalk leverages Elastic Load Balancing and Auto Scaling to automatically scale customers' applications in and out based on each application's specific needs.
- Running in more than one Availability Zone improves application reliability and availability.
- There is no additional charge for Elastic Beanstalk. You pay only for the underlying AWS resources that your application consumes.
- Most deployment tasks, such as changing the size of the fleet of Amazon EC2 instances or monitoring an application, can be performed directly from the Elastic Beanstalk web interface (console).
AWS Elastic Beanstalk provides a unified user interface to monitor and manage the health of customers' applications.
- The Elastic Beanstalk Health Dashboard allows customers to visualize overall application health and customize application health checks, health permissions, and health reporting in one unified interface.
- Elastic Beanstalk is integrated with Amazon CloudWatch and AWS X-Ray, so the monitoring dashboard can be leveraged to view key performance metrics such as latency, CPU utilization, and response codes.
Customers who use AWS Elastic Beanstalk have the freedom to select the AWS resources, such as the Amazon EC2 instance type (including Spot Instances), that are optimal for their application.
- Elastic Beanstalk allows customers to “open the hood” and retain full control over the AWS resources powering their application.
Elastic Beanstalk Components
The type of instance that clients specify determines the hardware of the host computer used for their instance. Each instance type offers different compute, memory, and storage capabilities and is grouped into an instance family based on these capabilities. Each instance type provides higher or lower minimum performance from a shared resource.
In Elastic Beanstalk, an application version refers to a specific, labeled iteration of deployable code for a web application. An application version points to an Amazon Simple Storage Service (Amazon S3) object that contains the deployable code, such as a Java WAR file.
- An application can have many versions, and each application version is unique.
- In a running environment, customers have the ability to deploy any application version they already uploaded to the application.
When launching an Elastic Beanstalk environment, AWS customers need to choose an environment tier. The environment tier designates the type of application that the environment runs, and determines what resources Elastic Beanstalk provisions to support it.
- An application that serves HTTP requests runs in a web server environment tier.
- An environment that pulls tasks from an Amazon Simple Queue Service (Amazon SQS) queue runs in a worker environment tier.
A saved configuration is a template that AWS clients can use as a starting point for creating unique environment configurations.
- Customers can create and modify saved configurations, and apply them to environments, using the Elastic Beanstalk console, EB CLI, AWS CLI, or API.
- The API and the AWS CLI refer to saved configurations as configuration templates.
An Elastic Beanstalk application is a logical collection of Elastic Beanstalk components, including environments, versions, and environment configurations. The Elastic Beanstalk application serves as a container for the environments that run web applications, and for the versions of the web app's source code, saved configurations, logs, and other artifacts that customers create while using Elastic Beanstalk.
- AWS Elastic Beanstalk enables customers to manage all of the resources that run their application as environments. Conceptually, an application is similar to a folder.
An environment is a collection of AWS resources running an application version. Each environment runs only one application version at a time; however, customers have the option to run the same application version or different application versions in many environments simultaneously.
- AWS customers can create and manage separate environments for development, testing, and production use, and can deploy any version of their application to any environment.
- Environments can be long-running or temporary. For long-running workloads, customers can launch worker environments that process jobs from an Amazon Simple Queue Service (Amazon SQS) queue.
A platform is a combination of an operating system, programming language runtime, web server, application server, and Elastic Beanstalk components.
- AWS customers are able to design and target their web application to a platform.
- Elastic Beanstalk provides a variety of platforms that applications can build on.
Lightsail is an easy-to-use cloud platform that provides developers compute, storage, and networking capacity and capabilities to deploy and manage websites, web applications, and databases in the cloud. Lightsail includes everything customers need to launch their project quickly – a virtual machine, a managed database, SSD-based storage, data transfer, DNS management, and a static IP.
- AWS Lightsail scales out applications or websites over time and improves their availability and redundancy through the addition of other Lightsail resources, like load balancers, attached block storage, and managed databases.
- Lightsail is ideal for simpler workloads, quick deployments, and getting started on AWS. It’s designed to start small, and then scale to grow. As their project grows, customers can use load balancers and attached block storage with their instance to increase redundancy and uptime and access dozens of other AWS services to add new capabilities.
- Customers can create preconfigured virtual private instances that include everything needed to easily deploy and manage their application, or create databases for which the security and health of the underlying infrastructure and operating system are managed by Lightsail.
- Using Lightsail AWS customers can run websites, web applications, business software, blogs, e-commerce sites, and more.
Lightsail offers virtual instances that are easy to set up and backed by the power and reliability of AWS.
- Lightsail enables customers to click-to-launch a simple operating system (OS), a pre-configured application, or a development stack – such as WordPress, Windows, Plesk, LAMP, Nginx, and more.
Lightsail managed databases enable customers to scale their databases independently of their virtual servers, improve the availability of their applications, or run standalone databases in the cloud.
- Customers can deploy multi-tiered applications, all within Lightsail, by creating multiple instances that are connected to a central managed database, and a load balancer that directs traffic to the instances.
- Lightsail managed database plans bundle together memory, processing, storage, and transfer allowance into a single, predictable monthly price.
Lightsail’s simplified load balancing routes web traffic across instances so that customers’ websites and applications can accommodate variations in traffic, be better protected from outages, and deliver a seamless experience to their visitors.
- Lightsail load balancers include integrated certificate management, providing free SSL/TLS certificates that can be provisioned and added to a load balancer in just a few clicks.
- Customers can request and manage certificates directly from the Lightsail console – and AWS manages renewals on their behalf.
Amazon Lightsail uses a focused set of features like instances, managed databases and load balancers to make it easier to get started.
- Customers can integrate their Lightsail project with some of the 90+ other services in AWS through Amazon VPC peering.
- Customers are able to manage the services in AWS using the AWS management console, while still keeping their day-to-day management in the Lightsail console.
Lightsail instances are specifically engineered for web servers, developer environments, and small database use cases. Such workloads don’t use the full CPU often or consistently, but occasionally need a performance burst. Lightsail uses burstable performance instances that provide a baseline level of CPU performance with the additional ability to burst above the baseline.
- This design enables customers to get the performance they need, when they need it, while protecting them from the variable performance or other common side effects that they might typically experience from over-subscription in other environments.
Lightsail offers a 1-click secure connection to a customer’s instance terminal right from the browser. It supports SSH access for Linux/Unix-based instances and RDP access for Windows-based instances.
- To use 1-click connections, customers need only launch the instance management screen, click Connect using SSH or Connect using RDP, and a new browser window opens and automatically connects to their instance.
- For those who prefer to connect to Linux/Unix-based instances using their own client, Lightsail does the SSH key storing and management work for them, and provides a secure key to use in their SSH client.
Lightsail IP Addresses
Each Lightsail instance automatically gets a private IP address and a public IP address. AWS Lightsail customers can use the private IP to transmit data between Lightsail instances and AWS resources privately, and they can use the public IP to connect to their instance from the Internet through a registered domain name or through an SSH or RDP connection from their local computer.
- Customers are able to attach a static IP to an instance, which substitutes the public IP with an IP address that doesn’t change even if the instance is stopped and started.
- A static IP is a fixed public IP address dedicated to the customer’s Lightsail account; assigning it to an instance replaces the instance’s public IP.
Supported operating systems
Lightsail offers a range of operating systems and application templates that are automatically installed when a new Lightsail instance is created. The Application templates include WordPress, Drupal, Joomla!, Ghost, Magento, Redmine, LAMP, Nginx (LEMP), MEAN, Node.js, Django, and more.
- Customers are able to install additional software on their instances by using the in-browser SSH or their own SSH client.