Introduction to Cloud Computing
The term “cloud computing” was first used by University of Texas professor Ramnath Chellappa in a 1997 talk on a “new computing paradigm”. Although the term may have been used a year earlier at Compaq (now HP), its roots lie in older ideas coupled with new business, technical and social perspectives. The concept of “time sharing” from the 1950s, where multiple users share access to data and CPU time, is the premise of cloud computing. In 1969 J.C.R. Licklider, whose vision was for everyone to be interconnected and able to access programs and data at any site, from anywhere in the world, helped develop the Advanced Research Projects Agency Network (ARPANET), the network that became the basis of the internet. The VM operating system released by IBM in the 1970s allowed administrators to run multiple virtual systems, or “Virtual Machines” (VMs), on a single physical node. Most of the basic functions of the virtualisation software we see nowadays can be traced back to this early VM operating system.
Cloud computing is often associated with virtualized infrastructure or hardware on demand, utility computing, IT outsourcing, platform and software as a service, and many other things that are now the focus of the IT industry. Cloud computing delivers IT applications to customers by enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources. The major driver of cloud computing is the underlying virtualization technology, which enables computers to run multiple operating systems and applications on the same hardware simultaneously, allowing more efficient use of resources.
Cloud computing provides many different services: it is used not only through ‘Software-as-a-Service’ models but also through utility pricing models known as ‘Infrastructure-as-a-Service’ and ‘Platform-as-a-Service’. Infrastructure-as-a-Service is built on VM technology, allowing “multiple tenants” to live on the same server and spreading the costs across customers. Using cloud computing, organizations can use shared resources instead of building and maintaining their own infrastructure. Cloud computing also allows for flexible and elastic delivery of IT capabilities using Internet technologies. The earliest modern cloud computing services came from lead adopters such as Salesforce.com and Amazon Web Services, but in recent years it has become one of the most popular business models for large and small technology companies such as Apple, Microsoft, IBM, HP, Google, Rackspace, Uber, Lyft and so on.
Cloud infrastructure providers also offer application services in the form of Platform-as-a-Service, analytics, mobile services, deployment and management services, and even marketplaces for third parties to offer products and services on top of their infrastructure. These providers are able to offer such services by hosting them on the core building blocks of their infrastructure and creating tools that give developers easy access to common applications. Because of this, “cloud computing” describes not only the core building blocks but also a very broad set of services built on top of the infrastructure, ranging from platform services to user-facing applications.
One of the very first milestones of cloud computing was the arrival of Salesforce.com in 1999, which pioneered the concept of delivering enterprise applications via a simple website. Rather than charging customers to buy a software license upfront, Salesforce delivered its software through a monthly subscription model. This model, known as Software-as-a-Service (SaaS), made it possible to access remotely hosted software through a simple website that could scale depending on usage. As a result these companies became procurators of cloud computing to their customers, but did not offer actual infrastructure that could be used for a generic purpose.
The next development was the entrance of Amazon Web Services (AWS) into the cloud market in 2002, providing a suite of cloud-based services. The cloud market also started to diversify into two large segments: managed and unmanaged cloud. A managed cloud is one where the provider supports the underlying infrastructure by offering monitoring, troubleshooting and around-the-clock customer service. In an unmanaged cloud, the infrastructure is self-service, and in case of a failure it is the responsibility of the customer to have mechanisms in place to restore their operations. The wide adoption of as-a-Service models has given birth to a broad range of services that follow the utility-based approach of Infrastructure-as-a-Service and borrow two of its main aspects: the ability to scale gracefully without managing the underlying infrastructure, and flexible pricing through subscriptions or pay-per-use models.
The next major milestone for cloud computing was the launch of Elastic Cloud Compute (EC2), developed by Amazon Web Services, which later became the blueprint for Infrastructure-as-a-Service (IaaS). IaaS enables individuals and small businesses to rent servers (instances) on which they run their applications, paying only for the resources they use by the hour, with the promise of being able to scale in size at any given point with virtually unlimited capacity; this is also known as pay-as-you-go. The capability was already supported internally by Amazon through the use of virtualization. Traditionally, by contrast, such services incurred capital expenditures, with large initial investments followed by ongoing maintenance fees.
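The difference between pay-as-you-go and upfront capital expenditure is easy to see with a little arithmetic. The sketch below uses invented numbers (the hourly rate and server price are illustrative, not actual AWS prices):

```python
# Hypothetical rates -- illustrative only, not actual AWS prices.
HOURLY_RATE = 0.10      # $ per instance-hour
SERVER_CAPEX = 2000.00  # upfront cost of buying one physical server

def pay_as_you_go_cost(instance_hours: float) -> float:
    """Cost under the hourly, pay-as-you-go model: you pay only
    for the hours an instance actually runs."""
    return instance_hours * HOURLY_RATE

# A workload that runs 8 hours a day for 30 days on one instance:
hours = 8 * 30
cloud_cost = pay_as_you_go_cost(hours)  # 240 h * $0.10 = $24.00
print(f"cloud: ${cloud_cost:.2f} vs capex: ${SERVER_CAPEX:.2f}")
```

For intermittent workloads like this one, the hourly model is far cheaper than buying hardware that sits idle two-thirds of the day.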
The other major milestone came in 2009, when Google, Microsoft and other major companies started to offer browser-based enterprise applications through services such as Google Apps and Microsoft Azure. Most experts seem to agree that cloud computing will ultimately transform today’s computing landscape forever. Since Amazon had an early start in the IaaS market, and was virtually alone in that market for some time, it holds the largest portion of cloud computing market share. Not long after, however, smaller companies such as Rackspace entered with similar product offerings, and eventually Microsoft and Google caught up in 2010 and 2012 respectively.
Migration to the Cloud
The cloud Total Addressable Market is expanding rapidly due to a mass migration from traditional data center solutions to the flexible systems that the cloud is best suited to offer. IaaS companies have started offering products well beyond commoditized infrastructure, in the form of Platform-as-a-Service, FaaS (Functions-as-a-Service), and other offerings higher in the cloud pyramid. Because of these factors, cloud adoption is becoming a socio-economic issue. As this technology advances faster than ever before, the gap between cloud users and non-users is getting bigger and wider.
The number of companies, large and small, that have established cloud computing as part of their IT strategy is growing daily. Nevertheless, a major share of companies do not use the cloud, preferring to stick strictly to existing on-premise models. Companies that have worked out their cloud strategies have slowly started to scale up and run large workloads on the new infrastructure, while their conservative peers are in danger of being left behind by this emerging technology. The retrogressive approach they are taking to the cloud could soon come back to haunt them: if the economy is largely dependent on digital infrastructure, companies that do not adapt to the cloud quickly could well find themselves excluded. At least four out of five companies are on the way to using cloud computing as an integral part of their IT, or are already using it.
The major leaders of cloud computing, including Amazon, Microsoft, and Google, started out by providing a combination of raw cloud infrastructure and core building blocks such as compute, storage and networking to their customers. These companies then quickly shifted their focus towards higher levels of the cloud pyramid. Although the main drivers for this shift are the ability to lock customers in to their platforms and to generate larger profit margins, Amazon has oriented its priorities toward customer satisfaction, pressuring the other large competitors in the space by continuously lowering prices while improving the core building blocks in an effort to drive growth in a rapidly expanding market.
Cloud providers like Amazon Web Services (AWS) can charge for their services either through utility-pricing plans or, also commonly, through subscription models. Since this model usually offers products and services at much lower prices than traditional models in the software industry, it is changing how service is rendered to the customer. Cloud computing providers have accelerated the rate at which new products and startups are created around the world. The other major benefit of cloud computing services is that they reduce the investment required to run data centers and to retain technical staff to maintain and support the infrastructure.
Amazon Web Services (AWS)
Amazon Web Services (AWS) is one of the leading cloud computing service providers in the market today. It was officially launched in 2006 at the Massachusetts Institute of Technology Emerging Technologies (MITET) conference by Amazon CEO Jeff Bezos. The very first service AWS offered to the public was Elastic Cloud Compute (EC2). The forefather of Elastic Cloud Compute at Amazon was Chris Pinkham, an engineer in charge of Amazon’s global infrastructure at the time. Since then, EC2 has grown to become the cornerstone of Amazon’s Web Services product offering and has seen exponential growth in the space. AWS has also launched a vast lineup of products around the Web Services division, such as Simple Storage Service, databases, and much more. Amazon by far has the largest and most diverse product and service offering of all cloud IaaS providers.
The new product announcements made by AWS in 2014 were targeted towards developers, specifically around tooling, source code management, and platform services. The shift shows that the company is looking for new areas to expand its growth while driving higher-margin revenue sources and ensuring lock-in to the platform. One of the main strengths of AWS is the rapid delivery of features and services to its customers. AWS offers 165 services, more than any other cloud platform. Almost all of the services offered by AWS have in-depth functionality, allowing for faster, easier, more cost-effective application building and migration. For instance, Amazon EKS (managed Kubernetes), Fargate, and Elastic Container Service provide different ways of running containers.
Another example is Amazon Elastic Compute Cloud (Amazon EC2), which offers a wide variety of “instance types” that cater to different use cases and balance memory, cost, and workload-handling speed. With AWS, individuals and businesses can:
- Host websites on cloud-based servers (instances).
- Safely store files using different storage models and access them from any location.
- Store information in managed databases like RDS, DynamoDB, Redshift, QLDB and ElastiCache.
- Use the content delivery network (CDN) on AWS to deliver content to any place in the world.
Common AWS Terms
- AWS IoT:- AWS IoT is a managed cloud service that enables connected devices to securely interact with cloud applications and other devices.
- Certificate Manager:- AWS Certificate Manager allows customers to provision, manage, and deploy Secure Sockets Layer (SSL) and Transport Layer Security (TLS) certificates for use with AWS services.
- CloudFormation:- AWS CloudFormation enables customers to create and update a collection of related AWS resources in a predictable fashion.
- CloudFront:- Amazon CloudFront enables customers to distribute content to end-users with low latency and high data transfer speeds.
- CloudSearch:- AWS CloudSearch is a fully managed AWS search service, which is mainly used for websites and applications.
- CloudTrail:- AWS CloudTrail helps customers to maximize the visibility into user activity by recording API calls made on their account.
- Data Pipeline:- AWS Data Pipeline is a lightweight orchestration service for periodic, data-driven workflows.
- DMS:- AWS Database Migration Service (DMS) allows customers to migrate databases to the cloud easily and securely while minimizing downtime.
- DynamoDB:- Amazon DynamoDB is a scalable NoSQL data store that manages distributed replicas of AWS customers’ data with high availability.
- EC2:- Amazon Elastic Compute Cloud (EC2) was the very first product AWS offered to its customers; it provides resizable compute capacity in the cloud.
- EC2 Container Service:- Amazon ECS allows clients to run and manage Docker containers across a cluster of Amazon EC2 instances.
- Elastic Beanstalk: AWS Elastic Beanstalk is used as an application container for deploying and managing applications.
- ElastiCache:- Amazon ElastiCache improves application performance by allowing AWS customers to retrieve information from an in-memory caching system.
- Elastic File System:- Amazon Elastic File System (Amazon EFS) is a file storage service for Amazon EC2 servers or instances.
- Elasticsearch Service:- Amazon Elasticsearch Service is a managed service that helps AWS clients deploy, operate, and scale Elasticsearch, a popular open-source search and analytics engine.
- Elastic Transcoder:- Amazon Elastic Transcoder enables customers to convert their media files in the cloud easily, at low cost, and at scale.
- EMR:- Amazon Elastic MapReduce allows AWS customers to perform big data tasks such as web indexing, data mining, and log file analysis.
- Glacier:- Amazon Glacier is one of the low-cost AWS storage services that provides secure and durable storage for data archiving and backup.
- IAM:- AWS Identity and Access Management (IAM) enables clients to securely control access to their AWS services and resources.
- Inspector:- Amazon Inspector allows AWS customers to analyze the behavior of the applications they run on AWS and helps them identify potential security issues.
- Kinesis:- Amazon Kinesis helps customers work with real-time streaming data in the AWS cloud without any hassle.
- Lambda:- AWS Lambda is a service that runs customers’ code in response to events and automatically manages the compute resources for them.
- Machine Learning:- Amazon Machine Learning is a service that enables customers to easily build smart applications.
- OpsWorks:- AWS OpsWorks is an AWS DevOps platform that manages applications of any scale or complexity on the AWS cloud.
- RDS:- Amazon Relational Database Service (RDS) helps customers to easily set up, operate, and scale familiar relational databases in the cloud.
Cloud computing is classified either on the basis of deployment location or on the basis of the service the cloud provider offers.
- Public:- In this case the entire computing infrastructure is located on the premises of a cloud computing company that offers the cloud service.
- Private:- In the case of Private (On-premise) Hosting, the whole customer computing infrastructure is built by the users themselves and is not shared. As a result the security and control level is highest while using a private network.
- Hybrid:- Hybrid hosting uses both private and public clouds, depending on purpose. Clients can host their most important applications on their own servers/instances to keep them more secure, and secondary applications elsewhere.
- Community Cloud:- A community cloud is shared between organizations that have a common goal or that serve a specific community.
Cloud as a Service
- Infrastructure as a Service (IaaS):- IaaS is instant computing infrastructure, provisioned and managed over the internet. Customers can rent IT infrastructure such as instances or VMs from a cloud provider on a pay-as-you-go basis.
- IaaS quickly scales up and down with demand, which allows customers to pay only for what they use.
- Platform as a Service (PaaS):- PaaS is a complete development and deployment environment in the cloud, with resources that enable customers to deliver everything from simple cloud-based apps to sophisticated, cloud-enabled enterprise applications.
- The major purpose of PaaS is to create web or mobile apps, without setting up or managing the underlying infrastructure of servers/instances, storage, network and databases needed for development.
- SaaS (Software-as-a-Service):- SaaS is a licensing model in which software is delivered to users on a subscription basis and is centrally hosted. The provider hosts and manages the software application and underlying infrastructure, handling maintenance such as software upgrades and security patching.
- FaaS (Functions-as-a-Service):- Function as a Service (FaaS) is a cloud computing service that provides a platform enabling customers to develop, run, and manage application functionality without the complexity of building and maintaining the infrastructure. Using this model, clients can achieve a “serverless” architecture. FaaS is typically used when building microservices applications.
- Instead of handling the hassles of virtual servers, containers, and application runtimes, developers upload narrowly functional blocks of code and set them to be triggered by a certain event. FaaS applications consume no IaaS resources until an event occurs, reducing pay-per-use fees.
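A “narrowly functional block of code” triggered by an event can be sketched in plain Python. The `handler(event, context)` signature below mirrors the convention used by FaaS platforms such as AWS Lambda, but the event shape and field names here are invented for illustration:

```python
import json

def handler(event, context=None):
    """A minimal FaaS-style handler: it receives an event (e.g. an
    HTTP request or a storage notification), does one narrowly
    scoped piece of work, and returns a response. The platform, not
    the developer, decides when and where this code runs."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Simulating the platform invoking the function on an event:
print(handler({"name": "cloud"}))
```

Because the function holds no state between invocations, the platform can spin up as many copies as incoming events require and charge only for the time each invocation runs.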
The AWS Well-Architected Framework was developed to help cloud architects build the most secure, high-performance, resilient, and efficient infrastructure possible for their applications. The Framework provides a consistent approach for customers and partners to evaluate architectures, along with guidance to help implement designs that will scale with application needs over time. The Well-Architected Framework is based on five pillars:
- Operational excellence
- Security
- Reliability
- Performance efficiency
- Cost optimization
1. Operational Excellence
The ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures.
- Managing and automating changes, responding to events, and defining standards to successfully manage daily operations.
- Perform operations as code: In the cloud, customers can apply the same engineering discipline that they use for application code to their entire environment.
- Annotate documentation: In an on-premises environment, documentation is created by hand, used by people, and hard to keep in sync with the pace of change.
- Make frequent, small, reversible changes: Design workloads to allow components to be updated regularly.
- Refine operations procedures frequently: As customers use operations procedures, they should look for opportunities to improve them.
- Anticipate failure: Perform “pre-mortem” exercises to identify potential sources of failure so that they can be removed or mitigated.
- Learn from all operational failures: Drive improvement through lessons learned from all operational events and failures.
2. Security
The ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.
- Implement a strong identity foundation: Implement the principle of least privilege and enforce separation of duties, with appropriate authorization for each interaction with customers’ AWS resources. (Identifying and managing who can do what with privilege management.)
- Enable traceability:– Monitor, alert, and audit actions and changes to customers’ environment in real-time. Integrate logs and metrics with systems to automatically respond and take action.
- Apply security at all layers:– Rather than focusing only on the protection of a single outer layer, apply a defense-in-depth approach with multiple security controls. (Protecting information and systems.)
- Automate security best practices:– Automated software-based security mechanisms improve user ability to securely scale more rapidly and cost-effectively. (Establishing controls to detect security events.)
- Protect data in transit and at rest:– Classify customer’s data into sensitivity levels and use mechanisms, such as encryption, tokenization, and access control where appropriate. (Confidentiality and integrity of data.)
- Keep people away from data:– Create mechanisms and tools to reduce or eliminate the need for direct access or manual processing of data.
3. Reliability
The ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues; in short, the ability to prevent failures and to recover from them quickly to meet business and customer demand. This includes foundational elements around setup, cross-project requirements, recovery planning, and how to handle change.
- Test recovery procedures:– In an on-premises environment, testing is often conducted to prove the system works in a particular scenario.
- Automatically recover from failure:– By monitoring a system for key performance indicators (KPIs), customers can trigger automation when a threshold is breached.
- Scale horizontally to increase aggregate system availability:– Replace one large resource with multiple small resources to reduce the impact of a single failure on the overall system.
- Stop guessing capacity:– A common cause of failure in on-premises systems is resource saturation, when the demands placed on a system exceed the capacity of that system.
- Manage change in automation:– Changes to customers’ infrastructure should be made using automation.
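The “automatically recover from failure” principle above boils down to watching a KPI and triggering an action when a threshold is breached. A minimal, provider-neutral sketch (the function names, KPI, and threshold values are illustrative, not an AWS API):

```python
def check_and_recover(kpi_value: float, threshold: float, recover) -> bool:
    """Trigger the recovery action when the KPI breaches its
    threshold. Returns True if recovery was triggered."""
    if kpi_value > threshold:
        recover()
        return True
    return False

actions = []
# An error rate of 7% breaches a 5% threshold, so recovery fires:
check_and_recover(0.07, 0.05, lambda: actions.append("replace-instance"))
# A healthy reading leaves the system alone:
check_and_recover(0.02, 0.05, lambda: actions.append("replace-instance"))
print(actions)  # ['replace-instance']
```

In a real deployment the monitoring service evaluates the KPI continuously and the recovery action might replace an unhealthy instance or roll back a change, but the decision logic is exactly this shape.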
4. Performance Efficiency
The ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve. This means selecting the right resource types and sizes based on workload requirements, monitoring performance, and making informed decisions to maintain efficiency as business needs evolve.
- Democratize advanced technologies:– Technologies that are difficult to implement can become easier to consume by pushing that knowledge and complexity into the cloud vendor’s domain.
- Go global in minutes:– Easily deploy user system in multiple Regions around the world with just a few clicks.
- Use serverless architecture:– In the cloud, serverless architecture removes the need for customers to run and maintain servers to carry out traditional compute activities.
- Experiment more often:– With virtual and automatable resources, customers can quickly carry out comparative testing using different types of instances, storage, or configurations.
- Mechanical sympathy:– Use the technology approach that aligns best with what customers are trying to achieve.
5. Cost Optimization
The ability to run systems that deliver business value at the lowest price point. This means understanding and controlling where money is being spent, selecting the most appropriate type and number of resources, analyzing spend over time, and scaling to meet business needs without overspending.
- Adopt a consumption model:– Pay only for the computing resources that customers require and increase or decrease usage depending on business requirements, not by using elaborate forecasting.
- Measure overall efficiency:– Measure the business output of the workload and the costs associated with delivering it.
- Stop spending money on data center operations:– AWS does the heavy lifting of racking, stacking, and powering servers, so customers can focus on their own customers and projects rather than on IT infrastructure.
- Analyze and attribute expenditure:– The cloud makes it easier to accurately identify the usage and cost of systems, which then allows transparent attribution of IT costs to individual workload owners.
- Use managed and application level services to reduce cost of ownership:– In the cloud, managed and application level services remove the operational burden of maintaining servers for tasks such as sending email or managing databases.
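The “measure overall efficiency” principle is essentially unit economics: divide the cost of delivering a workload by its business output. A toy sketch with invented figures:

```python
def cost_per_unit(total_cost: float, units_delivered: int) -> float:
    """Workload efficiency expressed as cost per unit of business
    output (e.g. dollars per request served)."""
    return total_cost / units_delivered

# Invented figures: a $1,200/month workload serving 400,000 requests.
print(f"${cost_per_unit(1200.0, 400_000):.4f} per request")  # $0.0030
```

Tracking this ratio over time shows whether optimizations (right-sizing, managed services, reserved capacity) are actually lowering the cost of each unit of value delivered.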
AWS Value Features
Amazon Web Services provides a way to acquire and use infrastructure on-demand, so that users pay only for what they consume. This puts more money back into the business, so that customers can innovate more, expand faster, and be better positioned to take advantage of new opportunities.
- Increase speed and agility:– With the cloud, customers can provision all the resources they need almost instantly, saving months of procurement time. In a similar fashion, if customers want to scale up their infrastructure, they don’t have to wait; they can do so instantly.
○ Increase Speed
- Elasticity:– Stop guessing about capacity; since the cloud is elastic, customers provision only the resources they need at any point in time.
○ Customers can scale capacity up and down as required with only a few minutes’ notice.
○ Scale on demand
○ Eliminate wasted capacity
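Scaling on demand reduces to recomputing the number of instances needed from the current load. A simplified sketch of that decision (the capacity figures and limits are made up, and real autoscalers add cooldowns and smoothing):

```python
import math

def desired_instances(current_load: float, capacity_per_instance: float,
                      min_instances: int = 1, max_instances: int = 20) -> int:
    """How many instances an elastic service should run for the
    current load, clamped to a configured floor and ceiling."""
    needed = math.ceil(current_load / capacity_per_instance)
    return max(min_instances, min(max_instances, needed))

# 900 requests/s at 100 requests/s per instance -> scale out to 9:
print(desired_instances(900, 100))  # 9
# Load falls off overnight -> scale back in to the floor:
print(desired_instances(30, 100))   # 1
```

The floor prevents scaling to zero when the service must stay warm, and the ceiling caps spend; everything between is provisioned only while the load demands it, which is what eliminates wasted capacity.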
- Flexibility:– With an operational expense model, users have zero up-front costs. As a result, they don’t have to think much before starting a new project. Even if a project does not go well, users can get rid of all its resources, paying only for their usage. The variable expense model facilitates innovation, since users can experiment as many times as they want.
- Trade capital expense for variable expense:– No upfront fee; customers pay only for the computing resources they consume.
- Auto Scaling:– A user of cloud computing benefits from massive economies of scale, since hundreds of thousands of customers are aggregated in the cloud.
- Benefit from massive economies of scale: By using cloud computing, customers can achieve a lower variable cost than on-premise servers.
- Data Center Saving :– With cloud computing users don’t have any overhead to manage the data center, and they can focus more on what the business needs.
- Stop spending money running and maintaining data centres:– Cloud computing lets customers focus on their own customers rather than on racking, stacking, and powering servers.
- Pace of Innovation:– AWS customers can use all new products and features instantly, whenever they are released. There is no need to upgrade or do anything special in order to use them; new features are available automatically.
- Going Global in a Minute:– With AWS, in just a few mouse clicks and a few minutes, customers can be ready to operate from a different region. Customers can deploy or host their applications in almost any part of the globe almost instantly.
- Pay-As-You-Go Pricing:– AWS does not require minimum spend commitments or long-term contracts. Users replace large upfront expenses with low variable payments that only apply to what they use. With AWS users are not bound to multi-year agreements or complicated licensing models.
- Tiered Pricing (Use More, Pay Less):- For storage and data transfer, AWS follows a tiered pricing model. The more storage and data transfer customers use, the less they pay per gigabyte. In addition, volume discounts and custom pricing are available to customers for high volume projects with unique requirements.
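The “use more, pay less” model can be sketched as a tiered calculator. The tier boundaries and per-GB rates below are invented for illustration and are not AWS’s actual price list:

```python
# (GB in tier, $ per GB) -- illustrative numbers, not real AWS rates.
TIERS = [
    (50_000, 0.023),        # first 50 TB
    (450_000, 0.022),       # next 450 TB
    (float("inf"), 0.021),  # everything above 500 TB
]

def tiered_cost(gb: float) -> float:
    """Total monthly cost: each tier charges its own per-GB rate,
    so the marginal price falls as usage grows."""
    cost, remaining = 0.0, gb
    for tier_size, rate in TIERS:
        used = min(remaining, tier_size)
        cost += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return cost

print(f"10 TB:  ${tiered_cost(10_000):,.2f}")   # entirely in the first tier
print(f"100 TB: ${tiered_cost(100_000):,.2f}")  # spills into the second tier
```

Note that only the usage above each boundary gets the cheaper rate; the first 50 TB is billed at the first-tier rate regardless of total volume, which is how tiered pricing differs from a simple flat discount.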
- Cost Optimization:– Cost Explorer is a free tool that provides pre-configured reports for common AWS spend queries for current and historical periods, as well as forecasting. It also allows customers to customize the reports to meet their specific needs or to download their billing information for use in their own tools.
- Trusted Advisor:– Trusted Advisor inspects users’ AWS environments to find opportunities to save money, improve system performance, increase application reliability, and implement security best practices. Since 2013, customers have viewed over 2.6 million best-practice recommendations.