It's Day 28 of my 100 Days of Cloud journey, and today's post covers the first two modules of my AWS Skill Builder course on AWS Cloud Practitioner Essentials.
This is the official prerequisite course on the AWS Skill Builder platform (which, for comparison, is the AWS equivalent of Microsoft Learn) to prepare candidates for the AWS Certified Cloud Practitioner certification exam.
Let's take a quick look at what the first two modules covered, the technologies discussed, and the key takeaways.
Module 1 – Cloud Computing Concepts
Module 1 covers the core concepts of cloud computing and describes the different deployment models you can use when deploying your infrastructure. As a reminder, these are:
- Cloud Deployment – where you can migrate existing applications to the cloud, or you can design and build new applications which are fully hosted in the cloud.
- On-Premises Deployment – also known as Private Cloud, this is where you host all infrastructure on self-managed hardware in your own datacenter, and manage all the associated costs for power, cooling, and hardware refreshes/upgrades.
- Hybrid Deployment – this is where you host some elements of your infrastructure on-premises and some elements in the cloud with Site-to-Site VPN connectivity between the sites.
Module 1 also covers the main benefits of cloud computing:
- Variable Expense – Instead of a massive upfront cost outlay (CapEx), you only pay for what you use and are billed monthly (OpEx).
- No Datacenter maintenance, so IT teams can focus on what's important.
- Stop Guessing Capacity – you pay for what you use, so you can scale up or down based on demand instead of over-provisioning hardware.
- Economies of Scale – where the more people use the service, the lower the costs are.
- Increase Speed and Agility – ability to create platforms in minutes as opposed to waiting for Hardware, configuration, and testing.
- Go Global in Minutes – fully scalable across AWS Regions and datacenters around the world.
Module 2 – AWS Compute Services
Module 2 looks at the various AWS Compute Services offerings. Here’s a quick overview of these services:
Amazon EC2
Amazon EC2 (Elastic Compute Cloud) is the core Amazon compute service, where you can create virtual machines running Windows or Linux using an array of built-in operating system images and configurations. EC2 is highly flexible, cost effective, and quick to get running. It comes in a range of instance types designed to suit different computing needs:
- General Purpose – provides a balance of compute, memory, and networking resources
- Compute Optimized – ideal for compute-intensive applications that require high-performance processing. Examples would be batch processing, scientific modelling, gaming servers, or ad engines.
- Memory Optimized – ideal for memory-intensive applications such as open-source databases, in-memory caches, and real-time big data analytics.
- Accelerated Computing – uses hardware accelerators, ideal for graphics processing, floating-point calculations, or data pattern matching.
- Storage Optimized – provides high-performance local storage, ideal for database processing, data warehousing, or analytics workloads.
EC2 also offers different pricing models to suit your needs, such as Dedicated, Reserved, or Spot Instances, On-Demand pay as you go, or 1- or 3-year Savings Plans.
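To make the pricing trade-off concrete, here's a minimal sketch comparing a year of On-Demand usage against a hypothetical 1-year commitment. The hourly rates below are made-up placeholders for illustration only, not real AWS prices – always check the current EC2 pricing pages.

```python
# Illustrative only: these rates are invented placeholders, not AWS prices.
HOURS_PER_YEAR = 24 * 365

def yearly_cost(hourly_rate: float, hours: int = HOURS_PER_YEAR) -> float:
    """Cost of running one instance continuously for the given hours."""
    return round(hourly_rate * hours, 2)

on_demand_rate = 0.10  # hypothetical On-Demand $/hour
savings_rate = 0.06    # hypothetical 1-year Savings Plan $/hour

on_demand = yearly_cost(on_demand_rate)
savings_plan = yearly_cost(savings_rate)
print(f"On-Demand for a year:  ${on_demand}")
print(f"1-year Savings Plan:   ${savings_plan}")
print(f"Saved by committing:   ${round(on_demand - savings_plan, 2)}")
```

The point isn't the numbers – it's that committed-use models trade flexibility for a lower rate, which only pays off for steady, predictable workloads.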
EC2 also provides auto-scaling functionality so you can scale up or down based on the demand of your workloads. You can set minimum, maximum, and desired capacity settings to meet both your demand and cost models.
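The minimum/maximum/desired interplay can be sketched in a few lines. This is a toy model of the logic, not the actual EC2 Auto Scaling service – the request-per-instance figure is an invented assumption:

```python
import math

def desired_capacity(total_requests: int, requests_per_instance: int,
                     minimum: int, maximum: int) -> int:
    """How many instances we want: enough to serve the load,
    but never below the minimum or above the maximum capacity."""
    needed = math.ceil(total_requests / requests_per_instance)
    return max(minimum, min(maximum, needed))

# Quiet period: the load alone needs 1 instance, but minimum keeps 2 running.
print(desired_capacity(80, 100, minimum=2, maximum=10))    # -> 2
# Traffic spike: the load wants 15 instances, but maximum caps it at 10.
print(desired_capacity(1500, 100, minimum=2, maximum=10))  # -> 10
```

The minimum protects availability; the maximum protects your bill.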
Elastic Load Balancing
So you have your EC2 instances and have scaled out in response to workload demand. But how do you evenly distribute the load among the servers? This is where Elastic Load Balancing comes in.
- Automatically distributes incoming application traffic across multiple resources
- Although Elastic Load Balancing and Amazon EC2 Auto Scaling are separate services, they work together to help ensure that applications running in Amazon EC2 can provide high performance and availability.
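To illustrate what "distributing incoming traffic across multiple resources" means in practice, here's a toy round-robin balancer over some invented instance IDs. Real ELB supports several routing algorithms and health checks; this sketch only shows the basic even-spread idea:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy model of a load balancer spreading requests
    evenly across its registered targets, in turn."""
    def __init__(self, targets):
        self._cycle = cycle(list(targets))

    def route(self) -> str:
        """Return the target that should handle the next request."""
        return next(self._cycle)

# Hypothetical instance IDs for illustration.
lb = RoundRobinBalancer(["i-aaa", "i-bbb", "i-ccc"])
print([lb.route() for _ in range(6)])
# -> ['i-aaa', 'i-bbb', 'i-ccc', 'i-aaa', 'i-bbb', 'i-ccc']
```

When Auto Scaling adds or removes instances, the load balancer's target list changes with it – which is why the two services work so well together.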
Messaging and Queuing
This is based on a microservices approach where services are loosely coupled together, and uses two main services:
- Amazon Simple Notification Service (Amazon SNS) is a publish/subscribe service where a publisher publishes messages to subscribers. Subscribers can be web servers, email addresses, AWS Lambda functions, or several other options.
- Amazon Simple Queue Service (Amazon SQS) is a message queuing service. Using Amazon SQS, you can send, store, and receive messages between software components, without losing messages or requiring other services to be available. In Amazon SQS, an application sends messages into a queue. A user or service retrieves a message from the queue, processes it, and then deletes it from the queue.
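The send → receive → delete cycle described above can be sketched with a small in-memory stand-in. This is not the SQS API – just a toy illustrating why the explicit delete step matters: a received message stays "in flight" until the consumer confirms it was processed.

```python
from collections import deque

class ToyQueue:
    """In-memory stand-in for the SQS send -> receive -> delete cycle."""
    def __init__(self):
        self._pending = deque()
        self._in_flight = {}  # received but not yet deleted
        self._next_id = 0

    def send(self, body: str) -> int:
        self._next_id += 1
        self._pending.append((self._next_id, body))
        return self._next_id

    def receive(self):
        msg_id, body = self._pending.popleft()
        self._in_flight[msg_id] = body  # hidden from other consumers
        return msg_id, body

    def delete(self, msg_id: int) -> None:
        del self._in_flight[msg_id]     # processing confirmed, gone for good

q = ToyQueue()
q.send("resize-image-42")            # hypothetical job name
msg_id, body = q.receive()
print(f"processing {body}")          # ...do the work...
q.delete(msg_id)                     # only now is it removed from the queue
```

If the consumer crashed before calling `delete`, real SQS would make the message visible again after a timeout – that's what makes the pattern resilient to failures.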
Serverless Computing
The term "serverless" means that your code runs on servers, but you do not need to provision or manage those servers. AWS Lambda is a service that lets you run code without needing to provision or manage servers. I'll look more closely at AWS Lambda in a future post, where I'll do a demo of how it works.
Container Services
AWS provides a number of container services. These are:
- Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container management system that enables you to run and scale containerized applications on AWS. Amazon ECS supports Docker containers.
- Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed service that you can use to run Kubernetes on AWS.
- AWS Fargate is a serverless compute engine for containers. It works with both Amazon ECS and Amazon EKS. When using AWS Fargate, you do not need to provision or manage servers – AWS Fargate manages your server infrastructure for you.
And that’s all for today! Hope you enjoyed this post, join me again next time for more AWS Core Concepts!