It's Day 41 of my 100 Days of Cloud Journey, and today I'm taking Day 4, the final session of the Cloudskills.io Linux Cloud Engineer Bootcamp.
This was run live over 4 Fridays by Mike Pfeiffer and the folks over at Cloudskills.io, and is focused on the following topics:
Scripting
Administration
Networking
Web Hosting
Containers
If you recall, on Day 26 I did Day 1 of the bootcamp, Day 2 on Day 33 after coming back from my AWS studies, and Day 3 on Day 40.
The bootcamp livestream started on November 12th and ran for 4 Fridays (with a break for Thanksgiving) before concluding on December 10th. However, you can sign up for this at any time to watch the lectures at your own pace (which I’m doing here) and get access to the Lab Exercises on demand at this link:
Week 4 was all about Containers, and Mike gave us a run-through of Docker and the commands we would use to download, run and build our own Docker images. We then looked at how this works on Azure and how we would spin up Docker containers in Azure. The Lab exercises include exercises for doing this, and also for running containers in AWS.
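As a rough sketch of the kind of commands covered (the nginx image, tag and resource group name here are just placeholders, not the bootcamp lab values):

```bash
# Pull a public image and run it as a detached container, mapping host port 8080 to container port 80
docker pull nginx
docker run -d --name web -p 8080:80 nginx

# Build your own image from a Dockerfile in the current directory, then run it
docker build -t myapp:1.0 .
docker run -d -p 8080:80 myapp:1.0

# One way to run a container on Azure is as an Azure Container Instance
az container create --resource-group myRG --name web-demo --image nginx --ports 80 --ip-address Public
```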
The Bootcamp as a whole then concluded with Michael Dickner running through the details of permissions in the Linux file system and how they apply to, and can be changed for, the file/folder owner, the group, and “everyone” else (other users).
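For reference, that owner/group/everyone model in practice looks something like this (the file, user and group names are hypothetical):

```bash
ls -l deploy.sh                     # -rwxr-x--- : owner rwx, group r-x, everyone else no access
chmod 750 deploy.sh                 # numeric form: owner=7 (rwx), group=5 (r-x), other=0 (---)
chmod o+r deploy.sh                 # symbolic form: add read for "other" (everyone else)
chown alice:developers deploy.sh    # change the owning user and group
```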
Conclusion
That’s all for this post – hope you enjoyed the Bootcamp if you did sign up – if not, you can sign up at the link above! I thought it was fun – the big takeaway and most useful day for me was definitely Day 3, looking at the LAMP and MEAN stacks and how to run a web server on Linux using open-source technologies.
Until next time, when we’re moving on to a new topic!
It's Day 33 of my 100 Days of Cloud Journey, and today I’m taking Day 2 of the Cloudskills.io Linux Cloud Engineer Bootcamp.
This is being run over 4 Fridays by Mike Pfeiffer and the folks over at Cloudskills.io, and is focused on the following topics:
Scripting
Administration
Networking
Web Hosting
Containers
If you recall, on Day 26 I did Day 1 of the bootcamp, and started Day 2 only to realise the topic was AWS, so I went off on a bit of a tangent to get back here to actually complete Day 2.
The bootcamp livestream started on November 12th and continued on Friday November 19th. With the Thanksgiving break now behind us, it resumes on December 3rd and completes on December 10th. However, you can sign up for this at any time to watch the lectures at your own pace and get access to the Lab Exercises on demand at this link:
Week 2 started with Mike going through the steps to create a Linux VM on an AWS EC2 instance and, similar to Day 1, installing a web server and then scripting that installation into a reusable bash script that can be deployed during VM creation.
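A minimal sketch of that idea, assuming an Amazon Linux instance (this is my own illustration, not the bootcamp's exact lab script):

```bash
#!/bin/bash
# install-web.sh - installs and starts Apache; pass this as EC2 user data so it runs at first boot
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
echo "Hello from $(hostname -f)" > /var/www/html/index.html
```

Passing it at launch with `aws ec2 run-instances ... --user-data file://install-web.sh` means every new instance configures itself without any manual steps.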
I then got my first look at Google Cloud Platform, when Robin Smorenburg gave us a walkthrough of the GCP Portal, and the process to create a Linux VM on GCP both in the Portal and Google Cloud Shell. Robin works as a GCP Architect and can be found blogging at https://robino.io/.
Overall, the creation process is quite similar across the 3 platforms: the VM creation process asks you to create a key pair for certificate-based authentication, and both AWS and GCP allow SSH access from all IP addresses by default, which can then be locked down to a specific IP address or IP address range.
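On the AWS side, that workflow looks roughly like this (the key file name, security group ID and IP ranges are placeholders):

```bash
# Connect using the private key downloaded when the key pair was created
chmod 400 my-keypair.pem
ssh -i my-keypair.pem ec2-user@203.0.113.25

# Replace the default "SSH from anywhere" rule with one locked to your own address range
aws ec2 revoke-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 198.51.100.0/24
```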
Conclusion
That’s all for this post – I’ll update as I go through the remaining weeks of the Bootcamp, but to learn more and go through the full content of lectures and labs, sign up at the link above.
This is the official pre-requisite course on the AWS Skillbuilder platform (which for comparison is the AWS equivalent of Microsoft Learn) to prepare candidates for the AWS Certified Cloud Practitioner certification exam.
Let’s have a quick overview of what the final modules covered, the technologies discussed and key takeaways.
Module 9 – Migration and Innovation
Module 9 covers Migration strategies and advice you can use when moving to AWS.
We dived straight into the AWS Cloud Adoption Framework (AWS CAF) and looked at the 6 Perspectives, each of which has distinct responsibilities and helps prepare the right people across your organization for the challenges ahead.
The 6 Perspectives of AWS CAF are:
Business – ensure that your business strategies and goals align with your IT strategies and goals.
People – evaluate organizational structures and roles, new skill and process requirements, and identify gaps.
Governance – how to update the staff skills and processes necessary to ensure business governance in the cloud.
Platform – uses a variety of architectural models to understand and communicate the structure of IT systems and their relationships.
Security – ensures that the organization meets security objectives for visibility, auditability, control, and agility.
Operations – defines current operating procedures and identify the process changes and training needed to implement successful cloud adoption.
We then moved on to the 6 R’s of Migration which are:
Rehosting – “lift and shift” move of applications with no changes.
Replatforming – “lift, tinker and shift”, move of applications while making changes to optimize performance in the cloud.
Refactoring – re-architecting or redesigning the application to take advantage of cloud-native features that are not possible in the existing environment.
Repurchasing – replacing the existing application with a different, cloud-based product (typically a SaaS offering).
Retaining – keeping some applications that are not suitable for migration in your existing environment.
Retiring – removing applications that are no longer needed
We then looked at the AWS Snow solutions (which is similar to Azure Data Box), which is where you use AWS-provided physical devices to transfer large amounts of data directly to AWS Data Centers as opposed to over the internet. These devices range in size from 8TB of storage up to 100PB, and can come in both storage and compute optimized versions.
Finally, the module looked at some of the cool innovation features available in AWS, such as:
Amazon Lex – based on Alexa, enables you to build conversational interfaces using voice and text.
Amazon Textract – machine learning that extracts data from scanned documents.
Amazon SageMaker – enables you to build train and deploy machine learning models.
AWS DeepRacer – my favourite one! This is an autonomous 1/18th scale race car that you can use to test reinforcement learning models.
Module 10 – The Cloud Journey
Module 10 is a short one but starts by looking at the AWS Well-Architected Framework which helps you understand how to design and operate reliable, secure, efficient, and cost-effective systems in the AWS Cloud.
The Well-Architected Framework is based on five pillars:
Operational excellence – the ability to run and monitor systems to deliver business value.
Security – the ability to protect information, systems and assets while delivering business value.
Reliability – the ability of a workload to recover from infrastructure or service disruptions and dynamically acquire computing resources to meet demand.
Performance efficiency – the ability to use computing resources efficiently to meet demand.
Cost optimization – the ability to run systems to deliver business value at the lowest cost.
Finally, we looked at the six advantages of cloud computing:
Trade upfront expense for variable expense – pay for only the resources you use using an OpEx model.
Benefit from massive economies of scale – achieve lower variable costs because usage from hundreds of thousands of customers is aggregated in the cloud.
Stop guessing capacity – no more predicting how much resources you need.
Increase speed and agility – flexibility to deploy applications and infrastructure in minutes, while also providing more time to experiment and innovate.
Stop spending money running and maintaining data centers – focus more on your applications and customers instead of overheads.
Go global in minutes – deploy to customers around the world
Module 11 – Exam Overview
The final module gives an overview of the AWS Certified Cloud Practitioner exam, giving a breakdown of the domains as shown below.
Image Credit – AWS Skillbuilder
The exam consists of 65 questions to be completed in 90 minutes, and the passing score is 700 out of 1,000 (scaled). Like most exams, there are 2 types of questions:
A multiple-choice question has one correct response and three incorrect responses, or distractors.
A multiple-response question has two or more correct responses out of five or more options.
As always in any exam, the advice is:
Read the question in full.
Predict the answer before looking at the answer options.
Eliminate incorrect answers first.
And that’s all for today! Hope you enjoyed this mini-series of posts on AWS Core Concepts! Now I need to schedule the exam and take that first step on the AWS ladder. You should too, but more importantly, go and enroll for the course using the links at the top of the post – this is my brief summary and understanding of the Modules, but the course is well worth taking and I found it a great starting point in my AWS journey. Until next time!
This is the official pre-requisite course on the AWS Skillbuilder platform (which for comparison is the AWS equivalent of Microsoft Learn) to prepare candidates for the AWS Certified Cloud Practitioner certification exam.
Let’s have a quick overview of what the 2 modules I completed today covered, the technologies discussed and key takeaways.
Module 7 – Monitoring and Analytics
Module 7 deals with the AWS offerings for monitoring, analytics and best-practice optimization of your AWS account.
Amazon Cloudwatch enables you to monitor and manage various metrics and configure alarm actions based on data from those metrics. CloudWatch uses metrics to represent the data points for your resources. AWS services send metrics to CloudWatch. CloudWatch then uses these metrics to create graphs automatically that show how performance has changed over time. With CloudWatch, you can create alarms that automatically perform actions if the value of your metric has gone above or below a predefined threshold.
Image Credit: AWS Skillbuilder
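To make that concrete, here's roughly what creating a CPU alarm looks like from the CLI (the instance ID and SNS topic ARN are placeholders):

```bash
# Alarm when average CPU on one instance stays above 80% for two consecutive 5-minute periods
aws cloudwatch put-metric-alarm \
  --alarm-name high-cpu \
  --namespace AWS/EC2 --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average --period 300 --evaluation-periods 2 \
  --threshold 80 --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:eu-west-1:111122223333:alerts
```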
AWS CloudTrail records API calls for your account. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, and more. You can think of CloudTrail as a “trail” of breadcrumbs (or a log of actions) that someone has left behind them. Recall that you can use API calls to provision, manage, and configure your AWS resources. With CloudTrail, you can view a complete history of user activity and API calls for your applications and resources. Events are typically updated in CloudTrail within 15 minutes after an API call. You can filter events by specifying the time and date that an API call occurred, the user who requested the action, the type of resource that was involved in the API call, and more. Within CloudTrail, you can also enable CloudTrail Insights. This optional feature allows CloudTrail to automatically detect unusual API activities in your AWS account.
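As a quick illustration, you can query that trail of API calls from the CLI, for example:

```bash
# Show the 5 most recent RunInstances API calls recorded by CloudTrail
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=RunInstances \
  --max-results 5
```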
AWS Trusted Advisor is a web service that inspects your AWS environment and provides real-time recommendations in accordance with AWS best practices. Trusted Advisor compares its findings to AWS best practices in five categories: cost optimization, performance, security, fault tolerance, and service limits. For the checks in each category, Trusted Advisor offers a list of recommended actions and additional resources to learn more about AWS best practices.
Image Credit – AWS Skillbuilder
Module 8 – AWS Pricing and Support
Module 8 dives into the different AWS pricing models that you can choose from. The main options are:
AWS Free Tier, which is broken into a range of Always Free services, services that are free for 12 Months, and short-term Trials of different AWS Services.
On-Demand Pricing – this is the “Pay as You Use” model.
Reserved Pricing – where you pay up front for reserved instances and services at a discounted price
Tiered Pricing – this is where you pay less the more you use (for example, the more Amazon S3 storage space you use, the less you pay per GB).
The AWS Billing and Cost Management dashboard gives an overview where you can pay bills, monitor usage, and analyse and control costs.
I mentioned AWS Organizations in a previous post, where you can apply IAM and Policies to multiple AWS accounts. AWS Organizations can also be used for Consolidated Billing across multiple AWS accounts from a central location.
AWS Budgets gives you the option to create budgets to plan service costs and instance reservations, while AWS Cost Explorer helps you visualize and manage costs and usage over time (12 months of historical data).
We then moved to the different support plans available in AWS:
Basic Support – which is free for all AWS Customers, and includes a limited selection of Trusted Advisor checks, and the AWS Personal Health Dashboard.
Developer Support – includes access to best practice guidance and diagnostic tools
Business Support – includes all AWS Trusted Advisor checks and use-case guidance for all AWS offerings, features and services that best supports your business needs.
Enterprise Support – includes Architecture Guidance, Infrastructure Event management and a dedicated TAM (Technical Account Manager) who provides expertise in helping you design AWS solutions.
Finally, we looked at the AWS Marketplace which is a catalog of thousands of software listings from multiple third-party vendors that can be used in your AWS Environment. You can explore solutions by categories (such as IoT and Machine Learning) or by industry and use case.
And that’s all for today! Hope you enjoyed this post, join me again next time for the final part of AWS Core Concepts! And more importantly, go and enroll for the course using the links at the top of the post – this is my brief summary and understanding of the Modules, but the course is well worth taking if you want to go more in-depth.
This is the official pre-requisite course on the AWS Skillbuilder platform (which for comparison is the AWS equivalent of Microsoft Learn) to prepare candidates for the AWS Certified Cloud Practitioner certification exam.
Let’s have a quick overview of what the 2 modules I completed today covered, the technologies discussed and key takeaways.
Module 5 – Storage and Databases
Storage
The first thing covered was the storage types available with EC2 instances:
An instance store provides temporary block-level storage for an Amazon EC2 instance. An instance store is disk storage that is physically attached to the host computer for an EC2 instance, and therefore has the same lifespan as the instance. When the instance is terminated, you lose any data in the instance store.
Amazon Elastic Block Store (Amazon EBS) is a service that provides block-level storage volumes that you can use with Amazon EC2 instances. If you stop or terminate an Amazon EC2 instance, all the data on the attached EBS volume remains available. It's important to note that EBS stores data in a single Availability Zone, as it needs to be in the same zone as the EC2 instance it is attached to.
An EBS snapshot is an incremental backup. This means that the first backup taken of a volume copies all the data. For subsequent backups, only the blocks of data that have changed since the most recent snapshot are saved.
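In CLI terms, that looks something like this (the volume and snapshot IDs are placeholders):

```bash
# Take an incremental snapshot of an EBS volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "nightly backup"

# Restore by creating a new volume from the snapshot in the Availability Zone you need
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone eu-west-1a
```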
Moving on from that, we looked at Amazon S3 (Simple Storage Service), which provides object-level storage. Objects are stored in buckets (which are like folders or containers). You can upload any type of file to an S3 bucket, which has unlimited storage space, and you can set permissions on any object you upload. S3 provides a number of storage classes which can be used depending on how you plan to store your data and how frequently or infrequently you access it (a quick CLI example follows the list):
S3 Standard: used for high availability, and stores data in a minimum of 3 Availability Zones.
S3 Standard-IA: used for infrequently accessed data, still stores the data across 3 Availability Zones.
S3 One Zone-IA: same as above, but only stores data in a single Availability Zone to keep costs low.
S3 Intelligent-Tiering: used for data with unknown or changing access patterns.
S3 Glacier: low-cost storage that is ideal for data archiving. Data can be retrieved within a few minutes
S3 Glacier Deep Archive: lower cost than above, data is retrieved within 12 hours
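Here's the CLI example mentioned above – picking a storage class per object at upload time (the bucket and file names are made up):

```bash
# Create a bucket, then upload objects into different storage classes
aws s3 mb s3://my-demo-bucket-12345
aws s3 cp report.pdf s3://my-demo-bucket-12345/ --storage-class STANDARD_IA
aws s3 cp old-logs.zip s3://my-demo-bucket-12345/ --storage-class DEEP_ARCHIVE
```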
Next up is Amazon Elastic File System (Amazon EFS), which is a scalable file storage system that is used with AWS Cloud and on-premises resources. You can attach EFS to one or more EC2 instances. This is a Linux file system, and can have multiple instances reading and writing to it at the same time. Amazon EFS file systems store data across multiple Availability Zones.
Database Types
Now we’re into the different types of Databases that are available on AWS.
Amazon Relational Database Service (Amazon RDS) enables you to run relational databases such as MySQL, PostgreSQL, Oracle and Microsoft SQL Server. You can move on-premises SQL Servers to EC2 instances, or else move the databases on these servers to Amazon RDS instances. You also have the option to move MySQL or PostgreSQL databases to Amazon Aurora, which costs around one-tenth as much as commercial databases and replicates six copies of your data across 3 Availability Zones.
Amazon DynamoDB is a serverless, non-relational (NoSQL) key-value database built around tables, items, and attributes. It has specific use cases and is highly scalable.
Amazon Redshift is a data warehousing service used for big data analytics, and has the ability to collect data from multiple sources and analyse relationships and trends across your data.
On top of the above, there is the AWS Database Migration Service (AWS DMS), which can help you migrate existing databases to AWS. The source database remains active during the migration, and the source and destination databases do not need to be the same type of database.
Module 6 – Security
Shared Responsibility Model
We kicked off the Security module by looking at the Shared Responsibility Model. This will be familiar from any cloud service: AWS is responsible for some parts of the environment, and the customer is responsible for other parts.
Image Credit – AWS Skillbuilder
The shared responsibility model divides into customer responsibilities (commonly referred to as “security in the cloud”) and AWS responsibilities (commonly referred to as “security of the cloud”).
Image Credit – AWS Skillbuilder
Identity and Access Control
Onwards to Identity! AWS Identity and Access Management (IAM) allows you to manage access to AWS services and resources securely. You do this by using a combination of users, groups, roles, policies and MFA. When you first create an AWS account, you begin with an identity called the “root” user. You then use the “root” account to create an IAM user that you will use to perform everyday tasks – the same concept as on any Linux system, where you should not work as “root” day to day. You can then add the user to an IAM group.
You then create policies that allow or deny permissions to specific AWS resources and apply those policies to the user or group.
You then have the concept of roles – roles are identities that can be used to gain temporary access to perform a specific task. When a user assumes a role, they give up all previous permissions they had.
Finally, we should always enable MFA to provide an extra layer of security for your AWS account.
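Pulling those pieces together, a rough CLI sketch of the user/group/policy/role flow might look like this (the user, group and role names are hypothetical):

```bash
# Create a group, attach an AWS managed policy, and add an everyday-use IAM user to it
aws iam create-group --group-name admins
aws iam attach-group-policy --group-name admins --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
aws iam create-user --user-name alice
aws iam add-user-to-group --user-name alice --group-name admins

# Assume a role to get temporary credentials scoped to a specific task
aws sts assume-role --role-arn arn:aws:iam::111122223333:role/AuditRole --role-session-name audit-session
```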
Managing Multiple AWS Accounts
But what if we have multiple AWS accounts that we need to manage? This is where AWS Organizations comes into play. AWS Organizations can consolidate and manage multiple AWS accounts. This is useful if you have separate accounts for Production, Development and Testing. You can then group AWS accounts into Organizational Units (OUs). In AWS Organizations, you can apply service control policies (SCPs) to the organization root, an individual member account, or an OU. An SCP affects all IAM users, groups, and roles within an account, including the AWS account root user.
Compliance
Now we move on to Compliance. AWS Artifact is a service that provides on-demand access to AWS security and compliance reports and select online agreements. AWS Artifact consists of two main sections: AWS Artifact Agreements and AWS Artifact Reports:
In AWS Artifact Agreements, you can review, accept, and manage agreements for an individual account and for all your accounts in AWS Organizations.
AWS Artifact Reports provide compliance reports from third-party auditors.
Security
Finally, we reach the deeper level security and defence stuff!
AWS Shield provides built-in protection against DDoS attacks. AWS Shield provides 2 levels of protection:
Standard – automatically protects all AWS customers from the most frequent types of DDoS attacks.
Advanced – on top of Standard, provides detailed attack diagnostics and the ability to detect and mitigate sophisticated DDoS attacks. It can also integrate with services like Amazon Route 53 and Amazon CloudFront.
AWS also offers the following security services:
AWS WAF (Web Application Firewall), which uses web access control lists (web ACLs) and rules to filter incoming requests and block traffic from bad actors.
AWS Key Management Service (AWS KMS) which uses cryptographic keys to perform encryption operations for encrypting and decrypting data.
Amazon Inspector, which runs automated security assessments on your infrastructure based on compliance baselines.
Amazon GuardDuty, which provides intelligent threat detection by monitoring network activity and account behaviour.
And that’s all for today! Hope you enjoyed this post, join me again next time for more AWS Core Concepts! And more importantly, go and enroll for the course using the links at the top of the post – this is my brief summary and understanding of the Modules, but the course is well worth taking if you want to go more in-depth.
This is the official pre-requisite course on the AWS Skillbuilder platform (which for comparison is the AWS equivalent of Microsoft Learn) to prepare candidates for the AWS Certified Cloud Practitioner certification exam.
Let’s have a quick overview of what the 2 modules I completed today covered, the technologies discussed and key takeaways.
Module 3 – Global Infrastructure and Reliability
AWS operates Data Center facilities across the globe, giving customers the choice to select the correct region to host their AWS Infrastructure based on the following factors:
Compliance with Data Governance and Legal Requirements – this determines where your data can be stored; for example, certain types of EU data may need to be kept in EU regions to comply with GDPR.
Proximity to Customers – the closer your infrastructure is to the customers or staff who will be consuming it, the lower the latency will be and that will give better performance.
Available services within a Region – Some services may not be available in the closest region to you, so you may need to select a different one. This information is available in the AWS Portal when you are creating the service.
Pricing – based on the tax laws of different nations, it may be up to 50% more expensive to host infrastructure in a certain nation or region.
Availability Zones
The need for availability and flexibility is key in any Cloud Architecture. AWS operates a number of Availability Zones, which are either a single data center or a group of data centers within a region. These are located tens of miles apart from each other and have low latency between them, so if a disaster occurs in one part of the region, the service is not affected if it needs to fail over to another data center.
Amazon Cloudfront
Amazon CloudFront is an example of a CDN (Content Delivery Network). Amazon CloudFront uses a network of edge locations to cache content and deliver content to customers all over the world. When content is cached, it is stored locally as a copy. This content might be video files, photos, webpages, and so on. Edge Locations are separate from regions, and run the AWS DNS Service called Amazon Route 53 (which I cover in more detail below).
AWS Outposts
AWS Outposts is where AWS installs a managed rack of AWS infrastructure (effectively a mini-region) in your own on-premises data center. At first look, it appears to be the same type of service as Azure Stack.
So from this, we can say:
AWS has data centers in multiple regions across the world
Each Region contains Availability Zones that allow you to run highly available infrastructure across physically separated buildings which are tens of miles apart.
Amazon CloudFront runs in AWS Edge locations (separate from regions), hosting DNS (Amazon Route 53) and a Content Delivery Network (CDN) to deliver content closer to customers no matter where they are located.
Finally in this module, we looked at the different ways that you can create, manage, and interact with AWS Services:
AWS Management Console – a web-based interface for accessing and managing AWS services. The console includes wizards and automated workflows that can simplify the process of completing tasks.
AWS Command Line Interface – AWS CLI enables you to control multiple AWS services directly from the command line within one tool. AWS CLI is available for users on Windows, macOS, and Linux. AWS CLI makes actions scriptable and repeatable.
Software Development Kits – The SDKs allow you to interact with AWS resources through various programming languages.
AWS Elastic Beanstalk – takes application code and desired configurations and then builds the infrastructure for you based on the configurations you provide.
AWS CloudFormation – an Infrastructure as Code tool which uses JSON- or YAML-based documents called CloudFormation templates. CloudFormation supports many different AWS resources, from storage and databases to analytics, machine learning, and more (a minimal template example follows below).
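Here's that minimal CloudFormation example – a template that creates a single S3 bucket, written out and deployed from the CLI (the stack and file names are placeholders):

```bash
# Write a tiny CloudFormation template, then deploy it as a stack
cat > template.yaml <<'EOF'
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
EOF

aws cloudformation deploy --template-file template.yaml --stack-name demo-stack
```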
Module 4 – Networking
Module 4 deals with networking, and the concept of Amazon Virtual Private Cloud, or VPC.
When I first heard of VPCs, I assumed they were like Resource Groups in Azure. Well, yes and no – a VPC is effectively an isolated virtual network that you then carve up into subnets, into which you can deploy resources such as EC2 instances.
Because the VPC is isolated by default, you need to attach an Internet Gateway at the perimeter, which connects the VPC to the internet and provides public access.
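Roughly, building that out from the CLI looks like this (the CIDR blocks are examples and the resource IDs are placeholders returned by the earlier commands):

```bash
# Create an isolated VPC with one subnet, then attach an internet gateway for public access
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.1.0/24
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0
# (a route table entry pointing 0.0.0.0/0 at the gateway is also needed before traffic actually flows)
```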
If you need to connect your corporate network to the VPC, you have 2 options:
Virtual Private Gateway allows VPN Connectivity between your on-premises corporate or private network and the VPC.
AWS Direct Connect allows you to establish a dedicated private connection between your corporate network and the VPC. Think of this as the same as Azure ExpressRoute.
So now we have our VPC and access into it, we need to control that access to both the subnets and the EC2 instances running within the subnets. We have 2 methods of controlling that access:
A network access control list (ACL) is a virtual firewall that controls inbound and outbound traffic at the subnet level.
Each AWS account includes a default network ACL. When configuring your VPC, you can use your account’s default network ACL or create custom network ACLs.
By default, your account’s default network ACL allows all inbound and outbound traffic, but you can modify it by adding your own rules.
Network ACLs perform stateless packet filtering. They remember nothing and check packets that cross the subnet border each way: inbound and outbound.
A security group is a virtual firewall that controls inbound and outbound traffic for an Amazon EC2 instance.
By default, a security group denies all inbound traffic and allows all outbound traffic. You can add custom rules to configure which traffic to allow or deny.
Security groups perform stateful packet filtering. They remember previous decisions made for incoming packets.
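For example, a security group for web servers might be set up like this (the VPC and group IDs are placeholders) – inbound HTTP is explicitly allowed, and all other inbound traffic stays denied by default:

```bash
aws ec2 create-security-group --group-name web-sg --description "web servers" --vpc-id vpc-0123456789abcdef0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0
```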
So again, none of this is unfamiliar when compared against the equivalent services Azure offers.
Finally, the module covered Amazon Route 53, which is the AWS DNS service. However, Route 53 does much more than just standard DNS, such as (see the example after this list):
Manage DNS records for Domain Names
Register new domain names directly in Route 53
Direct traffic to endpoints using several different routing policies, such as latency-based routing, geolocation DNS, geoproximity and weighted round robin.
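Here's the record-update example mentioned above – upserting a simple A record in a hosted zone (the zone ID, domain and IP address are placeholders):

```bash
aws route53 change-resource-record-sets --hosted-zone-id Z0123456789ABCDEFGHIJ \
  --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":
    {"Name":"www.example.com","Type":"A","TTL":300,
     "ResourceRecords":[{"Value":"203.0.113.10"}]}}]}'
```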
And that’s all for today! Hope you enjoyed this post, join me again next time for more AWS Core Concepts!
This is the official pre-requisite course on the AWS Skillbuilder platform (which for comparison is the AWS equivalent of Microsoft Learn) to prepare candidates for the AWS Certified Cloud Practitioner certification exam.
Let’s have a quick overview of what the first 2 modules covered, the technologies discussed and key takeaways.
Module 1 – Cloud Computing Concepts
Module 1 covers the core concepts of cloud computing and describes the different deployment models you can use when deploying your infrastructure. As a reminder, these are:
Cloud Deployment – where you can migrate existing applications to the cloud, or you can design and build new applications which are fully hosted in the cloud.
On-Premises Deployment – also known as Private Cloud, this is where you host all infrastructure on self-managed hardware in your own datacenter and manage all costs associated with power, cooling and hardware refresh/upgrade.
Hybrid Deployment – this is where you host some elements of your infrastructure on-premises and some elements in the cloud with Site-to-Site VPN connectivity between the sites.
Module 1 also covers the main benefits of cloud computing:
Variable Expense – Instead of a massive upfront cost outlay (CapEx), you only pay for what you use and are billed monthly (OpEx).
No Datacenter Maintenance – IT teams can focus on what’s important.
Stop Guessing Capacity – you pay for what you use, so you can scale up or down based on demand.
Economies of Scale – where the more people use the service, the lower the costs are.
Increase Speed and Agility – ability to create platforms in minutes as opposed to waiting for Hardware, configuration, and testing.
Go Global in Minutes – fully scalable across global regional datacenters.
Module 2 – AWS Compute Services
Module 2 looks at the various AWS Compute Services offerings. Here’s a quick overview of these services:
EC2
This is the Amazon Core Compute Service where you can create virtual machines running Windows or Linux using an array of built-in operating systems and configurations. EC2 is highly flexible, cost effective and quick to get running. It comes in a range of instance types which are designed to suit different computing needs:
General Purpose – provides balanced resources for Compute, Memory, CPU and Storage
Compute optimized – ideal for compute intensive applications that require high processing. Examples of these would be batch processing, scientific modelling, gaming servers or ad engines.
Memory Optimized – ideal for Memory-intensive applications such as open-source databases, in-memory caches, and real time big data analytics.
Accelerated Computing – ideal for graphics processing, floating-point calculations, or data pattern matching.
Storage Optimized – high-performance, locally attached storage, ideal for database processing, data warehousing or analytics workloads.
EC2 also offers different pricing models to suit your needs, such as Dedicated, Reserved or Spot Instances, On-Demand pay-as-you-go, or 1- or 3-year Savings Plans.
EC2 also provides auto-scaling functionality so you can scale up or down based on the demand of your workloads. You can set minimum, maximum, and desired capacity settings to meet both your demand and costs models.
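As a rough sketch of those capacity settings in practice (the launch template name and subnet ID are placeholders for resources created beforehand):

```bash
# Create an Auto Scaling group from an existing launch template with min/max/desired capacity
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-template LaunchTemplateName=web-template,Version=1 \
  --min-size 1 --max-size 4 --desired-capacity 2 \
  --vpc-zone-identifier subnet-0123456789abcdef0
```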
Elastic Load Balancing
So you have your EC2 instances and have scaled out in response to workload demand. But how do you distribute the load evenly among the servers? This is where Elastic Load Balancing comes in: it automatically distributes incoming application traffic across multiple resources, such as EC2 instances.
Although Elastic Load Balancing and Amazon EC2 Auto Scaling are separate services, they work together to help ensure that applications running in Amazon EC2 can provide high performance and availability.
Messaging and Queuing
This is based on a microservices approach where services are loosely coupled together, and uses 2 main services (a short CLI sketch follows the list):
Amazon Simple Notification Service (Amazon SNS) is a publish/subscribe service where a publisher publishes messages to subscribers. Subscribers can be web servers, email addresses, AWS Lambda functions, or several other options.
Amazon Simple Queue Service (Amazon SQS) is a message queuing service. Using Amazon SQS, you can send, store, and receive messages between software components, without losing messages or requiring other services to be available. In Amazon SQS, an application sends messages into a queue. A user or service retrieves a message from the queue, processes it, and then deletes it from the queue.
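Here's the short CLI sketch mentioned above (the account ID, topic ARN and queue URL are placeholders for the values returned by the create commands):

```bash
# Publish/subscribe with SNS
aws sns create-topic --name orders
aws sns publish --topic-arn arn:aws:sns:eu-west-1:111122223333:orders --message "Order 1001 received"

# Queue-based messaging with SQS: send a message, then a consumer receives and processes it
aws sqs create-queue --queue-name orders-queue
aws sqs send-message --queue-url https://sqs.eu-west-1.amazonaws.com/111122223333/orders-queue --message-body "Order 1001"
aws sqs receive-message --queue-url https://sqs.eu-west-1.amazonaws.com/111122223333/orders-queue
```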
Serverless
The term “serverless” means that your code runs on servers, but you do not need to provision or manage those servers. AWS Lambda is a service that lets you run code without needing to provision or manage servers. I’ll look more closely at AWS Lambda in a future post where I’ll do a demo of how it works.
Containers
AWS provides a number of container services:
Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container management system that enables you to run and scale containerized applications on AWS. Amazon ECS supports Docker containers
Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed service that you can use to run Kubernetes on AWS.
AWS Fargate is a serverless compute engine for containers. It works with both Amazon ECS and Amazon EKS. When using AWS Fargate, you do not need to provision or manage servers. AWS Fargate manages your server infrastructure for you
And that’s all for today! Hope you enjoyed this post, join me again next time for more AWS Core Concepts!
It's Day 27 of my 100 Days of Cloud Journey, and today I’m in a bit of a spot…
On Day 26, I started Day 1 of the Linux Cloud Engineer Bootcamp hosted by Cloudskills.io where I learned how to create Azure Linux Instances using Certificate-based authentication.
Day 2 of the Bootcamp started, and Mike is talking about Linux instances on AWS. And that stopped me in my tracks.
Why? Because I haven’t looked at AWS in much detail. So instead of continuing with the Linux Bootcamp, I’m going to go back to basics and learn AWS from the ground up.
What I know ….
What I know about AWS at this point is that it is built primarily on 3 Core Services which are:
EC2 – EC2 (or “Elastic Compute Cloud”, to give it its full title) is the core AWS Compute Service. Similar to Virtual Machines in Azure, you can run Windows or Linux workloads in the cloud.
IAM – AWS IAM is how you manage permissions; think of it as the equivalent of the Azure Active Directory service, as it's used to grant access to resources in AWS. However, IAM also controls how AWS services talk to each other.
S3 – S3 is AWS’s flexible storage service, which can be used to host a variety of data types such as websites, logs, databases, backups etc.
No matter what you do in AWS, at some point you will use the core trio of EC2, IAM and S3.
It's hard to pick “Core Services”, but the others that need to be looked at are:
RDS – AWS Hosted Database
Route 53 – DNS Routing and Domain Purchasing/Management
CloudWatch – Monitoring for AWS
CloudFormation – AWS Infrastructure-as-Code
OK, so that’s the core services. But it's not enough to just know about them and how they compare to Azure – I want to get in depth, get to know how AWS works, and feel as comfortable there as I do in Azure. So it's time to go learning again!
AWS Learning Path
Having looked at the options, I’ve established the best place to start is at the mothership. AWS offer Free Training to prepare for the AWS Certified Cloud Practitioner certification exam:
Having looked at the content, this is in effect the equivalent of the AZ-900 Azure Fundamentals certification, which was the first Azure certification I achieved. While this is a fundamentals exam and some people choose to skip it and go straight to the more technical certifications, I felt the AZ-900 was well worth taking for the full overview of and familiarity with Azure services that it gives.
So that’s why I’m taking the same approach to the AWS Platform: learn from the ground up, gain an overview of all services and then go forward into the more technical aspects.
The AWS Training for the AWS Certified Cloud Practitioner can be found here: