100 Days of Cloud – Day 33: Linux Cloud Engineer Bootcamp, Day 2

It’s Day 33 of my 100 Days of Cloud journey, and today I’m taking Day 2 of the Cloudskills.io Linux Cloud Engineer Bootcamp.


This is being run over 4 Fridays by Mike Pfeiffer and the folks over at Cloudskills.io, and is focused on the following topics:

  • Scripting
  • Administration
  • Networking
  • Web Hosting
  • Containers

If you recall, on Day 26 I did Day 1 of the bootcamp, and started Day 2 only to realise the topic was AWS, so I went off on a bit of a tangent to get back here to actually complete Day 2.

The bootcamp livestream started on November 12th and continued on Friday November 19th. With the Thanksgiving break now behind us, it resumes on December 3rd and completes on December 10th. However, you can sign up at any time to watch the lectures at your own pace and get access to the Lab Exercises on demand at this link:


Week 2 started with Mike going through the steps to create a Linux VM as an AWS EC2 instance and, similar to Day 1, installing a web server and then scripting that installation into a reusable bash script that can be deployed during VM creation.
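
To give a flavour of what that looks like, here’s a minimal sketch of the kind of user-data script that could be passed in at EC2 launch time – my own illustrative example rather than the exact script from the bootcamp, and it assumes an Amazon Linux AMI where yum and the httpd package are available:

```bash
#!/bin/bash
# Minimal web server bootstrap, assuming an Amazon Linux AMI.
# Runs once at first boot when supplied as EC2 user data.
yum update -y
yum install -y httpd

# Start Apache now and enable it to survive reboots.
systemctl start httpd
systemctl enable httpd

# Drop in a simple landing page so we can verify the deployment.
echo "<h1>Deployed via user data</h1>" > /var/www/html/index.html
```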

I then got my first look at Google Cloud Platform, when Robin Smorenburg gave us a walkthrough of the GCP Portal, and the process to create a Linux VM on GCP both in the Portal and Google Cloud Shell. Robin works as a GCP Architect and can be found blogging at https://robino.io/.

Overall, the creation process is quite similar across the 3 platforms: VM creation prompts you to create a key pair for certificate-based authentication, and both AWS and GCP allow SSH access from all IP addresses by default, which can then be locked down to a specific IP address or IP address range.
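
On AWS, for example, that lockdown happens on the instance’s security group. Here’s a hedged AWS CLI sketch – the security group ID and IP address below are hypothetical placeholders:

```bash
# Hypothetical placeholders for illustration only.
SG_ID="sg-0123456789abcdef0"
MY_IP="203.0.113.10/32"

# Remove the default allow-all SSH rule...
aws ec2 revoke-security-group-ingress \
    --group-id "$SG_ID" --protocol tcp --port 22 --cidr 0.0.0.0/0

# ...and allow SSH only from a single known address.
aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" --protocol tcp --port 22 --cidr "$MY_IP"
```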


That’s all for this post – I’ll update as I go through the remaining weeks of the Bootcamp, but to learn more and go through the full content of lectures and labs, sign up at the link above.

Hope you enjoyed this post, until next time!

100 Days of Cloud – Day 30: AWS Cloud Practitioner Essentials Day 3

It’s Day 30 of my 100 Days of Cloud journey, and today’s post continues my learning through the next 2 modules of my AWS Skillbuilder course on AWS Cloud Practitioner Essentials.

This is the official pre-requisite course on the AWS Skillbuilder platform (which for comparison is the AWS equivalent of Microsoft Learn) to prepare candidates for the AWS Certified Cloud Practitioner certification exam.

Let’s have a quick overview of what the 2 modules I completed today covered, the technologies discussed and key takeaways.

Module 5 – Storage and Databases


The first thing covered was the storage types available with EC2 instances:

  • An instance store provides temporary block-level storage for an Amazon EC2 instance. An instance store is disk storage that is physically attached to the host computer for an EC2 instance, and therefore has the same lifespan as the instance. When the instance is terminated, you lose any data in the instance store.
  • Amazon Elastic Block Store (Amazon EBS) is a service that provides block-level storage volumes that you can use with Amazon EC2 instances. If you stop or terminate an Amazon EC2 instance, all the data on the attached EBS volume remains available. It’s important to note that EBS stores data in a single Availability Zone, as it needs to be in the same zone as the EC2 instance it is attached to.
    • An EBS snapshot is an incremental backup. This means that the first backup taken of a volume copies all the data. For subsequent backups, only the blocks of data that have changed since the most recent snapshot are saved.
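
As a quick illustration of taking one of those incremental snapshots, here’s an AWS CLI sketch (the volume ID is a hypothetical placeholder):

```bash
# Snapshot an EBS volume; after the first full copy, AWS stores
# only the blocks that changed since the previous snapshot.
aws ec2 create-snapshot \
    --volume-id vol-0123456789abcdef0 \
    --description "Demo backup - only changed blocks are stored"
```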

Moving on from that, we looked at Amazon S3 (Simple Storage Service), which provides object-level storage. Objects are stored in buckets (which are like folders or containers). You can upload any type of file to an S3 bucket, which has unlimited storage space, and you can set permissions on any object you upload. S3 provides a number of storage classes which can be used depending on how you plan to store your data and how frequently or infrequently you access it (there’s a quick upload sketch after the list):

  • S3 Standard: used for high availability, and stores data in a minimum of 3 Availability Zones.
  • S3 Standard-IA: used for infrequently accessed data, still stores the data across 3 Availability Zones.
  • S3 One Zone-IA: same as above, but only stores data in a single Availability Zone to keep costs low.
  • S3 Intelligent-Tiering: used for data with unknown or changing access patterns.
  • S3 Glacier: low-cost storage that is ideal for data archiving. Data can be retrieved within a few minutes to a few hours.
  • S3 Glacier Deep Archive: lower cost than above; data is retrieved within 12 hours.
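
As mentioned above, here’s a small sketch of picking a storage class at upload time with the AWS CLI – the bucket name is a hypothetical placeholder (bucket names must be globally unique):

```bash
# Create a bucket (name is a hypothetical placeholder).
aws s3 mb s3://my-100days-demo-bucket

# Upload a file straight into the Standard-IA storage class.
aws s3 cp report.pdf s3://my-100days-demo-bucket/ --storage-class STANDARD_IA
```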

Next up is Amazon Elastic File System (Amazon EFS), which is a scalable file storage system that can be used with both AWS Cloud and on-premises resources. You can attach EFS to a single EC2 instance or to multiple instances. It’s a Linux file system, and can have multiple instances reading and writing to it at the same time. Amazon EFS file systems store data across multiple Availability Zones.

Database Types

Now we’re into the different types of Databases that are available on AWS.

Amazon Relational Database Service (Amazon RDS) enables you to run relational databases such as MySQL, PostgreSQL, Oracle and Microsoft SQL Server. You can move on-premises SQL Servers to EC2 instances, or else move the databases on these servers to Amazon RDS instances. You also have the option to move MySQL or PostgreSQL databases to Amazon Aurora, which costs around one-tenth the cost of commercial databases and replicates six copies of your data across 3 Availability Zones.

Amazon DynamoDB is a serverless NoSQL (non-relational) key-value database that organises data into tables, items and attributes. It suits specific use cases and is highly scalable, delivering single-digit millisecond response times at any scale.

Amazon Redshift is a data warehousing service used for big data analytics, and has the ability to collect data from multiple sources and analyse relationships and trends across your data.

On top of the above, there is AWS Database Migration Service (AWS DMS), which can help you migrate existing databases to AWS. The source database remains active during the migration, and the source and destination databases do not need to be the same type of database.

Module 6 – Security

Shared Responsibility Model

We kicked off the Security module by looking at the Shared Responsibility Model. This will be familiar from any cloud service: AWS is responsible for some parts of the environment, and the customer is responsible for other parts.

Image Credit – AWS Skillbuilder

The shared responsibility model divides into customer responsibilities (commonly referred to as “security in the cloud”) and AWS responsibilities (commonly referred to as “security of the cloud”).

Image Credit – AWS Skillbuilder

Identity and Access Control

Onwards to Identity! AWS Identity and Access Management (IAM) allows you to manage AWS services and resources securely. You do this by using a combination of users, groups, roles, policies and MFA. When you first create an AWS account, you have an account called “root”. You can then use the “root” account to create an IAM user that you will use to perform everyday tasks. This is the same concept as on any Linux system: you shouldn’t use the “root” account day to day. You can then add the user to an IAM group.

You then create policies that allow or deny permissions to specific AWS resources, and apply those policies to the user or group.
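
To make that concrete, here’s a minimal AWS CLI sketch – the user name is made up for illustration, while ReadOnlyAccess is one of the AWS managed policies:

```bash
# Create an everyday-use IAM user (name is a hypothetical placeholder).
aws iam create-user --user-name daily-admin

# Attach an AWS managed policy granting read-only access to the user.
aws iam attach-user-policy \
    --user-name daily-admin \
    --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess
```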

You then have the concept of roles – roles are identities that can be used to gain temporary access to perform a specific task. When a user assumes a role, they give up all previous permissions they had.

Finally, you should always enable MFA to provide an extra layer of security for your AWS account.

Managing Multiple AWS Accounts

But what if we have multiple AWS accounts that we need to manage? This is where AWS Organizations comes into play. AWS Organizations can consolidate and manage multiple AWS accounts, which is useful if you have separate accounts for Production, Development and Testing. You can then group AWS accounts into Organizational Units (OUs). In AWS Organizations, you can apply service control policies (SCPs) to the organization root, an individual member account, or an OU. An SCP affects all IAM users, groups, and roles within an account, including the AWS account root user.
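
As a sketch of how that looks in practice, assuming hypothetical placeholder IDs for an existing SCP and a target OU:

```bash
# Attach an existing service control policy to an OU.
# Both IDs below are hypothetical placeholders.
aws organizations attach-policy \
    --policy-id p-examplepolicy \
    --target-id ou-exampleou
```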


Compliance

Now we move on to compliance. AWS Artifact is a service that provides on-demand access to AWS security and compliance reports and select online agreements. AWS Artifact consists of two main sections, AWS Artifact Agreements and AWS Artifact Reports:

  • In AWS Artifact Agreements, you can review, accept, and manage agreements for an individual account and for all your accounts in AWS Organizations.
  • AWS Artifact Reports provide compliance reports from third-party auditors.


Finally, we reach the deeper level security and defence stuff!

AWS Shield provides built-in protection against DDoS attacks, with 2 levels of protection:

  • Standard – automatically protects all AWS customers from the most frequent types of DDoS attacks.
  • Advanced – on top of Standard, provides detailed attack diagnostics and the ability to detect and mitigate sophisticated DDoS attacks. It can also integrate with services like Amazon Route 53 and Amazon CloudFront.

AWS also offers the following security services:

  • AWS WAF (Web Application Firewall), which monitors incoming web requests and uses a web access control list (ACL) to filter out traffic from bad actors.
  • AWS Key Management Service (AWS KMS) which uses cryptographic keys to perform encryption operations for encrypting and decrypting data.
  • Amazon Inspector, which runs automated security assessments on your infrastructure based on compliance baselines.
  • Amazon GuardDuty, which provides threat intelligence by monitoring network activity and account behaviour.

And that’s all for today! Hope you enjoyed this post, join me again next time for more AWS Core Concepts! And more importantly, go and enroll for the course using the links at the top of the post – this is my brief summary and understanding of the modules, but the course is well worth taking if you want to get more in-depth.

100 Days of Cloud – Day 29: AWS Cloud Practitioner Essentials Day 2

It’s Day 29 of my 100 Days of Cloud journey, and today’s post continues my learning through the next 2 modules of my AWS Skillbuilder course on AWS Cloud Practitioner Essentials.

This is the official pre-requisite course on the AWS Skillbuilder platform (which for comparison is the AWS equivalent of Microsoft Learn) to prepare candidates for the AWS Certified Cloud Practitioner certification exam.

Let’s have a quick overview of what the 2 modules I completed today covered, the technologies discussed and key takeaways.

Module 3 – Global Infrastructure and Reliability

AWS operates Data Center facilities across the globe, giving customers the choice to select the correct region to host their AWS Infrastructure based on the following factors:

  • Compliance with Data Governance and Legal Requirements – this determines where your data can be stored; for example, certain types of EU data cannot be stored in a US data centre, as it won’t be covered by GDPR.
  • Proximity to Customers – the closer your infrastructure is to the customers or staff who will be consuming it, the lower the latency will be and that will give better performance.
  • Available services within a Region – Some services may not be available in the closest region to you, so you may need to select a different one. This information is available in the AWS Portal when you are creating the service.
  • Pricing – based on the tax laws of different nations, it may be up to 50% more expensive to host infrastructure in a certain nation or region.

Availability Zones

The need for availability and flexibility is key in any Cloud Architecture. AWS operates a number of Availability Zones, which are either a single data center or a group of data centers within a region. These are located tens of miles apart from each other and have low latency between them, so if a disaster occurs in one part of the region, the service is not affected if it needs to fail over to another data center.

Amazon CloudFront

Amazon CloudFront is an example of a CDN (Content Delivery Network). Amazon CloudFront uses a network of edge locations to cache content and deliver content to customers all over the world. When content is cached, it is stored locally as a copy. This content might be video files, photos, webpages, and so on. Edge Locations are separate from regions, and run the AWS DNS Service called Amazon Route 53 (which I cover in more detail below).

AWS Outposts

AWS Outposts is where AWS installs a fully operational mini-region in your own on-premises data center. At first look, it appears to be the same type of service as Azure Stack.

So from this, we can say:

  • AWS has data centers in multiple regions across the world
  • Each Region contains Availability Zones that allow you to run highly available infrastructure across physically separated buildings which are tens of miles apart.
  • Amazon CloudFront runs in AWS Edge locations (separate from regions), hosting DNS (Amazon Route 53) and a Content Delivery Network (CDN) to deliver content closer to customers no matter where they are located.

Finally in this module, we looked at the different ways that you can create, manage, and interact with AWS Services:

  • AWS Management Console – a web-based interface for accessing and managing AWS services. The console includes wizards and automated workflows that can simplify the process of completing tasks.
  • AWS Command Line Interface – AWS CLI enables you to control multiple AWS services directly from the command line within one tool. AWS CLI is available for users on Windows, macOS, and Linux. AWS CLI makes actions scriptable and repeatable.
  • Software Development Kits – The SDKs allow you to interact with AWS resources through various programming languages.
  • AWS Elastic Beanstalk – takes application code and desired configurations and then builds the infrastructure for you based on the configurations provided.
  • AWS CloudFormation – an Infrastructure as Code tool, which uses JSON- or YAML-based documents called CloudFormation templates. CloudFormation supports many different AWS resources, from storage and databases to analytics, machine learning, and more.
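
To give a feel for that last one, here’s a minimal sketch – my own illustrative example, not from the course – that writes a one-resource YAML template and deploys it as a stack:

```bash
# Write a minimal CloudFormation template; an S3 bucket is about
# the simplest resource to demonstrate with.
cat > template.yaml <<'EOF'
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
EOF

# Deploy the template as a stack; CloudFormation works out
# whether to create or update the underlying resources.
aws cloudformation deploy --template-file template.yaml --stack-name demo-stack
```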

Module 4 – Networking

Module 4 deals with networking, and the concept of Amazon Virtual Private Cloud, or VPC.

When I first heard of VPCs, I assumed they were like Resource Groups in Azure. Well, yes and no – a VPC is effectively an isolated virtual network that you then carve up into subnets, and you can then deploy resources such as EC2 instances into those subnets.

Because the VPC is isolated by default when you get it, you need to add an Internet Gateway to the perimeter which connects the VPC to the internet and provides Public Access.
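
Here’s a rough AWS CLI sketch of that flow – creating a VPC, carving out a subnet, and attaching an Internet Gateway (the CIDR ranges are example values):

```bash
# Create the VPC and capture its ID.
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
    --query 'Vpc.VpcId' --output text)

# Carve a subnet out of the VPC's address range.
aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.1.0/24

# Create an Internet Gateway and attach it to give the VPC public access.
IGW_ID=$(aws ec2 create-internet-gateway \
    --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --vpc-id "$VPC_ID" --internet-gateway-id "$IGW_ID"
```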

If you need to connect your corporate network to the VPC, you have 2 options:

  • Virtual Private Gateway allows VPN Connectivity between your on-premises corporate or private network and the VPC.
  • AWS Direct Connect allows you to establish a dedicated private connection between your corporate network and the VPC. Think of this as the same as Azure ExpressRoute.

So now that we have our VPC and access into it, we need to control that access to both the subnets and the EC2 instances running within them. We have 2 methods of doing this:

  • A network access control list (ACL) is a virtual firewall that controls inbound and outbound traffic at the subnet level.
    • Each AWS account includes a default network ACL. When configuring your VPC, you can use your account’s default network ACL or create custom network ACLs.
    • By default, your account’s default network ACL allows all inbound and outbound traffic, but you can modify it by adding your own rules.
    • Network ACLs perform stateless packet filtering. They remember nothing and check packets that cross the subnet border each way: inbound and outbound.
  • A security group is a virtual firewall that controls inbound and outbound traffic for an Amazon EC2 instance.
    • By default, a security group denies all inbound traffic and allows all outbound traffic. You can add custom rules to configure which traffic to allow or deny.
    • Security groups perform stateful packet filtering. They remember previous decisions made for incoming packets.
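
To illustrate the stateless point, here’s a sketch of adding an inbound network ACL rule with the AWS CLI (the ACL ID is a hypothetical placeholder). Because network ACLs remember nothing, a matching outbound rule would also be needed for the return traffic:

```bash
# Allow inbound HTTPS at the subnet boundary.
# The network ACL ID is a hypothetical placeholder.
aws ec2 create-network-acl-entry \
    --network-acl-id acl-0123456789abcdef0 \
    --ingress --rule-number 100 --protocol tcp \
    --port-range From=443,To=443 \
    --cidr-block 0.0.0.0/0 --rule-action allow
```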

So again, none of this is unfamiliar when compared against the equivalent services Azure offers.

Finally, the module covered Amazon Route 53 which is the AWS DNS Service. However, Route 53 does much more than just standard DNS, such as:

  • Manage DNS records for Domain Names
  • Register new domain names directly in Route 53
  • Direct traffic to endpoints using several different routing policies, such as latency-based routing, geolocation DNS, geoproximity and weighted round robin.
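
As an example of one of those policies, here’s a sketch of creating a weighted record with the AWS CLI – the hosted zone ID and domain name are hypothetical placeholders:

```bash
# Create a weighted A record; Route 53 splits traffic across records
# that share a name in proportion to their Weight values.
aws route53 change-resource-record-sets \
    --hosted-zone-id Z0EXAMPLE12345 \
    --change-batch '{
      "Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
          "Name": "www.example.com",
          "Type": "A",
          "SetIdentifier": "primary",
          "Weight": 80,
          "TTL": 60,
          "ResourceRecords": [{"Value": "203.0.113.10"}]
        }
      }]
    }'
```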

And that’s all for today! Hope you enjoyed this post, join me again next time for more AWS Core Concepts!

100 Days of Cloud – Day 28: AWS Cloud Practitioner Essentials Day 1

It’s Day 28 of my 100 Days of Cloud journey, and today’s post is about the first 2 modules of my AWS Skillbuilder course on AWS Cloud Practitioner Essentials.

This is the official pre-requisite course on the AWS Skillbuilder platform (which for comparison is the AWS equivalent of Microsoft Learn) to prepare candidates for the AWS Certified Cloud Practitioner certification exam.

Let’s have a quick overview of what the first 2 modules covered, the technologies discussed and key takeaways.

Module 1 – Cloud Computing Concepts

Module 1 covers the core concepts of cloud computing and describes the different deployment models you can use when deploying your infrastructure. As a reminder, these are:

  • Cloud Deployment – where you can migrate existing applications to the cloud, or you can design and build new applications which are fully hosted in the cloud.
  • On-Premises Deployment – also known as Private Cloud, this is where you host all infrastructure on self-managed hardware in your own datacenter, and you manage all costs associated with power, cooling and hardware refresh/upgrade.
  • Hybrid Deployment – this is where you host some elements of your infrastructure on-premises and some elements in the cloud with Site-to-Site VPN connectivity between the sites.

Module 1 also covers the main benefits of cloud computing:

  • Variable Expense – Instead of a massive upfront cost outlay (CapEx), you only pay for what you use and are billed monthly (OpEx).
  • No datacenter maintenance, so IT teams can focus on what’s important.
  • Stop Guessing Capacity – you pay for what you use, so you can scale up or down based on demand.
  • Economies of Scale – the more people use the service, the lower the costs are.
  • Increase Speed and Agility – the ability to create platforms in minutes as opposed to waiting for hardware, configuration and testing.
  • Go Global in Minutes – fully scalable across global regional datacenters.

Module 2 – AWS Compute Services

Module 2 looks at the various AWS Compute Services offerings. Here’s a quick overview of these services:


Amazon EC2

This is Amazon’s core compute service, where you can create virtual machines running Windows or Linux using an array of built-in operating systems and configurations. EC2 is highly flexible, cost effective and quick to get running. It comes in a range of instance types which are designed to suit different computing needs:

  • General Purpose – provides a balance of compute, memory and networking resources.
  • Compute Optimized – ideal for compute-intensive applications that require high processing power. Examples would be batch processing, scientific modelling, gaming servers or ad engines.
  • Memory Optimized – ideal for memory-intensive applications such as open-source databases, in-memory caches, and real-time big data analytics.
  • Accelerated Computing – ideal for graphics processing, floating-point calculations or data pattern matching.
  • Storage Optimized – high-performance storage, ideal for database processing, data warehousing or analytics workloads.

EC2 also offers different pricing models to suit your needs, such as Dedicated, Reserved or Spot Instances, On-Demand pay as you go, or 1- or 3-year Savings Plans.

EC2 also provides auto-scaling functionality so you can scale up or down based on the demand of your workloads. You can set minimum, maximum, and desired capacity settings to meet both your demand and cost models.
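
As a quick sketch of adjusting those settings on an existing group (the group name is a hypothetical placeholder):

```bash
# Raise the capacity limits on an existing Auto Scaling group.
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name web-asg \
    --min-size 2 --max-size 6 --desired-capacity 2
```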

Elastic Load Balancing

So you have your EC2 instances and have scaled out in response to workload demand. But how do you evenly distribute the load across those servers? This is where Elastic Load Balancing comes in.

  • Automatically distributes incoming application traffic across multiple resources
  • Although Elastic Load Balancing and Amazon EC2 Auto Scaling are separate services, they work together to help ensure that applications running in Amazon EC2 can provide high performance and availability.

Messaging and Queuing

This is based on a microservices approach, where services are loosely coupled together. AWS provides 2 main services here:

  • Amazon Simple Notification Service (Amazon SNS) is a publish/subscribe service where a publisher publishes messages to subscribers. Subscribers can be web servers, email addresses, AWS Lambda functions, or several other options.
  • Amazon Simple Queue Service (Amazon SQS) is a message queuing service. Using Amazon SQS, you can send, store, and receive messages between software components, without losing messages or requiring other services to be available. In Amazon SQS, an application sends messages into a queue. A user or service retrieves a message from the queue, processes it, and then deletes it from the queue.
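
To make the SQS flow concrete, here’s a sketch of the send/receive/delete cycle with the AWS CLI – the queue URL is a hypothetical placeholder:

```bash
# Hypothetical queue URL for illustration.
QUEUE_URL="https://sqs.eu-west-1.amazonaws.com/123456789012/demo-queue"

# A producer drops a message onto the queue...
aws sqs send-message --queue-url "$QUEUE_URL" --message-body "order-1234"

# ...a consumer picks it up (the receipt handle is needed to delete it)...
RECEIPT=$(aws sqs receive-message --queue-url "$QUEUE_URL" \
    --query 'Messages[0].ReceiptHandle' --output text)

# ...and deletes it once processed so it isn't delivered again.
aws sqs delete-message --queue-url "$QUEUE_URL" --receipt-handle "$RECEIPT"
```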


Serverless

The term “serverless” means that your code runs on servers, but you do not need to provision or manage those servers. AWS Lambda is a service that lets you run code without needing to provision or manage servers. I’ll look more closely at AWS Lambda in a future post, where I’ll do a demo of how it works.


Containers

AWS provides a number of container services:

  • Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container management system that enables you to run and scale containerized applications on AWS. Amazon ECS supports Docker containers.
  • Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed service that you can use to run Kubernetes on AWS.
  • AWS Fargate is a serverless compute engine for containers. It works with both Amazon ECS and Amazon EKS. When using AWS Fargate, you do not need to provision or manage servers; AWS Fargate manages your server infrastructure for you.

And that’s all for today! Hope you enjoyed this post, join me again next time for more AWS Core Concepts!