100 Days of Cloud – Day 29: AWS Cloud Practitioner Essentials Day 2

It's Day 29 of my 100 Days of Cloud journey, and today's post continues my learning through the next 2 modules of my AWS Skill Builder course on AWS Cloud Practitioner Essentials.

This is the official prerequisite course on the AWS Skill Builder platform (which, for comparison, is the AWS equivalent of Microsoft Learn) to prepare candidates for the AWS Certified Cloud Practitioner certification exam.

Let’s have a quick overview of what the 2 modules I completed today covered, the technologies discussed and key takeaways.

Module 3 – Global Infrastructure and Reliability

AWS operates Data Center facilities across the globe, giving customers the choice to select the correct region to host their AWS Infrastructure based on the following factors:

  • Compliance with Data Governance and Legal Requirements – this determines where your data can be stored based on data governance rules; for example, certain types of EU data cannot be stored in a US data centre, as that would breach GDPR data residency requirements.
  • Proximity to Customers – the closer your infrastructure is to the customers or staff who will be consuming it, the lower the latency will be and that will give better performance.
  • Available services within a Region – some services may not be available in the Region closest to you, so you may need to select a different one. This information is shown in the AWS Management Console when you are creating the service.
  • Pricing – based on factors such as the tax laws of different nations, it may be up to 50% more expensive to host infrastructure in a certain nation or Region.

Availability Zones

The need for availability and flexibility is key in any Cloud Architecture. AWS operates a number of Availability Zones, which are either a single data center or a group of data centers within a region. These are located tens of miles apart from each other and have low latency between them, so if a disaster occurs in one part of the region, the service is not affected if it needs to fail over to another data center.
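As a quick illustration of this layout, you can list the Availability Zones that make up a Region from the AWS CLI (the Region name below is just an example I've picked, not one from the course):

# Shows each AZ (e.g. eu-west-1a, eu-west-1b, eu-west-1c) in the chosen Region
aws ec2 describe-availability-zones --region eu-west-1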

Amazon CloudFront

Amazon CloudFront is an example of a CDN (Content Delivery Network). Amazon CloudFront uses a network of edge locations to cache content and deliver content to customers all over the world. When content is cached, it is stored locally as a copy. This content might be video files, photos, webpages, and so on. Edge Locations are separate from regions, and run the AWS DNS Service called Amazon Route 53 (which I cover in more detail below).

AWS Outposts

AWS Outposts is where AWS installs an AWS mini-region in your own on-premises data center. At first look, it appears to be the same type of service as Azure Stack.

So from this, we can say:

  • AWS has data centers in multiple regions across the world
  • Each Region contains Availability Zones that allow you to run highly available infrastructure across physically separated buildings which are tens of miles apart.
  • Amazon CloudFront runs in AWS Edge locations (separate from regions), hosting DNS (Amazon Route 53) and a Content Delivery Network (CDN) to deliver content closer to customers no matter where they are located.

Finally in this module, we looked at the different ways that you can create, manage, and interact with AWS Services:

  • AWS Management Console – a web-based interface for accessing and managing AWS services. The console includes wizards and automated workflows that can simplify the process of completing tasks.
  • AWS Command Line Interface – AWS CLI enables you to control multiple AWS services directly from the command line within one tool. AWS CLI is available for users on Windows, macOS, and Linux. AWS CLI makes actions scriptable and repeatable.
  • Software Development Kits – The SDKs allow you to interact with AWS resources through various programming languages.
  • AWS Elastic Beanstalk – takes application code and desired configurations and then builds the infrastructure for you based on the configurations provided.
  • AWS CloudFormation – an Infrastructure as Code tool, which uses JSON or YAML documents called CloudFormation templates. CloudFormation supports many different AWS resources across storage, databases, analytics, machine learning, and more (a short CLI sketch follows this list).
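To make the CLI and CloudFormation points a bit more concrete, here is a minimal sketch of what a scriptable session might look like. It's an illustration under my own assumptions: the stack name and template file are placeholders, not anything from the course.

# One-time setup of credentials and a default Region
aws configure

# List every Region available to the account
aws ec2 describe-regions --output table

# Deploy a CloudFormation template (Infrastructure as Code) as a named stack
aws cloudformation deploy --template-file template.yaml --stack-name my-demo-stack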

Module 4 – Networking

Module 4 deals with networking, and the concept of Amazon Virtual Private Cloud, or VPC.

When I first heard of VPCs, I assumed they were like Resource Groups in Azure. Well, yes and no – a VPC is effectively an isolated virtual network that you then carve up into subnets, and you can then deploy resources such as EC2 instances into those subnets.

Because the VPC is isolated by default, you need to add an Internet Gateway to the perimeter, which connects the VPC to the internet and provides public access.
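As a rough sketch of that flow from the AWS CLI (the CIDR ranges and the placeholder IDs below are assumptions for illustration only; you would substitute the real IDs returned by each command):

# Create the VPC, then carve out a subnet inside it
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.1.0/24

# Create an Internet Gateway and attach it to the VPC to provide public access
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxxxxxx --vpc-id vpc-xxxxxxxx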

If you need to connect your corporate network to the VPC, you have 2 options:

  • Virtual Private Gateway allows VPN Connectivity between your on-premises corporate or private network and the VPC.
  • AWS Direct Connect allows you to establish a dedicated private connection between your corporate network and the VPC. Think of this as the same as Azure ExpressRoute.

Now that we have our VPC and access into it, we need to control that access to both the subnets and the EC2 instances running within them. We have 2 methods of controlling that access:

  • A network access control list (ACL) is a virtual firewall that controls inbound and outbound traffic at the subnet level.
    • Each AWS account includes a default network ACL. When configuring your VPC, you can use your account’s default network ACL or create custom network ACLs.
    • By default, your account’s default network ACL allows all inbound and outbound traffic, but you can modify it by adding your own rules.
    • Network ACLs perform stateless packet filtering. They remember nothing and check packets that cross the subnet border each way: inbound and outbound.
  • A security group is a virtual firewall that controls inbound and outbound traffic for an Amazon EC2 instance.
    • By default, a security group denies all inbound traffic and allows all outbound traffic. You can add custom rules to configure which traffic to allow or deny.
    • Security groups perform stateful packet filtering. They remember previous decisions made for incoming packets (a minimal CLI sketch follows this list).
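Here is that minimal sketch: creating a security group and opening a single inbound port with the AWS CLI. The group name, placeholder IDs and port are my own assumptions for illustration:

# Create a security group attached to the VPC
aws ec2 create-security-group --group-name web-sg --description "Allow HTTPS" --vpc-id vpc-xxxxxxxx

# Allow inbound HTTPS; because security groups are stateful, the return traffic is allowed automatically
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 443 --cidr 0.0.0.0/0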

So again, none of this is unfamiliar when compared against the equivalent services that Azure offers.

Finally, the module covered Amazon Route 53 which is the AWS DNS Service. However, Route 53 does much more than just standard DNS, such as:

  • Manage DNS records for Domain Names
  • Register new domain names directly in Route 53
  • Direct traffic to endpoints using several different routing policies, such as latency-based routing, geolocation DNS, geoproximity routing, and weighted round robin (see the short sketch below).
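For example, two read-only Route 53 commands from the AWS CLI (the hosted zone ID below is a made-up placeholder):

# List the hosted zones in the account
aws route53 list-hosted-zones

# List the DNS records within a specific hosted zone
aws route53 list-resource-record-sets --hosted-zone-id Z0000000000000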

And that’s all for today! Hope you enjoyed this post, join me again next time for more AWS Core Concepts!

100 Days of Cloud — Day 1: Preparing the Environment

Welcome to Day 1 of my 100 Days of Cloud Journey.

I’ve always believed that good preparation is the key to success, and Day 1 is going to be about setting up the environment for use.

I’ve decided to split my 100 days across 3 disciplines:

  • Azure, because it’s what I know
  • AWS, because it's what I want to know more about
  • And the rest of it … this could mean anything: GitOps, CI/CD, Python, Ansible, Terraform, and maybe even a bit of Google Cloud thrown in for good measure. There might even be some Office 365 stuff!

It's not going to be an exact 3-way split across the disciplines, but let's see how it goes.

Let's start the prep. The goal of the 100 Days for me is to try and show how things can be done/created/deleted/modified etc. using both the GUI and the command line. For the former, we'll do what it says on the tin and click around the screen of whatever cloud portal we're using. For the latter, it's going to be done in Visual Studio Code:

To download, we go to https://code.visualstudio.com/download, and choose to download the System Installer:

Once the download completes, run the installer (Select all options). Once it completes, launch Visual Studio Code:

After selecting what color theme you want, the first place to go is the Source Control button. This is important: we're going to use Source Control to manage and track any changes we make, while also storing our code centrally in GitHub. You'll need a GitHub account (or, if you're using Azure Repos or AWS CodeCommit, you can use that instead). For the duration of the 100 Days, I'll be using GitHub. Once your account is created, you can create a new repository (I'm calling mine 100DaysRepo).

So now, let's click on the "install git" option. This will redirect us to https://git-scm.com, where we can download the Git installer. When running the setup, we can accept the defaults for everything EXCEPT this screen, where we tell Git to use Visual Studio Code as its default editor:

Once the Git install is complete, close and re-open Visual Studio Code. Now we see we have the option to "Open Folder" or "Clone Repository". Click the latter option; at the top of the screen we are prompted to provide the URL of the GitHub repository we just created. Enter the URL and click "Clone from GitHub":

We get a prompt to say the extension wants to sign into GitHub — click “Allow”:

Clicking “Allow” redirects us to this page, click “Continue”:

This brings us to the logon prompt for GitHub:

This brings up a "Success" message and an Auth Token:

Click on the "Signing in to github.com" message at the bottom of the screen, and then paste the token from the screen above into the "Uri" prompt at the top:

Once this is done, you will be prompted to select a local location to clone the repository to. Once the clone has completed, click "Open Folder" and browse to that location to open the repository in Visual Studio Code.
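If you prefer the terminal to the GUI prompts, the same clone can be done with a couple of commands. The URL below assumes your own GitHub username and the 100DaysRepo name I used, so adjust as needed:

# Clone the repository, move into it, and open the folder in VS Code
git clone https://github.com/<your-github-username>/100DaysRepo.git
cd 100DaysRepo
code .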

Now, let's create a new file. It can be anything; we just want to test the commit and make sure it's working. So let's click on "File > New File". Put some text in (it can be anything) and then save the file with whatever name you choose:

My file is now saved. And we can see that we now have an alert over in Source Control:

When we go to Source Control, we see the file is under “Changes”. Right-click on the file for options:

We can choose to do the following:

– Discard Changes — reverts to previous saved state

– Stage Changes — saves a copy in preparation for commit

When we click “Stage Changes”, we can see the file moves from “Changes” to “Staged Changes”. If we click on the file, we can see the editor brings up the file in both states — before and after changes:

From here, click on the menu option (3 dots), and click “Commit”. We can also use the tick mark to Commit:

This then prompts us to provide a commit message. Enter something relevant to the changes you've made and hit Enter:

And it fails!!!

OK, so we need to configure a name and email ID for Git. So open Git Bash and run the following:

git config --global user.name "your_name"
git config --global user.email "your_email_id"

So let’s try that again. We’ll commit first:

Looks better, so now we’ll do a Push:

And when we check GitHub, is our file there? Yes it is!
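For reference, the whole stage, commit and push cycle we just clicked through is equivalent to three commands in a terminal (the file name here is just an example):

# Stage the new file, commit it with a message, and push it up to GitHub
git add myfile.txt
git commit -m "Add my first test file"
git push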

OK, so that’s our Repository done and Source Control and cloning with GitHub configured.

That's the end of Day 1! As we progress along the journey, and as we need them, I'll add some Visual Studio Code extensions which will give us invaluable help along the way. You can browse these by clicking on the "Extensions" button in the Activity Bar:

Extensions add languages, tools and debuggers to VS Code which auto-recognize file types and code to enhance the experience.

Hope you enjoyed this post, until next time!!