Virtual Private Cloud - VPC

In AWS, servers are launched inside a virtual network called a virtual private cloud (VPC).

A VPC can be divided into subnets.

Rules are specific to each subnet: we can make one subnet public to the internet and keep another private.

Servers in different subnets (but the same VPC) can be allowed to talk / communicate with each other.

Every EC2 instance must reside in a VPC. AWS automatically creates one default VPC per region, and we can create any number of additional VPCs if we want.
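Dividing a VPC's address range into subnets is ordinary CIDR arithmetic. Here is a minimal sketch using Python's `ipaddress` module; the address ranges are hypothetical examples, not AWS defaults:

```python
import ipaddress

# Hypothetical VPC address range
vpc = ipaddress.ip_network("10.0.0.0/16")

# Divide the /16 VPC into /24 subnets (256 addresses each)
subnets = list(vpc.subnets(new_prefix=24))

print(len(subnets))   # 256 subnets available
print(subnets[0])     # 10.0.0.0/24 - could be the public subnet
print(subnets[1])     # 10.0.1.0/24 - could be a private subnet

# A host in one subnet still belongs to the same VPC range
host = ipaddress.ip_address("10.0.1.25")
print(host in vpc)    # True
```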


VPC Peering

  • Through VPC peering, we can connect instances so that they communicate with each other, regardless of the subnet, VPC, or region in which they reside.
  • VPC peering is not supported between VPCs whose address ranges overlap, for example if both servers have the same private IP in their respective subnets.
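The address-clash restriction comes down to overlapping CIDR ranges, which is easy to check. A sketch using Python's `ipaddress` module (the CIDR blocks are hypothetical):

```python
import ipaddress

def can_peer(cidr_a: str, cidr_b: str) -> bool:
    """Peering requires the two VPC address ranges not to overlap."""
    a = ipaddress.ip_network(cidr_a)
    b = ipaddress.ip_network(cidr_b)
    return not a.overlaps(b)

print(can_peer("10.0.0.0/16", "10.1.0.0/16"))  # True  - distinct ranges
print(can_peer("10.0.0.0/16", "10.0.5.0/24"))  # False - same IPs exist in both
```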


We can connect to an instance in a VPC through an “Internet Gateway”. Once an internet gateway is attached to the VPC, anyone on the internet can attempt to connect.

To make things a little more private, we can attach a “Virtual Private Gateway” to our VPC instead of an internet gateway. A virtual private gateway allows only selected traffic into the VPC. But the route taken by that traffic is still the public internet.

If we want the route itself to be a private, dedicated (non-shared) connection from the client’s data center premises, “AWS Direct Connect” is the solution.


AWS Support Plans

If we have any issues with EC2 instances or any other service, we can raise a support ticket to the AWS support team.

This service is not free (except the base plan). It is a paid service offered in 4 types of plans.

1) Basic

This plan is free, and anyone with an AWS account can make use of it. It includes free customer service and access to AWS whitepapers, but no technical support is offered in this plan.

2) Developer

Covers the Basic plan. Includes technical support by Cloud Support Associates during business hours, via email only.

3) Business

Covers the Basic & Developer plans as well, plus 24x7 technical support by Cloud Support Engineers via email, chat, and phone.

4) Enterprise

This plan includes all the services offered by the previous plans, and in addition the technical support is provided by senior, industry-experienced Cloud Support Engineers.

In this plan, a dedicated Technical Account Manager (TAM) is assigned to your AWS account, and we can reach out to that person directly for resolving any issue.

AWS Pricing Benefits & Discounts

1) Pay as you go / On-Demand

Pay for usage. Pay only as long as you need it.

  • No large upfront expenses.
  • No long term contracts.

2) Pay less when you reserve

We can reserve certain services (EC2, RDS, etc…) and purchase them in advance if we need them for the long term.

Up to 75% of the On-Demand cost can be saved.

3 payment modes:

  • All upfront – Largest discount
  • Partial upfront – lesser discount
  • No upfront – smallest discount
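To see how the discount plays out, here is a back-of-the-envelope comparison in Python. The hourly rates are made up for illustration, not real AWS prices:

```python
# Comparing purchase options with made-up hourly rates
# (illustrative only; real prices vary by instance type and region).
HOURS_PER_YEAR = 24 * 365

on_demand_rate = 0.10           # $/hour, hypothetical
reserved_effective_rate = 0.04  # $/hour effective after an all-upfront discount

on_demand_yearly = on_demand_rate * HOURS_PER_YEAR
reserved_yearly = reserved_effective_rate * HOURS_PER_YEAR

savings = 1 - reserved_yearly / on_demand_yearly
print(f"On-Demand: ${on_demand_yearly:.2f}/year")
print(f"Reserved:  ${reserved_yearly:.2f}/year")
print(f"Savings:   {savings:.0%}")  # 60% with these example rates
```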

3) Pay less by using more

We can get discounts by using larger quantities of certain services like Amazon S3. If our S3 usage crosses a threshold set by AWS, the per-unit price drops and the bill amount is reduced.
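Tiered (volume) pricing of this kind can be sketched as follows. The tier boundaries and per-GB prices below are illustrative assumptions, not official AWS rates:

```python
def tiered_bill(gb_used: float) -> float:
    """Volume-discount billing: a cheaper per-GB rate above each threshold.
    The tiers and prices below are made up for illustration."""
    tiers = [
        (50_000, 0.023),        # first 50 TB at $0.023/GB
        (450_000, 0.022),       # next 450 TB at $0.022/GB
        (float("inf"), 0.021),  # everything beyond at $0.021/GB
    ]
    bill, remaining = 0.0, gb_used
    for tier_size, price in tiers:
        used = min(remaining, tier_size)
        bill += used * price
        remaining -= used
        if remaining <= 0:
            break
    return bill

print(tiered_bill(100))      # small usage, billed at the highest per-GB rate
print(tiered_bill(100_000))  # crosses into the cheaper second tier
```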

4) Pay less as AWS grows

Over the years, AWS has continuously decreased the prices of its services, driven by the huge increase in the number of AWS customers.

All the aggregated usage of AWS customers contributes to the growth of AWS, and AWS decreases its pricing in turn. This is the benefit of massive economies of scale. (This point is important for the AWS Cloud Practitioner Certification CLF-C01 exam.)


AWS Health Dashboard

1) AWS Service Health Dashboard

  • Shows the status of AWS services, whether they are functioning normally or not.
  • This is a global service.

2) AWS Personal Health Dashboard

  • Checks the status of the resources that we are utilizing.
  • Displays any issues that are impacting our resources.

If any services are scheduled for maintenance, updates, or testing events, we will be notified here and can check them here.


AWS Well-Architected Framework

The 5 pillars of the AWS Well-Architected Framework are:

1) Operational Excellence

  • Perform operations as code
  • Make frequent, small, reversible changes (agile).
  • Refine operations procedures frequently
  • Anticipate & learn from failure.

2) Security

  • Implement identity foundation
  • Enable traceability
  • Apply security at all layers
  • Keep people away from data as much as possible

3) Reliability

  • Testing the recovery procedure
  • Automatic recovery from failure

4) Performance Efficiency

  • Use serverless computing

5) Cost Optimization

  • Use managed services to reduce cost of ownership
  • Reduce spending on conventional data centers. Instead, use cloud services.


AWS CodeCommit

  • CodeCommit is a managed service provided by AWS for hosting Git repositories.
  • AWS CodeCommit is used for source control / version control of repositories.
  • AWS CodeCommit follows the Software as a Service (SaaS) model.
  • Alternatives: GitHub, Bitbucket, etc…


AWS Elastic Beanstalk

Elastic Beanstalk is AWS’ platform as a service (PaaS) model.

We can directly deploy our application in AWS cloud by just selecting the environment of our application (like PHP, Python, Java etc…)

Beanstalk will set up a platform for us by installing all the dependencies, configuring the security group, and launching an EC2 instance. Finally, a public endpoint is provided with the application running and ready to serve.

  • Google App Engine is another Popular PaaS.

AWS Rekognition

  • A deep-learning-based computer vision service offered as Software as a Service.
  • AWS Rekognition provides powerful visual analysis.
  • Performs object detection, face detection & recognition, face comparison, etc…
  • Provides attributes/explanations of detected objects or faces in the image, along with a confidence percentage.
  • It has been sold and used by a number of United States government agencies.

AWS CloudFormation – Infrastructure as Code (IaC)

Instead of manually launching EC2 instances or creating databases or S3 buckets, we can automate this process using Infrastructure as Code.

By running scripts, we can create all those things (environment). All we have to do is to write a template/code/script and run that same code whenever we require the same environment or infrastructure.

  • We can write this type of code in Terraform – generic for all cloud service providers.
  • Specifically for AWS, Amazon offers “Cloud Formation” service for IaC.

AWS CloudFormation has a GUI with which we can view and edit the code easily.

It has predefined templates for common functionality. We can just select them and the code will be added automatically (this is done in the “Designer” in the AWS CloudFormation dashboard).

Or if we want, we can code manually by typing the script on our own.
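For a feel of what such a template looks like, here is a minimal hypothetical CloudFormation template that launches a single EC2 instance; the AMI ID is a placeholder, not a real image:

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Minimal example: launch one EC2 instance",
  "Resources": {
    "MyInstance": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "InstanceType": "t2.micro",
        "ImageId": "ami-0123456789abcdef0"
      }
    }
  }
}
```

Creating a stack from this template provisions the instance; deleting the stack tears it down again, so the whole environment can be recreated on demand.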

Amazon CloudFront by AWS

CloudFront is a content delivery network (CDN) service provided and managed by AWS.

Amazon CloudFront consists of many network nodes, called Edge Locations, that are dispersed across the globe.

To reduce latency, content is cached at the edge locations located at various places all over the world.

Instead of the client making a request to the origin server for a file, the request is sent to the nearest edge location and served from there.

If a client requests a file for the first time, it is served from the origin server, and CloudFront caches that file at the edge location. The next time any client requests the same file, it is served from the nearest edge location.

If users need to upload files to S3 buckets, they can upload to the nearest edge location, thereby reducing latency, and those files are transferred to S3 much faster. CloudFront takes care of this fast transfer. This is called “S3 Transfer Acceleration”.

Reverse Proxy & Content Delivery Networks (CDN)

A reverse proxy server receives requests from clients, forwards them to the actual servers, receives the responses, and forwards them back to the clients.

So the existence (IP addresses) of the original backend servers is hidden from clients,

thereby minimizing web-based attacks such as DoS and DDoS attacks.

A reverse proxy can provide caching functionality as well. Frequently requested files or repeated requests are cached on the reverse proxy server itself, and these redundant requests are served by the reverse proxy directly.
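The caching behaviour described above can be sketched in a few lines of Python. The `CachingProxy` class and its origin function are hypothetical stand-ins, not a real proxy:

```python
class CachingProxy:
    """Toy reverse proxy: serves from its cache when possible,
    otherwise fetches from the origin and caches the response."""

    def __init__(self, origin_fetch):
        self.origin_fetch = origin_fetch  # function simulating the backend
        self.cache = {}
        self.origin_hits = 0

    def get(self, path):
        if path not in self.cache:        # cache miss: go to the origin
            self.origin_hits += 1
            self.cache[path] = self.origin_fetch(path)
        return self.cache[path]           # cache hit: origin never sees it

proxy = CachingProxy(lambda path: f"contents of {path}")
proxy.get("/logo.png")    # first request reaches the origin
proxy.get("/logo.png")    # second request served from the proxy's cache
print(proxy.origin_hits)  # 1 - the origin was contacted only once
```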

This is the same conceptual basis for Content Delivery Networks (CDN).

A CDN is a network that acts as a reverse proxy and provides caching functionality, protection against DDoS attacks, and can also act as a secondary firewall.

AWS provides a CDN with name "Amazon CloudFront" (more about this in next post…)

AWS Lambda & AWS Athena

Both are serverless features fully managed by AWS.

AWS Lambda

This is an event-driven service.

We can configure a function (code) to run, or an action to be performed, when our target event occurs. We pay only for the execution time for which our function code has run.

AWS Lambda is serverless and an extremely cost-effective service if we make good use of it.
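A Lambda function is essentially just a handler that receives the triggering event. A minimal sketch; the event shape here is a made-up example, since real events depend on the configured trigger:

```python
# A minimal AWS Lambda-style handler. Locally it is just a function;
# on Lambda, AWS calls it whenever the configured event occurs.
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": f"Hello, {name}!",
    }

# Simulating an invocation locally (the context object is unused here):
print(lambda_handler({"name": "AWS"}, None))  # {'statusCode': 200, 'body': 'Hello, AWS!'}
```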

AWS Athena

AWS Athena is a service that allows us to analyze data, such as log files in S3 buckets, using standard SQL queries.

Serverless Computing

  • Serverless doesn’t mean that servers do not exist. Servers exist and are managed by the cloud service provider, but we won’t have access to them.
  • The selected function or specified piece of code runs and is billed based on its execution time (in the order of milliseconds).
  • Serverless features are extremely cost-effective because we don’t pay for idle servers at all.
  • AWS provides many serverless computing services such as AWS Lambda, AWS Athena, AWS DynamoDB etc…

AWS Route 53 – DNS

Route 53 is an AWS-managed DNS (Domain Name System) service on the cloud.

Apart from translating names to IP addresses, Route 53 can also perform health checks & monitoring, and has routing capabilities.

It is a global service (i.e. not specific to a single AWS region; Route 53 is the same in all AWS regions).

Simple Queue Service – AWS SQS

AWS SQS is a fully managed, reliable message queuing service.

It is cost-effective and enables loosely coupled systems.

SQS is a message broker mechanism that enables communication between the different modules of our application.

SQS ensures orderly delivery of all the messages in the queue buffer to the corresponding receiver component.

A message broker service takes a message from the publisher, stores it in a queue, and forwards it to the consumer.

It ensures: 

  • Guarantee of delivery.
  • Orderly delivery.
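The queue-buffer idea can be sketched in memory with Python's `collections.deque`. Real SQS is a managed network service, so this toy only illustrates the first-in, first-out behaviour:

```python
from collections import deque

class ToyQueue:
    """In-memory stand-in for a message queue buffer."""
    def __init__(self):
        self.buffer = deque()

    def send(self, message):   # producer side
        self.buffer.append(message)

    def receive(self):         # consumer side: oldest message first
        return self.buffer.popleft() if self.buffer else None

q = ToyQueue()
q.send("order-1 placed")
q.send("order-2 placed")
print(q.receive())  # order-1 placed  (first in, first out)
print(q.receive())  # order-2 placed
```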

Simple Notification Service - AWS SNS

A messages / alerts / notifications service fully managed by AWS.

It publishes a message to all the endpoints that are subscribed to a topic.

We can create a topic and add as subscribers whoever is interested in receiving notifications about that topic.

We can then publish a message about the topic as a notification to all the subscribed endpoints, such as mobile devices (SMS or push notifications), email addresses, and SQS queues.
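The topic/subscriber fan-out can be sketched in memory. The `ToyTopic` class below is a hypothetical stand-in where each endpoint is just a callback:

```python
class ToyTopic:
    """In-memory sketch of topic-based publish/subscribe (fan-out)."""
    def __init__(self):
        self.subscribers = []  # each subscriber is a callback (an "endpoint")

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message):
        for deliver in self.subscribers:  # one publish, every endpoint notified
            deliver(message)

inbox_email, inbox_sms = [], []
alerts = ToyTopic()
alerts.subscribe(inbox_email.append)  # pretend email endpoint
alerts.subscribe(inbox_sms.append)    # pretend SMS endpoint
alerts.publish("Server CPU high")
print(inbox_email, inbox_sms)  # both endpoints received the same message
```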

AWS CloudWatch

A real-time monitoring service for AWS resources and their usage.

It collects & monitors log files, sets alarms, and automatically reacts to changes in our resources.

CloudWatch Alarms is a very useful feature in AWS with which we can automate a response from our side when certain events are triggered or specific conditions are met.

e.g. If CPU utilization is > 70%, then send an alarm / notification via email.

It monitors performance & utilization, creates alarms to alert when thresholds are crossed, and shows all metrics in a single dashboard.
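The alarm logic can be sketched as a threshold check over metric samples. CloudWatch evaluates alarms over configurable periods; this toy version only mimics that idea, and the sample values are hypothetical:

```python
def check_alarm(metric_values, threshold=70.0, periods=2):
    """Toy alarm: fire when the metric breaches the threshold
    for `periods` consecutive datapoints (CloudWatch-style evaluation)."""
    consecutive = 0
    for value in metric_values:
        consecutive = consecutive + 1 if value > threshold else 0
        if consecutive >= periods:
            return "ALARM"
    return "OK"

cpu_samples = [55.0, 72.5, 80.1, 60.0]  # hypothetical CPU utilization (%)
print(check_alarm(cpu_samples))  # ALARM - two consecutive samples above 70%
```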

Amazon DynamoDB - NoSQL Database

What are NoSQL Databases?

  • These are non-relational or non-SQL databases.
  • They don’t use tabular form for storing data.
  • Schema free, easy replication, can manage huge data.

e.g. graph stores, key-value stores, etc…

Amazon DynamoDB

  • DynamoDB is a serverless NoSQL database service which is fully managed by AWS.
  • DynamoDB is extremely fast when it comes to query processing. It stores data in key value pairs. No schema is required.
  • DynamoDB delivers single-digit millisecond performance at any scale.

Exam tip: If you see the words "single-digit millisecond" in a question, then most probably the answer is DynamoDB.

DynamoDB also offers an in-memory cache for even faster service, called DynamoDB Accelerator (DAX).
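The schema-free key-value model can be sketched with a plain dict. The `put_item`/`get_item` helpers below are hypothetical stand-ins, not the real DynamoDB API:

```python
# Toy key-value "table" in the spirit of DynamoDB: items are looked up
# by a partition key, and items need not share the same attributes.
table = {}

def put_item(key, item):
    table[key] = item

def get_item(key):
    return table.get(key)

put_item("user#1", {"name": "Alice", "plan": "Business"})
put_item("user#2", {"name": "Bob"})  # different attributes: schema-free

print(get_item("user#1")["plan"])  # Business
```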


Amazon Aurora

A full post on Amazon Aurora (an Amazon RDS database engine) is coming soon…

AWS RDS – Relational Database Service

  • AWS RDS is a relational database service which is fully managed by AWS.
  • We can directly connect to and use the database.
  • Within a few clicks, we can set up automated updates, multi-AZ deployments, and back-ups.
  • The underlying OS & security are managed by AWS, but the data inside the database and its security are the customer’s responsibility.

AWS S3 Lifecycle

We can set up a rule or policy called an S3 Lifecycle, with which we can automate the transition of data from one storage class to another depending upon its age.

e.g. 

  • After 30 days of time, transfer to IA.
  • After 360 days of time, transfer to Glacier.

Using S3 lifecycle policies, we can automate transfers between storage classes and avoid unnecessary costs in AWS.
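The example rules above can be sketched as a small policy function. The class names follow S3's conventions, but the function itself is a toy, not the real lifecycle engine:

```python
def storage_class_for_age(age_days):
    """Toy lifecycle policy: choose a storage class by object age.
    The thresholds follow the example rules above."""
    if age_days >= 360:
        return "GLACIER"
    if age_days >= 30:
        return "STANDARD_IA"
    return "STANDARD"

for age in (5, 45, 400):
    print(age, "->", storage_class_for_age(age))
# 5 -> STANDARD, 45 -> STANDARD_IA, 400 -> GLACIER
```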

AWS Glacier & AWS Deep Archive Glacier

6) AWS Glacier

  • Used for long term back ups
  • May take several minutes to a couple of hours for an object to be restored (downloaded/retrieved)
  • 99.999999999% durability
  • Very cheap storage price when compared to S3 Standard & IA.

7) Glacier Deep Archive

  • Used for backing up data that will hardly ever be accessed
  • Retrieval takes around 12 hours.

Normal Glacier suits data that needs to be retrieved somewhat faster. If even very long retrieval times are not a problem for your data, then Glacier Deep Archive is best.

Glacier deep archive is the cheapest storage class of all.

  • 1 TB in Glacier = $14
  • 1 TB in Glacier Deep Archive = $10.99

Exact price values are not asked in the exam, but you should know which service is cheaper and which one is costlier.

(There are 5 other types of S3 storage classes, covered in the previous post.)


S3 Storage Classes

Various storage classes with different pricing options are available for storing data in AWS S3, depending upon durability and availability.

1) AWS Standard S3

  • 99.999999999% durability
  • 99.99% availability
  • Highest pricing

2) AWS Standard Infrequent Access – IA

  • Same durability as standard S3
  • 99.90% availability
  • Pricing lower than standard S3

3) Intelligent Tiering

  • Automatically changes the storage class so that cost is optimized.
  • Monitors data access and automatically moves un-accessed data to the infrequent tier.

If an object is not accessed for 30 days, then that object will be placed in Standard IA (infrequent access) tier. If any object from IA tier is accessed, then it will be moved back to standard S3 class.

Intelligent Tiering is used if data access patterns are unpredictable or undetermined.

  • 1 TB in Standard S3 = $22.88
  • 1 TB in Standard IA = $12.50

(No need to remember the exact price values for the exam, but you should know which class is costlier and which one is cheaper.)

4) One Zone IA

Storage classes like Standard S3 and Standard IA store data in a minimum of 3 availability zones, which increases the overall cost. One Zone-IA stores data in only one availability zone.

  • Cost is 20% less than Standard IA.
  • Good for storing secondary back ups or any other easily re-creatable data.
  • Data will be lost if AZ is destroyed.

5) Reduced Redundancy Storage (RRS)

  • Stores non-critical, reproducible data at lower levels of redundancy than S3 Standard.
  • 99.99% durability & 99.99% availability

There are 2 more important archiving storage classes discussed in the next post.

AWS S3 – Simple Storage Service

S3 is an object storage service in which customers can store data of virtually unlimited size (strictly speaking, unlimited storage is not possible, but AWS S3 offers far more than enough storage capacity for any type of requirement).

AWS guarantees 99.999999999% durability and 99.99% availability for S3.

S3 stores in terms of Buckets and Objects.

A bucket is like a folder; objects are like the files inside the buckets (folders).

While creating a bucket, the name of the bucket must be unique: the namespace is shared among all AWS users.

Inside a bucket, we can also create folders. Folder names need not be unique among all users.

We can configure the permissions (public or private) for any bucket or object.

If an object is made public, we can access it simply with a URL in a browser or via an API.

The maximum size allowed for a single object in S3 is 5 TB.

Resource level Tags

Add tags to instances so that we can keep track of financial usage per tag.

Tags are used for managerial purposes: we can know which team is using how many resources and how much billing is being generated.

This tag feature is really helpful for calculating the costs incurred by different teams (like the development team or testing team) and tracking which team is over-utilizing its budgeted amount.

AWS can automatically send billing reports, broken down by tag, into an S3 bucket.

Elastic Load Balancer - AWS ELB

A single point of failure should always be avoided, so we use multiple servers for the same application.

A load balancer will equally distribute the incoming traffic to multiple servers.

e.g. Consider 2 servers for our application deployed in AWS. ELB forwards the first incoming request to the first server, the next incoming request to the second server, the 3rd request to the first server again, and so on. If there are more servers, all incoming requests are distributed one by one to each of the different servers.
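This round-robin distribution can be sketched with `itertools.cycle`; the server names are hypothetical:

```python
from itertools import cycle

servers = ["server-1", "server-2"]  # hypothetical backend pool
round_robin = cycle(servers)        # endless 1, 2, 1, 2, ...

def route_request(request_id):
    target = next(round_robin)
    return f"request {request_id} -> {target}"

for i in range(1, 5):
    print(route_request(i))
# request 1 -> server-1
# request 2 -> server-2
# request 3 -> server-1
# request 4 -> server-2
```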

ELB is managed by AWS; the customer need not worry about the high availability of ELB.

Even if one server is down, the load balancer redirects the traffic to the second server.

As a best practice, it is advisable to use more than one elastic load balancer to maintain high availability: even if one of the ELBs goes down, the other can distribute all the incoming traffic, thereby avoiding a single point of failure completely.

AWS Elastic Block Store - EBS

EBS is persistent storage, unlike the instance store.

EBS volumes are network-attached storage, so they can be easily attached and detached. They are like plug-and-play drives that can be attached to or detached from different EC2 instances in the same AZ.

Even if we terminate an EC2 instance, the attached EBS volumes remain available, and all the data in them can be used by attaching them to other EC2 instances.

AWS offers to take snapshots of EBS volumes for back ups.

  • Data is automatically replicated within its Availability Zone.
  • Elastic in nature: size can be dynamically increased.

AWS Instance Store

  • Temporary storage for EC2 instances.
  • The instance store gets its storage from the physical disks of the underlying host.
  • Size of instance store varies upon instance type.

Data in the instance store is lost if the instance is stopped. So EC2 instances backed by instance-store storage can’t be stopped and started again; AWS only allows rebooting or terminating these instances.

  • Data will not be lost if we reboot.
  • If the instance store is in use, then back-ups are recommended.

Network Access Control List - NACL

Access control lists define the set of rules that govern access to our resources in a VPC (Virtual Private Cloud). We can block or allow external IPs from connecting to our resources, and allow or deny inbound or outbound traffic.

All subnets in a VPC must be associated with a NACL.

A network ACL operates at the subnet level: we can control inbound and outbound rules for subnets. Each rule in a NACL has a number associated with it, and lower numbers have higher priority. We can edit the rule numbers to set the priority accordingly.

AWS automatically creates a default NACL in which all inbound and outbound traffic is fully allowed.

We can create custom NACLs and associate them with subnets. After creating a custom NACL, all inbound and outbound traffic is fully denied by default; we need to configure the inbound/outbound rules as per our needs.
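The numbered-rule evaluation can be sketched as follows. This toy uses simple string-prefix matching instead of real CIDR matching, and the rule numbers and prefixes are hypothetical:

```python
def evaluate_nacl(rules, packet_ip):
    """Toy NACL evaluation: rules are checked in ascending rule-number
    order, and the first matching rule decides, like AWS network ACLs."""
    for number, cidr_prefix, action in sorted(rules):
        if packet_ip.startswith(cidr_prefix):
            return action
    return "DENY"  # the implicit catch-all when nothing matches

rules = [
    (200, "10.0.",   "DENY"),   # hypothetical rule numbers and prefixes
    (100, "10.0.1.", "ALLOW"),  # lower number wins for 10.0.1.x traffic
]

print(evaluate_nacl(rules, "10.0.1.25"))  # ALLOW (rule 100 matched first)
print(evaluate_nacl(rules, "10.0.2.8"))   # DENY  (rule 200 matched)
```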

Block Storage & Object Storage

Block Storage

  • Each block has fixed size
  • Data is stored in terms of blocks
  • Data is read or written one whole block at a time
  • No metadata is stored (except for block address)

Object Storage

  • No block system: any type of file is stored directly as an object along with its metadata
  • Can be called with API on a browser directly
  • Slower than block storage
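Whole-block access can be sketched with fixed-size reads at block-aligned offsets. The in-memory "device" below is a stand-in for a real block device:

```python
import io

BLOCK_SIZE = 4  # tiny block size for illustration (real disks use e.g. 4 KiB)

device = io.BytesIO(b"ABCDEFGHIJKL")  # pretend block device with 3 blocks

def read_block(dev, block_number):
    """Whole-block read: seek to the block's offset, read exactly one block."""
    dev.seek(block_number * BLOCK_SIZE)
    return dev.read(BLOCK_SIZE)

print(read_block(device, 0))  # b'ABCD'
print(read_block(device, 2))  # b'IJKL'
```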


AWS Regions & Availability Zones - AWS Global Infrastructure

AWS aims to provide highly available and reliable deployment solutions anywhere in the world for its customers. The AWS Global Infrastructure consists of AWS Regions and Availability Zones.

AWS Regions

  • An AWS Region is a distinct geographic location where the actual physical data centers exist.
  • Each Region contains two or more Availability Zones.
  • Regions are isolated from each other and are not linked with private links.

Availability Zones (AZs)

  • In every AWS Region, 2 or more Availability Zones in different locations (around 30 miles apart) are maintained to enable high availability.
  • AZs are physically separated and interconnected with high-speed private links.
  • A customer’s data is copied into more than one availability zone, so that even if one data center in any AZ fails, another can serve the customer.
  • Each AZ can have more than one data center, and the data centers are physically isolated from each other.

As of now (March 2021), there are a total of 25 AWS Regions with 80 Availability Zones across the globe.

Amazon EC2 Instance Types

The following types are classified based on basic specifications & configurations. Depending upon the requirement, we can choose the instance type that best suits our use-case.

1) General Purpose

An even distribution of computation power, memory, storage, network, etc...


2) Compute Optimized

High processing power with top-end CPUs.


3) Memory Optimized

For memory-intensive tasks like high-speed database query execution. These come with large amounts of RAM.


4) Accelerated Computing

Use-cases include graphics processing and data pattern matching. These offer powerful GPUs.


5) Storage Optimized

Suitable for data-heavy workloads that require a high number of input/output operations per second (IOPS), e.g. data warehousing.

Amazon Elastic Compute Cloud - AWS EC2

 According to AWS, 

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. Amazon EC2’s simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon’s proven computing environment.

An EC2 instance is nothing but a virtual machine we can create with our preferred choice of specifications in processor, storage, networking, operating system, and purchase model.

These can be used as servers for our business, for hosting/deploying our applications, or as workstations for employees in an organization. Instead of buying a desktop/laptop computer for each employee, we can simply create an EC2 instance in AWS and give the employee access.

We can easily resize, change the configuration, install application software, terminate the instance, and create a new instance whenever needed. We only pay for what we use; there is no need to buy all the computer hardware upfront.

There are 5 types of EC2 instances available as of now. Read about them in the next post.

Cloud Deployment Models

1) Private Cloud

Solely for a single organization


2) Community Cloud

Shared by several organizations contributing to a common goal.


3) Public Cloud

Open to everyone on the internet


4) Hybrid Cloud

Combination of more than one type of above deployment models.



Cloud Service Models

1) Software as a Service - SaaS

Software applications like Microsoft Word, Microsoft Excel, Online text editors, online image editors etc...

2) Infrastructure as a Service - IaaS

Memory, Storage, Computation Power, Networks and other fundamental computing resources are provided through cloud as a service.

3) Platform as a Service - PaaS

All the required dependencies and interdependent modules for any program are collectively offered as a service in the cloud.

4) XaaS - X as a Service

This is latest addition to the Cloud Computing Technology. A generic definition which means anything (related to Computing) can be offered as a service. In XaaS, 'X' can be anything and everything.

