Documentation Index
Introduction
Introduction to AWS EC2
Introduction to AWS S3
Introduction to AWS RDS
Introduction to AWS DynamoDB
Introduction to AWS CLI
Introduction to AWS Lambda
How EC2, S3, RDS, and the AWS CLI Work Together
Introduction to Amazon AWS
What Is AWS?
AWS (Amazon Web Services) is the world’s most widely used cloud computing platform.
It provides:
Computing power
Database hosting
Storage
Networking
AI/ML tools
DevOps infrastructure
Security & monitoring services
AWS follows a pay-as-you-go pricing model:
You pay only for the resources you actually use.
No upfront cost or contract required.
Companies use AWS to reduce hardware costs, deploy applications faster, and scale automatically.
Why Use Cloud Computing?
Traditional on-premise servers require:
buying hardware
managing physical machines
upfront large investment
Cloud providers like AWS handle the infrastructure so developers can:
deploy services instantly
scale up/down automatically
pay only for what they use
This lets teams focus on application logic instead of server maintenance.
AWS Global Infrastructure
AWS operates in a global network of:
Regions – geographic areas (e.g., Frankfurt, Tokyo, Virginia)
Availability Zones (AZ) – isolated data centers inside each region
Edge Locations – CloudFront CDN nodes
Regions let you choose where to deploy your applications.
AZs provide redundancy and high availability.
Key AWS Service Categories
AWS groups its services into several categories:
1. Compute — running applications
EC2: virtual machines
Lambda: serverless functions
ECS / EKS: container orchestration
2. Storage
S3: object storage
EBS: block storage for EC2
Glacier: long-term archival storage
3. Databases
RDS: managed SQL databases (MySQL, PostgreSQL, etc.)
DynamoDB: fully managed NoSQL
ElastiCache: Redis / Memcached
4. Networking
VPC: isolated virtual networks (Virtual Private Cloud)
Route 53: DNS
CloudFront: CDN for global distribution
5. Security & Identity
IAM: identity & permissions
KMS: encryption keys
WAF: web firewall
6. DevOps & Monitoring
CloudWatch: metrics & logs
CloudTrail: account audit logs
CodePipeline / CodeBuild: CI/CD
7. AI & Machine Learning
SageMaker: train & deploy ML models
Rekognition: image/video recognition
8. Serverless (Key Modern Trend)
Lambda: run functions without servers
API Gateway: HTTP APIs for Lambda
DynamoDB: serverless database
Understanding the Shared Responsibility Model
AWS and customers share security responsibilities.
AWS handles:
data center physical security
hardware maintenance
low-level infrastructure
You (the customer) handle:
account security (IAM users, MFA)
application logic
data encryption configuration
OS-level configuration for EC2
Billing and Pricing on AWS
AWS uses a flexible pricing model:
On-Demand — pay per hour/second
Reserved Instances — cheaper if you commit 1–3 years
Spot Instances — very cheap but can be interrupted
For new users, AWS provides the Free Tier:
EC2 t2.micro – 750 hours/month
S3 – 5GB free
Lambda – 1 million free requests
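For intuition, the three pricing models above can be compared side by side. The per-hour rates in this sketch are hypothetical placeholders, not current AWS prices:

```python
# Rough monthly cost for one small instance under each pricing model.
# The per-hour rates below are HYPOTHETICAL examples, not current AWS prices.
HOURS_PER_MONTH = 730

rates = {
    "on_demand": 0.0116,  # pay as you go, no commitment
    "reserved": 0.0070,   # 1- to 3-year commitment discount
    "spot": 0.0035,       # spare capacity, can be interrupted
}

for model, rate in rates.items():
    print(f"{model}: ${rate * HOURS_PER_MONTH:.2f}/month")
```

The shape of the trade-off is what matters: spot is cheapest but interruptible, reserved requires commitment, on-demand is flexible but most expensive.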
Common Beginner Architectures
Simple Web App
EC2 or Lambda backend
RDS or DynamoDB
S3 for static files
CloudFront for CDN
Serverless Microservice
API Gateway
Lambda
DynamoDB
CloudWatch logs
Containerized Application
ECS / EKS
Fargate (serverless containers)
Elastic Load Balancer (ELB)
Summary of AWS Concepts
Concept    | Description                 | Examples
Compute    | Run applications            | EC2, Lambda
Storage    | Store objects or block data | S3, EBS
Database   | Managed SQL/NoSQL           | RDS, DynamoDB
Networking | Virtual networks & routing  | VPC, CloudFront
Security   | User & access management    | IAM, KMS
DevOps     | Monitoring and CI/CD        | CloudWatch, CodeBuild
AWS is the largest and most mature cloud ecosystem, with services covering nearly every area of modern software development.
It is widely used for backend services, databases, hosting, DevOps automation, and large-scale distributed systems.
Introduction to AWS EC2
What Is Amazon EC2?
Amazon EC2 (Elastic Compute Cloud) is one of the core services of AWS.
It provides virtual servers in the cloud, called EC2 instances.
With EC2 you can:
launch Linux or Windows servers on demand,
choose instance size and hardware power,
pay only for the compute time you use,
scale up/down anytime,
host web servers, databases, apps, microservices, and more.
EC2 is the foundation of many AWS architectures.
Core Concepts
EC2 Instance: A virtual machine you run in the cloud.
AMI (Amazon Machine Image): A template for your instance (OS + software).
Instance Types: Hardware configurations (CPU, RAM, network).
t2.micro – small, general-purpose
m5.large – balanced compute/memory
c6g – compute-optimized
p3 – GPU-accelerated
Key Pair: SSH private key used to securely connect to your instance.
Security Group: A firewall controlling allowed inbound/outbound traffic.
EBS Volume (Elastic Block Store): Persistent disk attached to an instance.
Elastic IP: A static public IP you can attach to an instance.
EC2 Workflow: How It All Fits Together
A typical EC2 setup goes like this:
Choose AMI → Choose Instance Type → Configure Networking → Add Storage → Add Key Pair → Launch
After launching:
EC2 gives you a public IPv4 address (if enabled),
you SSH into Linux or RDP into Windows,
you install applications, web servers, databases, etc.
Key Concepts Explained in Detail
AMI (Amazon Machine Image)
An AMI contains:
Operating System (Ubuntu, Amazon Linux, Windows)
Optional pre-installed software
You can create your own AMI to clone servers.
Instance Types
Groups of instance types:
General Purpose: t2, t3, m5
Compute Optimized: c5, c6g
Memory Optimized: r5, x1e
GPU Instances: p3, g4dn
Every instance type varies in:
CPU (vCPUs)
RAM
Network bandwidth
Security Groups
Act like a firewall around your instance.
Example rules:
Allow SSH at port 22 (Linux)
Allow RDP at port 3389 (Windows)
Allow HTTP/HTTPS (ports 80/443) for web servers
Security groups are stateful:
If incoming traffic is allowed, outgoing response is automatically allowed.
EBS Volumes
Persistent block storage disks.
Your OS and data can live on one or multiple EBS volumes.
They can survive instance termination if you disable the volume's delete-on-termination setting.
How EC2 Pricing Works
EC2 uses pay-as-you-go pricing with multiple models:
On-Demand Instances: pay per second or hour, no commitment.
Reserved Instances: 1-year or 3-year commitment → big discounts.
Spot Instances: unused EC2 capacity at up to 90% discount (can be interrupted).
Savings Plans: flexible, commitment-based discounts.
Connecting to an EC2 Instance
Connect to a Linux EC2 instance via SSH:
ssh -i mykey.pem ec2-user@<public-ip>
Common usernames:
ec2-user → Amazon Linux
ubuntu → Ubuntu
centos → CentOS
admin → Debian
Windows EC2 instance uses RDP:
Use your RDP client → enter the public IP → log in with the decrypted Administrator password
Elastic IPs
An Elastic IP is a static public IPv4 address that:
you can move between instances,
persists even if instance is stopped/restarted,
helps keep stable DNS when server changes.
Historically, an Elastic IP was free while attached to a running instance and billed only when allocated but unused; since February 2024, AWS charges for all public IPv4 addresses, including in-use Elastic IPs.
EC2 Ideal Use Cases
Deploying web applications (Node.js, Django, Flask, Rails, etc.)
Running backend APIs
Hosting game servers
Batch processing
Machine learning training
GPU workloads
Hosting company internal apps
Creating development/testing environments
EC2 vs. Other Compute Services
EC2 is powerful, but AWS has alternative compute services:
Service   | Purpose
EC2       | Full control of a virtual machine; flexible but requires management
Lambda    | Serverless compute, run functions without managing servers
ECS / EKS | Container orchestration (Docker, Kubernetes)
Lightsail | Simplified VPS hosting, easier than EC2
Introduction to AWS S3 (Simple Storage Service)
What Is Amazon S3?
Amazon S3 (Simple Storage Service) is AWS’s globally scalable, durable, fully managed object storage service.
It allows you to store and retrieve any amount of data at any time, from anywhere on the Internet.
S3 is commonly used for:
Static website hosting
Backups and disaster recovery
Storing files, images, logs, documents, videos
Data lakes and big data analytics
CDN distribution with CloudFront
How S3 Stores Data: Buckets and Objects
Bucket:
A top-level container that holds your data.
Bucket names must be globally unique (shared across all AWS customers).
Each bucket lives in exactly one AWS region.
Object:
The file you store in S3.
Can be any type: image, PDF, video, log file, JSON, zipped backup, etc.
Each object has:
a key (the “full path” or file name)
data (up to 5 TB per object)
metadata
optional ACLs (access control lists)
Key (Object Key):
A unique identifier for an object inside a bucket.
Behaves like a file path: photos/2023/trip/image.png
S3 looks like a folder structure but internally is flat storage.
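The flat-namespace idea can be illustrated without AWS at all. In this sketch a plain dict stands in for a bucket, and "folders" are nothing more than shared key prefixes:

```python
# A bucket is a flat key -> object mapping; "folders" are just key prefixes.
bucket = {
    "photos/2023/trip/image.png": b"...",
    "photos/2023/trip/image2.png": b"...",
    "logs/app.log": b"...",
}

def list_keys(bucket, prefix=""):
    """Mimics listing objects under a prefix, like 'aws s3 ls s3://b/photos/'."""
    return sorted(k for k in bucket if k.startswith(prefix))

print(list_keys(bucket, "photos/2023/"))
# There is no real directory object: deleting the last "photos/..." key
# makes the apparent folder disappear as well.
```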
Accessing S3
You can access S3 using:
The AWS Web Console (GUI)
The AWS CLI
SDKs (Python, Java, Node.js, Go, Rust, C#)
The REST API
Basic AWS CLI examples:
# List buckets
aws s3 ls
# List objects inside a bucket
aws s3 ls s3://my-bucket/
# Upload a file
aws s3 cp localfile.txt s3://my-bucket/
# Download a file
aws s3 cp s3://my-bucket/file.txt localfile.txt
# Sync a whole folder
aws s3 sync ./localfolder s3://my-bucket/
Object Storage vs. Filesystem Storage
S3 is object storage, not a filesystem.
That means:
No “rename” (actually a copy + delete)
No traditional folders (just prefixes in object keys)
Objects are immutable (overwrites replace the whole object)
No partial file edits (you upload again)
This design makes S3 extremely scalable and durable.
S3 Storage Classes
Storage Class                      | Description                             | Use Case
S3 Standard                        | Highest availability and performance    | General-purpose, frequently accessed data
S3 Standard-IA (Infrequent Access) | Lower cost, slightly lower availability | Backup, disaster recovery, long-term assets
S3 One Zone-IA                     | Data stored in only one AZ              | Cost saving for easily reproducible data
S3 Glacier                         | Very low-cost archival storage          | Long-term backups, regulatory data
S3 Glacier Deep Archive            | Lowest cost storage class               | Cold archives; retrieval may take hours
S3 Bucket Policies & Access Control
Bucket Policies:
JSON-based IAM policy documents applied to a bucket.
Used to control read/write permissions.
Can allow public access or restrict by IP address or AWS account.
IAM Policies:
Attached to users, roles, or groups.
Control what actions the entity can perform on buckets/objects.
ACLs (Access Control Lists):
A legacy mechanism; AWS recommends avoiding ACLs unless you have a specific need.
Block Public Access:
A global "safety switch" preventing accidental public exposure.
Static Website Hosting with S3
You can host websites directly from S3 buckets.
Examples:
index document: index.html
error document: error.html
To enable website hosting:
Enable “Static Website Hosting” for the bucket
Upload index.html
Make files public via bucket policy
Optional: use CloudFront (CDN) for caching and HTTPS
S3 Versioning
Versioning lets you preserve multiple versions of the same object.
Once enabled:
Objects get unique version IDs
Deleting creates a delete marker (you can restore)
Accidental overwrites can be fixed
Useful for:
Backups
Protection against accidental deletion
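The semantics can be sketched with a toy in-memory model (the helper names here are illustrative, not S3 APIs): each key maps to a list of versions, and a delete appends a marker instead of destroying data.

```python
# Sketch of versioning semantics: every write appends a new version;
# a delete appends a "delete marker" rather than removing anything.
from itertools import count

_version_id = count(1)

def put(store, key, body):
    store.setdefault(key, []).append({"id": next(_version_id), "body": body})

def delete(store, key):
    # A delete marker is just a version with no body.
    store.setdefault(key, []).append({"id": next(_version_id), "body": None})

def get_latest(store, key):
    versions = store.get(key, [])
    return versions[-1]["body"] if versions else None

store = {}
put(store, "report.txt", "v1")
put(store, "report.txt", "v2")   # overwrite keeps the old version
delete(store, "report.txt")      # newest "version" is a delete marker
print(get_latest(store, "report.txt"))  # None: object appears deleted
print(store["report.txt"][1]["body"])   # "v2" is still recoverable
```

Restoring after an accidental delete amounts to removing the delete marker, which is exactly what the S3 console does.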
S3 Lifecycle Rules
Rules that automatically transition objects to cheaper storage classes or delete them.
Examples:
Move objects to Standard-IA after 30 days
Move to Glacier after 90 days
Delete after 1 year
Ideal for long-term storage cost optimization.
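The example rules above reduce to a small age-based decision. This is a sketch of the logic only, not an AWS API; the thresholds mirror the 30/90/365-day rules listed:

```python
from datetime import date, timedelta

def storage_class(created: date, today: date) -> str:
    """Apply the example lifecycle rules: IA at 30 days, Glacier at 90, delete at 365."""
    age = (today - created).days
    if age >= 365:
        return "DELETED"
    if age >= 90:
        return "GLACIER"
    if age >= 30:
        return "STANDARD_IA"
    return "STANDARD"

today = date(2024, 6, 1)
print(storage_class(today - timedelta(days=10), today))   # STANDARD
print(storage_class(today - timedelta(days=45), today))   # STANDARD_IA
print(storage_class(today - timedelta(days=120), today))  # GLACIER
```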
Event Notifications
S3 can trigger events when objects are created, deleted, or modified.
Events can notify:
Lambda functions
SQS queues
SNS topics
Common uses:
Image processing workflows
Document processing pipelines
Serverless audio/video processing
Use Cases
S3 is used by companies of all sizes for:
Cloud-native data storage
Application asset storage
Big data lake storage (with Athena, Glue, EMR)
Backup + DR
Image and video storage
Hosting static sites
Content distribution (via CloudFront)
Introduction to AWS RDS (Relational Database Service)
What Is Amazon RDS?
Amazon RDS (Relational Database Service) is AWS’s fully managed service for running relational databases in the cloud.
It allows you to run popular SQL databases without managing:
server setup,
backups,
patching,
OS updates,
scaling infrastructure.
Supported database engines:
Amazon Aurora (MySQL-compatible or PostgreSQL-compatible)
MySQL
PostgreSQL
MariaDB
Oracle
Microsoft SQL Server
RDS handles database administration so you can focus on your application.
RDS vs. Running Databases on EC2
You could install MySQL or PostgreSQL on an EC2 instance, but:
RDS                                      | Database on EC2
Fully managed backups, updates, failover | You maintain OS, database engine, patches
Automatic scaling                        | Manual hardware changes
High availability options built in       | You must configure replication/failover
Monitoring through CloudWatch            | You install/manage your own monitoring stack
No direct OS access (more secure)        | Full OS access (more flexibility)
Choose RDS when you want reliability and low operational burden.
Choose EC2-hosted DB only if you need advanced OS-level customization.
Key RDS Components
DB Instance
The actual running database server.
You choose instance size (CPU/RAM), database engine, and version.
DB Subnet Group
Defines which subnets in your VPC the database can run in.
Parameter Groups
Configuration settings for the DB engine (e.g., MySQL timeouts, PostgreSQL settings).
Security Groups
Firewall rules controlling who can access the database.
Only allow trusted EC2 instances or IPs.
Option Groups
Enable optional features (e.g., Oracle components, SQL Server extensions).
Storage (EBS-backed)
RDS stores database data on EBS volumes.
You choose:
General Purpose SSD
Provisioned IOPS SSD
Magnetic (legacy)
High Availability with Multi-AZ
Multi-AZ deploys two database instances in different Availability Zones.
Benefits:
Automatic failover if primary instance fails
No manual intervention required
Highly durable and stable setup
Primary instance = writes + reads
Standby instance = hot standby for failover
Not designed for read scaling — for that, use Read Replicas.
Read Replicas (Scaling Read Traffic)
Read replicas are separate DB instances used for scaling read operations.
They support:
MySQL
PostgreSQL
MariaDB
Aurora
Use cases:
Analytics queries
Reporting
Low-latency read distribution
Read Replicas can also be promoted into standalone instances.
Backups and Snapshots
RDS provides:
Automated Backups: daily snapshots + transaction logs
Manual Snapshots: user-triggered backups
Point-in-time recovery:
You can restore the database to any second within the retention period.
Extremely useful for recovering from accidental deletions or corruptions.
RDS Monitoring & Metrics
RDS integrates with Amazon CloudWatch.
Monitored metrics include:
CPU usage
Memory
Storage
DB connections
I/O throughput
Replication lag
You can enable Enhanced Monitoring for OS-level metrics.
Connecting to an RDS Database
When you create an RDS instance, AWS gives you an endpoint:
mydb.abcd1234xyz.eu-central-1.rds.amazonaws.com
Usage example (PostgreSQL):
psql -h mydb.abcd1234xyz.eu-central-1.rds.amazonaws.com \
-U myuser -d mydatabase
You must ensure:
VPC security group allows your IP or EC2 instance
You are connecting to the correct port:
MySQL: 3306
PostgreSQL: 5432
MariaDB: 3306
Oracle: 1521
SQL Server: 1433
Aurora: The Special RDS Engine
Aurora is a high-performance, cloud-native database compatible with MySQL and PostgreSQL.
Benefits:
Up to 5x performance of MySQL
Up to 3x performance of PostgreSQL
Low replica lag (typically well under 100 ms)
Distributed, fault-tolerant storage
Serverless mode available
Aurora is ideal for large-scale, high-performance systems.
Pricing Summary
RDS pricing includes:
Instance hours (CPU/RAM)
Storage (GB-month)
Backups (beyond free tier)
I/O operations (for some storage types)
Data transfer
Multi-AZ costs (secondary instance)
Aurora pricing includes:
Instance hours
Storage & I/O per request
Introduction to AWS DynamoDB
What Is Amazon DynamoDB?
Amazon DynamoDB is a fully managed, serverless, NoSQL key–value and document database offered by AWS.
It provides:
Single-digit millisecond performance
Automatic scaling for workloads of any size
Integrated high availability across multiple AZs
Zero administration: no servers, no patching, no provisioning
DynamoDB is massively scalable and commonly used in:
High-traffic web apps
Gaming backends
IoT systems
Real-time bidding platforms
Shopping carts and user sessions
It is one of AWS’s most important NoSQL database services.
DynamoDB vs. Relational Databases
DynamoDB is NoSQL, meaning:
It does not use tables with fixed columns.
Rows (items) may have different attributes.
No joins, no complex SQL.
The schema is flexible.
Compared to RDS:
DynamoDB                          | RDS (SQL Databases)
Schema-less (flexible attributes) | Fixed schemas (columns, constraints)
Horizontal scaling built-in       | Vertical scaling mostly (larger DB instance)
Serverless                        | You manage DB instances
No joins                          | Joins and complex SQL supported
Key-value + document model        | Relational model
Core DynamoDB Concepts
Table: The top-level container of items.
Item:
A single record in the table.
Equivalent to a “row” in SQL but flexible.
Attributes:
Key–value pairs inside an item.
You can store strings, numbers, lists, maps, booleans, null, etc.
Primary Key: Determines how data is accessed.
Partition Key only (simple key)
Partition Key + Sort Key (composite key)
Partition Key:
Determines physical storage location.
Items with the same partition key go into the same partition.
Sort Key:
Allows ordering and range queries within a partition.
Useful for time-series data, logs, user activity.
Table: Users
Partition Key: user_id
Table: Orders
Partition Key: customer_id
Sort Key: order_timestamp
Reading and Writing Data
The most common operations:
PutItem — create/replace an item
GetItem — fetch by primary key
UpdateItem — update specific attributes
DeleteItem — delete an item
Query — get items by partition key, optional sort key conditions
Scan — read entire table (expensive)
Example using AWS CLI:
# Put an item
aws dynamodb put-item \
--table-name Users \
--item '{"user_id": {"S": "123"}, "name": {"S": "Alice"}}'
# Get an item
aws dynamodb get-item \
--table-name Users \
--key '{"user_id": {"S": "123"}}'
Query vs. Scan
Query:
Fast, efficient
Requires partition key
Can use sort key filters
Scan:
Reads entire table
Slow for large datasets
Expensive (consumes capacity)
Best practice: avoid Scan whenever possible.
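The difference is easy to see with a toy in-memory model of a composite-key table (plain Python standing in for DynamoDB, not the real API):

```python
# Items keyed by (partition_key, sort_key), like the Orders table above.
orders = {
    ("cust-1", "2024-01-05"): {"total": 10},
    ("cust-1", "2024-02-11"): {"total": 25},
    ("cust-2", "2024-01-20"): {"total": 7},
}

def query(table, partition_key, sort_prefix=""):
    """Query: jumps straight to one partition, optionally narrowing by sort key."""
    return {k: v for k, v in table.items()
            if k[0] == partition_key and k[1].startswith(sort_prefix)}

def scan(table, predicate):
    """Scan: has no choice but to examine every item in the table."""
    return {k: v for k, v in table.items() if predicate(v)}

print(len(query(orders, "cust-1")))                 # 2 items, one partition read
print(len(scan(orders, lambda v: v["total"] > 5)))  # 3 items, whole table read
```

In real DynamoDB the cost gap is dramatic: Query touches one partition, while Scan consumes read capacity proportional to the entire table.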
DynamoDB Scaling Model
DynamoDB scales automatically across multiple servers.
Two capacity modes:
Provisioned Capacity:
You specify read/write capacity units (RCU/WCU).
Auto Scaling can adjust capacity based on traffic.
On-Demand Capacity:
Pay per request.
No planning needed.
Great for unpredictable workloads.
Partitions scale automatically:
More traffic → more partitions
Even distribution of partition keys is important
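Routing works by hashing the partition key. DynamoDB's internal hash function is not public; md5 below is purely illustrative of why a well-spread key space distributes load evenly:

```python
# Items are routed to partitions by hashing the partition key; a skewed
# key distribution would concentrate traffic on one "hot" partition.
import hashlib

def partition_for(key: str, num_partitions: int = 4) -> int:
    # Illustrative only: DynamoDB's real hash function is internal.
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_partitions

counts = [0, 0, 0, 0]
for i in range(1000):
    counts[partition_for(f"user-{i}")] += 1
print(counts)  # roughly even spread across the 4 partitions
```

A key like `user_id` spreads well; a key like `date` would send all of today's traffic to a single partition.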
Indexes: GSI and LSI
DynamoDB provides two types of secondary indexes:
LSI (Local Secondary Index):
Shares the same partition key
Different sort key
Created when the table is created (cannot be added later)
GSI (Global Secondary Index):
Completely separate partition and sort keys
Can be added at any time
Useful for alternate access patterns
Table: Orders
Partition Key: customer_id
Sort Key: order_timestamp
GSI: lookup by product_id
LSI: lookup by status within same customer_id partition
DynamoDB Streams
DynamoDB can stream real-time changes to other AWS services.
DynamoDB Streams captures:
new items
updates
deletes
Common use cases:
Trigger AWS Lambda whenever an item changes
Replication across regions
Event-driven architectures
Streams are strongly integrated with Lambda.
DynamoDB Security
Security is managed through:
AWS IAM — who can read/write tables
Encryption at rest — KMS-managed
Encryption in transit — HTTPS
VPC Endpoints — private network access
DynamoDB is secure by default and designed for multi-tenant AWS architecture.
Typical Use Cases
DynamoDB is widely used in:
Shopping cart systems
User profiles and sessions
Real-time bidding/ad tech
Leaderboards and gaming
IoT telemetry
Chat/messaging apps
Serverless backends (Lambda + API Gateway)
Its key–value and high-performance nature makes it ideal for high-scale workloads.
Introduction to the AWS CLI (Command Line Interface)
What Is the AWS CLI?
The AWS CLI (Command Line Interface) is a unified tool that lets you manage AWS services directly from your terminal.
It allows you to:
automate cloud workflows,
control AWS services via scripts,
query resources,
deploy applications,
manage infrastructure.
Installing the AWS CLI
The recommended version is AWS CLI v2.
Basic installation examples:
# Linux
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
# macOS (pkg installer)
curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
sudo installer -pkg AWSCLIV2.pkg -target /
# Windows
# Download and run the MSI installer from the AWS website.
# Verify the installation
aws --version
Configuring the AWS CLI
Before using the AWS CLI, configure your credentials:
aws configure
You will be asked for:
AWS Access Key ID
AWS Secret Access Key
Default region (e.g., eu-central-1)
Output format (json, text, yaml)
Configuration files are stored in ~/.aws/:
~/.aws/
credentials # stores keys
config # stores region/output profile settings
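Both files use the INI format, so any INI parser can read them. A sketch with Python's configparser and dummy keys written to a temporary file:

```python
# The credentials file is plain INI: one section per profile.
import configparser
import os
import tempfile

content = """[default]
aws_access_key_id = AKIAEXAMPLE
aws_secret_access_key = secret

[dev]
aws_access_key_id = AKIADEV
aws_secret_access_key = devsecret
"""

path = os.path.join(tempfile.mkdtemp(), "credentials")
with open(path, "w") as f:
    f.write(content)

cfg = configparser.ConfigParser()
cfg.read(path)
print(cfg["dev"]["aws_access_key_id"])  # AKIADEV
```

This is also why switching profiles with `--profile dev` is cheap: the CLI just reads a different section of the same file.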
AWS CLI Structure
The AWS CLI uses a consistent structure:
aws <service> <command> [parameters]
aws s3 ls
aws ec2 describe-instances
aws dynamodb put-item
aws iam list-users
Using AWS CLI Profiles
You can create multiple profiles (e.g., dev, staging, production):
aws configure --profile dev
aws s3 ls --profile dev
Common AWS CLI Examples
aws s3 ls
aws s3 cp file.txt s3://my-bucket/
aws s3 sync ./site s3://my-bucket/site/
aws ec2 describe-instances
aws ec2 start-instances --instance-ids i-1234567890
aws ec2 stop-instances --instance-ids i-1234567890
aws dynamodb list-tables
aws dynamodb get-item --table-name Users --key '{"id": {"S": "1"}}'
aws iam list-users
aws iam create-user --user-name alice
Using the AWS CLI with JSON
Most AWS CLI commands return JSON by default.
You can format or extract values using --query (JMESPath syntax):
aws ec2 describe-instances --query "Reservations[*].Instances[*].InstanceId"
aws ec2 describe-instances | jq
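For intuition, here is roughly what that JMESPath expression computes, written as plain Python over a sample response shape; note that the `[*].[*]` projection keeps one inner list per reservation:

```python
# Sample shape of a describe-instances response (trimmed to the relevant keys).
response = {
    "Reservations": [
        {"Instances": [{"InstanceId": "i-111"}, {"InstanceId": "i-222"}]},
        {"Instances": [{"InstanceId": "i-333"}]},
    ]
}

# Equivalent of "Reservations[*].Instances[*].InstanceId":
ids = [[inst["InstanceId"] for inst in res["Instances"]]
       for res in response["Reservations"]]
print(ids)  # [['i-111', 'i-222'], ['i-333']]
```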
AWS CLI Pagination
Some commands return multiple pages of results.
By default, the AWS CLI paginates automatically and returns the full result set. Use --no-paginate to make only a single API call (returning just the first page), or --max-items / --page-size to control paging:
aws ec2 describe-instances --no-paginate
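Under the hood, paginated list APIs return one page of results plus a continuation token, and auto-pagination is just a loop until the token runs out. A simplified simulation with a fake API:

```python
# Fake paginated API: returns a page of results plus a NextToken
# until the data is exhausted, like many AWS list/describe calls.
def fake_describe(token=None, page_size=2):
    data = ["i-1", "i-2", "i-3", "i-4", "i-5"]
    start = int(token or 0)
    page = data[start:start + page_size]
    next_token = str(start + page_size) if start + page_size < len(data) else None
    return {"Instances": page, "NextToken": next_token}

instances, token = [], None
while True:
    resp = fake_describe(token)
    instances.extend(resp["Instances"])
    token = resp["NextToken"]
    if token is None:  # no token means this was the last page
        break
print(instances)  # all five results, collected across three pages
```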
AWS CLI with Environment Variables
You can also set credentials using environment variables:
export AWS_ACCESS_KEY_ID="ABC..."
export AWS_SECRET_ACCESS_KEY="XYZ..."
export AWS_DEFAULT_REGION="eu-central-1"
This is common in CI/CD pipelines.
AWS CLI Automation & Scripting
The CLI is perfect for automation:
Bash scripts
CI/CD pipelines (GitHub Actions, GitLab CI, Jenkins)
Infrastructure automation
Deployment scripts
Example script launching an EC2 instance:
#!/bin/bash
aws ec2 run-instances \
--image-id ami-0abcd1234 \
--instance-type t2.micro \
--count 1
Command Completion
The AWS CLI supports command auto-completion:
complete -C '/usr/local/bin/aws_completer' aws
Then you can press TAB to auto-suggest services and parameters.
Introduction to AWS Lambda
What Is AWS Lambda?
AWS Lambda is a fully managed, serverless computing service that lets you run code without provisioning or managing servers.
With Lambda:
You upload your function code.
AWS runs it only when needed.
You pay only for the compute time your code uses (per millisecond).
Lambda is a core part of AWS’s serverless architecture and is heavily used with:
API Gateway
DynamoDB
S3 event triggers
CloudWatch events
Step Functions
Common use cases include microservices, background tasks, ETL jobs, scheduled jobs, and event-driven applications.
How Lambda Works
Lambda follows an event-driven execution model:
Event → Lambda Function → Response / Side Effect
Examples of events:
S3 file upload
HTTP request via API Gateway
DynamoDB table update
Cron schedule via CloudWatch
IoT sensor message
AWS automatically handles:
Server creation
Scaling
Load balancing
Monitoring
Logging
Supported Languages
Lambda supports many runtimes:
Python
Node.js
Java
Go
.NET (C#)
Ruby
Custom Runtime (provided.al2)
Custom runtime lets you run Rust, PHP, C++, Zig, Haskell, etc. with a bootstrap script.
Basic Lambda Function Structure
A Lambda function is simply a handler that receives:
an event (input)
a context object (runtime metadata)
Python:
def handler(event, context):
    name = event.get("name", "World")
    return {"message": f"Hello, {name}!"}
Node.js:
exports.handler = async (event) => {
    return { message: "Hello from Lambda!" };
};
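Because a handler is an ordinary function, you can exercise it locally with a plain dict event before deploying. Here the Python handler from above is invoked directly (the context argument is unused, so None suffices):

```python
# The same Python handler as above, called locally with dict events.
def handler(event, context):
    name = event.get("name", "World")
    return {"message": f"Hello, {name}!"}

print(handler({"name": "Alice"}, None))  # {'message': 'Hello, Alice!'}
print(handler({}, None))                 # {'message': 'Hello, World!'}
```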
Triggers: What Can Invoke a Lambda?
Lambda can be triggered by over 200 AWS services. Most common ones:
Service           | Trigger Example
S3                | Run code when a file is uploaded
API Gateway       | Handle HTTP requests (serverless API)
DynamoDB Streams  | Process table insert/update/delete events
CloudWatch Events | Scheduled cron jobs
SQS               | Process messages from a queue
SNS               | Handle push notifications/topics
EventBridge       | React to system-wide events
Kinesis           | Stream processing
Lambda Pricing (Pay Only for Usage)
Lambda pricing is based on:
Number of invocations
Execution time (measured per millisecond)
Allocated memory
There is no charge when your function is idle.
Free tier:
1 million requests per month
400,000 GB-seconds compute
Lambda is extremely cost-efficient for event-driven workloads.
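The cost model is simple arithmetic on requests and GB-seconds. The per-unit rates below reflect commonly published Lambda prices but should be treated as placeholders; check current AWS pricing:

```python
# Sketch of the Lambda cost model. Rates are placeholders; free tier
# per the text above: 1M requests and 400,000 GB-seconds per month.
def monthly_cost(invocations, ms_per_run, memory_gb,
                 price_per_gb_s=0.0000166667, price_per_req=0.0000002):
    gb_seconds = invocations * (ms_per_run / 1000) * memory_gb
    billable_gb_s = max(0, gb_seconds - 400_000)     # free-tier compute
    billable_reqs = max(0, invocations - 1_000_000)  # free-tier requests
    return billable_gb_s * price_per_gb_s + billable_reqs * price_per_req

# 2M invocations/month, 100 ms each, 512 MB memory:
print(round(monthly_cost(2_000_000, 100, 0.5), 2))
```

Note how a moderate event-driven workload stays almost entirely inside the free tier, which is why Lambda is so cheap for spiky traffic.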
Execution Environment
Each invocation runs inside an isolated execution environment.
Cold starts occur when AWS needs to create a new environment.
Warm starts reuse an existing environment → much faster execution.
Key characteristics:
Linux-based environment
/tmp directory with 512MB storage by default (configurable up to 10GB)
Execution timeout: max 15 minutes
Permissions & IAM Roles
Each Lambda function uses an execution role (IAM role).
This determines what AWS resources the function can access.
Examples:
Read from S3 → needs s3:GetObject
Write to DynamoDB → needs dynamodb:PutItem
Push logs → needs logs:CreateLogStream + logs:PutLogEvents
Security best practice:
Follow least privilege.
Give only the permissions the function needs.
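A least-privilege execution policy might look like the sketch below. The bucket name is a placeholder, and the policy is built as a Python dict purely for illustration; in practice you would attach the equivalent JSON to the function's execution role:

```python
# Minimal least-privilege policy: read one bucket, write logs, nothing else.
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow",
         "Action": ["s3:GetObject"],
         "Resource": "arn:aws:s3:::my-bucket/*"},   # placeholder bucket
        {"Effect": "Allow",
         "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
         "Resource": "*"},
    ],
}
print(json.dumps(policy, indent=2))
```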
Deploying Lambda Functions
You can deploy Lambda using:
AWS Console
AWS CLI
ZIP Upload
Container Images (up to 10GB)
Infrastructure-as-Code:
CloudFormation
Serverless Framework
Terraform
AWS SAM (Serverless Application Model)
CDK (Cloud Development Kit)
CLI example:
aws lambda update-function-code \
--function-name myFunction \
--zip-file fileb://function.zip
Logging & Monitoring
Lambda integrates with:
CloudWatch Logs (stdout/stderr logs)
CloudWatch Metrics (invocations, errors, duration)
X-Ray for distributed tracing
You can view logs via:
aws logs tail /aws/lambda/myFunction --follow
How EC2, S3, RDS, and AWS CLI Work Together to Host an Application
Overview: Hosting an Application on AWS
AWS provides many services that can collaborate to host and operate a complete application stack.
The most common trio for a web application is:
Amazon EC2 → compute (runs your backend/server)
Amazon S3 → static storage (files, images, static websites)
Amazon RDS → managed database (MySQL, PostgreSQL, MariaDB, etc.)
These three services, combined with the AWS CLI , form a complete deployment and management toolkit.
Role of Each AWS Service in an Application
Amazon EC2 – Runs your application code, such as:
Node.js backend
Python Flask/Django
Java Spring Boot
Go or Rust API
React/Vue SSR servers
Amazon S3 – Stores:
images
videos
user uploads
static frontend (React / Vue SPA)
backups
Amazon RDS – Provides a fully managed relational database:
PostgreSQL
MySQL
MariaDB
Oracle / SQL Server (enterprise)
AWS CLI – Automates everything from your terminal:
uploading website files
starting EC2 instances
managing RDS snapshots
deployments and maintenance
Typical Full-Stack Architecture
A very common AWS architecture for hosting an app looks like this:
┌─────────────────────┐
│ S3 Bucket │ ← stores static files, frontend, uploads
└─────────┬───────────┘
│
▼
CloudFront CDN (optional)
│
▼
┌────────────────────────────────┐
│ EC2 Server │ ← backend/API server
│ (Node.js, Python, Java, ...) │
└─────────────────┬──────────────┘
│
▼
RDS Managed Database
(PostgreSQL, MySQL, ...)
Each component serves a different concern:
S3: static assets + frontend
EC2: app logic and API
RDS: persistent relational data
Step-by-Step Workflow: Hosting an App Using EC2, S3, and RDS
This is a typical procedure you follow when hosting an application on AWS.
Create and Configure an RDS Database
Select DB engine (e.g., PostgreSQL).
Choose instance type (t3.micro, t3.small, etc.).
Set master username/password.
Create a security group that allows EC2 to access RDS.
Get your database endpoint, e.g.:
mydb.abcd1234xyz.eu-central-1.rds.amazonaws.com
Upload Your Website or Assets to S3
aws s3 mb s3://my-frontend-site
aws s3 sync ./dist s3://my-frontend-site
Enable static hosting if it’s a frontend app:
aws s3 website s3://my-frontend-site --index-document index.html
Launch an EC2 Instance
Choose an AMI (Amazon Linux, Ubuntu, etc.).
Choose instance type (t2.micro free-tier eligible).
Attach a security group that allows:
HTTP (port 80)
HTTPS (port 443)
SSH (port 22)
SSH into your EC2 instance:
ssh -i mykey.pem ec2-user@ec2-1-2-3-4.compute.amazonaws.com
Deploy Application Code to EC2
sudo yum install git -y
git clone https://github.com/my/app.git
cd app
npm install
npm start
Configure environment variables on EC2 so your backend can use RDS:
export DB_HOST="mydb.abcd1234.rds.amazonaws.com"
export DB_USER="admin"
export DB_PASS="mypassword"
Integrate EC2 with S3
If your app needs to upload or read files from S3, give EC2 an IAM role with S3 permissions.
Your application may use AWS SDK:
const AWS = require("aws-sdk");  // AWS SDK for JavaScript v2
const s3 = new AWS.S3();
await s3.upload({ Bucket: "my-frontend-site", Key: "upload.png", Body: file }).promise();
Use AWS CLI for Automation and Deployment
aws ec2 describe-instances
aws rds describe-db-instances
aws s3 sync ./public s3://my-frontend-site
Common automated tasks:
Backup database
Roll out new S3 versions
Check EC2 server health
Restart backend services
You can also script deployments:
#!/bin/bash
aws s3 sync ./dist s3://my-frontend-site
ssh ec2-user@server "cd app && git pull && sudo systemctl restart app"
How EC2, S3, and RDS Collaborate Inside the App
They form a standard three-layer architecture:
Browser
▲
│ GET static HTML/CSS/JS
│
▼
S3 Static Website Hosting
▲
│ AJAX / REST API calls
▼
EC2 Backend Server
▲
│ SQL queries
▼
RDS Database
Where each AWS service fits:
S3: Hosts the frontend + assets
EC2: Processes business logic and API requests
RDS: Stores persistent relational data
AWS CLI automates the entire lifecycle:
Upload new S3 static build
Restart EC2 backend safely
Create RDS snapshots before deployment
Scale infrastructure
Security and IAM Considerations
To let EC2 talk to S3 and RDS securely, you typically configure:
IAM Role for EC2:
Allows S3 read/write
Allows reading secrets or using Parameter Store
Security Groups:
EC2 → RDS: only allow port 5432/3306
S3: IAM-based, not security-group-based
HTTPS everywhere using:
ACM certificates
Load Balancer (ALB)
Never store passwords in your EC2 instance directly; use:
AWS Systems Manager Parameter Store
AWS Secrets Manager