Top 50+ AWS Services Explained in 10 Minutes
Based on Fireship's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing.
AWS service sprawl becomes manageable when grouped into compute, storage, databases, analytics, ML, security, and provisioning.
Briefing
AWS has grown from a handful of core services into a sprawling catalog—so many that overlapping capabilities can feel like shopping the same products in different aisles. The practical takeaway is that most AWS offerings cluster into a few job families: compute, storage, databases, analytics, machine learning, security, and deployment/provisioning. Once those categories click, the “200+ services” start to look less like confusion and more like a menu of interchangeable building blocks.
On the compute side, Elastic Compute Cloud (EC2) remains the baseline: rent virtual machines by choosing an operating system and resource size, then run web servers or application backends. As traffic grows, Elastic Load Balancing spreads requests across instances, while CloudWatch collects logs and metrics that can feed Auto Scaling policies to create capacity automatically. For teams that want less infrastructure work, Elastic Beanstalk adds an abstraction layer—deploy a template and application code and let autoscaling run underneath. When even that is too much, AWS Lambda shifts to serverless: upload code, define the triggering event, and pay only for requests and execution time.
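To make the serverless end of that spectrum concrete, here is a minimal sketch of a Python Lambda handler responding to a simple event; the event shape and response format are illustrative assumptions, not taken from the video.

```python
import json

# Minimal AWS Lambda handler: Lambda invokes this function once per
# triggering event and bills per request and per unit of execution time.
def lambda_handler(event, context):
    # 'event' carries the trigger payload (assumed here to include a "name"
    # field); 'context' exposes runtime metadata such as remaining time.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```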
Containerization adds another path. Elastic Container Registry stores Docker images; Elastic Container Service (ECS) orchestrates containers; and Elastic Kubernetes Service (EKS) runs Kubernetes when more control is needed. Fargate makes containers behave more like serverless workloads by removing the need to manage EC2 instances. App Runner (introduced in 2021) further simplifies deployment by letting developers point to a container image while AWS handles orchestration and scaling.
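As a rough illustration of the Fargate model, the boto3 sketch below launches one task from an already-registered ECS task definition without touching any EC2 instances; the cluster, task definition, subnet, and security group values are placeholders.

```python
import boto3

# Sketch: run a pre-registered task definition on Fargate through ECS.
ecs = boto3.client("ecs", region_name="us-east-1")

response = ecs.run_task(
    cluster="demo-cluster",            # hypothetical cluster name
    launchType="FARGATE",              # no EC2 capacity to manage
    taskDefinition="demo-task:1",      # task def references an image in ECR
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["tasks"][0]["taskArn"])
```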
Storage and data management follow a similar “pick the right latency and structure” logic. Simple Storage Service (S3) is general-purpose object storage for files like images and videos. Glacier is cheaper archival storage with higher retrieval latency. Elastic Block Store (EBS) targets high-throughput, data-intensive workloads but requires more configuration. Elastic File System (EFS) offers fully managed file storage at a higher cost. For structured data, the database aisle spans NoSQL, relational, graph, caching, time series, and ledger-style immutability: DynamoDB for scalable document-style access; DocumentDB as a MongoDB API-compatible alternative; RDS for managed SQL with backups and patching; and Aurora for MySQL/PostgreSQL compatibility with performance and cost advantages, including a serverless option. Neptune supports graph workloads, ElastiCache provides Redis-compatible in-memory caching for low-latency reads, and Timestream handles time-series queries and analytics. Quantum Ledger Database (QLDB) supports cryptographically signed, immutable transaction histories.
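A small sketch of the object-storage-versus-key-value distinction, assuming a hypothetical S3 bucket named demo-bucket and a DynamoDB table named users keyed on user_id:

```python
import boto3

# S3: store and retrieve whole objects (files) by bucket and key.
s3 = boto3.client("s3")
s3.upload_file("cat.jpg", "demo-bucket", "images/cat.jpg")  # placeholder bucket/key

# DynamoDB: write and read individual items by partition key.
dynamodb = boto3.resource("dynamodb")
users = dynamodb.Table("users")                             # placeholder table
users.put_item(Item={"user_id": "42", "name": "Ada", "plan": "pro"})
item = users.get_item(Key={"user_id": "42"})["Item"]
print(item["name"])
```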
Analytics and machine learning then build on where data lives and how it moves. Redshift acts as a data warehouse for structured analytics across enterprise sources. Lake Formation helps create data lakes for unstructured data. Kinesis captures real-time streams, while Apache Kafka (with Amazon MSK as the managed service) supports streaming pipelines. Glue provides serverless ETL to extract, transform, and load data from sources like Aurora, Redshift, and S3. For ML, SageMaker supports model training and deployment using TensorFlow or PyTorch, with managed Jupyter notebooks and GPU-backed training. For common use cases, managed APIs like Rekognition (image recognition and classification) and Lex (conversational bots) reduce the need to build models from scratch.
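For the streaming piece, a minimal boto3 sketch of writing one event to a Kinesis data stream; the stream name and event fields are hypothetical, and downstream consumers (a Lambda trigger, an analytics job) would read records off the stream.

```python
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Push a single real-time event onto the stream.
event = {"user_id": "42", "action": "page_view", "path": "/pricing"}
kinesis.put_record(
    StreamName="clickstream",            # hypothetical stream name
    Data=json.dumps(event).encode(),     # payload as bytes
    PartitionKey=event["user_id"],       # determines which shard receives the record
)
```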
Security and operations round out the stack. Identity and Access Management (IAM) defines access rules, Cognito handles user authentication and sessions, and SNS/SES send notifications and emails. CloudFormation provisions infrastructure from templates, and AWS Amplify provides front-end SDKs for connecting apps to AWS backends. Finally, cost control is treated as a core requirement: AWS Cost Explorer and AWS Budgets help prevent runaway spend, especially when scaling infrastructure and data processing.
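As a sketch of the notification side, the example below publishes to an SNS topic and sends an email through SES; the topic ARN and addresses are placeholders, and SES only sends from verified identities.

```python
import boto3

# SNS: publish a message to a topic; subscribers (email, SMS, Lambda) receive it.
sns = boto3.client("sns")
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:alerts",  # hypothetical topic
    Subject="Deployment finished",
    Message="Version 1.4.2 is live.",
)

# SES: send a plain-text email directly to recipients.
ses = boto3.client("ses")
ses.send_email(
    Source="ops@example.com",                               # placeholder, must be verified
    Destination={"ToAddresses": ["team@example.com"]},
    Message={
        "Subject": {"Data": "Deployment finished"},
        "Body": {"Text": {"Data": "Version 1.4.2 is live."}},
    },
)
```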
In short, AWS’s complexity becomes manageable when each service is mapped to a specific development job: run compute, store data, query it, analyze it, secure it, deploy it, and keep costs predictable.
Cornell Notes
AWS’s large service catalog becomes navigable when grouped by core development jobs: compute, storage, databases, analytics, machine learning, security, and provisioning. EC2 plus load balancing, CloudWatch, and Auto Scaling cover traditional scaling; Elastic Beanstalk simplifies deployment; Lambda shifts to serverless pay-per-use. Containers add portability and control via ECR, ECS, EKS, Fargate, and App Runner. Data choices hinge on access patterns: S3/Glacier for objects, EBS/EFS for block/file needs, and databases ranging from DynamoDB and RDS/Aurora to Neptune, ElastiCache, Timestream, and QLDB. Analytics and ML build on those foundations using Redshift, Lake Formation, Kinesis/Kafka, Glue, and SageMaker, with managed APIs like Rekognition and Lex for common tasks.
How do EC2, load balancing, CloudWatch, and Auto Scaling work together when an application grows?
What’s the practical difference between Elastic Beanstalk and Lambda?
When should teams use containers and which AWS services map to the container lifecycle?
How do AWS storage services map to latency and workload type?
What database options cover different data models and performance needs?
Which AWS services support analytics and ML pipelines from streaming data to model deployment?
Review Questions
- Which combination of AWS services would you use to distribute traffic across multiple instances and automatically add capacity as demand rises?
- How do S3 and Glacier differ in access pattern and cost, and where would EBS or EFS fit instead?
- If an application is already containerized, what AWS services could deploy it with different levels of orchestration control (ECS/EKS/Fargate/App Runner)?
Key Points
1. AWS service sprawl becomes manageable when grouped into compute, storage, databases, analytics, ML, security, and provisioning.
2. EC2 plus Elastic Load Balancing, CloudWatch, and Auto Scaling provides a classic scaling path for always-on applications.
3. Elastic Beanstalk reduces deployment complexity by adding an abstraction layer over EC2 and autoscaling.
4. Lambda shifts to serverless execution with event triggers and pay-per-request/execution-time billing.
5. Containers are supported end-to-end via ECR (images) and ECS/EKS/Fargate/App Runner (orchestration and deployment).
6. Storage selection depends on access patterns: S3 for general objects, Glacier for cheap archival, EBS for high-throughput block needs, and EFS for managed file storage.
7. Cost control is a first-class concern, with AWS Cost Explorer and AWS Budgets used to prevent runaway spend.