
Top 50+ AWS Services Explained in 10 Minutes

Fireship · 6 min read

Based on Fireship's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

AWS service sprawl becomes manageable when grouped into compute, storage, databases, analytics, ML, security, and provisioning.

Briefing

AWS has grown from a handful of core services into a sprawling catalog—so many that overlapping capabilities can feel like shopping the same products in different aisles. The practical takeaway is that most AWS offerings cluster into a few job families: compute, storage, databases, analytics, machine learning, security, and deployment/provisioning. Once those categories click, the “200+ services” start to look less like confusion and more like a menu of interchangeable building blocks.

On the compute side, Elastic Compute Cloud (EC2) remains the baseline: rent virtual machines by choosing an operating system and resource size, then run web servers or application backends. As traffic grows, Elastic Load Balancing spreads requests across instances, while CloudWatch collects logs and metrics that can feed Auto Scaling policies to add capacity automatically. For teams that want less infrastructure work, Elastic Beanstalk adds an abstraction layer—deploy a template and application code and let autoscaling run underneath. When even that is too much, AWS Lambda shifts to serverless: upload code, define the triggering event, and pay only for requests and execution time.
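The Lambda model above can be sketched as a plain function: the platform invokes a handler with the triggering event, and you never manage a server. The event shape below (an API Gateway-style payload) and the field names are illustrative, not a definitive contract:

```python
# Minimal sketch of the Lambda programming model: AWS calls a handler
# with the event that triggered it. The payload shape is illustrative.
import json

def handler(event, context=None):
    """Entry point Lambda would invoke when the configured event fires."""
    name = event.get("queryStringParameters", {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation with a fake event -- in AWS, a trigger (HTTP request,
# S3 upload, queue message, ...) supplies this payload instead.
print(handler({"queryStringParameters": {"name": "Fireship"}})["statusCode"])
```

Because billing is per request and per millisecond of execution, a handler like this costs nothing while idle.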

Containerization adds another path. Elastic Container Registry stores Docker images; Elastic Container Service (ECS) orchestrates containers; and Elastic Kubernetes Service (EKS) runs Kubernetes when more control is needed. Fargate makes containers behave more like serverless workloads by removing the need to manage EC2 instances. App Runner (introduced in 2021) further simplifies deployment by letting developers point to a container image while AWS handles orchestration and scaling.

Storage and data management follow a similar “pick the right latency and structure” logic. Simple Storage Service (S3) is general-purpose object storage for files like images and videos. Glacier is cheaper archival storage with higher retrieval latency. Elastic Block Store targets high-throughput, data-intensive workloads but requires more configuration. Elastic File System offers fully managed file storage at a higher cost.

For structured data, the database aisle spans NoSQL, relational, graph, caching, time series, and ledger-style immutability: DynamoDB for scalable document-style access; DocumentDB as a MongoDB API-compatible alternative; RDS for managed SQL with backups and patching; and Aurora for MySQL/PostgreSQL compatibility with performance and cost advantages, including a serverless option. Neptune supports graph workloads, ElastiCache provides Redis-like in-memory low-latency reads, and Timestream handles time-series queries and analytics. Quantum Ledger Database (QLDB) supports cryptographically signed, immutable transaction histories.

Analytics and machine learning then build on where data lives and how it moves. Redshift acts as a data warehouse for structured analytics across enterprise sources. Lake Formation helps create data lakes for unstructured data. Kinesis captures real-time streams, while Apache Kafka (with Amazon MSK as the managed service) supports streaming pipelines. Glue provides serverless ETL to extract, transform, and load data from sources like Aurora, Redshift, and S3. For ML, SageMaker supports model training and deployment using TensorFlow or PyTorch, with managed Jupyter notebooks and GPU-backed training. For common use cases, managed APIs like Rekognition (image analysis and classification) and Lex (conversational bots) reduce the need to build models from scratch.
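The extract-transform-load pattern Glue automates can be sketched in miniature. This is a conceptual, in-memory stand-in, not the real Glue API (actual Glue jobs typically run PySpark against sources like Aurora, Redshift, or S3); the table contents and field names are hypothetical:

```python
# Conceptual ETL sketch: pull rows from a source, reshape them, write
# them to a target. Glue automates this serverlessly at scale.

def extract():
    # Pretend source table (e.g. rows read from Aurora).
    return [
        {"user": "ada", "ms_watched": 61000},
        {"user": "linus", "ms_watched": 185000},
    ]

def transform(rows):
    # Normalize units and keep only what the warehouse needs.
    return [{"user": r["user"], "minutes": r["ms_watched"] / 60000}
            for r in rows]

def load(rows, target):
    # Stand-in for writing to Redshift or S3.
    target.extend(rows)

warehouse = []
load(transform(extract()), warehouse)
```

The point is the shape of the pipeline: Glue removes the servers and much of the plumbing, but the extract → transform → load flow is the same.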

Security and operations round out the stack. Identity and Access Management (IAM) defines access rules, Cognito handles user authentication and sessions, and SNS/SES send notifications and emails. CloudFormation provisions infrastructure from templates, and AWS Amplify provides front-end SDKs for connecting apps to AWS backends. Finally, cost control is treated as a core requirement: AWS Cost Explorer and budgets help prevent runaway spend—especially when scaling infrastructure and data processing.
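"Provisioning from templates" is easiest to see with a tiny example. The following is a minimal, illustrative CloudFormation template (the bucket name and logical ID are hypothetical) declaring a single S3 bucket that AWS would create and track as part of a stack:

```yaml
# Minimal CloudFormation template (illustrative): declare resources,
# and AWS provisions and tracks them as a stack.
AWSTemplateFormatVersion: "2010-09-09"
Description: Example stack with one S3 bucket.
Resources:
  UploadsBucket:                  # logical ID within the stack
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-example-uploads-bucket   # hypothetical name
```

Because the template is declarative, updating or deleting the stack updates or deletes the resources with it, which is what makes infrastructure-as-code reproducible.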

In short, AWS’s complexity becomes manageable when each service is mapped to a specific development job: run compute, store data, query it, analyze it, secure it, deploy it, and keep costs predictable.

Cornell Notes

AWS’s large service catalog becomes navigable when grouped by core development jobs: compute, storage, databases, analytics, machine learning, security, and provisioning. EC2 plus load balancing, CloudWatch, and Auto Scaling cover traditional scaling; Elastic Beanstalk simplifies deployment; Lambda shifts to serverless pay-per-use. Containers add portability and control via ECR, ECS, EKS, Fargate, and App Runner. Data choices hinge on access patterns: S3/Glacier for objects, EBS/EFS for block/file needs, and databases ranging from DynamoDB and RDS/Aurora to Neptune, ElastiCache, Timestream, and QLDB. Analytics and ML build on those foundations using Redshift, Lake Formation, Kinesis/Kafka, Glue, and SageMaker, with managed APIs like Rekognition and Lex for common tasks.

How do EC2, load balancing, CloudWatch, and Auto Scaling work together when an application grows?

EC2 provides the virtual machines where an app runs, with choices for operating system and compute/memory capacity. Elastic Load Balancing distributes incoming traffic across multiple EC2 instances. CloudWatch collects logs and metrics from those instances, and its data can feed Auto Scaling policies. Auto Scaling then creates new instances when traffic and utilization cross defined thresholds, helping capacity track demand.
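The scale-out decision Auto Scaling makes from CloudWatch metrics can be sketched as a target-tracking calculation. The formula mirrors the target-tracking idea conceptually; the bounds, defaults, and function name are ours, not the AWS API:

```python
# Sketch of a target-tracking scaling decision: size the fleet so average
# utilization (a CloudWatch-style metric) approaches a target value.
import math

def desired_capacity(current_instances, avg_cpu_pct, target_cpu_pct=50,
                     min_size=1, max_size=10):
    # Keep total load constant while hitting the target utilization.
    wanted = math.ceil(current_instances * avg_cpu_pct / target_cpu_pct)
    return max(min_size, min(max_size, wanted))

# Traffic spike: 4 instances at 90% CPU -> scale out to 8.
print(desired_capacity(4, 90))   # 8
# Quiet period: 4 instances at 10% CPU -> scale in to 1.
print(desired_capacity(4, 10))   # 1
```

In the real service, the policy thresholds, cooldowns, and min/max fleet sizes are all configuration; this only shows why metrics must flow from CloudWatch into the scaling decision.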

What’s the practical difference between Elastic Beanstalk and Lambda?

Elastic Beanstalk adds a deployment abstraction on top of EC2 and autoscaling: developers choose a template, deploy code, and AWS handles the underlying scaling and related setup. Lambda is serverless: developers upload code and specify an event trigger, and AWS runs the function whenever that event fires. Billing is tied to requests and execution time rather than maintaining always-on servers.

When should teams use containers and which AWS services map to the container lifecycle?

Containers are useful when applications need to run consistently across environments. Elastic Container Registry stores Docker images. Elastic Container Service (ECS) starts/stops and allocates resources for containers and can integrate with load balancers. Elastic Kubernetes Service (EKS) runs Kubernetes for teams needing Kubernetes-level control. Fargate removes the need to allocate EC2 instances for container workloads, making containers behave more like serverless functions. App Runner (2021) simplifies deployment by pointing to a container image while AWS handles orchestration and scaling.
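The "runs consistently across environments" property comes from the image itself. A minimal, illustrative Dockerfile (file names and base image are hypothetical) shows why the same artifact can run on a laptop, ECS, EKS, Fargate, or App Runner:

```dockerfile
# Illustrative Dockerfile: the image bundles runtime, dependencies, and
# code, so the same artifact runs identically everywhere it is deployed.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Once built, the image is pushed to a registry such as ECR, and every AWS container service in the list above pulls and runs that same image.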

How do AWS storage services map to latency and workload type?

S3 is general-purpose object storage for files like images and videos. Glacier is for archival storage with higher latency but much lower cost. Elastic Block Store (EBS) suits intensive data processing that needs high throughput, but it requires more manual configuration. Elastic File System (EFS) provides fully managed file storage with strong features, typically at a higher cost than simpler options.

What database options cover different data models and performance needs?

DynamoDB is a scalable document database with fast reads and horizontal scaling, but it’s not ideal for relational modeling. DocumentDB offers a MongoDB API-compatible approach for document-style data. RDS manages relational SQL databases with operational tasks like backups and patching. Aurora is compatible with PostgreSQL or MySQL and can deliver better performance at lower cost, including a serverless scaling option. Neptune targets graph workloads, ElastiCache provides Redis-like in-memory low-latency access, and Timestream supports time-series queries and analytics. Quantum Ledger Database (QLDB) supports immutable, cryptographically signed transaction histories—blockchain-style verifiability without a decentralized network.
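DynamoDB's trade-off is easiest to see from its access pattern: items are addressed by a partition key (plus an optional sort key), so single-key reads are cheap, while ad hoc relational joins are not a native operation. This is an in-memory conceptual stand-in (keys and items are hypothetical); the real service is used through an SDK such as boto3:

```python
# Conceptual sketch of key-value access in the DynamoDB style:
# every item is addressed by (partition key, sort key).

table = {}  # (partition_key, sort_key) -> item

def put_item(pk, sk, item):
    table[(pk, sk)] = item

def get_item(pk, sk):
    # Direct key lookup -- the operation DynamoDB makes fast and scalable.
    return table.get((pk, sk))

put_item("user#ada", "profile", {"plan": "pro"})
put_item("user#ada", "video#42", {"progress": 0.8})

print(get_item("user#ada", "profile"))
```

Anything expressible as "fetch the item(s) for this key" scales horizontally; anything requiring cross-entity joins pushes you toward RDS or Aurora instead.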

Which AWS services support analytics and ML pipelines from streaming data to model deployment?

Redshift is a data warehouse for structured analytics. Lake Formation helps build data lakes for unstructured data. Kinesis captures real-time streams, while Apache Kafka with Amazon MSK provides streaming infrastructure in a managed form. Glue performs serverless ETL to connect sources like Aurora, Redshift, and S3 and create jobs via Glue Studio without writing source code. SageMaker supports ML training and deployment using TensorFlow or PyTorch, including managed Jupyter notebooks with GPU-backed training. For common tasks, Rekognition handles image classification and Lex supports conversational bots.

Review Questions

  1. Which combination of AWS services would you use to distribute traffic across multiple instances and automatically add capacity as demand rises?
  2. How do S3 and Glacier differ in access pattern and cost, and where would EBS or EFS fit instead?
  3. If an application is already containerized, what AWS services could deploy it with different levels of orchestration control (ECS/EKS/Fargate/App Runner)?

Key Points

  1. AWS service sprawl becomes manageable when grouped into compute, storage, databases, analytics, ML, security, and provisioning.
  2. EC2 plus Elastic Load Balancing, CloudWatch, and Auto Scaling provides a classic scaling path for always-on applications.
  3. Elastic Beanstalk reduces deployment complexity by adding an abstraction layer over EC2 and autoscaling.
  4. Lambda shifts to serverless execution with event triggers and pay-per-request/execution-time billing.
  5. Containers are supported end-to-end via ECR (images) and ECS/EKS/Fargate/App Runner (orchestration and deployment).
  6. Storage selection depends on access patterns: S3 for general objects, Glacier for cheap archival, EBS for high-throughput block needs, and EFS for managed file storage.
  7. Cost control is a first-class concern, with AWS Cost Explorer and budgets used to prevent runaway spend.

Highlights

AWS’s “200+ services” map cleanly onto a handful of engineering jobs: run compute, store data, model/query it, analyze it, secure it, and deploy it.
Serverless (Lambda) replaces always-on servers by running code only when events arrive, with billing tied to requests and execution time.
Aurora’s compatibility with PostgreSQL and MySQL plus a serverless option positions it as a flexible alternative to traditional managed SQL.
Glue provides serverless ETL that can connect Aurora, Redshift, and S3 and generate jobs via Glue Studio without writing source code.
AWS Cost Explorer and budgets are presented as essential tools to keep cloud spending under control.

Topics

  • AWS Services Overview
  • Compute and Serverless
  • Containers and Orchestration
  • Storage and Databases
  • Analytics and Machine Learning

Mentioned

  • AWS
  • EC2
  • ECS
  • EKS
  • ECR
  • Fargate
  • Lambda
  • S3
  • EBS
  • EFS
  • RDS
  • SNS
  • SES
  • IAM
  • Kinesis
  • MSK
  • SageMaker
  • CloudWatch
  • Auto Scaling
  • VPC