
I'm leaving the cloud! (...and why you probably should too)

Simon Høiberg · 6 min read

Based on Simon Høiberg's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

After moving more than 80% of workloads to dedicated Hetzner servers, his AWS bill dropped from about $7,800/month to under $2,000/month, with further reductions expected as migrations finish.

Briefing

Moving most of his SaaS stack off AWS onto dedicated bare-metal servers cut monthly infrastructure costs by more than half and also sped up deployments—while reducing the risk of vendor lock-in that can force expensive, disruptive migrations. After shifting over 80% of his systems to Hetzner, his average AWS bill fell from about $7,800 per month to under $2,000, with expectations that it could drop below $1,000 as the remaining migrations finish. The change matters because it directly affects both unit economics (costs that otherwise rise with usage) and operational resilience (the ability to keep running if a hyperscaler changes pricing or terms).

The new setup replaces AWS-managed serverless components with a more self-managed infrastructure layer. Code now runs in Docker containers orchestrated by Kubernetes on three dedicated Hetzner servers. Data storage moved to PostgreSQL, caching to Redis, and background jobs/message queues to BullMQ. Monitoring and access logging are handled with Prometheus and Grafana. The cost model is also fundamentally different: Kubernetes, PostgreSQL, Redis, BullMQ, and Prometheus are open-source and free, with Grafana relying on its community edition. That leaves the dedicated servers as the primary recurring expense—around $200 per month for Hetzner—plus whatever portion of infrastructure remains on AWS.
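For illustration, the container layer described above could be expressed as a minimal Kubernetes Deployment. This is a sketch, not taken from the video: the service name, image, replica count, port, and secret names are all placeholder assumptions.

```yaml
# Hypothetical Deployment for one SaaS service; all names and images are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
spec:
  replicas: 3                      # e.g. one pod per dedicated server
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: example-api
          image: registry.example.com/example-api:latest  # placeholder image
          ports:
            - containerPort: 3000
          env:
            - name: DATABASE_URL   # points at the self-managed PostgreSQL
              valueFrom:
                secretKeyRef:
                  name: example-api-secrets
                  key: database-url
```

Applied with `kubectl apply -f`, a manifest of this shape is what replaces the per-invocation configuration of services like Lambda or Fargate.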

The trade-off is time and expertise. Migration required learning Kubernetes, Docker, and basic system administration, and infrastructure work spiked early in 2025. Over time, the workload returned to roughly prior levels, helped by better tooling and AI assistance. The financial logic hinges on that shift: the upfront migration effort is largely fixed, while the savings keep compounding month after month with no built-in usage ceiling like AWS’s per-request/per-resource pricing.

A second, arguably bigger reason centers on portability and continuity. AWS services such as DynamoDB, Lambda, and Fargate are robust, but they don’t run outside AWS. If AWS raises prices or stops serving a product configuration, the business can be “cooked,” because migration can’t be done quickly without careful planning. By moving to a stack that can run on any VPS or dedicated server, he aims to avoid being trapped by a single vendor’s infrastructure.

He still keeps some dependencies on AWS and other third parties. Stripe and OpenAI remain in the mix, and he’s experimenting with alternatives like Coinbase for payments and independent vendors for GPU-based AI inference. For the remaining AWS services, he’s tried to replace S3 with S3-compatible options such as Hetzner storage buckets and Cloudflare R2, but vendor lock-in still persists. MinIO looked promising for self-hosted S3 compatibility, yet backup complexity and stability issues pushed him back to S3. Authentication also stayed on AWS for now: he tested Keycloak as an alternative to Cognito but returned to Cognito due to poor user/developer experience.

Finally, the move supports his business model. He sells both subscriptions and lifetime deals. Rising operational costs tied to user growth make lifetime pricing riskier; shifting to fixed-cost infrastructure makes financial planning easier and helps address customer questions about what happens if the company can’t host “lifetime” access. His goal is to make all products fully self-hostable so lifetime customers can run the software themselves if needed.

For deciding whether to do the same, he draws a line by stage: hobby projects should stay on cloud one-click platforms, mission-critical or rapidly scaling businesses may still benefit from managed hyperscalers, and the middle ground is where dedicated servers can be a practical, money-saving option—especially when teams want the freedom to use open-source tools internally.

Cornell Notes

The core shift is from AWS-managed, usage-priced infrastructure to dedicated servers that run a mostly open-source stack. After moving more than 80% of his SaaS systems to Hetzner, monthly AWS costs dropped from about $7,800 to under $2,000, with potential to fall below $1,000 as migrations complete. The new architecture runs Docker containers on Kubernetes, stores data in PostgreSQL, uses Redis for caching, BullMQ for queues, and Prometheus/Grafana for monitoring. The biggest benefits are lower and more predictable costs plus reduced vendor lock-in risk, since the stack can run on other VPS/dedicated providers. The main downside is the upfront learning and migration effort, though he reports that ongoing infrastructure time returned to prior levels after an adjustment period.

What exact infrastructure stack replaced AWS for most of his SaaS workloads?

He moved code into Docker containers orchestrated by Kubernetes on three dedicated Hetzner servers. For data, he uses PostgreSQL. For caching, he uses Redis. For message queues/background jobs, he uses BullMQ. For observability, he uses Prometheus and Grafana to monitor systems and access logs.
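On the observability side, Prometheus monitoring of a setup like this typically hinges on a scrape configuration. The following is an illustrative sketch only: the job names, service-discovery role, and the three placeholder server addresses are assumptions, not details from the video.

```yaml
# prometheus.yml — hypothetical scrape config; jobs and targets are placeholders.
global:
  scrape_interval: 30s
scrape_configs:
  - job_name: kubernetes-pods        # discover and scrape app pods in the cluster
    kubernetes_sd_configs:
      - role: pod
  - job_name: node                   # host metrics from the three dedicated servers
    static_configs:
      - targets:
          - 10.0.0.1:9100            # placeholder IPs for node_exporter endpoints
          - 10.0.0.2:9100
          - 10.0.0.3:9100
```

Grafana then reads from Prometheus as a data source, which is how the free community edition covers both system and access-log dashboards.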

How did the cost picture change after leaving AWS for dedicated servers?

At the start of 2025, his average AWS spend was about $7,800 per month. After moving over 80% of infrastructure to Hetzner, the AWS portion dropped to below $2,000 per month, with expectations to go under $1,000 as the last migrations finish. Hetzner server costs are around $200 per month, and the remaining AWS spend is tied to the roughly 20% of services still not migrated.
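The arithmetic behind these figures can be sketched directly. Since "under $2,000" and "around $200" are rounded, the totals below are illustrative upper bounds rather than exact numbers:

```typescript
// Rough monthly cost comparison using the figures quoted above.
// $2,000 and $200 are treated as upper bounds, so savings are conservative.
const beforeAws = 7800;        // average monthly AWS bill, early 2025 (USD)
const afterAws = 2000;         // residual AWS bill after ~80% migrated (USD)
const hetzner = 200;           // dedicated-server cost at Hetzner (USD)

const afterTotal = afterAws + hetzner;        // total post-migration spend
const monthlySavings = beforeAws - afterTotal;
const yearlySavings = monthlySavings * 12;

console.log(`Total after migration: $${afterTotal}/month`);   // $2200/month
console.log(`Savings: at least $${monthlySavings}/month ($${yearlySavings}/year)`);
```

Even with the remaining ~20% of services still on AWS, this works out to at least $5,600/month saved, which is why he frames the one-time migration effort as quickly amortized.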

Why does vendor lock-in drive the decision beyond simple cost savings?

AWS services like DynamoDB, Lambda, and Fargate are reliable, but they run only within AWS. If AWS changes pricing or stops serving a configuration, the service can fail, and migration can’t be rushed without risking business continuity. By using a stack that can run on any VPS or dedicated server, he reduces the need for emergency migrations.

What were the biggest downsides during the transition, and how did they change over time?

Migration required learning Kubernetes, Docker, and system administration, and infrastructure work spiked early in 2025. Over time, the time spent on infrastructure returned to roughly the same level as before (or less), aided by improved tooling and AI support. He frames the migration effort as an upfront cost with ongoing savings that keep growing month after month.

Which AWS services remain, and what replacement attempts did he make?

He still relies on AWS for parts of storage, authentication, and edge protection. For storage, he considered S3-compatible options like Hetzner storage buckets and Cloudflare R2, but vendor lock-in remained a concern. He tried MinIO with a 15 TB drive attached to a dedicated server, but backup complexity and stability issues led him back to S3. For authentication, he used AWS Cognito and tested Keycloak, but returned to Cognito due to poor user/developer experience. For public entry, he uses AWS CloudFront as a proxy in front of Hetzner for security features like firewalls and DDoS protection.
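The "S3-compatible" switch he describes usually comes down to pointing an existing S3 client at a different endpoint while keeping the same API calls. A sketch of the options involved—the endpoint URL and credentials below are placeholders, not values from the video:

```typescript
// Hypothetical client options for an S3-compatible object store (e.g. Cloudflare
// R2 or Hetzner's storage buckets). Only the endpoint and credentials change;
// the S3 API surface (put/get/list) stays the same, which is what makes the
// swap low-friction — and also why lock-in concerns can persist at the edges.
const s3CompatibleOptions = {
  region: "auto",              // R2 accepts "auto"; other providers use real regions
  endpoint: "https://example-account.r2.cloudflarestorage.com", // placeholder URL
  forcePathStyle: true,        // many S3-compatible stores require path-style addressing
  credentials: {
    accessKeyId: "EXAMPLE_KEY_ID",       // placeholder credentials
    secretAccessKey: "EXAMPLE_SECRET",
  },
};

console.log(`endpoint: ${s3CompatibleOptions.endpoint}`);
```

An options object of this shape is what S3 SDK clients that support endpoint overrides accept; the catch he ran into is that compatibility covers the common API but not provider-specific behavior like backup tooling.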

How does leaving the cloud connect to his lifetime-deal business model?

Lifetime deals create risk when operational costs rise with user growth. By moving toward fixed-cost infrastructure, he can plan finances more reliably. He also wants to make products fully self-hostable so lifetime customers can keep using the software even if the company can’t host it, turning “lifetime access” into something customers can independently sustain.
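Self-hostability of the kind described is commonly delivered as a Compose file customers can run themselves. The following is a hypothetical sketch of what such a bundle might look like given the PostgreSQL/Redis stack above—service names, the product image, ports, and credentials are all placeholders:

```yaml
# docker-compose.yml — hypothetical self-hosted bundle; names/images are placeholders.
services:
  app:
    image: example/product:latest        # placeholder product image
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
      REDIS_URL: redis://cache:6379
    depends_on: [db, cache]
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
    volumes:
      - db-data:/var/lib/postgresql/data  # persist customer data across restarts
  cache:
    image: redis:7
volumes:
  db-data:
```

Because the production stack already runs on open-source components in containers, a `docker compose up` bundle like this is a plausible path to letting lifetime customers sustain access on their own hardware.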

Review Questions

  1. Which parts of his architecture are explicitly open-source/free, and how does that affect the cost model compared with AWS usage pricing?
  2. What specific failure mode does vendor lock-in create, and why is migration not something he treats as quick or low-risk?
  3. What remaining AWS dependencies are hardest to replace, and what trade-offs did his experiments (like MinIO and Keycloak) reveal?

Key Points

  1. After moving more than 80% of workloads to dedicated Hetzner servers, his AWS bill dropped from about $7,800/month to under $2,000/month, with further reductions expected as migrations finish.

  2. The new core stack runs Docker containers on Kubernetes, with PostgreSQL for storage, Redis for caching, BullMQ for queues, and Prometheus/Grafana for monitoring.

  3. The main financial advantage comes from shifting from usage-based cloud pricing to fixed server costs, making savings potentially unbounded as the business grows.

  4. Vendor lock-in is treated as a continuity risk: AWS-only services can force slow, expensive migrations if pricing or availability changes.

  5. Migration has an upfront learning curve (Kubernetes, Docker, system admin), but ongoing infrastructure time reportedly returned to prior levels after adjustment.

  6. He still keeps some critical dependencies on AWS and third parties—especially for storage, authentication, and edge security—while actively searching for replacements.

  7. Lifetime deals become easier to support when operational costs are more predictable and the software can be self-hosted by customers if needed.

Highlights

AWS spend fell from roughly $7,800/month to under $2,000/month after shifting over 80% of infrastructure to Hetzner dedicated servers.
The architecture swap is concrete: Kubernetes + Docker, PostgreSQL, Redis, BullMQ, and Prometheus/Grafana replace AWS-managed services for most workloads.
Portability is the central resilience goal—he wants to avoid being trapped by AWS-only services that can’t run elsewhere.
MinIO and Keycloak were tested as alternatives to S3 and Cognito, but backup complexity and user/developer experience issues pushed him back to AWS for now.
