Self-Hosting in 2026: The Economics and Philosophy

The Pitch

I run 25+ services on hardware that costs less than a single Datadog subscription.

My entire self-hosted infrastructure – AI platform, Git forge, monitoring stack, automation workflows, databases, recipe manager, portfolio tracker, and more – runs on approximately 36 EUR per month. The SaaS equivalent of these services would cost somewhere between 200 and 350 USD per month. That’s a 5-10x cost multiplier for letting someone else run the same software on their computers.

But this isn’t a “cloud bad, self-host good” article. I’ve been running this setup for long enough to know the real trade-offs, and the economics are only part of the story. The philosophy – why it matters beyond money – is what keeps me investing my evenings and weekends into infrastructure that nobody is paying me to maintain.

Let me show you the real numbers, from a real setup, with honest accounting for both the savings and the costs.

The Inventory: What I Actually Run

Here’s the complete picture. These aren’t theoretical services I might deploy someday. They’re running right now, serving real traffic, storing real data.

AI Platform

| Service | What It Does |
| --- | --- |
| OpenWebUI | User-facing chat interface for interacting with LLMs. Configurable access control per user and per model. |
| LiteLLM | Universal AI API gateway. Routes requests to Ollama, OpenAI, Anthropic, Mistral, and Google. Handles caching, per-user spend tracking, and logging. |
| Ollama | Local LLM inference engine. Declarative model management treats your model inventory as code. |

Data and Knowledge

| Service | What It Does |
| --- | --- |
| PostgreSQL HA (primary/replica with pgvector) | High-availability relational database with vector embedding support for RAG pipelines. |
| MinIO | S3-compatible object storage for RAG source documents. |
| NocoDB | Open-source Airtable alternative. I use it as an infrastructure CMDB – 17 host records across 12 columns tracking every device in my network. |
| Valkey | Redis-compatible caching layer, primarily serving LiteLLM for response caching and request batching. |

DevOps and CI/CD

| Service | What It Does |
| --- | --- |
| Forgejo | Self-hosted Git forge (Gitea fork). All my code lives here, not on GitHub. |
| Forgejo Runner | CI/CD runner with Docker-in-Docker for building and testing. |
| n8n | Workflow automation with PostgreSQL persistence. Handles webhooks from Forgejo for inventory sync and other automations. |

Monitoring and Observability

| Service | What It Does |
| --- | --- |
| VictoriaMetrics | Prometheus-compatible time-series database for metrics. |
| VictoriaLogs | Log aggregation backend. |
| Grafana | Dashboards and visualization, backed by PostgreSQL. |
| vmalert | Alerting rules engine for VictoriaMetrics. |
| Vector | Log collector deployed across all production hosts. |
| cAdvisor | Container resource usage monitoring. |
| Argus | Uptime monitoring service. |

Networking and Security

| Service | What It Does |
| --- | --- |
| Traefik | Edge reverse proxy with automatic Let’s Encrypt TLS certificates. |
| HAProxy | Internal SDN reverse proxy routing 16+ HTTP services, plus SSH bastion forwarding. |
| Tailscale | WireGuard-based VPN mesh. Advertises my 10.0.0.0/24 SDN subnet so I can access everything remotely. |

Productivity and Life

| Service | What It Does |
| --- | --- |
| Mealie | Recipe manager and meal planner. |
| Ghostfolio | Open-source portfolio and investment tracker. |

Other

| Service | What It Does |
| --- | --- |
| Deluge | BitTorrent client, running in its own isolated LXC container. |
| Hugo | This blog. Static site generator deployed as a Docker container behind Traefik on a VPS. |
| DeQ | Homelab dashboard. Gives me a single pane of glass for the whole setup. |

That’s 25+ services across AI, data, DevOps, monitoring, networking, and productivity. Every single one is open-source. Every single one runs on hardware I control.

The Hardware

Here’s where it gets interesting. This isn’t a rack of enterprise servers humming in a closet. The physical footprint is almost laughably small.

| Device | Role | Key Specs | Cost |
| --- | --- | --- | --- |
| Mini PC (Proxmox host) | Hypervisor running 13+ LXC containers and 2 VMs | Multi-core CPU, 32GB RAM, link aggregation networking | ~400 EUR one-time |
| Raspberry Pi 5 | Docker host for edge services (Traefik, Mealie, DeQ, monitoring scrapes) | 8GB RAM | ~90 EUR |
| Raspberry Pi 3 | Lightweight services, Vector log forwarding | 1GB RAM | Already owned (~35 EUR) |
| Buffalo LinkStation Live | Backup storage via SMBv1-to-SMBv3 protocol bridge | 1TB, assembled in Japan, circa 2009 | Already owned (0 EUR) |
| Hetzner VPS | Public-facing services (Hugo portfolio, Traefik edge) | Ubuntu 24.04, OpenTofu-provisioned | ~5 EUR/month |

Five devices. Three of them are smaller than a paperback book.

The anti-waste angle matters to me. That Buffalo NAS is 17 years old. Its insecure SMBv1 traffic is isolated inside a protocol bridge container while the rest of my network speaks modern SMBv3. It still works. Not fast, but working. Sturdy build, assembled in Japan. Seventeen years.

The Raspberry Pi 3 has 1GB of RAM. One gigabyte. It runs Vector for log forwarding using zswap compressed memory. It’s not doing heavy lifting, but it’s contributing. In 2026, there’s no excuse for throwing hardware away because the specs look embarrassing on paper.

The Money: Real Numbers

Monthly Running Costs

| Item | Monthly Cost | Notes |
| --- | --- | --- |
| Bare metal / Proxmox host | ~20 EUR | Electricity + amortized hardware cost |
| Hetzner VPS | ~5 EUR | CX-series cloud server, OpenTofu-provisioned |
| RPi electricity (RPi5 + RPi3) | ~3-5 EUR | RPi5 at ~5W idle, RPi3 at ~2W |
| API credits (4 AI providers) | ~7 EUR | OpenAI, Anthropic, Mistral, Google |
| Domain + DNS | ~1-2 EUR | For n-gr.xyz |
| Total | ~36 EUR/month | ~430 EUR/year |
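The ~20 EUR line for the Proxmox host bundles two things. Roughly unpacked below; note that the amortization period, power draw, and electricity price are assumed round numbers, not measured figures:

```python
# Unpacking the ~20 EUR/month Proxmox host line.
# The amortization period, power draw, and electricity price below are
# assumptions for illustration; only the combined ~20 EUR figure is measured.
hardware_cost = 400        # EUR one-time, mini PC
amortization_years = 3     # assumed useful life
power_watts = 25           # assumed average draw under light load
eur_per_kwh = 0.35         # assumed electricity price

amortized = hardware_cost / (amortization_years * 12)
electricity = power_watts / 1000 * 24 * 30 * eur_per_kwh

print(f"amortization ~{amortized:.0f} EUR/mo + electricity ~{electricity:.1f} EUR/mo")
```

Under those assumptions the two parts land in the high teens, which is consistent with the ~20 EUR budget line.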

Keep in mind: 1M tokens equals roughly 750,000 words, which is about 1,500 A4 pages of text. GPT-4.1-mini runs at 0.40 USD per million input tokens. That 7 EUR per month in API credits buys a lot of generation across four major providers.
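To make that concrete, a quick back-of-envelope, treating EUR and USD as roughly at parity and counting input tokens only (real bills mix input and output rates):

```python
# Back-of-envelope on the API budget. Assumes ~1:1 EUR/USD and input-token
# pricing only; actual spend depends on the input/output mix per provider.
PRICE_PER_M_INPUT = 0.40   # USD per 1M input tokens (GPT-4.1-mini, from the text)
MONTHLY_BUDGET = 7.0       # EUR/month, from the cost table

tokens_per_month = MONTHLY_BUDGET / PRICE_PER_M_INPUT * 1_000_000
pages_per_month = tokens_per_month / 1_000_000 * 1_500  # ~1,500 A4 pages per 1M tokens

print(f"{tokens_per_month / 1e6:.1f}M tokens/month, roughly {pages_per_month:,.0f} A4 pages")
```

Even at these rough numbers, 7 EUR buys tens of millions of input tokens a month.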

What Would This Cost on SaaS?

Here’s the service-by-service comparison that made me do a double take when I first calculated it:

| Self-Hosted | Cloud Alternative | Typical Cloud Price |
| --- | --- | --- |
| Forgejo + Runner | GitHub (Free–Team) | 0–4 USD/user/month |
| Grafana + VictoriaMetrics + VictoriaLogs + vmalert | Datadog | 15-23 USD/host/month |
| n8n | Zapier / Make | 30-100+ USD/month |
| OpenWebUI + LiteLLM | ChatGPT Team | 25-30 USD/user/month |
| NocoDB | Airtable | 20 USD/user/month |
| PostgreSQL HA | AWS RDS Multi-AZ | 50+ USD/month |
| MinIO | AWS S3 | ~23 USD/TB/month |
| Mealie | Recipe SaaS | 5-10 USD/month |
| Ghostfolio | Portfolio tracker SaaS | 5-15 USD/month |
| Traefik + HAProxy | Cloudflare Pro / AWS ALB | 20+ USD/month |
| Deluge | Seedbox service | 5-15 USD/month |
| Tailscale | Tailscale Team | 6 USD/user/month |

Conservative SaaS equivalent: 200-350 USD/month (2,400-4,200 USD/year)

Self-hosted total: ~430 EUR/year

That’s 5-10x cheaper. (Running costs are in EUR; SaaS prices are in USD at published list prices. At current exchange rates the comparison is essentially like-for-like.)
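The multiplier is easy to verify, again treating EUR and USD as roughly at parity:

```python
# Rough check of the 5-10x claim, using the figures from the comparison above.
self_hosted_per_year = 430          # EUR/year, from the running-cost table
saas_low, saas_high = 2_400, 4_200  # USD/year, the conservative SaaS range

low_multiple = saas_low / self_hosted_per_year
high_multiple = saas_high / self_hosted_per_year

print(f"{low_multiple:.1f}x to {high_multiple:.1f}x")
```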

And I’m being conservative. Datadog alone, if you’re monitoring multiple hosts with APM, can cost more per month than my entire annual infrastructure budget. I’ve seen Datadog bills that would make you cry. My VictoriaMetrics + Grafana stack does 90% of what Datadog does for the cost of the electricity it takes to run them.

A fair note on the Forgejo comparison: GitHub’s free tier gives you unlimited private repos and 2,000 CI/CD minutes per month, which covers most solo developers. I compare against GitHub Team because I wanted features like branch protection rules and higher CI/CD limits – but if you don’t need those, GitHub Free costs nothing. The real value of self-hosted Forgejo for me is zero external dependency and unlimited CI/CD on my own runner. Same with NocoDB versus Airtable – Airtable’s per-user pricing adds up fast when you just need a flexible database with a nice frontend.

The Hidden Cost: Your Time

Here’s where I stop being a self-hosting evangelist and start being honest.

This setup did not build itself. The time investment is real, it’s significant, and if you don’t account for it honestly, you’re lying to yourself about the economics.

The Numbers I Don’t Put in the Monthly Budget

Initial setup: 80-120 hours. I migrated from Docker Compose on a Raspberry Pi to a full Proxmox hypervisor managed with OpenTofu. That meant importing 13 LXC containers into Infrastructure as Code without destroying them. Building out SDN networking. Configuring three-tier firewall rules. Setting up HAProxy routing for 16+ services. Writing 30+ Ansible roles. None of this happened overnight.

Active development: 5-10 hours per week. When I’m adding new services, refactoring infrastructure code, or debugging issues, this easily consumes a full evening or two plus a weekend afternoon. I have 29 Ansible playbooks. Each one needed writing, testing, and debugging.

Steady state: 1-2 hours per week. Once things stabilize, the maintenance is lighter – checking dashboards, updating containers, reviewing alerts. But “stable” is relative. Something always needs attention eventually.

The incidents. I spent a Sunday debugging NAT rules for Tailscale subnet routing through a conntrack table that wasn’t behaving. I’ve watched cAdvisor spike to 255% CPU utilization and had to diagnose why. Docker-in-Docker memory thrashing in my Forgejo Runner forced me to design a defense-in-depth strategy with memory limits and swap controls. When I audited my containers, 57% had no memory limits set. These are the unglamorous hours that don’t make it into the blog posts about how great self-hosting is.
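A minimal Compose fragment shows the shape of that defense-in-depth fix. The service name, image reference, and values here are illustrative, not my exact production settings; check your own workloads before picking limits:

```yaml
# Illustrative memory-limit fragment (names and values are placeholders).
# A hard cap plus no extra swap keeps one runaway container from thrashing
# the whole host instead of just getting OOM-killed itself.
services:
  forgejo-runner:
    image: code.forgejo.org/forgejo/runner:latest  # example image reference
    mem_limit: 2g        # hard RAM cap enforced via cgroups
    memswap_limit: 2g    # equal to mem_limit, so no additional swap allowed
    restart: unless-stopped
```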

The Counter-Argument

But here’s the thing: this time is also learning time.

Every hour I spend debugging my PostgreSQL replication is an hour I understand database high-availability patterns better. Every firewall rule I write teaches me network security. Every Ansible role with Molecule tests is practice in infrastructure-as-code and TDD.

I’m not a system guy. I’ve never pretended to be. But running this homelab forced me to learn enterprise patterns – HA databases, SDN networking, observability pipelines, CI/CD, IaC – in a context where I can break things without consequences. The infrastructure became less of a black box and more of a system I understood because I was learning by doing.

If you’re a developer and infrastructure feels like someone else’s problem, consider what it would mean for your career to understand how the systems you deploy to actually work. DevOps isn’t a role. It’s a mindset. And self-hosting is one of the best ways to develop it.

The time spent is not purely a cost. It’s an investment with compound returns. But only if you value the learning. If you don’t, the math changes completely.

The Philosophy: Why It Matters Beyond Money

The cost savings are real, but they’re not why I wake up on a Saturday morning excited to deploy a new service. The economics get you started. The philosophy is what keeps you going.

Data Sovereignty

My recipes, my portfolio data, my LLM conversations, my code repositories – none of it leaves my network unless I explicitly choose to route it through an external API. LiteLLM gives me that choice at the routing level: local inference via Ollama when privacy matters, cloud APIs when I need more capability.

This isn’t paranoia. It’s an architectural decision. When you use consumer ChatGPT, your conversations can be used for training unless you opt out. Even with ChatGPT Team’s default opt-out, you’re trusting a policy that has changed before and could change again. When you use Airtable, your data lives on their servers under their terms. When you self-host, the data lives on your hardware, under your control, full stop.

In an era where AI companies are in a land grab for training data, running your own inference and storing your own vectors isn’t just privacy theater. It’s a meaningful choice about where your intellectual output ends up.

Anti-Waste and Hardware Recycling

A 17-year-old NAS still serving as backup storage. A Raspberry Pi 3 with 1GB of RAM still forwarding logs. Technology doesn’t become useless because something newer exists. It becomes useless when you stop finding uses for it.

There’s something satisfying about looking at a device that should have been e-waste years ago and watching it contribute to a production system. It’s a small act of resistance against the upgrade cycle that tech companies want you on.

The Learning Accelerator

I’ve learned more about infrastructure in the past year of homelab work than in the previous five years of reading documentation. The difference is context. I’m not reading a tutorial about “how Proxmox works in general.” I’m learning about how my specific Proxmox setup works, in the context of my actual problem.

Self-hosting forces you to understand every layer of the stack. DNS resolution. TLS certificate management. Reverse proxy routing. Database replication. Container networking. Firewall rules. Log aggregation. Metrics collection. Alerting. CI/CD pipelines. Secret management. These aren’t abstract concepts when your Grafana dashboard is showing you that something is actually wrong.

Add AI pair programming to this – I use Claude Code as my infrastructure copilot – and the learning velocity is extraordinary. I can debug a conntrack issue with an assistant that understands my exact configuration, not a Stack Overflow answer from 2019 about someone else’s setup.

Progressive Complexity

You don’t need to start where I am now. I certainly didn’t.

I started with Docker Compose on a single Raspberry Pi. That was good enough for months. Then I outgrew it, got a mini PC, installed Proxmox, and started learning virtualization. Then I needed Infrastructure as Code, so I learned OpenTofu. Then I wanted proper networking, so I set up Proxmox SDN. Each layer was added when the previous one wasn’t enough anymore.

Keep things simple at first. You don’t need to replicate my setup to get started. A 20 EUR per month bare metal server running Docker Compose is a perfectly valid starting point. Skip Kubernetes. Skip the specialized vector database. Skip the three-tier firewall architecture. Add complexity when you outgrow simplicity, not before.

Full-Stack Ownership

There’s a particular kind of satisfaction in controlling every layer from DNS to firewall to database to CI/CD to monitoring. When something breaks, I know where to look. When I want to change something, I change it. I don’t file a support ticket. I don’t wait for a feature request to be prioritized. I don’t accept limitations because “that’s how the platform works.”

This isn’t for everyone, and that’s fine. But if you’re the kind of person who wants to understand how things work – really work, not just at the API level – self-hosting is one of the most direct paths to that understanding.

When NOT to Self-Host

I’d be doing you a disservice if I didn’t include this section. Self-hosting is powerful, but it’s not always the right answer.

When guaranteed uptime matters. There’s no SLA from your closet. If you’re running a business and customers depend on your service being available, cloud providers with 99.99% uptime guarantees exist for a reason. My homelab goes down when I’m debugging a firewall rule. That’s acceptable for personal services. It’s unacceptable for a production SaaS.

When time cost exceeds savings. If your hourly rate is 100 EUR and you spend 5 hours per week on maintenance, that’s roughly 2,000 EUR per month in opportunity cost. My 430 EUR per year in direct costs looks less impressive next to the 260-520 hours of annual labor that 5-10 hours per week adds up to. The math only works if you value the learning or genuinely enjoy the work. If it’s pure drudgery, pay for the SaaS.
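The opportunity-cost arithmetic, spelled out:

```python
# Opportunity cost of homelab maintenance, using the figures from the text.
hourly_rate = 100        # EUR/hour
hours_per_week = 5       # low end of the weekly maintenance estimate
weeks_per_month = 52 / 12

monthly_cost = hourly_rate * hours_per_week * weeks_per_month
annual_hours_low, annual_hours_high = 5 * 52, 10 * 52

print(f"~{monthly_cost:,.0f} EUR/month; {annual_hours_low}-{annual_hours_high} hours/year")
```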

When compliance requires it. SOC2, HIPAA, GDPR data processing agreements – cloud providers invest heavily in these certifications. Your homelab does not have SOC2 compliance. If your use case requires auditable security controls with formal certification, self-hosting creates more problems than it solves.

When you need elastic scale. My setup handles my personal workloads well. But when I pushed it – 57% of containers running without memory limits, Docker-in-Docker thrashing, cAdvisor at 255% CPU – I hit real resource ceilings. Cloud auto-scaling handles traffic spikes gracefully. Bare metal does not.

Email. Never self-host email. Deliverability, spam filtering, IP reputation, DMARC, DKIM, SPF – the operational overhead is enormous and the failure modes are silent. Use a proper email provider. This is one of the few absolute rules in self-hosting.

When you don’t value the learning. If you just want things to work and have zero interest in understanding the infrastructure beneath your applications, self-hosting will feel like unpaid system administration. And it kind of is. The difference is whether you frame those hours as education or as labor.

Getting Started: The Minimum Viable Homelab

If you’ve read this far and you’re curious, here’s how I’d suggest starting. Three tiers, in order of complexity and investment.

Tier 1: A Raspberry Pi and Docker Compose (~100 EUR)

Buy a Raspberry Pi 5 (or 4). Install Raspberry Pi OS. Install Docker. Write a docker-compose.yml with two or three services – maybe Mealie for recipes, a Grafana instance with VictoriaMetrics for basic monitoring, and Traefik for reverse proxying.

This is your playground. Everything runs on one device, configuration is a single YAML file, and if you break everything, you reflash the SD card and start over. The learning value is enormous relative to the investment. You’ll understand container networking, volume mounts, environment variables, and reverse proxy routing within a weekend.
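A minimal sketch of what that single YAML file could look like. The image tags, internal port, and the example hostname are placeholders; check each project’s docs for current images and required environment variables:

```yaml
# Tier 1 sketch: one reverse proxy plus one app on a single Pi.
# Hostname, tags, and ports below are illustrative placeholders.
services:
  traefik:
    image: traefik:v3.3
    command:
      - --providers.docker=true          # discover services via Docker labels
      - --entrypoints.web.address=:80    # plain HTTP entrypoint for the LAN
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  mealie:
    image: ghcr.io/mealie-recipes/mealie:latest
    labels:
      - traefik.http.routers.mealie.rule=Host(`recipes.example.lan`)
      - traefik.http.services.mealie.loadbalancer.server.port=9000  # verify against the image docs
    volumes:
      - mealie-data:/app/data

volumes:
  mealie-data:
```

From there, adding Grafana and VictoriaMetrics is just two more services and a couple more router labels in the same file.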

Total investment: ~100 EUR for hardware + ~0 EUR ongoing (electricity is negligible for a Pi).

Tier 2: A Mini PC with Proxmox (~400 EUR)

When Docker Compose on a single device starts feeling constraining – maybe you want isolation between services, or you’re running into resource limits – get a mini PC and install Proxmox.

Now you have a hypervisor. LXC containers give you lightweight isolation. You can run a database in one container, your applications in another, and your monitoring in a third. Networking becomes more interesting with VLANs and bridges. You start thinking about infrastructure as code with OpenTofu.

This is where it goes from hobby to genuinely useful. At this tier, you can comfortably run 10-15 services with proper isolation, networking, and backup strategies.

Total investment: ~400 EUR for hardware + ~20 EUR/month for electricity and amortization.

Tier 3: Full Stack with VPS Edge, SDN, and IaC (~500 EUR + ongoing)

Add a cheap VPS for your public-facing edge (5 EUR/month from Hetzner). Connect it to your homelab via Tailscale. Set up Proxmox SDN for internal networking. Write Ansible roles for every service. Manage your infrastructure state with OpenTofu. Deploy a CI/CD pipeline with Forgejo and Forgejo Runner.

This is where you’re running enterprise patterns at home. HA PostgreSQL, three-tier firewall segmentation, log aggregation across all hosts, automated deployments, proper secret management. It’s also where the time investment goes from “weekend project” to “ongoing commitment.”

Total investment: ~500 EUR for hardware + ~36 EUR/month ongoing.

The Bottom Line

Self-hosting in 2026 is cheaper, more capable, and more accessible than it’s ever been. Open-source tools have matured to the point where a single person can run an infrastructure stack that would have required a dedicated team ten years ago. AI assistants can bridge the knowledge gaps that used to make infrastructure intimidating.

But the real value isn’t in the 430 EUR per year you save compared to SaaS pricing. It’s in what you learn by running it. Every service you deploy, every bug you debug, every firewall rule you write adds to a mental model of how distributed systems actually work. That knowledge compounds. It makes you a better developer, a better architect, a better engineer.

The question isn’t whether self-hosting saves money. It clearly does.

The question is whether you’re willing to invest the time. And whether you see that time as a cost or an investment depends entirely on what you value.

I value understanding. I value ownership. I value knowing that my data lives on my hardware, my services run on my terms, and my 17-year-old NAS still has a job.

Your move.