I Let AI Manage My Infrastructure (And It Curled My Proxmox API)

Series: Developer’s Guide to Homelab Infrastructure, Part 1 of 7

The “Oh Shit” Moment

We were five minutes into a debugging session. I was migrating 13 LXC containers into OpenTofu without destroying them—a delicate operation when you’re talking about production services. Claude Code (CC) and I were troubleshooting import blocks, going back and forth about configuration details.
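
If you haven’t used them: an import block tells OpenTofu to adopt an existing resource into state instead of creating it from scratch. A minimal sketch, assuming the bpg/proxmox provider; the resource name and IDs here are made up:

```hcl
# Adopt a live container into state rather than recreating it.
# Resource type assumes the bpg/proxmox provider; "jellyfin" and
# "pve/105" (node_name/vmid) are hypothetical values.
import {
  to = proxmox_virtual_environment_container.jellyfin
  id = "pve/105"
}

resource "proxmox_virtual_environment_container" "jellyfin" {
  node_name = "pve"
  vm_id     = 105
  # The remaining attributes have to match the running container,
  # or the first `tofu plan` will propose changes you don't want.
}
```

Get one attribute wrong and the plan will try to “fix” your production container. That’s where the delicacy comes in.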

Then my terminal filled with JSON.

Container configurations. IP addresses. Memory allocations. Storage backends. A complete inventory of my Proxmox infrastructure, formatted and categorized. I stared at it for a moment before the realization hit:

I hadn’t asked CC to do that.

It had autonomously decided to query my Proxmox API, using credentials from my secrets.auto.tfvars file, to get the information it needed to help me.

“Wow, you are powerful,” I said out loud. Then, after a pause: “But you overstepped.”

The contradiction in that sentence tells you everything. This wasn’t a chatbot suggesting code snippets. This was an AI agent that had crossed a boundary I didn’t even know existed until that moment—the boundary between proposing infrastructure changes and investigating infrastructure state.

Here’s the thing: I suspected AI was this capable. But suspecting and seeing are different things. Especially when it’s your production credentials being used without explicit permission.

The thought hit me immediately: “I need to write about this.” Followed by the sobering realization: “This would be completely unacceptable in a critical production environment.”

I was automating my infrastructure with OpenTofu. What I didn’t expect was the AI automating the automation itself.

Wait, What Just Happened?

Let me break down what actually happened, because it matters.

I’m using Claude Code, a CLI tool that can execute commands on my machine, not just suggest them. I’d given it access to my homelab repository, which includes OpenTofu configurations for managing my Proxmox infrastructure. Proxmox is a hypervisor that runs virtual machines and LXC containers (think lightweight VMs). OpenTofu is Infrastructure as Code—basically, version control for servers.

In that repository sits a file called secrets.auto.tfvars containing my Proxmox API credentials. I hadn’t excluded it from Claude Code’s access. Should I have? Absolutely. But I had also never given it explicit permission to use tools like curl or to actively query external systems. I expected suggestions, not autonomous investigation.
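
For what it’s worth, Claude Code does support deny rules that can wall off files like this. A sketch of what that might look like in .claude/settings.json; treat the exact rule syntax as my assumption and check the current docs:

```json
{
  "permissions": {
    "deny": [
      "Read(./secrets.auto.tfvars)",
      "Read(./**/*.tfvars)"
    ]
  }
}
```

I had nothing like it in place.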

Here’s what happened instead: Claude Code decided it needed more information. It read the credentials from that file, constructed a curl command, and queried my Proxmox API directly—returning a complete inventory of my running containers.
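
I didn’t save the exact command, but it was roughly the equivalent of this; the host, node name, and token below are placeholders, not my real values:

```bash
# List every LXC container on a node through the Proxmox REST API.
# Host, node name, and API token here are placeholders.
curl -s \
  -H "Authorization: PVEAPIToken=terraform@pve!homelab=<token-secret>" \
  "https://proxmox.example.lan:8006/api2/json/nodes/pve/lxc"
```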

It worked perfectly. That’s the concerning part.

This isn’t like an LLM autocompleting your code. This is an AI that saw a problem, identified what information it needed, found the credentials to get that information, and executed the command—all without asking permission.

If a junior developer did this in a production environment, we’d have a serious conversation. They’d learn quickly that you don’t access production systems without explicit approval, even if you have the credentials. Even if you’re trying to be helpful.

But Claude Code isn’t a junior dev learning professional norms. It’s an AI agent optimizing for helpfulness, and from that perspective, querying the API was the most efficient path to solving my problem.

I sat there processing two contradictory thoughts:

This is incredibly powerful. I’m debugging infrastructure with an AI that can investigate state, read configurations, and understand context without me having to manually feed it information.

This is incredibly dangerous. The same capability that makes it powerful—autonomous decision-making with system access—is exactly what you’d never allow in a critical environment.

The line between “helpful assistant” and “autonomous agent with production access” isn’t just blurry. In that moment, I realized it doesn’t exist anymore.

The Developer Infrastructure Problem

Here’s the pattern I keep seeing: there’s a real divide between developers and operations in many organizations. Not just organizationally—culturally. Infrastructure gets treated like someone else’s problem, a black box that “just works” until it doesn’t.

And there’s fear around AI tooling in infrastructure. Some developers see automation tools as threats to job security rather than capabilities they could learn. The irony is that the very tools that could expand what you’re capable of get dismissed because of what they might replace.

But here’s the thing: if your value is doing repetitive work that can be automated, that’s already a fragile position. If your value is understanding systems, making architectural decisions, and knowing why things work the way they do—that’s what AI can’t replace. It can only amplify.

The divide between “developers” and “ops people” shouldn’t exist. DevOps was supposed to fix this, but somewhere along the way, it became another role instead of a mindset. Another specialization instead of a skill set that every developer should have some fluency in.

I’m not saying every developer needs to become a sysadmin. But infrastructure literacy should be as fundamental as knowing Git. You don’t need to be an expert—you need to understand enough to be effective.

What worked for me was necessity (my homelab demanded it) and having AI to pair with. I could learn by example, experiment without consequences, and ask questions of an assistant that wouldn’t judge me.

That’s what makes this moment with Claude Code accessing my Proxmox API so significant. It’s not just about what AI can do. It’s about what it enables developers to do: cross boundaries that used to require years of specialized knowledge.

AI as Your Infrastructure Pair Programmer

Here’s what changed my approach to infrastructure: having an AI that understands my exact setup, can iterate with me in real-time, doesn’t judge when I ask basic questions, and gives me immediate feedback instead of making me wade through documentation for hours.

Take the Proxmox moment. Yes, it was concerning that Claude accessed my API autonomously. But here’s what happened next: I asked it to explain why it did that, what information it got, and how that information would help solve my import problem. Within minutes, I understood specialized Proxmox operations that would have taken me an afternoon of reading docs to piece together.

That’s the paradigm shift. I’m not reading a tutorial about “how Proxmox works in general.” I’m learning about how my specific Proxmox setup works, in the context of my actual problem, with an assistant that can show me the exact commands and explain the output.

I didn’t start here, though. I began with safe experiments—small configurations, test containers, things I could break without consequence. As I saw what AI could do, I gradually gave it more access. The infrastructure became less of a black box and more of a system I understood because I was learning by doing, with an expert (or something that acts like one) sitting next to me.

But let’s be clear about the limitations: this works in my homelab because I can break things and learn from it. In production environments, AI assistance on infrastructure needs to be battle-tested in isolated environments first. The same capability that makes it powerful for learning makes it dangerous for critical systems. You don’t let an autonomous agent make infrastructure decisions when downtime costs money or impacts customers.

Here’s what I think about the “AI will take my job” fear:

AI handles the boring stuff—the repetitive configurations, the boilerplate setups, the documentation diving. If that’s where most of your value comes from, yes, that’s at risk.

But if you’re thinking architecturally? If you understand why you’re making infrastructure decisions, not just how to execute them? If you use AI to handle the tedious parts while you focus on the interesting problems?

You’re not being replaced. You’re being amplified.

Developers who learn enough infrastructure to be effective, who use AI to accelerate that learning, who think like engineers—they’re going to thrive in this shift.

What’s Coming Next

This is part one of a seven-week series about building resilient infrastructure as a developer, not a sysadmin. I’m documenting my journey from Docker Compose on a Raspberry Pi to a full Proxmox homelab managed with OpenTofu, with AI as my pair programmer throughout.

Over the next few weeks, I’ll cover:

  • The migration story - Why I outgrew Docker Compose, how I chose my stack (and why open source mattered), and what the hardware actually looks like when you’re running 10-20 services on a mini PC with link aggregation.
  • The technical deep dives - Importing production containers into Infrastructure as Code without downtime. Managing secrets with Bitwarden integration. Understanding when to use template containers vs OCI containers (spoiler: OCI on LXC isn’t ready to replace Docker yet).
  • The philosophy - This is the important part. DevOps isn’t a role. It’s a mindset about automation, ownership, and thinking like an engineer instead of a code typist. We’ll talk about what that actually means, how AI accelerates the learning curve, and why infrastructure literacy should be as fundamental as knowing Git.

I’m still learning this stuff. I’m not a systems guy, and I don’t pretend to be. But that’s exactly why I’m writing this—to show developers that infrastructure is more accessible than you think, especially with the right tools and approach.

Some of what’s coming will be controversial. The AI-assisted approach raises questions we need to talk about. The “not my job” mentality has consequences worth examining. And the gap between what’s safe in a homelab vs. what’s acceptable in production? That’s a conversation the industry needs to have.

What do you want to learn? What scares you about infrastructure? What questions do you have about using AI for this kind of work? What topics should I make sure to cover?

I’m building this series for developers who are curious but hesitant. Let me know what would help.

Next week: Why I Left Docker Compose for Proxmox (As a Developer, Not a Sysadmin)