AI didn't break your .env security. Runtime did.
software engineering · security · ai · devtools


4/13/2026 · 5 min read

I went down the rabbit hole of "secure local secrets". Like most developers, I thought I had this under control: .env is gitignored, no secrets in git history, no obvious logging mistakes. I even started tightening things further, looking into tools like dotenvx to encrypt .env files and moving toward something more structured, like Infisical. I revisited KeePassXC. In theory, it's solid. In practice, UI issues made it unreliable. That's the problem with security tools: if they get in the way, people stop using them properly.

So I tried to do everything "right". Then I realised the problem isn't where secrets are stored. It's when they are used. The moment your app starts, your secrets are decrypted and loaded into memory. They have to be; otherwise, your app can't function. From that point on, your secret is just a string in a running process.

This is where modern AI tooling changes the picture. Tools like Claude Code or GitHub Copilot don't operate in isolation. They live inside your IDE, your terminal, your workflow. They can read files, suggest changes, and interact with code paths that already use your secrets. They don't need access to your .env file. Runtime access is enough.

Even if you add strict rules in settings.json, CLAUDE.md, or config.toml, you're still relying on behaviour, not enforcing a boundary. Those files guide the assistant; they don't isolate the runtime, and they don't prevent secrets from existing in plaintext once your app is running.

That's the key shift. We've all been trained to think: "If my .env is safe, my secrets are safe." But secrets are not most vulnerable when they are stored. They are most vulnerable when they are in memory. In modern development, many things observe memory indirectly: logs, subprocesses, tooling, debuggers, and increasingly, AI agents and other systems observing your execution environment. None of this requires a human mistake. It just requires execution.
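To make the "just a string in a running process" point concrete, here is a minimal Python sketch. The key name and value are made up for illustration; setting `os.environ` directly stands in for whatever loader you use (dotenv, dotenvx, a secret manager SDK), since they all end in the same place:

```python
import os

# Stand-in for load_dotenv() or a secret-manager fetch: however the
# secret arrives, it ends up in the process as a plain string.
os.environ["API_KEY"] = "sk-example-not-a-real-key"  # illustrative value

# From here on, anything running inside this process, any library you
# import, and any child process that inherits the environment can read
# it back with no decryption step.
secret = os.environ["API_KEY"]
print(secret)  # plaintext in memory, exactly as the app must have it
```

Encryption at rest never enters this picture: by the time your code can call an API with the key, the key is already decrypted.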
That doesn't mean tools like dotenvx or Infisical are useless; far from it. They solve a real problem: reducing exposure at rest and avoiding plaintext secrets scattered across your system. But they don't change the fundamental constraint: if your code can use a secret, that secret exists in plaintext at runtime.

So the goal changes. Not perfect secrecy, but controlled exposure: fewer secrets loaded, less time in memory, less propagation, stricter handling.

This doesn't stop at development. In production, secrets still exist in plaintext at runtime. The difference is the blast radius: there are no AI tools to blame, only system-level exposure. The same applies to containers.

AI didn't create this problem. It just removed the illusion that we had already solved it.

Curious how others are handling this: Are you sticking with .env + encryption (dotenvx), introducing dynamic credentials, moving to something like Infisical, or going fully into secret managers?