Claude Code’s sandboxing is a complete joke. There should be no ‘off switch.’ Sandboxing should not be opt in. It should not have full read access over the file system by default.
I really want more security people to get involved in the LLM space because everyone seems to have just lost their minds.
If you look at this thing through a security lens it’s horrifying, which was a cause of frustration when Anthropic changed their TOS to ban use of alternative clients with a subscription. I don’t want to use that Swiss cheese.
The Claude sandbox is so antithetical to good security posture it almost seems intentional[0]. Having both "default read to the entire file system" and "the agent can and _will_ disable the sandbox, without even asking the user[1], in order to complete tasks" would not pass muster in a freshman level security course.
[0] assuming a human with security training was involved in the design/prompting of the sandbox development.
[1] Claude has well used mechanisms for asking the user before taking potentionally dangerous actions. Why it is not part of the "disable my own SANDBOX" branches of code is confusing.
This is exactly the kind of problem that led me to build a runtime governance layer for coding agents.
Hooks alone aren't a security boundary — Anthropic and Trail of Bits both say "guardrails, not walls." The missing piece is continuous behavioral measurement: tracking tool failures, subagent spawns, and risk drift in real time, then blocking dangerous calls before execution based on a live risk score — not just pattern matching.
I've been working on this at P-MATRIX (open source, Apache-2.0). The core idea: a 4-axis trust model that produces a real-time risk score R(t), and a Safety Gate that intercepts tool calls based on that score. Kill switch activates automatically when risk crosses a threshold.
Author here. I helped creating Falco (CNCF runtime security) and built this (Veto) to fix the path-based identity problem we all shipped a decade ago. The dynamic linker bypass in the "where it breaks" section is the part I'm most interested in discussing. It's a class of evasion that no current eval framework measures. Happy to answer questions about the BPF LSM implementation.
On the dynamic linker bypass specifically, have you looked at fapolicyd [1]? It uses fanotify(7) and the top of the README is:
> The restrictive policy was designed with these goals in mind:
> 1. No bypass of security by executing programs via ld.so.
> 2. Anything requesting execution must be trusted.
One correction on the table: SELinux and AppArmor shouldn't be grouped under "rename-resistant: No". AppArmor is path-based. SELinux labels are on the inode, a rename doesn't change the security context. The copy attack doesn't apply either: a process in sandbox_t creating a file in /tmp gets tmp_t via type transition, and the policy does not grant sandbox_t execute permission on tmp_t.
Fair point on SELinux: grouping it with AppArmor was imprecise. Thank you for spotting it. As you mentioned, SELinux labels are on the inode, so a rename does preserve the security context. I'll split the row in the table.
On the copy attack: the `sandbox_t` -> `tmp_t` type transition you describe is a real defense, but it's policy-dependent. It's my understanding that `sandbox_t` is one of the most locked-down SELinux domains, while most interactive users (AI agents included) run as `unconfined_t`, where `tmp_t` files are executable, and the copy attack succeeds. So, whether a copied binary gets an executable type (or not) actually depends on the transition rules in the loaded policy.
Instead, content-addressable enforcement doesn't depend on policy configuration. The hash follows the content regardless of where it lands or what label it gets.
Ah, and thank you for pointing to `fapolicyd`! It's the closest prior art to what we're doing at the exec layer, and its `ld.so` bypass prevention via policy rules addresses the exact dynamic linker evasion I wrote about.
Two architectural differences worth noting, which I guess you are already aware of.
First, `fapolicyd` is a userspace daemon... The kernel blocks until the daemon responds. This works, but the daemon itself becomes a single point of failure, isn't it? If it stalls or is killed, the system either deadlocks or fails open (hence the deadman's switch). Veto keeps hash computation and enforcement inside the BPF LSM hook. The BPF program can't (hopefully lol) crash and requires no context switch for the decision.
Second, `fapolicyd` defaults to an allowlist model: anything requesting execution must be in the trust database. That's a stronger default posture than our current denylist. We're starting with denylist because it's the lower-friction entry point for teams adopting agent security incrementally: you block known things without having to enumerate all good things first. In 2 words: different tradeoffs.
User `walterbell` is right. Padding changes the hash, so the modified binary wouldn't match the denylist. It also wouldn't match anything the system has seen before since it's now an unknown binary... The veto denylist approach is for catching known-bad binaries by identity. If you need to block unknown/modified binaries, you flip the model: allowlist known-good hashes and deny everything else. It's a different threat model, so it requires a different mode.
This article demonstrates exactly why Claude Code's built-in denylist isn't enough. The denylist is a suggestion, not enforcement. Claude can reason around it.
I built CRE (Claude Rule Enforcer) specifically because of this. It's an external enforcement layer, not part of the agent's context window, so it can't be reasoned around or escaped.
Two layers:
- L1: Regex pattern matching. Blocks destructive commands in under 10ms. The agent never even sees the rejection reasoning.
- L2: LLM advisory using a separate model. Checks intent against conversation context for grey areas.
The critical design choice: enforcement happens outside the agent's loop. Claude can't prompt-inject its way past a regex gate that runs before the tool call reaches the shell.
Humans just need to adapt their pattern recognition skills. It's a continuous and changing effort. For some, not detecting it is the sign that they need to update their own systems not that the sign is wrong.
For many it's not worth the effort to even try anymore. Particularly when the content of a submission is about LLMs: why worry?
The adversary can reason now, and our security tools weren't built for that.
Leo di Donato, who helped create Falco, the cloud native runtime security, wrote a technical deep dive into how Claude Code bypassed it's own denylist and sandbox. And introduces Veto, a kernel-level enforcement engine built into the Ona platform.
Didn't leave it out. It was grouped with AppArmor in the table, which was imprecise. I'm splitting the row. SELinux labels are on the inode, so renames preserve the context. Copy resistance is policy-dependent (works for `sandbox_t`, not for `unconfined_t`). See my reply to `botanicalfriend` user above.
I really want more security people to get involved in the LLM space because everyone seems to have just lost their minds.
If you look at this thing through a security lens it’s horrifying, which was a cause of frustration when Anthropic changed their TOS to ban use of alternative clients with a subscription. I don’t want to use that Swiss cheese.
reply