If I'm understanding the issue correctly, an action with read-only repo access shouldn't really be able to write 10GB of cache data, poisoning the cache and ultimately running arbitrary code in other, less-restricted actions.
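The mechanics, as I understand them: the `permissions:` block in a workflow only scopes the GITHUB_TOKEN, while `actions/cache` writes go through a separate runtime token that `permissions:` doesn't cover, and caches written from the default-branch context (which `issue_comment`-triggered runs execute in) are readable by every other workflow in the repo. A rough sketch of the vulnerable shape, with made-up names for the workflow, cache key, and triage script:

```yaml
# Hypothetical workflow illustrating the mismatch. The `permissions:`
# block below restricts the GITHUB_TOKEN, but it does NOT gate cache
# writes, which use a separate runtime token.
name: triage-with-llm
on:
  issue_comment:            # runs in the default-branch context
    types: [created]

permissions:
  contents: read            # "read-only" repo access
  issues: write

jobs:
  triage:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/cache@v4     # cache writes are not covered by `permissions:`
        with:
          path: ~/.npm
          key: npm-${{ hashFiles('package-lock.json') }}
      # If an injected prompt gets code execution here, it can overwrite
      # cache entries; because this runs on the default branch, other
      # workflows will restore the poisoned cache.
      - run: npm ci && ./run-llm-triage.sh   # hypothetical triage script
```

So `contents: read` never constrained the cache in the first place; the only real limit on a write from this job is the repo's 10GB cache quota.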
The LLM prompt injection was an entry point for running the code they needed, but it was still within an untrusted context where the authors had foreseen that people would be able to run arbitrary code ("This ensures that even if a malicious user attempts prompt injection via issue content, Claude cannot modify repository code, create branches, or open PRs.")
It can have better defaults, but that's about it. If the LLM tells the user it needs more permissions, the user will just grant them; the people affected by bugs like this are exactly the ones who have traded their autonomy and intelligence away to the AI.
If you execute arbitrary instructions, whether via an LLM or otherwise, that's a you problem.