
Why should GitHub do anything?

If you execute arbitrary instructions whether via LLM or otherwise, that's a you problem.




If I'm understanding the issue correctly, an action with read-only repo access shouldn't really be able to write 10GB of cache data to poison the cache and run arbitrary code in other less-restricted actions.
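To make the gap concrete, here's a hypothetical sketch (workflow names, keys, and paths are illustrative, not taken from the actual exploit). GitHub's cache service is keyed rather than permission-gated, so a run holding only a read-scoped GITHUB_TOKEN can still save cache entries that a later, more privileged workflow restores and executes:

```yaml
# Privileged workflow (push to main, write permissions). It restores a
# cache entry by key and runs a script from it -- if a lower-privileged
# run already saved a poisoned entry under the same key, that code now
# executes with this workflow's permissions.
name: release
on:
  push:
    branches: [main]
permissions:
  contents: write          # the broad permissions an attacker wants to reach
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/cache@v4
        with:
          path: ./build-tools               # attacker-controlled if poisoned
          key: build-tools-${{ runner.os }} # predictable key, easy to squat
      - run: ./build-tools/setup.sh         # runs whatever the cache contains
```

Cache entries written in a default-branch context are readable from every other branch's workflows, which is what makes poisoning from a low-privilege, issue-triggered run valuable.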

The LLM prompt injection was an entry point to run the code they needed, but it was still within an untrusted context where the authors had foreseen that people would be able to run arbitrary code ("This ensures that even if a malicious user attempts prompt injection via issue content, Claude cannot modify repository code, create branches, or open PRs.")
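The quoted guarantee presumably rests on a read-only permissions block of this kind (illustrative; the actual workflow's configuration isn't shown in the thread):

```yaml
permissions:
  contents: read       # can fetch code, cannot push or create branches
  issues: read
  pull-requests: read  # cannot open PRs
# Note: there is no cache scope among these settings -- the cache
# service is writable by any run regardless of them, which is the
# gap being discussed.
```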


I'm just wondering if there's a possible way to prevent this that wouldn't be intrusive or break existing features.

It can have better defaults, but that's about it. If the LLM tells the user it needs more permissions, the user will just grant them; people who are affected by bugs like this have already traded their autonomy and judgment to the AI.


