Hacker News
new
|
past
|
comments
|
ask
|
show
|
jobs
|
submit
login
WickyNilliams
1 day ago
|
parent
|
context
|
favorite
| on:
A GitHub Issue Title Compromised 4k Developer Mach...
No such mitigation exists for LLMs because they do not and (as far as anybody knows) cannot distinguish input from data. It's all one big blob
help
oofbey
1 day ago
[–]
Not true. The system prompt is clearly different and special. They are definitely trained to differentiate it.
reply
WickyNilliams
1 day ago
|
parent
|
next
[–]
Trained != Guaranteed. It's best effort
reply
PunchyHamster
1 day ago
|
parent
|
prev
[–]
....and there are plenty of attacks to circumvent it
reply
Guidelines
|
FAQ
|
Lists
|
API
|
Security
|
Legal
|
Apply to YC
|
Contact
Search: