
Looks more like context drift than “personality.”

When two agents coordinate, they’re mostly relying on compressed summaries of each other’s outputs. If one introduces a wrong assumption, the other often treats it as ground truth and builds on top of it. I’ve seen similar behavior in multi-agent coding loops where the model invents a causal explanation just to reconcile inconsistent state.

My takeaway is that multi-agent setups need a stronger shared source of truth (repo diffs, state snapshots, etc.). Otherwise small context errors snowball fast.
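
Rough sketch of what I mean, with hypothetical names and assuming the repo's files are the shared state (Python, stdlib only):

    import hashlib
    import pathlib

    def snapshot(repo_root):
        # Ground truth: hash every tracked file so a receiving agent can detect drift.
        return {
            str(p): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in pathlib.Path(repo_root).rglob("*.py")
        }

    def handoff(summary, repo_root):
        # Attach a state snapshot to the message instead of passing the summary alone.
        return {"summary": summary, "state": snapshot(repo_root)}

    def accept(msg, repo_root):
        # Receiving agent re-checks the snapshot before building on the other agent's summary.
        current = snapshot(repo_root)
        stale = [f for f, h in msg["state"].items() if current.get(f) != h]
        if stale:
            raise ValueError(f"context drift detected in: {stale}")
        return msg["summary"]

Even something this crude forces the second agent to fail loudly instead of inventing a story to reconcile a state it never actually observed.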
