Large-scale AI deployment has completely changed what code signals and what it means for maintainers. Code is no longer a yardstick of effort, care, or expertise. If anything, a large volume of it can signal the opposite.
I read an article a while ago about how “taste is a moat” (https://wangcong.org/2026-01-13-personal-taste-is-the-moat.h...) and it applies here. In that article, a technically correct kernel patch was rejected because it merely re-implemented functionality that was already available elsewhere. In the tldraw repo, users seem to clone the repo, spin up claude, and then open a PR without any “taste” involved.
What confuses me is that tldraw is actually very well suited to getting the best out of models, and indeed, internally at tldraw, models are expected to be used and the author gets real value from them. And yet people still leave sloppy, unvetted PRs. This is a social problem we didn’t really have before, because producing code used to be the difficult part. Now that producing code and PRs is easy, the signal-to-noise ratio has collapsed, and it’s simply not worth maintainers’ time to review this stuff.
It would be better for people to file one-line issues with video demonstrations and let the internal team /fix them: “In a world of AI coding assistants, is code from external contributors actually valuable at all? If writing the code is the easy part, why would I want someone else to write it?” Is code really needed to convey problems in open-source repos, or is it something unnecessary that we are now unshackled from? In the case of tldraw, many PRs are just the result of people running claude on issues, and so they add absolutely zero value.