Highly selective enforcement along partisan lines to suppress dissent. Government officials forcing you to prove that your post is not AI-generated if they don't like it. Those same officials claiming that it is AI-generated, regardless of the facts on the ground, so they can have it removed and have you arrested.
If you assume the use of the law will be that capricious in general, then any law at all would be considered too dangerous, for fear of its use as a partisan tool.
Why accuse your enemies of using AI-generated content in posts? Just call them domestic terrorists for violently misleading the public via the content of their posts and send the FBI or DHS after them. A new law or lack thereof changes nothing.
Most of the examples you've used gain very little from added specificity. It's essentially linguistic laziness, and that laziness is not equally consequential in every context.
I don't know if there's a word for this, but it reads to me as something like software virtue signaling, or software patronizing. It's bizarre to tell an engineer what their job is as a matter of fact, and to claim a particular usage of a tool is mandated (a tool that no one really asked for, mind you), invoking duty of all things.
I guess to me, it's either the case that LLMs are just another tool, in which case the existing teachings of best practice should cover them (and the tone and some of the content of this article are unnecessary), or they're something totally new, in which case maybe some of the existing teachings apply, but maybe not, because it's so different that the old incentives can't reasonably take hold. Maybe we should focus a little more attention on that.
The article mentions rudeness, shifting burdens, wasting people's time, dereliction. Really loaded stuff and not a framing that I find necessary. The average person is just trying to get by, not topple a social contract. For that, look upwards.
I've really seen both, I suppose. A lot of devs don't take accountability or responsibility for their code, especially if they haven't done anything that actually got shipped and used, or in general haven't done much responsible adulting.
My intention isn't to argue a point, just to share my perspective when I read it.
I read your response here as saying something like "I noticed that people are mistaken about X, so I wanted to inform them." In this case, "X" isn't itself very obvious to me (for any given task, why can't you expect a cutting-edge LLM to write it without your having to test the result?), but most importantly, I don't think I would approach a pure misunderstanding (tantamount to a skills gap) with your particular framing. Again, to me it reads as patronizing.
Love the pelican on the bicycle, though. I think that's been a great addition to the zeitgeist.
Putting aside the slop facade placed atop the data... why would we trust the data?