Hacker News | just_once's comments

It wasn't even that long ago that Trump fired the BLS Commissioner and nominated someone who would "restore GREATNESS" to the BLS.

Putting aside the slop facade placed atop the data... why would we trust the data?


This is turning into just another reality show. There are no adults anymore.


Proud of myself for recognizing this was a bot without having to inspect further than this comment!


What does that look like? Can you describe your worst case scenario?


Highly selective enforcement along partisan lines to suppress dissent. Government officials forcing you to prove that your post is not AI generated if they don't like it. Those same officials claiming that it is AI generated regardless of the facts on the ground to have it removed and you arrested.


If you assume the use of law will be that capricious in general, then any law at all would be considered too dangerous for fear of use as a partisan tool.

Why accuse your enemies of using AI-generated content in posts? Just call them domestic terrorists for violently misleading the public via the content of their posts and send the FBI or DHS after them. A new law or lack thereof changes nothing.


Worst case? Armed officers entering your home without warrant, taking away your GPU card?


They can do that anyway. What does that have to do with the content of the proposed law?


Most of the examples that you've used gain very little from added specificity. It's essentially linguistic laziness. That linguistic laziness is not equally consequential in all contexts.


Would you feel better if people said "people of African descent are much more likely to have the genetic disease sickle cell anemia"?


Name names, George. It's the only way.


I don't know if there's a word for this but this reads to me as like, software virtue signaling or software patronizing. It's bizarre to me to tell an engineer what their job is as a matter of fact and to claim a particular usage of a tool as mandated (a tool that no one really asked for, mind you), leveraging duty of all things.

I guess to me, it's either the case that LLMs are just another tool, in which case the already existing teachings of best practice should cover them (and therefore the tone and some content of this article is unnecessary) or they're something totally new, in which case maybe some of the already existing teachings apply, but maybe not because it's so different that the old incentives can't reasonably take hold. Maybe we should focus a little bit more attention on that.

The article mentions rudeness, shifting burdens, wasting people's time, dereliction. Really loaded stuff and not a framing that I find necessary. The average person is just trying to get by, not topple a social contract. For that, look upwards.


I've really seen both I suppose. A lot of devs don't take accountability / responsibility for their code, especially if they haven't done anything that actually got shipped and used, or in general haven't done much responsible adulting.


No doubt. Interesting to think about why that is without assuming it's a character flaw.


LLMs are just another tool, but they're disruptive enough that existing best practices need to be either updated or re-explained.

A lot of people using LLMs seem not to have understood that you can't expect them to write code that works without testing it first!

If that wasn't clearly a problem I wouldn't have felt the need to write this.


Yep, it's a real problem. No dispute there.

My intention isn't to argue a point, just to share my perspective when I read it.

I read your response here as saying something like "I noticed that people misunderstand X, so I wanted to inform them". In this case "X" isn't itself very obvious to me (for any given task, why can't you expect that a cutting-edge LLM would be able to write it without requiring your testing?), but most importantly, I don't think I would approach a pure misunderstanding (tantamount to a skills gap) with your particular framing. Again, to me it reads as patronizing.

Love the pelican on the bicycle, though. I think that's been a great addition to the zeitgeist.


Hacker News is not representative of the average person in often very bad ways.


If AI isn't better than humans then there's no point.


If the target is superintelligence, then AI shouldn't be learning from humans.

