What are we supposed to talk about in this thread exactly? The developers of this model are evil. Are we supposed to just write dry comments about benchmarks while OpenAI condones their models being deployed for autonomously killing people?
Yes I'm sure it makes a very nice bicycle SVG. I will be sure to ask the OpenAI killbots for a copy when they arrive at my house.
If it is actually that important, then maybe more effort should be made so it isn't "low quality." It can't be very important to them if they're uninterested in presenting an intellectually compelling argument about it.
PS - If you think I am not sympathetic to what they're raising, you're very much mistaken. But they're not winning anyone new over to their side with this flamebait.
Sometimes you throw a brick through a window, not because it's an intellectual thing to do, but because of the hundred people who'll maybe smash the next hundred windows after you do yours.
and then, because any supportive response to all that window smashing is informative as collective intelligence...
and then, because that all validates that the order all these clever rules were upholding is illegitimate.
It's how a very stupid thing stands in for a million smart and well-understood things that everyone is also trying to say.
You are describing a problem that every AI company has, not one unique to OpenAI. What about other nation-states building autonomous AI robots that kill children? Will you still single out OpenAI specifically? Maybe your concern is too late, and dozens of countries are already training their own AIs to do that or worse.
first real comment: I thought that at first, but this could shrink the pool of potential chatGPT users, and that would be against us (shareholders)
"that lead to a chat bot being used to identify a school for girls as a valid target"
Has it been stated authoritatively somewhere that this was an AI-driven mistake?
There are myriad ways that mistake could have been made that don't require AI. These kinds of mistakes were certainly made by all kinds of combatants in the pre-AI era.
Targeting and accuracy mistakes happen plenty in wars that aren't assisted by AI. I don't think it's fair to assume that AI had a hand in the bombing of the school without evidence.
What attitude exactly are you talking about? The one that says that if you’re going to morally sell out it would be better if you at least tried not to kill children?