
This is the "There are starving children in Africa, eat your greens" argument.

Are discussions about petri dishes diverting relevant resources away from building safety initiatives?

Can I be allowed to torture small animals so long as human suffering persists?


No this is the stop playing with your dolls argument. It is rock solid. As for torturing animals, you do you I guess.

> consciousness (qualia)

I've never heard the word qualia used as a synonym for consciousness, only as a related but distinct concept.

> an ant has a greater qualia level than us

What? where does this come from?


Why do you quote only the end? The full sentence is: "We cannot discard, for example, that an ant has a greater qualia level than us."

They're saying that since we don't know how to "measure consciousness", we can't be certain that an ant doesn't have more "consciousness" than us. Obviously it seems very unlikely, but we can't be certain.


I don't share that interpretation; maybe they can clarify what was meant themselves?

FWIW, I share the same interpretation as the other commenter.

We don't understand the soul, we don't understand God's will, we don't understand Qi, we don't understand Orgone energy, etc.

As such, how can we build moral incentives around any of these things?

We must understand something about them, and what you seem to 'know' is that sentience is a thing (that exists), and that it arises from the human mind - I don't think this is any more proven than any of the other red-herring counterexample concepts I gave.

Or to summarise/TLDR - Sentience? It doesn't exist; it's a desperate attempt to maintain the human-centric concept of the soul, stripped of religious overtones to appear more legitimate. If you disagree, prove otherwise.


Speaking on the concept of AST storage and VCS in general (not Beagle specifically):

Hopefully this is the path to projectional editing/editors - the underlying code is an AST, but what you see, and edit, is the human-friendly text representation of that AST. Of course, you need a solid transform not just from the text/PL representation to the AST, but also from the AST back to the PL, possibly keeping local metadata relevant to that second part (e.g. where to put whitespace and other formatting not generated by default from the AST). So this might call for new (or modified) programming languages designed for this explicit purpose?
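A toy sketch of that round-trip problem, assuming a hypothetical node type (not from Beagle or any real VCS): each token keeps its "trivia" (whitespace the grammar doesn't need), so rendering the stored AST reproduces the exact text the user wrote.

```python
# Hypothetical illustration: a lossless text -> AST -> text round trip
# for a tiny "sums of numbers" language, keeping formatting as trivia.
import re
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                               # "sum", "num", or "op"
    text: str = ""                          # token text for leaves
    leading: str = ""                       # trivia: whitespace kept verbatim
    children: list = field(default_factory=list)

def parse_sum(src: str) -> Node:
    """Parse e.g. '1 +  2' into an AST, attaching each token's
    leading whitespace to the token node so nothing is lost."""
    root = Node("sum")
    for m in re.finditer(r"(\s*)(\d+|\+)", src):
        kind = "num" if m.group(2).isdigit() else "op"
        root.children.append(Node(kind, text=m.group(2), leading=m.group(1)))
    return root

def render(node: Node) -> str:
    """AST -> text: re-emit the trivia so formatting survives."""
    if node.children:
        return "".join(render(c) for c in node.children)
    return node.leading + node.text

src = "1 +  2 +\n  3"
assert render(parse_sum(src)) == src    # lossless round trip
```

Real projectional editors can instead regenerate text purely from the AST via a formatter, but then user-chosen layout is lost; storing trivia alongside the tree is the usual compromise.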


How does this compare to running the containers in one of the WSL VMs? Can't you do all the same things via the host VM?

You just paste in that YAML? Is this an official llm config format that is parsed out?

> rhetoric as a real answer

Why is it rhetoric? This goes beyond whatever malignant thing was perceived in this study, but why is it a rhetorical non-answer?

> we, deep down, know is bad

this feels like real rhetoric.


> Why is it rhetoric? This goes beyond whatever malignant thing was perceived in this study, but why is it a rhetorical non-answer?

You seem hung-up on my using the word rhetoric. Just so we’re on the same page here:

> rhetoric, n : the art of speaking or writing effectively: b)the study of writing or speaking as a means of communication or persuasion

The business writing class I took in college was called Business Rhetoric. It’s not a bad word.

If you’re crafting arguments to get other people to support specific actions or products or policies or whatever, that is unambiguously rhetoric.

> this feels like real rhetoric.

Sure? Rhetoric that implores people to value their principles over theoretical security concerns or FOMO or greed? I wouldn’t exactly call that rakish.

It’s a non-answer because if you really feel doing something is bad, consider yourself a consequential actor in the world whose contributions meaningfully advance the projects you work on, then why would you want to help someone be there first to do a bad thing? If you don’t feel it’s bad, then there’s no problem. You’re just living your life. That is clearly not the position expressed by the content I responded to. If there are actual concrete concerns that don’t essentially boil down to “well they’re going to make that money before I do,” then that would be an actual answer.


> It’s not a bad word.

When used in the negative sense it is, per https://dictionary.cambridge.org/dictionary/english/rhetoric

"disapproving -> clever language that sounds good but is not sincere or has no real meaning"

Are you implying you mean something other than this sense of the word?


Calling your criticism a stretch would be far too charitable. I made it clear what I meant and I’ve got better things to do than nitpick over semantics.

"Implying" seems kind of weak, the person you're responding to shared the definition they are using.

Yes, after the fact; that is, only after my response did they provide a definition.

> the person you're responding to shared the definition they are using

No, technically they didn't. They provided a definition; they didn't say it was the one they are using here. If it's not a pedantic tangent, it seems correct to assume that is the definition they are using, but that's what "implying" means, so I was trying to explicitly get a clarification on that.

"Why?" you might ask? Not every discussion is in good faith. The more that is assumed, the more leeway you allow for people to weasel out of countered arguments.


Yes. They provided their definition in response to your (mis?)reading of their original words. They are not the party bringing bad faith to this conversation.

Oh? And who is? Provide receipts, please.

Why is that the concern of the authors of this paper?

Why wouldn't it be? They worked on it.

Because it's not in scope.

> not once, ever, in the video speak of ethics

On the contrary, I dislike premature ethics discussion, where you end up wildly speculating about what the tech might become and riffing off that, greatly padding whatever relative technical content you had. I don't want every technical paper to turn into that; ethics should be treated as a higher-level overview of concerns in a field, with a study dedicated to the ethical concerns of that field (by domain-specific ethics specialists).

Is your concern weapons automation, or animal rights?


My concern is creating literal sentience in a box. I don't, personally, think it's unfounded for me to have that concern, given that we're growing masses of human neurons and teaching them to perform tasks.

I'm not going to start campaigning against it or changing my life. But it still makes me deeply uncomfortable, and that's allowed.


> and that's allowed

In what sense, and as opposed to what? Why wouldn't you be allowed to feel irrationally uncomfortable, or baselessly concerned?


You mean one of the wider ones? Look a little like cyberdecks.
