Why do you quote only the end? The full sentence is: "We cannot discard, for example, than an ant has a greater qualia level than"
They're saying that since we don't know how to "measure consciousness", we can't be certain that an ant doesn't have more "consciousness" than us. Obviously it seems very unlikely, but we can't be certain.
We don't understand the soul, we don't understand gods-will, we don't understand Qi, we don't understand Orgone energy etc.
As such, how can we build moral incentives around any of these things?
We must understand something about them. What you seem to 'know' is that sentience is a thing (that exists) and that it arises from the human mind. I don't think this is any more proven than any of the other red-herring counterexample concepts I gave.
Or to summarise (TL;DR): Sentience? It doesn't exist; it's a desperate attempt to maintain the human-centric concept of the soul, stripped of religious overtones to appear more legitimate. If you disagree, prove otherwise.
Speaking on the concept of AST storage and VCS in general (not Beagle specifically):
Hopefully this is the path to projectional editing/editors: the underlying code is an AST, but what you see, and edit, is a human-friendly text representation of that AST. Of course, you need a solid transform not just from the text/PL representation to the AST, but also from the AST back to the PL, possibly keeping local metadata relevant to that second direction (e.g. where to put whitespace and other formatting that isn't generated from the AST by default). This might call for new (or modified) programming languages designed for this explicit purpose.
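To make the "local metadata" idea concrete, here's a minimal sketch (hypothetical, not any real tool's design) of a lossless AST for toy arithmetic expressions: each token carries the whitespace that preceded it as "trivia", so text → AST → text reproduces the author's formatting exactly, while edits can still operate on the tree.

```python
import re
from dataclasses import dataclass

@dataclass
class Token:
    text: str
    trivia: str = ""  # whitespace that appeared before this token in the source

@dataclass
class Num:
    tok: Token
    def unparse(self) -> str:
        return self.tok.trivia + self.tok.text

@dataclass
class BinOp:
    left: object
    op: Token
    right: object
    def unparse(self) -> str:
        # Re-emit each token with its original leading trivia
        return self.left.unparse() + self.op.trivia + self.op.text + self.right.unparse()

def tokenize(src: str) -> list[Token]:
    tokens, trivia = [], ""
    for m in re.finditer(r"\s+|\d+|[+*]", src):
        piece = m.group()
        if piece.isspace():
            trivia += piece          # accumulate formatting, don't discard it
        else:
            tokens.append(Token(piece, trivia))
            trivia = ""
    return tokens

def parse(tokens: list[Token]):
    # Left-associative parse of num (op num)*; precedence is ignored for brevity
    it = iter(tokens)
    node = Num(next(it))
    for op in it:
        node = BinOp(node, op, Num(next(it)))
    return node

src = "1 +  2 * 3"
ast = parse(tokenize(src))
assert ast.unparse() == src  # the author's spacing survives the round trip
```

Real compilers use the same trick at scale (Roslyn calls this "syntax trivia"); the point is that the AST-to-text direction only works as an editing surface if formatting is stored somewhere, either in the tree itself or alongside it.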
> Why is it rhetoric? This goes beyond whatever malignant thing was perceived in this study, but why is it a rhetorical non-answer?
You seem hung-up on my using the word rhetoric. Just so we’re on the same page here:
> rhetoric, n: the art of speaking or writing effectively: b) the study of writing or speaking as a means of communication or persuasion
The business writing class I took in college was called Business Rhetoric. It’s not a bad word.
If you’re crafting arguments to get other people to support specific actions or products or policies or whatever, that is unambiguously rhetoric.
> this feels like real rhetoric.
Sure? Rhetoric that implores people to value their principles over theoretical security concerns or FOMO or greed? I wouldn’t exactly call that rakish.
It’s a non-answer because if you really feel doing something is bad, and you consider yourself a consequential actor in the world whose contributions meaningfully advance the projects you work on, then why would you want to help someone be the first to do that bad thing? If you don’t feel it’s bad, then there’s no problem; you’re just living your life. That is clearly not the position expressed by the content I responded to. If there are actual concrete concerns that don’t essentially boil down to “well, they’re going to make that money before I do,” then that would be an actual answer.
Calling your criticism a stretch would be far too charitable. I made it clear what I meant and I’ve got better things to do than nitpick over semantics.
Yes, after the fact; that is, they provided a definition after my response.
> the person you're responding to shared the definition they are using
No, technically they didn't. They provided a definition; they didn't say it was the one they are using here. Pedantic tangents aside, it seems correct to assume that is the definition they are using, but that's what "implying" means, so I was trying to explicitly get clarification on that.
"Why?" you might ask. Not every discussion is in good faith. The more that is assumed, the more leeway you allow for people to weasel out of countered arguments.
Yes. They provided their definition in response to your (mis?)reading of their original words. They are not the party bringing bad faith to this conversation.
On the contrary, I dislike premature ethics discussion, where you end up wildly speculating about what the tech might become and riffing off that, greatly padding whatever technical content you had. I don't want every technical paper to turn into that; ethics should be treated as a higher-level overview of concerns in a field, with dedicated studies of the ethical concerns of that field (by domain-specific ethics specialists).
Is your concern weaponized automatons, or animal rights?
My concern is creating literal sentience in a box. I don't, personally, think it's unfounded for me to have that concern, given that we're growing masses of human neurons and teaching them to perform tasks.
I'm not going to start campaigning against it or changing my life. But it still makes me deeply uncomfortable, and that's allowed.
Are discussions about petri dishes diverting relevant resources away from building safety initiatives?
Can I be allowed to torture small animals so long as human suffering persists?