Isn't the whole point to understand? If the task is to write and you expect only the final result, but then question whether it looks legitimate enough, how is that a fair judgement? People can deliver partial results and show progress too; you won't see that in some comments on the internet, but if something is expected to take many days, it's easy to show the different stages of the work. It's easy to accuse people of plagiarism or of not thinking for themselves, and of course there are indicators when someone uses AI, but the problem is that there is no reliable way to distinguish whether something was created by AI or not.
Like, there is this computer game whose authors used AI-generated models or something like that, but only during prototyping; later they were replaced with proper models. No one would have known about it if the authors hadn't said so. So, if someone rewrites in their own words what AI generated for them, is it still an argument made by a human or by AI? What if someone uses AI output only as a placeholder and replaces all of that content, so you never actually see any AI usage, even though it was used in the process?
For me, the premise that using AI in any form invalidates your work starts with a logical fallacy, so such arguments against using AI are weak. It's like saying your work must be wrong because you used a calculator: your calculations can't be right if a machine did them, because the machine must have made a mistake, or it's wrong for ethical reasons, or whatever.
Work generated by AI can easily be poor, because these models make mistakes and tend to repeat themselves in certain ways. But is it wrong that I'm writing this comment with a keyboard instead of writing letters with a pen? Is it wrong when I use an IDE or some CLI to write code with AI, instead of using vim and typing everything on my own? Is it wrong that someone uses a spell-checker?
In the end it doesn't matter who seems smarter when you're expected to use AI at work. Reality shows you the actual expectations.
I am not saying that completely disallowing AI is the right decision. But if you see text that is clearly generated by AI and does not make any sense, it sure would be nice if you could just tell the students to actually read their sources instead of having to argue with them about why they should. Similarly, I can see why HN moderators do not want to argue with the hundreds of spam posters per day on /newest.
Anyway, my university did not ban AI, and now most students have degraded to proxies between teaching assistants and ChatGPT.
On the other hand, you can make a good but controversial argument, and if you use AI in any way it might be rejected by a moderator just because some places have strict rules on AI. In some cases it might be rejected even if no AI was involved, if any fragment of your text looks like it wasn't written by a human, or if they simply don't like your text.
At a certain point it's no longer about AI specifically, but about power and showing who makes the decisions.
I agree that there might be some threshold for obvious spam, but if you're making an argument in good faith and you don't claim authority on the matter, there will always be people who think differently or disagree with you, because they have a different interpretation or want better sources and more evidence. That's actually typical, because different people use different perspectives, different assumptions, different tools. I don't believe rules should be used to silence people with different opinions, and that's the biggest risk I see, because the penalty for not following rules that are hard to measure correctly creates a power imbalance.
At some point it becomes dogma, not fair debate, and not everyone likes to stick to dogma. It's hard to do creative or innovative work if your work has to meet strict but subjective, possibly incomplete criteria just to be considered valid work at all.
I don't really see the issue, as long as there's human thought behind whatever anyone posts. It's frustrating to argue with someone who lazily uses AI, but if the argument is fair, then I don't care whether it was written by AI or a human; what difference does it make? It's frustrating if someone is incoherent and makes a dumb argument, but again, I don't care whether it's a dumb argument from a human or a machine.
To me it sounds like yet another form of gatekeeping: either you sound human or you're not good enough to post or comment. Like, really? How isn't that the genetic fallacy? It doesn't matter what someone thinks, because they used AI to make their thought clearer, so their whole argument is trash? As if it has to hurt to read and write if your English isn't perfect, and your work is seen as inferior based on superficial factors like proper grammar and style?
It's a dumb crusade. I did not use AI to write this comment, but I hate when people try to monopolize the truth and decide who is "better, smarter" based on irrelevant facts. Not using AI doesn't make anyone superior. Using AI also doesn't make you superior. Focus on what you mean, because that's what matters.
> so I would encourage you to create the content or communities you want to see.
There are hidden reasons behind centralized solutions that make decentralized solutions unpopular. Anyone suggesting "just go out and make it better" is missing the point. That's like saying: "don't participate in society, just start your own." In theory it makes sense; in practice it's just ignorance, a lack of awareness of how difficult and complex a task it is.
Centralized solutions are often just a business: they're not transparent, not cooperative, not ethical; they're there to conquer a market, there's big money behind them, and they're part of surveillance capitalism.
These are just examples, and there's a lot more. In the context of social media, it's intertwined with the rest of the simulation called the "real world": almost no one will know what you're talking about when you mention Loops, Mastodon, or Bluesky. People know the dominant platforms and stick to them, partly out of social pressure and because they compete for status. In this society, you won't gain status by using Loops. People buy an iPhone for status, even though something like a Samsung or a Fairphone would serve just as well. People buy luxury frames for their glasses because they want to show off the brand; they don't care that it's more expensive while the quality is basically the same.
I don't know why you're dragging centralization/decentralization, business models, Fairphone, etc. into the conversation now. It sort of feels like you're overthinking this.
I often talk to people about Signal irl; most folks download the app, but some don't. Some people actually want a For You feed and will bounce off Loops, Mastodon, or whatever. That's all fine. These spaces can have content about cars or guns or whatever else without eating the entire world.
You said:
> I want loops/mastodon to be a diverse place that has content from all over the internet.
Again, I think lots of people who are already in the Fediverse want that. But if everybody who likes cars decides they won't join until somebody who shares car content does, that car community may never show up.
You seem to have interests that you feel are underserved. Just... regularly share things about what you think is cool. Just do it for funsies.
If you really feel strongly about wanting to make a diverse space, cross-promote your stuff in spaces with other people who share the same interests. Share a post, share a video, ask them to follow you. Maybe even start an instance dedicated to the topic, if that's your vision.
I'm not overthinking it. If you were right and everything were that simple, the popularity statistics for decentralized platforms would look different. That's empirical evidence. It's science: you can read what Adorno, Horkheimer, Marcuse, Shoshana Zuboff, or Tristan Harris wrote about media and notice that right now it's all a global Skinner-box experiment, or you can ignore all that and go with "vibes" on why decentralized platforms fail to gain real popularity.
One approach is based on science; the other on unfounded feelings. Some people will use these decentralized platforms, but that's not the point. My point is that it's not as simple as telling the people around you "just use this." There are systemic reasons why most people don't use them, and serious analysis starts once you get that. Without it, it's just wishful thinking. Sure, you'll get something on these platforms, but as one commenter here mentioned, he tried Loops for a while and found it mostly trash, while a better community will never appear there.
To get real traction and a user base on such decentralized platforms, we would first need to change the way society functions. That's why it's an impossibly hard challenge. Without those foundations, such projects are doomed to fail; they just can't compete with mainstream, centralized platforms.
It doesn't feel like you can stay on topic here. I'm not trying to discuss the general viability of these platforms versus centralized ones, or other social networks. Your complaint was:
> the people on it are just so far away from what me (and men my age) deem interesting and seem to be hostile to anything that doesnt fit their very restrictive ideals.
Okay. If you don't want to participate, don't. But if your other comment about wanting to see a more diverse audience join was honest, then do. Either way.
There's traction. There's a user base. There are people enjoying and getting use out of it. There are plenty of communities and relationships that will go on just fine regardless of what you decide.
You seem to be frustrated about something, maybe that the fediverse isn't matching Facebook in size? It won't. It probably can't, since the commercial incentives aren't there. But at no point does that invalidate what exists.
Ostrom provided a solution to the tragedy of the commons: self-organizing, collective governance. It is no coincidence that Agile converged on the exact same principles. For decades we were told that the only options are the state or the market; Ostrom proved that false.
But we don't live in an evidence-based world; we live in one shaped by power dynamics. We have the blueprint for collective prosperity, but we choose extraction. In the US this has gone so far that Christianity has been twisted into a prosperity gospel, a heresy that serves as a moral shield for raw capitalism. It lets the system pretend that business interests are actually virtues.
The world is a mess because we ignore the mechanics of the systems we build. Be it capitalism, feudalism, or authoritarian communism, they all fail the same way: they lead to elite overproduction (Turchin).
When you funnel all resources to the very top, you create too many aspiring elites with no productive role to play. They inevitably turn on the system and on each other. These systems are mathematically destined to collapse. Ostrom's polycentric governance is one of the few ways out.
Christianity had plenty of problems before capitalism became a thing. IMO both need to be heavily regulated, certainly not given special privileges or a blank check to consolidate power.
FWIW, the employees in question are at least in the 90th percentile of US salaries, if not the 99th (L5 is ~$250K, L6 is ~$399K, and L7 is north of $500K).
In regards to fairness, many times these cuts are based on what group you are in rather than on performance. You wonder, hypothetically, whether the L5s and above would all agree to a 20% pay cut in exchange for no layoffs. It's strange that one person keeps the job paying $500K, while the other, unlucky one will have trouble getting a new $150K job due to the terrible job market.
Isn't it expected that most, if not all, content will be produced by AI/AGI in the near future? It won't matter much whether you're lazy or not. That leads to the question: what will we do instead? People may want to be productive, but we're observing in real time how the world is going to shit for workers, and that's basically a fact for many reasons.
One reason is that it's cheaper to use AI, even if the result is poor. It doesn't have to be high quality, because most of the time we don't care about quality unless something interests us. I wonder what kind of shift in power dynamics will occur, but so far it looks like many of us will simply lose our jobs. There's no UBI (or the social credit proposed by Douglas), salaries are low, and not everyone lives in a good location, yet corporations try to enforce RTO. Some will simply get fired and won't be able to find a new job, which won't be sustainable for a personal budget unless someone already has a low cost of living and is debt-free, or has a somewhat wealthy family that will cover for them.
Well, maybe at least the government will protect us? Low chance: the world is shifting right, and it will get worse once we start to experience more and more of the effects of global warming. I don't see a scenario where the world becomes a better place in the foreseeable future. We're trapped in an achievement society, but soon we may not be able to deliver achievements, because if business can get similar results for a fraction of the price needed to hire human workers, then guess what will happen?
These are sad times, full of depression and suffering. I hope that some huge transformation of societies happens soon, or that AI development slows down so that some future generation has to deal with the consequences (people will prioritize saving their own, and it won't be pretty, so it's better to just pass it down like debt).
Or, you know, there's this crazy idea: every country or city could just invest in affordable housing and make it so that you can actually get a place to live, even if you're in poverty.
In practice it's just reasonable, and it works in places like Vienna. It doesn't have to be luxurious housing, but it should be relatively decent: cheap to maintain, safe to live in, with enough space to raise children.
Why can't we regulate and subsidize such fundamental things? Everyone needs a place to live, some food, some water, some way to get around, and some basic services and utilities.
What's the reason that some of the wealthiest countries in the world can't provide even the bare minimum for most people? It's only the greed of the wealthy. It's too expensive for them to treat us all as humans, with some empathy. They prefer to keep things as they are, so that about 2,000 people control about 90% of all capital.
> don't bring any additional benefit to just writing code in your normal programming language to do the same thing.
In some cases the advantage is that you don't create new code; you just use a relatively standard tool. You fetch a public package that handles the various edge cases and prepare a short script that describes what you want the program to do. This is useful if you work in a containerized environment where configuration exists as JSON or YAML. Often I just use jq or yq instead of reinventing the wheel to read or write a few values.
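As a small sketch of what I mean (the file name and keys are made up for illustration), jq can read and update a JSON config without any custom parsing code:

```shell
# Write a tiny example config (stand-in for a real containerized app's config)
cat > /tmp/app-config.json <<'EOF'
{"replicas": 3, "image": "app:v1"}
EOF

# Read a single value
jq -r '.replicas' /tmp/app-config.json        # prints 3

# Update a value; jq handles quoting, escaping, and structure for you
jq '.replicas = 5' /tmp/app-config.json > /tmp/app-config.new.json
jq -r '.replicas' /tmp/app-config.new.json    # prints 5
```

yq works the same way for YAML; the point is that the edge cases live in the tool, not in your script.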
Is the author saying that NATS is less reliable if you want guaranteed ordering with multiple worker instances, because NATS partitions lack the "auto-balance" that Kafka has?
I would assume that NATS can be deployed in a Kubernetes cluster, where it would not be uncommon to have 3 or 5 workers. In that case, what would happen if someone wanted guaranteed ordering and used deterministic subject token partitioning?
Let's assume some worker crashes due to lack of storage space and keeps restarting: what happens to the messages on its partition? Can they be processed if that specific worker is unavailable? Is it possible to react to this event and manually reassign those partitions to other workers? If there's no such event, maybe it's possible to write a script and run a CronJob that checks whether a rebalance is needed?
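For context, deterministic subject token partitioning in NATS is configured via server-side subject mapping; a minimal sketch (the `orders` subject names are made up) that spreads messages across 3 partitions based on the first token after `orders.`:

```
mappings: {
  "orders.*": "orders.{{partition(3,1)}}.{{wildcard(1)}}"
}
```

Each worker then subscribes to one partition subject (e.g. `orders.0.>`), which is exactly why the question above matters: if the worker owning `orders.1.>` is crash-looping, nothing in the server rebalances that partition to the remaining workers on its own.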
I don't really care about jobs as such; what actually scares me is power/wealth inequality combined with high unemployment and low social mobility.
I used to think that by becoming a software engineer I would have a good life. Now I'm no longer sure whether I'll still have anything in a decade or two. What about debt? What about opportunities for younger people? What about poorer countries?