They're trying for the vertical integration monopoly.
When it works, it works well for the company, at great cost to society.
Imagine the world we'd have if comcast got to control your web browsing experience.
If ISPs got started today, they'd sell the open web at API prices that no one can afford. Then sell the ISP's lock-in 'internet' for a low monthly fee.
My question is why people who don't want comcast's internet think other vertically integrated lock-in is fine.
Our markets game only works for the benefit of society if we have fair markets.
The VC-backed model of loss-leader dumping to starve competition breaks the game.
It's weird that most people in these comments are speculating fraud.
Why aren't companies with real money to gain from stars gaming the system to the same degree? Why do the other metrics - issues and pull requests - match up with its popularity? Why would the bots starring the repo mean that those same bots are not popular? Those bots are controlled by their users.
The project is extremely active because this is what everyone being able to customize their computing looks like. A mess.
But it's a good mess.
GitHub embodies the old code-sharing model, which clearly wasn't designed for this. I'm sure a new model for code sharing will emerge to fix the growing pains.
A ton of people who would have never been able to customize their computing experience are finally able to. And it is magical for them.
This means that those same people will finally value having access to source and use of open protocols.
It was always valuable to us because we had the power to make it matter. It never mattered to them because they did not. Now they do.
The last era of computing was defined by dumbing down computing for the masses. Less information, less customizable, and more metric driven. Control in the hands of the companies.
This new era will look more free/libre, more personal, and less enshittified. Control in the hands of the users.
I think the implication here is that if we can’t find evidence of or motivation for companies paying to inflate their star counts, then we should cool it on the accusation that magically fraud has appeared in this case.
We also should remember that if this project had zero stars, we would hear crowing from these same people about how true and important that metric was. The idea that “open claw” paid for these stars somehow is mostly just reasoning backwards from the idea that no one would find this project interesting.
To support their favorite project without having to do anything for it except write a chat message? I'm assuming that OpenClaw can create its own GitHub account and give stars without a lot of human work.
So about 150 thousand people starred OpenClaw, then asked their bot to sign up for an account to star it again? I'm not trying to be obtuse, I'm just trying to get a sense of what we're talking about. Because if it is 1 person botting 300,000 stars (or 4x75k, etc), that costs real money. There needs to be a motive for that to be believable. If it is 150kx2, then that's a much wider (though still pretty unmotivated) phenomenon that someone would have blabbed about.
There are a bunch of open source projects that I want to see take off; I've never felt the urge to star one twice. I doubt that has to do with it being easier to say "Go star this project on github for me, with your own account" than it is to make a new account on github (which is not hard). I don't think that comes from any great moral fortitude, it's just...IMO hard for me to explain without an actual motive.
What's being alleged in this thread is widespread fraud via botting with no evidence of means or motive. As someone pointed out above, the argument for Facebook buying react stars is WAY stronger...and it is still really flimsy.
These sorts of tools will only be able to positively identify a subset of genAI content. But I suspect that people will use it to 'prove' something is not genAI.
In a sense, the identifier company can be an arbiter of the truth. Powerful.
Training people on a half-solution like this might do more harm than good.
It will just be an arms race if we try to prove "not genAI." Detectors will improve, genAI will improve without marking (opensource and state actors will have unmarked genAI even if we mandate it).
Marking what's real, from the lens all the way through its digital life, is more practical. But then what do we do with all the existing hardware that doesn't mark anything, and with media that preexisted this problem?
I agree. A mechanism to voluntarily attach certificate-backed metadata about the media, recorded by the device, seems like a better idea. That can still be spoofed, though.
In the end, society has always existed on human chains of trust. Community. As long as there are human societies, we need human reputation.
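The voluntary signed-metadata idea above can be sketched in a few lines. This is a toy, not a real provenance scheme: production systems (e.g. C2PA) use X.509 certificates and asymmetric signatures, whereas this sketch uses a shared HMAC key purely to show the shape of sign-then-verify. The key and device ID are made up.

```python
import hashlib
import hmac
import json

# Hypothetical device key. A real scheme would embed a per-device
# private key and sign with it, not share a symmetric secret.
DEVICE_KEY = b"example-device-secret"

def attach_provenance(media_bytes, device_id):
    """Build a signed metadata record for a captured media file."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    record = {"device_id": device_id, "sha256": digest}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload, "sha256").hexdigest()
    return record

def verify_provenance(media_bytes, record):
    """Check the signature, then check the media matches the digest."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, "sha256").hexdigest()
    if not hmac.compare_digest(expected, record["signature"]):
        return False
    return hashlib.sha256(media_bytes).hexdigest() == record["sha256"]
```

Note the spoofing point stands: anyone holding the key (or a compromised device) can sign fabricated media, which is why the human web of trust still matters.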
You could take a picture or video with your phone of a screen or projection of an altered media and thereby capture a watermarked "verified" image or video.
None of these schemes for validation of digital media will work. You need a web of trust, repeated trustworthy behavior by an actor demonstrating fidelity.
You need people and institutions you can trust, who have the capability of slogging through the ever more turbulent and murky sea of slop and using correlating evidence and scientific skepticism and all the cognitive tools available to get at reality. Such people and institutions exist. You can also successfully proxy validation of sources by identifying people or groups good at identifying primary sources.
When people and institutions defect, as many legacy media, platforms, talking heads, and others have, you need to ruthlessly cut them out of your information feed. When or if they correct their mistake, just follow tit for tat, and perhaps they can eventually earn back their place in the de-facto web of trust.
Google's stamp of approval means less than nothing to me; it's a countersignal, indicating I need to put even more effort than otherwise to confirm the truthfulness of any claims accompanied by their watermark.
It is actively harmful to society. Slap SynthID on some of the photographic evidence from the unreleased Epstein files and instantly de-legitimize it. Launder a SynthID image through a watermark-free model and it's legit again. The fact that it exists at all can't be interpreted in any other way than malice.
The unix command-line tools being the most efficient way to use an LLM has been a surprise.
I wonder why.
Maybe 'do one thing well'? The piping? The fact that the tools have been around so long so there are so many examples in the training data? Simplicity? All of it?
The success of this project depends on the answer.
Even so, I suspect that something like this will be a far too leaky abstraction.
But Vercel must try because they see the writing on the wall.
> The unix command-line tools being the most efficient way to use an LLM has been a surprise.
> I wonder why.
Because they are really, really well designed for humans.
Everyone is trying to reinvent the wheel and create "agent interfaces", but there is fundamentally no difference between what makes a text based interface easy for a human to use and what makes it easy for an agent to use.
If you want a better guess: it's because the man pages for all the tools have likely been duplicated across so many sources in the LLM training data that there's just an efficient pipeline. They go back to the 70s or whatever.
I'm not convinced. I don't want to rack servers and diagnose bad RAM like it's still the 90s, so I'm paying someone else for the privilege, especially to get POPs closer to customers than I want to drive or fly to set up, especially in foreign countries where I don't speak the language or know the culture. Fun for a vacation, but a recipe for wasting time and money setting up a local corporate entity and a whole team, when I can just pay GCP or AWS and have a server on the other side of the planet faster than I can book a plane flight and a hotel reservation there.
There's also the maintenance of the server to consider. Vercel or other serverless options (PaaS, Lambda, GCP Functions, etc.) mean there's just less crap for me to manage, because they're dealing with it, and yeah, they charge money for that service. Being able to tell Claude Code "I set up ssh keys and passwordless sudo for you, go fix my shit" works, but then the hard drive is full, so I have to upsize the VPS. If you're stupid/brave, you can give Claude Code MCP access to Chrome so it can click the buttons in Hetzner to upsize for you, but that's time and tokens spent not working on the product. So at the end of the day I think Vercel is gonna be fine. AI-generated code means many more people are trying to make some sort of internet company, and they'll only discover cheaper options after paying for Vercel becomes painful.
These sorts of doom articles are interesting in that they are from the perspective of tech company valuations. Why is this the important perspective?
For the humanity perspective, this doom is very optimistic. It says that these LLMs currently disrupting the platforms cannot themselves be the next platforms.
Maybe no one will have 'the ability to make people do something that they don't want to do' sort of power with this next stage in computing.
I was hoping 'respectify' could mean respect for the users.
This is a very important problem space. Maybe the most important today: we desperately need a digital third place that isn't awful. But I think these attempts are misguided.
The core issue seems to be that we want our communities to be infinite. Why? Well, because there is currently no way to solve the community discoverability problem without being the massive thing. But that is the issue to solve.
We need a lot of Dunbar's number sized communities. Those communities allow for 'skin in the game' where reputation matters. And maybe a fractal sort of way for those communities to share between them.
The problem is in the discoverability, and in gatekeeping that is porous enough to give people a chance.
Solve that, and you solve the third-place problem we currently have. I don't have a solution, but I wish I did.
Infinite communities are fundamentally what causes the tribalism (ironically), the loneliness, and the promotion of rage.
No one wants to be forced to argue correctly. Forcing people into a way to think via software is fundamentally authoritarian and sad.
The notion of "Limit the community to the Dunbar number" is a fascinating idea. I guess "infinite" isn't going to quite work. Keen observation.
We tried very hard to not "force" anyone to argue correctly. We are shooting more for "nudge in the right direction" and "educate". Many people don't know that they are arguing in bad faith, I think.
The perfect outcome here is that a community/blogger can, with minimal effort, have engaging, interesting conversations without having to worry about things getting hijacked by unpleasant commenters.
> Forcing people into a way to think via software is fundamentally authoritarian and sad.
Completely agree.
I understand the problem, and while I see this as a good-faith attempt to solve it, something doesn't quite sit right about the framing for me. Really, what's happening is just that certain rules of behavior and language are being enforced. And that's fine! That's what communities are. You're allowed to do different kinds of things in different places.
I'd frame it that way rather than the current, more paternalistic framing. There isn't a universal way to be respectful, or to argue. People have different thresholds for aggression, sarcasm, and so on.
Just like signs at the library say "No talking" or "No eating", you might think of this as a way to put up certain signs for your particular community. Configurable knobs to create the kind of place you want. But it's not about "teaching" people anything. It's about saying, "Here, we do things this way. If you like that, come and play. If you don't, this place is not for you."
I'm presently in the process of building (read: directing claude/codex to build) my own AI agent from the ground up, and it's been an absolute blast.
Building it exactly to my design specs, giving it only the tool calls I need, owning all the data it stores about me for RAG, integrating it to the exact services/pipelines I care about... It's nothing short of invigorating to have this degree of control over something so powerful.
In a couple of days work, I have a discord bot that's about as useful as chatgpt, using open models, running on a VPS I manage, for less than $20/mo (including inference). And I have full control over what capabilities I add to it in the future. Truly wild.
> It's nothing short of invigorating to have this degree of control over something so powerful
I'm a SWE w/ >10 years, and you're right, this part has always been invigorating.
I suppose what's "new" here is the drastically reduced amount of cognitive energy I need to build complex projects in my spare time. As someone who was originally drawn to software because of how much it lowered the barrier to entry of birthing an idea into existence (when compared to hardware), I am genuinely thrilled to see said barrier lowered so much further.
Sharing my own anecdotal experience:
My current day job is leading development of a React Native mobile app in Typescript with a backend PaaS, and the bulk of my working memory is filled up by information in that domain. Given this is currently what pays the bills, it's hard to justify devoting all that much of my brain deep-diving into other technologies or stacks merely for fun or to satisfy my curiosity.
But today, despite those limitations, I find myself having built a bespoke AI agent written from scratch in Go, using a janky beta AI Inference API with weird bugs and sub-par documentation, on a VPS sandbox with a custom Tmux & Neovim config I can "mosh" into from anywhere using finely-tuned Tailscale access rules.
I have enough experience and high-level knowledge that it's pretty easy for me to develop a clear idea of what exactly I want to build from a tooling/architecture standpoint, but prior to Claude, Codex, etc., the "how" of building it tended to be a big stumbling block. I'd excitedly start building, only to run into the random barriers of "my laptop has an ancient version of Go from the last project I abandoned" or "neovim is having trouble starting the lsp/linter/formatter" and eventually go "ugh, not worth it" and give up.
Frankly, as my career progressed and the increasingly complex problems at work left me with vanishingly little brain-space for passion projects, I was beginning to feel this crushing sense of apathy and borderline despair. I felt I'd never be able to make good on my younger self's desire to bring these exciting ideas of mine into existence. I even got to the point where I convinced myself it was "my fault" because I lacked the mettle to stomach the challenges of day-to-day software development.
Now I can just decide "Hmm.. I want a lightweight agent in a portable binary. Makes sense to use Go." or "this beta API offers super cheap inference, so it's worth dealing with some jank" and then let an LLM work out all the details and do all the troubleshooting for me. Feels like a complete 180 from where I was even just a year or two ago.
At the risk of sounding hyperbolic, I don't think it's overstating things to say that the advent of "agentic engineering" has saved my career.
I'm using kimi-k2-instruct as the primary model and building out tool calls that use gpt-oss-120b to allow it to opt-in to reasoning capabilities.
Using Vultr for the VPS hosting, as well as their inference product, which AFAIK is by far the cheapest option for hosting models of this class ($10/mo for 50M tokens, and $0.20/M tokens after that). They also offer Vector Storage as part of their inference subscription, which makes it very convenient to get inference plus durable memory and RAG with a single API key.
Their inference product is currently in beta, so not sure whether the price will stay this low for the long haul.
You can definitely get gpt-oss-120b for much less than $0.20/M on openrouter (the cheapest is currently 3.9c/M in, 14c/M out). Kimi K2 is an order of magnitude larger and more expensive, though.
What other models do they offer? The web page is very light on details.
K2 is the only one of the five that supports tool calling. In my testing, it seems like all five support RAG, but K2 loses knowledge of its registered tools when you access it through the RAG endpoint, forcing you to pick one capability or the other (I have a ticket open for this).
Also, the R1-distill models are annoying to use because reasoning tokens are included in the output wrapped in <think> tags instead of being parsed into the "reasoning_content" field on responses. Also also, gpt-oss-120b has a "reasoning" field instead of "reasoning_content" like the R1 models.
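The inconsistency described above can be papered over client-side. This is a minimal sketch assuming an OpenAI-style message dict; the field names (`content`, `reasoning_content`, `reasoning`) reflect the conventions mentioned in the comment, and exact shapes vary by provider.

```python
import re

# R1-distill style: reasoning arrives inline, wrapped in <think> tags.
THINK_RE = re.compile(r"<think>(.*?)</think>\s*", re.DOTALL)

def split_reasoning(message):
    """Return (content, reasoning) from a chat-completion message dict,
    normalizing across the three conventions: inline <think> tags,
    DeepSeek-style "reasoning_content", and gpt-oss-style "reasoning"."""
    content = message.get("content") or ""
    m = THINK_RE.search(content)
    if m:
        # Strip the <think> block out of the visible content.
        return THINK_RE.sub("", content).strip(), m.group(1).strip()
    reasoning = message.get("reasoning_content") or message.get("reasoning") or ""
    return content.strip(), reasoning.strip()
```

With a shim like this, downstream code can treat every model's reasoning output uniformly instead of special-casing each provider quirk.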
> The nerds could always make a home with their linux desktop. Now everyone can. It'll change the equation.
Problem is, to be able to do what you're describing, you still need the source code and the permission to modify it. So you will need to switch to the FOSS tools the nerds are using.
There is source-available software one is not permitted to redistribute after modification. But what source-available software prevents the user from modifying the source for their own use?
I'm actually relieved they're doing it now because it's going to be a forcing function for the local LLM ecosystem. Same thing with their "distillation attack" smear piece -- the more of a spotlight people get on true alternatives + competition to the 900 lb gorillas, the better for all users of LLMs.
I really hope so. I moved to Codex, only to get my account flagged and my requests downgraded to 5.2 because of some "safety" thing. Now OpenAI demands I hand my ID over to Persona, the incredibly dodgy US surveillance company Discord just parted ways with, to get back what I paid for.
This timeline sucks, I don't want to live in a future where Anthropic and OpenAI are the arbiters of what we can and cannot do.
It definitely does suck. I had the same feelings about a year ago and the unpleasantness has definitely increased. But glass half full, we didn't have Kimi K2.5, GLM5, Qwen3.5, MiniMax 2.5, Step Flash 3.5, etc available and the cambrian explosion is only continuing (DeepSeek V4 should be out pretty soon too).
The real moment of relief for me was the first time I used DeepSeek R1 to do a large task that I would've otherwise needed Claude/OpenAI for about 12 months ago and it just did it -- not just decently, but with less slop than Claude/OpenAI. Ever since that point, I've been continuing to eye local models and parallel testing them for workloads I'd otherwise use commercial frontier models for. It's never a perfect 1:1 replacement, but I've found that I've gotten close enough that I no longer feel that paranoia of my AI workloads not being something I can own and control. True, I do have to sacrifice some capability, but the tradeoff is I get something that lives on my metal, never leaks data or IP, doesn't change behavior or get worse under my feet, doesn't rate limit me, and can be fine-tuned and customized. It's all led me to believe that the market competition is very much functioning and the cat is out of the bag, for the benefit of all of us as users.
That's just because corporations got greedy and made their apps suck.
Strip away the ads, the data harvesting, add back the power features, and we'll be happy again. I'm more willing than ever to pay a one-time fee for good software. I've started donating to all the free apps I use on a regular basis.
I don't want to own my own slop. That doesn't help me. Use your AI tools to build out the software if you want, but make sure it does a good job. Don't make me fiddle with nondeterministic flavor-of-the-month AI agents.
> That's just because corporations got greedy and made their apps suck.
It is true for me with Linux. I code for a living and I can't change anything because I can't even build most software -- the usual configure/make/make install runs into tons of compiler errors most of the time.
Loss of control is an issue. I'm curious if AI tools will change that though.
I think there's room for both visions. Big Tech is generating more toxic sludge than ever, and yeah sure this is because they're greedy, but more precisely the root cause is how they lobbied Washington and our elected officials agreed to all kinds of pro-corporate, anti-human legislation. Like destroying our right to repair, like criminalizing "circumvention" measures in devices we own, like insane life-destroying penalties for copyright infringement, like looking the other way when Big Tech broke anti-trust laws, etc.
The Big Tech slop can only be fixed in one way, and actually it's really predictable and will work - we need to fix the laws so that they put the rights and flourishing of human beings first, not the rights and flourishing of Big Tech. We need to fix enforcement because there are so many times that these companies just break the law and they get convicted but they get off with a slap on the wrist. We need to legislate a dismantling of barriers to new entrants in the sectors they dominate. Competition for the consumer dollar is the only thing that can force them to be more honest. They need to see that their customers are leaving for something better, otherwise they'll never improve.
But our elected officials have crafted laws and an enforcement system which make 'something better' impossible (or at least highly uneconomical).
Parallel to this if open source projects can develop software which is easier for the user to change via a PR, they totally should. We can and should have the best of both worlds. We should have the big companies producing better "boxed" software. Plus we should have more flexibility to build, tweak and run whatever we want.
What you're describing is the expected and correct outcome inside a profit-oriented, capitalist system. So the only way I see out of this situation would be changing policy to a more socialist one, which doesn't seem to be so popular among the tech elite, who often think they deserve their financial status because of the 'value' they provide, without specifying what that value is (or its second-order consequences). Whether that's abusing a monopolistic market position they lucked into, making apps as addictive as possible, or building drones that drop bombs on newborns in hospitals.
I think we're after the same goal but have a different view of mechanism.
Regulation enforcement against the anti-market behaviors would bring a lot of good.
Putting too much power in any centralized authority - company or government - seems to lead to oppression and unhealthy culture.
Fair markets are the neatest trick we have. They put the freedom of choice in the hands of the individual and allow organic collaboration.
The framing should not be government vs company. But distributed vs centralized power. For both governance and commerce.
The entire world right now suffers from too much centralized power. That comes in the form of both corporate and government. Power tends to consolidate until the bureaucracy of the approach becomes too inefficient and collapses under its own weight. That process is painful, and it's not something I enjoy living through.
If you look through that lens, it has explanatory power for the problems of both the EU countries and the US.
I'm not arguing for state capitalism. I consider the "company vs. government" framing fundamentally flawed. I see it as "a few in power" vs. "everyone gets exactly one vote".
I want things in society organized in a way that gives everyone agency, not just those adjacent to capital.
If a company employs me to extract value from my work, I want a vote in how that company operates. Not just one vote every four years in the hopes that policy will shift to benefit workers more over a few decades.
I want to be able to say no to doing a job without the existential threat of never getting another job offer, so I can base my decisions on my values, not my fear of not being able to pay next month's rent.
Capitalism goes against that, because it puts profit hoarding and parasitic value extraction from the working class at the center of attention. It's an inhumane ideology at its core, and it has only ever been even slightly successful in creating wealth because of all the socialist mechanisms wrapped around it to hold it together.
In essence: I want to abolish centralized power and class hierarchies.
These companies are engaged in a sort of AI dumping. Cheap inference below cost.
Price out competitors. Abuse your newfound dominance.
It's the big tech playbook.
I don't think it's going to work this time.
Tools like OpenClaw are an existential threat precisely because they allow the user control over their experience. The value in them cannot be captured by a monopoly.
LLMs don't seem to be a very good moat. At the same time, the software moat is eroding due to those same LLMs.
Telecom tech killed telecom dominance.
With some luck, Google tech will kill Google dominance.
Is that... why Google released Antigravity, an IDE no less, when even my non-tech dentist is using Claude Code in the CLI? And why Anthropic is pushing their desktop apps, skills, and all these integrations their models can build in a day?
Are they betting that their software, not their LLM, will decide whether they survive if a competitive open-source model is dropped? Oh boy, the market is going to have some fun times when the realization hits.
I would love to subscribe to / pay for services that are just APIs. Then have my agent organize them how I want.
Imagine youtube, gmail, hacker news, chase bank, whatsapp, the electric company all being just apis.
You can interact how you want. The agent can display the content the way you choose.
Incumbent companies will fight tooth and nail to avoid this future. Because it's a future without monopoly power. Users could more easily switch between services.
Tech would be less profitable but more valuable.
It's the future we can choose right now by making products that compete with this mindset.
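The everything-as-APIs idea above can be sketched with a toy aggregator. The services and item schema here are hypothetical; real services would each need an adapter mapping their API responses into one normalized shape before an agent could merge them.

```python
from operator import itemgetter

def merge_feeds(*feeds, limit=10):
    """Merge per-service item lists into one timeline, newest first.
    Each item is a dict with at least a "ts" timestamp field."""
    merged = [item for feed in feeds for item in feed]
    merged.sort(key=itemgetter("ts"), reverse=True)
    return merged[:limit]

# Hypothetical normalized items pulled from two services' APIs.
youtube = [{"ts": 3, "service": "youtube", "title": "New upload"}]
email = [{"ts": 5, "service": "gmail", "title": "Invoice"},
         {"ts": 1, "service": "gmail", "title": "Newsletter"}]

# The user's agent decides the presentation: here, the two newest
# items across all services, regardless of which silo they came from.
timeline = merge_feeds(youtube, email, limit=2)
```

The point is that once every service is just a feed of data, the interface layer belongs to the user's agent rather than to any one company's app.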
Biggest question I have: maybe... just maybe... LLMs would have had sufficient intelligence to handle micropayments. Maybe we wouldn't have gone down the mass-advertising "you are the product" path?
Like, somehow I could tell my agent that I have a $20 a month budget for entertainment and a $50 a month budget for news, and it would just figure out how to negotiate with the nytimes and netflix and spotify (or what would have been their equivalents), which is fine. But it would also be able to negotiate with an individual band who wants to sell their music directly, or an indie game that doesn't want to pay the Steam tax.
I don't know, just a "histories that might have been" thought.
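The budget negotiation imagined above reduces, at its simplest, to a knapsack-style allocation. This is a toy sketch: the service names, prices, and value scores are all made up, and a real agent would be negotiating prices with providers rather than picking from a fixed list.

```python
def allocate(budget, offers):
    """Greedily pick the offers with the best value-per-dollar that
    fit inside a monthly budget. Returns (chosen names, total spent)."""
    chosen, spent = [], 0.0
    # Sort by value density so the most worthwhile offers go first.
    for name, price, value in sorted(offers, key=lambda o: o[2] / o[1],
                                     reverse=True):
        if spent + price <= budget:
            chosen.append(name)
            spent += price
    return chosen, spent

# Hypothetical offers: (provider, monthly price, subjective value score).
offers = [("nytimes", 10.0, 8), ("netflix", 15.0, 9),
          ("indie-band", 5.0, 7), ("spotify", 12.0, 6)]
picks, total = allocate(20.0, offers)
```

Notably, the greedy pass has no bias toward big platforms: the indie band's cheap direct offer wins a slot on value density alone, which is exactly the disintermediation the comment is daydreaming about.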
Love it, we can finally make the libertarian paradise of a patchwork of private roads possible by having your agent negotiate a path to where you want to go and make the appropriate micro payments.
I don't exactly mean APIs. (We largely have that with REST). I mean a Gopher-like protocol that's more menu based, and question-response based, than API-based.
If I can get videos from YouTube or Rumble or FloxyFlib or your mom’s personal server in her closet… I can search them all at once, the front-end interface is my LLM or some personalized interface that excels in its transparency, and that would definitely hurt Google’s brand.
That's right. And don't forget that the chips it runs on are manufactured by companies I might not agree with. Nor the mining companies that got the metal. Nor the energy company that powers it.
The wonderful thing about markets that work is that you can swap things out without being under their boot.
I worry about an LLM duopoly. But as long as open-weight models are nipping at their heels, it is the consumer that stands to benefit.
The train we're on means a lot of tech companies will feel a creative destruction sort of pain. They might want to stop it but are forced by the market to participate.
Remember that Google sat on their AI tech before being forced to productize it by OpenAI.
In a working market, companies are forced to give consumers what they want.
> And don't forget that the chips it runs on are manufactured by companies I might not agree with. Nor the mining companies that got the metal. Nor the energy company that powers it.
You see that this is a non sequitur, right? No matter who makes the chips or mines the metal or supplies the power, the behavior of the thing won't be affected. That isn't the case when we're talking about who's training the LLM that's running your shit.
It's a good thing that there are so many LLM choices out there, then.
Maybe the fundamental disagreement is whether LLMs will be a commodity product or not.
I think they will be since there hasn't been an indicator that secret sauce lasts more than a few months. The open weight models are, at most, a year behind.
We're in a different environment. The last tech rules of e.g. network effect cannot be directly applied.
>Remember that Google sat on their AI tech before being forced to productize it by OpenAI.
Google knew this tech wasn't ready for prime-time, they already had plenty of revenue and didn't need to release shoddy shit, but were forced to roll out "AI" even with "hallucinations" and the resulting liabilities to keep up with the new hotness. The tech is still so shoddy, I can't believe people use it for anything beyond a curiosity.
> The wonderful thing about markets that work is that you can swap things out without being under their boot.
This is an illusion. You literally describe Zizek's "Desert of the real": Billionaires own the illusion and you are telling me I get to pick from a selection of choices carefully curated and presented to me.
> In a working market, companies are forced to give consumers what they want.
I want personal nuclear weapons, so the market hasn't been working for me. Time to roll back those pesky laws, regulations, and ethical boundaries. Prosecute executives who won't give me what I want.
Many consumers want things that are arguably harmful for everyone involved. Users asking Grok to generate a large amount of CSAM from kid pics on Twitter is but one example.
I agree, and it seems like the incumbents in this user-oriented space (OS vendors) are letting the messy, insecure version play out before making an earnest attempt at rolling it into their products.