Those rollouts are seeing massive cutbacks from what I've read, as half the country is straight up banning new solar. Good luck ever getting that off the books.
I don't think it will be that hard. Banning solar is a feel-good move right now that doesn't affect many people - but that means once the next election has passed, there won't be much opposition when lobbyists (and greens) try to roll it back. Of course each state is different, so in some it will take more than a few elections. In some states solar is already widespread enough that you can't ban it: too many people already have it and know enough about it to tell their friends. Those friends who live in other states will start to ask why they don't have it.
Remember you need to keep the 20 year plan in mind. If you only look to the end of 2026 things are hopeless, but look to 2050 (and compare to 2000) and things look much better.
That reaction has happened with every model release for the past few years. Maybe they aren’t the same people, but it’s always “old model was terrible, new model gets it right” then “new model was terrible, newer model gets it right,” ad infinitum.
A large proportion of my professional network were in the "AI for code generation might just be a fad" camp pre-Opus 4.5 (and the Codex/Gemini models that came out shortly after that), and now almost everyone seems to think that AI will have at least some place in professional development environments on an ongoing basis.
I've recently given it a go myself, and it certainly doesn't get it right all the time. But I was able to generate AI-assisted code that met my quality standards at roughly the same speed as coding it by hand.
FWIW I am definitely someone who uses AI. I have been using it for a few years now. There's no question that models have improved. I'd say the biggest leap was the ChatGPT 3.5 -> 4.0 transition, which radically reduced hallucination problems. The big issue of "it just made up a module that doesn't exist" more or less went away at that point. That was the big leap from "spits out text that might help you" to "can produce value".
Since then it has been incremental. I would say the big win has been that models degrade more slowly as context grows. This means, especially for heavily vibecoded-from-scratch projects, that you hit the "I don't even know wtf this is anymore" wall way later, maybe never if you're steering things properly.
I think because you can avoid hitting that wall for longer, people see this as radically different. It's debatable whether that's true or not. But in terms of just what the model does, like how it responds to prompts, I genuinely think it is only marginally better. And again, I think benchmarks confirm this, and I quite like Fodor's analysis on benchmarking here[0].
I use these models daily and I try new models out. When people switch to a new model, I think they overinterpret "the model did something different" or "it got this one right" as "this is radically better" - which I believe is simply a result of cognitive bias and poor measurement.
GP said there is no rule yet, so the answer is “today, yes.” If you’re asking about the future, the answer is “to be determined.” But I think you knew that.
Pardon my ignorance, what is GP? If you have other sources please share, I only read this article, which bluntly states "Your current vehicle stays surveillance-free, but shopping for a 2027 model means accepting this digital copilot.".
GP = grandparent: the comment above the one being replied to. But there is none here, so I guess we can assume the article? There are better ways to phrase it, like "the article" or even "OP" (Original Poster - assuming the poster and the author are the same). This isn't a reputable domain though, so probably time to move on.
‘US District Judge James “Jeb” Boasberg wrote in the new opinion that a “mountain of evidence suggests that the Government served these subpoenas on the Board to pressure its Chair into voting for lower interest rates or resigning.”’
Unless you mean it’s sad that we’re in this position to begin with, in which case I agree, but that ship sailed the moment people chose to re-elect Trump. (And arguably, when Democrats chose to stay home rather than vote for Hillary, just because they were pissed about Bernie. A tragedy of letting perfect be the enemy of good that Democratic voters are all too prone to.)
Pet peeve: this post misunderstands “TDD.” What it really describes is acceptance tests.
TDD is a tool for working in small steps, so you get continuous feedback on your work as you go, and so you can refine your design based on how easy it is to use in practice. It’s “red green refactor repeat”, and each step is only a handful of lines of code.
TDD is not “write the tests, then write the code.” It’s “write the tests while writing the code, using the tests to help guide the process.”
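To make "red green refactor" concrete, here's a minimal sketch of one cycle, using a hypothetical `slugify()` helper as the example (the function name and behavior are illustrative, not from the post):

```python
# One red-green-refactor cycle, sketched with a hypothetical slugify()
# helper. Step 1 (red): write a tiny failing test for behavior that
# doesn't exist yet. Step 2 (green): write just enough code to pass.
# Step 3 (refactor): clean up the code with the tests as a safety net.

import re

def slugify(title):
    # The "green" version, then refactored: lowercase, collapse runs of
    # non-alphanumerics into single hyphens, trim hyphens from the ends.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# The "red" step would have been writing these asserts first and
# watching them fail; each cycle only adds a handful of lines.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  TDD: Red/Green ") == "tdd-red-green"
```

The point is the granularity: each loop is a few lines of test plus a few lines of code, so the tests shape the design as you go rather than checking it after the fact.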
> TDD is a tool for working in small steps, so you get continuous feedback on your work as you go, and so you can refine your design based on how easy it is to use in practice.
I would like to emphasize that feedback includes being alerted to breaking something you previously had working in a seemingly unrelated/impossible way.
I have to second the complaints about LLM writing. The tropes were grating, to the point where I hit the back button before ever learning what the difference between a boid and a noid is.
Ecto, I see that you're reading and responding to comments. In your own words, concisely, and assuming I know what boids are: what sets this apart?
You sure about that answer? Variants of boids have been implemented to leverage the GPU many times. I'm unclear how far typical GPU-based examples deviate, but then yours doesn't precisely imitate the original either. GPU-accelerated boids is even one of the sample programs provided for testing Dawn when you compile it. [0]
Aside from "look ma, machine learning!" I noticed exactly one thing that sets your implementation apart from any other example I've seen before. It seems quite odd to me that you didn't pick either neural networks or that feature for this answer.
Also the performance analysis section contains several questionable claims.
Companies that require sales calls are built around selling large numbers of licenses to companies with at least a hundred people—which is still a “small business.” They’re much more interested in selling to mid-market or enterprise.
They have sales departments, processes, and incentives geared around selling to those businesses. Because of those systems, it costs them a lot of money to sell to tiny businesses, and those businesses cost them more in support, too.
They’ll take your money, but they don’t really want you, and they’re not going to change things to suit you, because—to them—your entire market segment is more hassle than it’s worth.
Solar + battery is a miracle technology that’s being installed at an enormous rate. Technically, it’s fusion power, capturing energy from a fusion plant 8 light-minutes away. :-D