Hacker News

  > The critical difference between AI and a tool like a calculator, to me, is that a calculator's output is accurate, deterministic and provably true.
This really resonates with me. If calculators returned even 99.9% correct answers, it would be impossible to reliably build even small buildings with them. We are using AI for a lot of small tasks inside big systems, or even for designing entire architectures, and we still need to validate the answers ourselves, at least for the foreseeable future. But outsourcing our thinking erodes the very brainpower that validation requires, because it often means understanding a problem's detailed structure and internal reasoning path.
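To put a rough number on that point (my own back-of-envelope illustration, not from the comment): a real calculation chains many operations, and per-operation error compounds, assuming errors are independent.

```python
# If each operation is correct with probability p, a chain of n
# operations is fully correct with probability p**n (independent errors).
p = 0.999  # the comment's "99.9% correct" calculator

for n in (10, 1_000, 100_000):
    print(n, p ** n)

# Even at 99.9% per operation, a thousand-step calculation is
# fully correct only about a third of the time.
```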

In the current situation, by vibing and YOLOing through most problems, we are losing the very ability we still need and cannot replace with AI or other tools.



If you don't have building codes, you can totally yolo build a small house, no calculator needed. It may not be a great house, just like vibeware may not be great, but also, you have something.

I'm not saying this is ideal, but maybe there's another perspective to consider as well, which is lowering barriers to entry and increased ownership.

Many people can't/won't/don't do what it takes to build things, be it a house or an app, if they're starting from zero knowledge. But if you provide a simple guide they can follow, they might end up actually building something. They'll learn a little along the way, make it theirs, and end up with ownership of their thing. As an owner, change comes from you, and so you learn a bit more about your thing.

Obviously whatever gets built by a noob isn't likely to be of the same caliber as the work of a professional who spent half their life in school and job training, but that might be ok. DIY is a great teacher and motivator to continue learning.

Contrast that with high barriers to entry, where nothing gets built and nothing gets learned, and the user is left dependent on the powers that be to get what he wants, probably overpriced, and with features he never wanted.

If you're a rocket surgeon and suddenly outsource all your thinking to a new and unpredictable machine, while you get fat and lazy watching tv, that's on you. But for a lot of people who were never going to put in years of preparation just to do a thing, vibing their idea may be a catalyst for positive change.


To continue the analogy: there's also renting, and the range of choices it offers. If there's no building code and you can't build your own house, you're left with bad houses built by someone else. A house is more likely to be bad when the owner already knows he will not be living in it, since building it right is expensive and time consuming.

When slop becomes easier to produce, there are far more people ready to push it onto others than people who try to produce genuine work, especially when the two are hard to distinguish superficially.


> If calculators returned even 99.9% correct answers, it would be impossible to reliably build even small buildings with them.

I think past successes have led to a category error in the thinking of a lot of people.

For example, the internet, and many constituent parts of the internet, are built on a base of fallible hardware.

But mitigated hardware errors, whether from equipment failures, alpha particles, or otherwise, are uncorrelated.

If you had three uncorrelated calculators that each worked 99.99% of the time, and you used them to check each other, you'd be fine.

But three seemingly uncorrelated LLMs? No fucking way.
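The gap can be made concrete (a sketch under the independence assumption the comment describes; the numbers are illustrative): with three independent checkers and a 2-of-3 majority vote, the vote is wrong only when at least two checkers err at once.

```python
def majority_error(p: float) -> float:
    """Probability a 2-of-3 majority vote is wrong, if each checker
    independently errs with probability p: exactly two wrong, or all three."""
    return 3 * p**2 * (1 - p) + p**3

# Three independent 99.99%-accurate calculators: error rate ~3e-8.
print(majority_error(1e-4))

# Note this formula assumes independence. LLMs trained on overlapping
# data tend to fail on the same inputs, so their joint error rate can
# be closer to the single-model rate than this formula suggests.
print(majority_error(0.1))
```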


There's another category error compounding this issue: people think that because past technological revolutions eventually led to higher living standards after periods of disruption, this one will too. I think this one is the exception, for the reasons enumerated in the parent's blog post.


Agreed.

In point of fact, most technological revolutions have fairly immediately benefited a significant number of people in addition to those in the top 1% -- either by increasing demand for labor, or reducing the price of goods, or both.

The promise of LLMs is that they benefit people in the top 1% (investors and highly paid specialists) by reducing the demand for labor to produce the same stuff that was already being produced. There is an incidental initial increase in (or perhaps just reallocation of) labor to build out infrastructure, but that is possibly quite short-lived, and simultaneously drives a huge increase in the cost of electricity, buildings, and computer-related goods.

But the benefits of new technologies are never spread evenly.

When the technology of travel made remote destinations more accessible, it created tourist traps. Some well-placed individuals and companies do well out of this, but typically, most people living near tourist traps suffer from the crowds and increased prices.

When power plants are built, neighbors suffer noise and pollution, but other people can turn their lights on.

We haven't yet begun to be able to calculate all the negative externalities of LLMs.

I would not be surprised if the best negative externality comparisons were to the work of Thomas Midgley, who gifted the world both leaded gasoline and CFC refrigerants.


The LLMs are not uncorrelated, though; they're all trained on the same dataset (the Internet) and subject to most of the same biases.


Agreed.

This is why I differentiated "uncorrelated" from "seemingly uncorrelated." Sorry if that wasn't clear.



