A fair number of negative comments here, but Yann might very well be the person who brings the Bell Labs culture back to life. It's been sorely missing, and not just in Europe.
Your cloud media setting stays off until the company arbitrarily turns it on for you. Seems crazy now; it won't ten years from now. They're just boiling the frog all the way.
> the barrier to entry to build a frontend gets lower
My impression is the opposite: frontend/UI/UX is where the moat is growing because that's where users will (1) consume ads (2) orchestrate their agents.
I agree with you - we are saying the same thing. By restricting their API or making it less developer-friendly, they want to keep you captive in their UI. This might not be true for Anthropic or OpenAI. Another child commenter mentioned ads in the CLI; I would not be surprised if, before long, we have product placements in LLM responses exactly as we have in movies: not a plain ad, just a slightly-less-than-subliminal suggestion.
It’s objectively easier to build a frontend now and therefore that moat is disappearing.
What you can argue is the moat is in incumbent advantage at the UI layer, not the UI itself.
APIs leak profit and control compared with their SDK/platform counterparts. Service providers use them to bootstrap traffic and brand, but will always do everything they can to reduce their usage, or sunset them entirely if possible.
I'm not sure that's true. If LLMs can help researchers implement (not find) new ideas faster, they effectively accelerate the progress of research.
Like many other technologies, LLMs will fail in areas and succeed in others. I agree with your take regarding business ideas, but the story could be different for scientific discovery.
Almost there. Humans kill one person every 100 million miles driven. To reach mass adoption, self-driving cars need to kill one every, say, billion miles. Which means dozens or hundreds of billions of miles driven to reach statistical significance.
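A rough sketch of that last point, assuming fatalities follow a Poisson process (the function and the scenario numbers are my own illustration, not official figures):

    # Exact (Garwood) confidence interval for a Poisson rate.
    # Assumption: fatalities are a Poisson process; the 1/100M-mile
    # human rate and 1/1B-mile target come from the comment above.
    from scipy.stats import chi2

    def poisson_rate_ci(deaths, miles, conf=0.95):
        a = 1 - conf
        lo = chi2.ppf(a / 2, 2 * deaths) / 2 if deaths > 0 else 0.0
        hi = chi2.ppf(1 - a / 2, 2 * (deaths + 1)) / 2
        return lo / miles, hi / miles

    for billions in (1, 10, 100):
        miles = billions * 1e9
        deaths = billions  # suppose the true rate really is 1 per 1B miles
        lo, hi = poisson_rate_ci(deaths, miles)
        print(f"{billions:>3}B miles, {deaths} deaths: "
              f"95% CI ({lo * 1e9:.2f}, {hi * 1e9:.2f}) per 1B miles")

    # At 1B miles the CI is roughly (0.03, 5.6) per 1B miles -- already
    # below the human rate of 10 per 1B, but the rate itself is barely
    # pinned down. Only around 100B miles does it tighten to ~(0.8, 1.2).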
> to reach mass adoption, self-driving cars need to kill one every, say, billion miles
They need to be around parity. So a death every 100M miles or so. The number of folks who want radically more safety is about balanced by those who want a product in market quicker.
The deaths from self-driving accidents will look _strange_ and _inhuman_ to most people. The negative PR from self-driving accidents will be much worse for every single fatal collision than a human driven fatality.
I think these things genuinely need to be significantly safer for society to be willing to tolerate the accidents that do happen. Maybe not a full order of magnitude safer, but I think it will need to be clearly safer than human drivers and not just at parity.
> negative PR from self-driving accidents will be much worse for every single fatal collision than a human driven fatality
We're speaking in hypotheticals about stuff that has already happened.
> I think these things genuinely need to be significantly safer for society to be willing to tolerate the accidents that do happen
I used to as well. And no doubt, some populations will take this view.
They won't have a stake in how self-driving cars are built and regulated. There is too much competition between U.S. states and China. Waymo was born in Arizona and is now growing up in California and Florida. Tesla is being shaped by Texas. The moment Tesla or BYD get their shit together, we'll probably see federal preëmption.
(Contrast this with AI, where local concerns around e.g. power and water demand attention. Highways, on the other hand, are federally owned. And D.C. exerting local pressure with one hand while holding highway funds in the other is long precedented.)
> The deaths from self-driving accidents will look _strange_ and _inhuman_ to most people.
I like to quip that error-rate is not the same as error-shape. A lower rate isn't actually better if it means problems that "escape" our usual guardrails and backup plans and remedies.
You're right that some of it may just be a perception-issue, but IMO any "alien" pattern of failures indicates that there's a meta-problem we need to fix, either in the weird system or in the matrix of other systems around it. Predictability is a feature in and of itself.
I know this sounds bad, but I wonder: if you put an LLM in the vehicle that can control basic stuff (like the radio, climate controls, windows, changing the destination, maybe friendly chatter) but has no actual vehicle control, will people humanize the car and be much more forgiving of mistakes? I feel pretty certain that they would.
About half of road deaths involve drivers who are drunk or high. But only a very small fraction of drivers drive drunk or high - 50% of deaths are caused by 2% of drivers.
A self-driving car that merely achieves parity would be worse than 98% of the population.
Gotta do twice the accident-free mileage to achieve parity with the sober 98%.
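Quick arithmetic behind that factor of two, assuming the 2%/50% split above and roughly equal mileage per driver (an assumption; the impaired 2% may well drive different amounts):

    # Deaths per mile for the sober 98%, given 50% of deaths come from
    # 2% of drivers and mileage is spread roughly evenly (assumption).
    overall_rate = 1 / 100e6                 # one death per 100M miles, all drivers
    sober_rate = overall_rate * 0.50 / 0.98  # sober share of deaths / share of miles
    print(1 / sober_rate)                    # ~196,000,000 -> one death per ~196M miles

So parity with the sober majority means roughly one death per ~200M miles: about twice the accident-free mileage of the population-wide figure.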
I disagree. The 1:100M statistic is too broad and includes many extremely unsafe drivers. If we restrict the data to people who drive sober, in normal weather, with no racing or other deliberately unsafe choices, what is the expected number of miles per fatality?
1 in a billion might be a conservative target. I can appreciate that statistically, reaching parity should be a net improvement over the status quo, but that only works if we somehow force 100% adoption. In the meantime, my choice to use a self-driving car has to assess its risk compared to my driving, not the drunk's.
> I disagree. The 1:100M statistic is too broad, and includes many extremely unsafe drivers
To be clear, I'm not arguing for what it should be. I'm arguing for what it is.
I tend to drive the speed limit. I think more people should. I also recognise there is no public support for ticketing folks going 5 over.
> my choice to use a self-driving car has to assess its risk compared to my driving, not the drunk's
All of these services are supply constrained. That's why I've revised my hypothesis: there are enough folks who will take that car before you get comfortable to make it lucrative to fill the streets with them.
(And to be clear, I'll ride in a Waymo or a Cybercab. I won't book a ride with a friend or my pets in the latter.)
This gets near something I was thinking about. Most of the numbers seem to assume that injuries, injury severity, and deaths are all some fixed proportion of each other. But is that really true in the context of self-driving cars of all types?
It seems reasonable that deaths and major injuries come disproportionately from excessively high speeds, slow reaction times at those speeds, and going much too fast for conditions even at lower absolute speeds. What if even the not-very-good self-driving cars are much better at avoiding the base conditions that lead to fatal accidents, even if they aren't so good at avoiding lower-speed fender-benders?
If that were true, what would that mean to our adoption of them? Maybe even the less-great ones are better overall. Especially if the cars are owned by the company, so the costs of any such minor fender-benders are all on them.
If that's the case, maybe Tesla's camera-only system is fairly good after all, especially if it saves enough money to make self-driving more widespread. Or maybe Waymo will get the cost of its more advanced sensors down faster and end up more economical overall first. Waymo certainly seems to be scaling faster in any case.
A death is a catastrophic case, but even a mild collision with bumps and bruises to the people involved would set Tesla back years.
People have an expectation that self-driving cars will be magical in ability. Look at the flak Waymo has received despite its most egregious violations being fender-bender equivalents.
I am not a blind AI fanboy but I am more bullish than some people here, for two reasons:
1. We might be drowning under the "top of the iceberg" at the moment (quickly generated AI slop) but there's a silent crowd of builders doing long-term work (still with the help of AI) that will only surface after months of work. I expect more of the bottom of the iceberg to show up over time.
2. A lot of the most interesting work in science was done out of sheer curiosity, not given a specific problem to solve. The current generation of AI is good -- and getting better -- at the latter, but genuinely incapable of doing the former in a remotely meaningful way.
In other words, I'm long on truly human-driven innovation.