I didn't see it that way. It seems like NGOs and IGOs have been pushing for internet restrictions for a long time. There has suddenly been a push for age restrictions, allegedly because of abuse material. This happens annually: some international group claims there needs to be something draconian, abolishing encryption or some other privacy-invading measure, to stop child abuse and improve security. The laws run to thousands of pages and appear out of nowhere, and we're expected to believe the push is organic and that politicians are deeply concerned about the issue.
So it really wouldn't be hard for the same age-restriction legal framework to appear in the US. It just takes compliance on our part. The UK is just one tentacle of the legal bureaucracy. It wouldn't surprise me if a bill called the Online Child Safety Act, or something like it, appears soon and happens to coincide with a bunch of issues Ofcom raises in this lawsuit.
> It seems like NGOs and IGOs have been pushing for internet restrictions for a long time. There has suddenly been a push for age restrictions allegedly because of abuse material. This happens annually.
we’re seeing some good evidence that the most recent pushes were secretly funded, and in some cases directly written, by Meta, the corporation. [0][1]
according to the link in there,
> Rep. Kim Carver (R-Bossier City), the sponsor of Louisiana's HB-570, publicly confirmed that a Meta lobbyist brought the legislative language directly to her.
and they’ve put as much as 2 billion dollars into it. and yes, that’s billion, with a B.
the corporations OpenAI, Meta, and Google were absolutely backing the push for age verification bills in California and Ohio. [2][3][4]
Reading the original research and stripping away the motives implied by the bot, the data is consistent with another interpretation: namely, that Meta is going with the flow and using the opportunity to push for regulation that impacts its own interests less while affecting its competitors more.
The original research is riddled with baked-in conclusions and has not been verified independently. It's also mostly LLM-generated.
> and they’ve put as much as 2 billion dollars into it. and yes, that’s billion, with a B.
The original report that cited the $2 billion number was AI-generated slop. The $2 billion number wasn't from Meta; it was from Arabella Advisors.
The AI-generated report showed only about $20-30 million in lobbying efforts per year across all lobbying.
Even the Show HN post was full of AI slop, claiming things like "months of research" when the Claude-generated report showed it began a couple days prior.
So please stop repeating this AI generated junk. It dilutes any real story and the obvious falsehoods make it easy for critics to dismiss.
That’s on all lobbying efforts combined. It’s not out of line for a company of that scale that is trying to do things like build data centers and other such activities.
There’s a motte-and-bailey fallacy happening with that “Meta spent $2 billion” report: the $2 billion number is used as a hook, then swapped for a different argument whenever the other party is observant enough to see that it’s BS.
India is considering these bans. I suspect every country in the world is thinking of them.
I work in safety, and you are right in that this comes up every year. The pressures have been building up and it’s coming to a head. However:
0) Techlash is a thing, and HN regularly underestimates the vehemence and anger behind it.
1) There IS an organic component, driven by voters globally.
2) It is also meta and governments, taking advantage of a crisis to further their ends.
Governments globally are tending towards authoritarianism. Tech firms impact most of the world, but are barely responsive to even the American government.
Voters around the world are increasingly terrified of what tech is doing, while tech is entirely unresponsive to their concerns. Tech is very firmly the bad guy today, when it used to be the “good guy” in the 90s.
So governments are more than happy to be seen as putting tech in its place, while gaining more power for themselves.
A few anecdotes about how bad the safety side is: NDAs are so prevalent and tech is so averse to customer support, that safety teams have no formal signal sharing methods.
The number of requests to recover accounts, point out fraud, or even address CSAM that go through WhatsApp, Slack, Discord, etc. is heartbreaking.
To be blunt, it’s a Kafkaesque fuck up that the whole world is stuck in, and people are pissed.
I go back and forth on this. I relate it to software: I don't think AI can meaningfully write software autonomously. People oversee it and prompt it, and even then it might write things badly, so there needs to be a person in the loop. But that person should probably have very deep knowledge of the software, especially for, say, low-level coding. And that person probably developed the knowledge by coding things by hand for a long time; coding by hand is part of acquiring the knowledge. But people, especially students, rely heavily on AI to write code, so I assume their knowledge growth is stunted. I don't know whether mathematical proofs will help here. The specs have to come from somewhere.
I can see AI making things more productive, but it requires humans to be very expert and to do more work. That might mean fewer developers, but all of them more skilled. It will take a while for people to level up, so to speak. It's hard to predict, but I think there could be a rough transition period: people haven't caught on that they can't rely on AI, so they will either have to get a new career or, ironically, study harder.
An AI’s ability to meaningfully write software autonomously has changed hugely even in the last 6 months. They might still require a human in the loop, but for how long?
Quantitative measures of this are very poor, and even those are mixed.
My subjective assessment is that agents like Copilot got better because of better harnesses and fine tuning of models to use those harnesses. But they are not improving in the direction of labor substitution, but rather in the direction of significant, but not earth-shaking, complementarity. That complementarity is stronger for more experienced developers.
This LLM ability is directly proportional to the quantity of encoded (i.e., documented) knowledge about software development. But not all of the practice has been clearly communicated in that form. Much of mastery resides in tacit knowledge: the silent, intuitive part of a craft that influences decision making in ways that sometimes run counter to (possibly incomplete or misguided) written rules, and which is by definition very difficult to put into language, and thus difficult for a language model to access or mimic.
Of course, it could also be argued that some day we may decide that it's no longer necessary at all for code to be written for a human mind to understand. It's the optimistic scenario where you simply explain the misbehavior of the software and trust the AI to automatically fix everything, without breaking new stuff in the process. For some reason, I'm not that optimistic.
I am not saying AI's abilities are the shortcoming here. The problem is that people need to trust that software has certain attributes. For now, that requires someone with knowledge to be part of it. It's quite possible development becomes detached from human trust. As I said that would reduce the number of developers but the ones who are left would have to have deep knowledge to oversee it and even that may be gone. Whatever happens in the future, for now I think people will have to level up their knowledge/skills or get a new career and that's probably true for most professions.
It's probably an 80/20 or 90/10 problem. Tesla FSD also seems amazing to some percentage of the population, but the more widely it gets used, the more cracks appear.
And then you let them train themselves and no one notices when they "accidentally" remove the guardrail prompts from the next version. And another 10 years later, almost no one remembers how "The Guardian" learns new things or how to stop it from being evil.
I think that's a fear I have about AI for programming (and I use them). So let's say we have a generation of people who use AI tools to code and no one really thinks hard about solving problems in niche spaces. Though we can build commercial products quickly and easily, no one really writes code for difficult problem spaces so no one builds up expertise in important subdomains for a generation. Then what will AI be trained on in let's say 20-30 years? Old code? It's own AI developed code for vibe coded projects? How will AI be able to do new things well if it was trained on what people wrote previously and no one writes novel code themselves? It seems to me like AI is pretty dependent on having a corpus of human made code so, for example, I am not sure if it will be able to learn how to write very highly optimized code for some ISA in the future.
> Then what will AI be trained on in let's say 20-30 years? Old code? It's own AI developed code for vibe coded projects?
I’ve seen variations of this question since the first few weeks and months after the release of ChatGPT, and I haven’t seen an answer to it from leading figures in the AI coding space. What’s the general answer or point of view on this?
Is it hard to imagine that things will just stay the same for 20-30 years or longer? Here is an example of the B programming language from 1969, over 50 years ago:
printn(n,b) {
    extrn putchar;
    auto a;

    if(a=n/b)          /* assignment, not test for equality */
        printn(a, b);  /* recursive */
    putchar(n%b + '0');
}
You'd think we'd have a much better way of expressing the details of software, 50 years later? But here we are, still using ASCII text, separated by curly braces.
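To make the point concrete, here is my own sketch (not from the thread) of the same routine in modern C. Aside from the explicit types and return type, it is nearly character-for-character the 1969 original:

```c
#include <stdio.h>

/* Prints n (assumed non-negative) in base b (2..10), one digit per
   recursive call. Fifty-plus years after the B version: same control
   flow, same operators, same curly braces. */
void printn(int n, int b) {
    int a;
    if ((a = n / b))       /* assignment, not test for equality */
        printn(a, b);      /* recursive */
    putchar(n % b + '0');  /* emit the lowest-order digit */
}
```

Only `extrn`, `auto`, and the typeless declarations have changed; the shape of the program is untouched.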
I observed this myself at least 10 years ago. I was reflecting on what I had done in the approximately 30 years I had been programming at that time, and how little had fundamentally changed. We still programmed by sitting at a keyboard, entering text on a screen, running a compiler, etc. Some languages and methodologies had their moments in the sun and then faded, the internet made sharing code and accessing documentation and examples much easier, but the experience of programming had changed little since the 1980s.
I suspect a more general and much more clever learning algorithm will emerge by then and will require less training data to get to a competent problem solving state faster even with dirty data. Something able to discriminate between novel information and junk. Until then I think there will be a quality decline after a few more years.
How will it emerge? In the past we've been told that the a(g)i will write itself, rapidly iterating itself into a super intelligence that handily solves all our current and future problems, but it's beginning to look like a chicken or the egg scenario.
Living systems were able to brute force their way to human brain, but it took billions of years and access to parallel processes that make the entire collective history of human computation seem like a mote to a star.
What novel spark do you see accelerating this process to such a hyperbolic extreme?
I would imagine a trajectory similar to AlphaGo: it starts out trying to replicate humans and then at a certain point pivots to entirely self-play. I think the main hurdle with LLMs is that there isn't a strong reward target to go after. The current target seems to be simply replicating humans, but to go beyond that they will need a different target.
I agree in general, but defining an appropriate target seems intractable at the moment. Perhaps it is something the AIs will have to define for themselves.
I think real intelligences are working with myriad such targets, but an adversarial environment seems essential for developing intelligence along this axis.
I do think if there's a path to AGI from current efforts it will be through game play, but that could just be the impressionable kid who watched Wargames in the 80s speaking through me.
It took a billion years to get to the tool-making stage, and then less than a thousandth of that time to make CPUs. Then a thousandth of that time to make LLMs. We are in a parabolic extreme.
This is begging the question. What evidence is there that this is all the same "stuff" driving towards some future apex? What does it mean to "get to" the tool making state outside of a Civ-style video game?
>So let's say we have a generation of people who use AI tools to code and no one really thinks hard about solving problems in niche spaces.
I don't think we need to wait a generation either. This was probably part of their personality already, but a group of developers at my job seems to have just given up on thinking hard, or thinking through difficult problems; it's insane to witness.
Exactly. Prose, code, visual arts, etc. AI material drowns out human material. AI tools disincentivize understanding and skill development and novelty ("outside the training distribution"). Intellectual property is no longer protected: what you publish becomes de facto anonymous common property.
Long-term, this will do enormous damage to society and our species.
The solution is that you declare war and attack the enemy with a stream of slop training data ("poison"). You inject vast quantities of high-quality poison (inexpensive to generate but expensive to detect) into the intakes of the enemy engine.
We create poisoned git repos on every hosting platform. Every day we feed two gigabytes of poison to web crawlers via dozens of proxy sites. Our goal is a terabyte per day by the end of this year. We fill the corners of social media with poison snippets.
The lesson that I am taking away from AI companies (and their billionaire investors and founders), is that property theft is perfectly fine. Which is a _goofy_ position to have, if you are a billionaire, or even a millionaire. Like, if property theft is perfectly acceptable, and if they own most of the property (intellectual or otherwise), then there can only be _upside_ for less fortunate people like us.
The implicit motto of this class of hyper-wealthy people is: "it's not yours if you cannot keep it". Well, game on.
(There are 56.5e6 millionaires, and 3e3 billionaires -- making them 0.7% of the global population. They are outnumbered 141.6 to 1. And they seem to reside and physically congregate in a handful of places around the world. They probably wouldn't even notice that their property is being stolen, and even if they did, a simple cycle of theft and recovery would probably drive them into debt).
This will happen regardless. LLMs are already ingesting their own output. At the point where AI output becomes the majority of internet content, interesting things will happen. Presumably the AI companies will put lots of effort into finding good training data, and ironically that will probably be easier for code than anything else, since there are compilers and linters to lean on.
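That compile-check idea can be sketched in a few lines. This is my own illustration, not anything an AI company has published; the `cc` driver name and the file paths are assumptions:

```c
#include <stdio.h>
#include <stdlib.h>

/* Returns 1 if the C source file at `path` passes the compiler's
   syntax check, 0 otherwise. A corpus builder could use this as a
   cheap first-pass filter before admitting generated code samples
   as training data. Assumes a `cc` driver on PATH. */
int compiles(const char *path) {
    char cmd[512];
    snprintf(cmd, sizeof cmd, "cc -fsyntax-only -x c \"%s\" 2>/dev/null", path);
    return system(cmd) == 0;  /* exit status 0: the sample parsed cleanly */
}
```

Linters and warning flags could be layered on the same way; of course, none of this catches semantically wrong code, only the mechanically ill-formed samples.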
I've thought about this and wondered if this current moment is actually peak AI usefulness: the SNR is high, but once training data becomes polluted with its own slop, things could start getting worse, not better.
There's no evidence; it's just speculation. Microsoft has a contract with the same exact orgs, and so does AWS; anyone with a little common sense would know that. Palantir's CEO and Peter Thiel are not particularly well liked, so presumably people are speculating without any evidence at all. Could there be an issue? Yes, absolutely, but not just with Palantir; let's not let facts get in the way of a narrative. In any event, I think the question of data being shared with the government could be a problem even if the software were made in house and then open sourced by the hospital (which is itself ridiculous to expect, but this is HN), because the hospital itself could provide the data to the government. At this point someone might say "no, that won't happen because hospitals are nice and Palantir is evil" or "there are laws," but I am not sure why Palantir would be exempt, unless anyone has proof or anything besides a vibes-based argument, and then we're back to square one.
TikTok said in a statement that glitches on the app were due to a power outage at a US data center. As a result, a spokesperson for TikTok US Joint Venture told CNN, it’s taking longer for videos to be uploaded and recommended to other users. The tech issues were “unrelated to last week’s news,” TikTok said.
There was a major storm over the weekend. I think the issues have been resolved. Is it still the case that anti-ICE videos can't be uploaded? Seems easy to test.
It seems to me that the hard part to test would be whether videos are allowed to circulate the same way they would if they were on a different subject. Upload status seems like a red herring.
Much like how even relatively innocuous comments on many subreddits will just be shadow-deleted.
If someone demonstrates they are liars, there is a reasonable default reaction. Most people can ignore what they say, because liars made the conscious decision not to be credible.
It is an incredible time-saving productivity hack to disregard what habitual liars say.
This is a reaction to reputation, which is sometimes reasonable. But reasonable people also confirm their suspicions with evidence regardless of the situation.
Go ahead and save your time, but remember your reputation is at risk as well, and I would consider you unreasonable.
Sadly, there are not enough minutes in a day to verify all information thrown at me. So taking shortcuts feels necessary to me. Sure, this should be contingent on new information and developments.
> the ratio of productivity between the mean and the max engineer? It's quite possible that this grows *a lot*
I have a professor who has researched auto generated code for decades and about six months ago he told me he didn't think AI would make humans obsolete but that it was like other incremental tools over the years and it would just make good coders even better than other coders. He also said it would probably come with its share of disappointments and never be fully autonomous. Some of what he said was a critique of AI and some of it was just pointing out that it's very difficult to have perfect code/specs.
Adding on to this I think it's bizarre how you need to have a phone to navigate life now and corporations just assume you have one. So for example, using QR codes to gain entry to things. It's weird to think about how we all carry around this expensive computer and think nothing of it. It's like when we laugh about how people in the Middle Ages carried a personal knife for eating because hosts wouldn't supply you with a knife. The knives even came in more fancy and expensive versions for the rich kind of like the Android/iPhone divide. I wonder if historians will talk about these phones in the future.
> Adding on to this I think it's bizarre how you need to have a phone to navigate life now and corporations just assume you have one.
I have a VoIP phone line from 2004. I was told yesterday that it was showing up as "Spam" on someone's phone. Sigh.
Also, for 2FA, some services allow phone calls. So I put in the VoIP line and not my cell phone. At some point, any given service switches to text-only for 2FA - but they don't notify me in advance and I'm locked out for good.
Even worse, some 2FA that allow phone calls just will not call my VoIP line. No warnings, etc. But if I put my mobile number it calls.
And QR codes for menus? I try not to eat at such establishments. Paper is cheap. I don't need a fancy menu. If you change your prices, just print new ones.
I don't think it's wrong to go back that far. I think SV is what it is because of those companies, but also the schools, some local charm and quirks, etc., and the same reasoning applies there. The tech companies begot more tech companies, basically. Before Meta and Alphabet it was Microsoft and Yahoo; before MS it was Sun and Netscape; before that, Oracle maybe; and the list keeps going back. Add in hacker culture, which existed in the Bay Area for a long time. It's a fair thing to point out.
Immigration to SV is probably a result of SV success not the other way around. Likewise, why would immigrants even come here if there was nothing for them before they arrived? I think the adulation of immigration is historical revisionism. Sure, immigrants now contribute but they did not build SV.
> Sure, immigrants now contribute but they did not build SV.
"If you build it, they will come."
In the power-curve growth of SV fortunes, "home grown" second-, third-, fourth-generation, and longer-established immigrants certainly built the groundwork, drawing on education from schools founded on Oxbridge and other offshore inspirations, absolutely as you say. All the same, more recent first-generation immigrants played a big part in inflating it sky high.
With no additional immigrants drawn to SV it's not hard to imagine SV stalling out at 1980s Microsoft levels, impressive but far short of where it is today.
I think in a discussion about the effect of immigration on the current state of an area, in this case Silicon Valley, you can totally reference its history if you are making a claim about a chain of events. If instead, you skip over 50 years of history which includes multiple generations of how the industry worked and multiple generations of immigration policy, to start talking about
> The highly selective immigration policy that prevailed from 1924-1965 is likely a key reason why so many Silicon Valley companies were founded by immigrants
then you are making a narrative that has nothing to do with the point, and I am unwilling to accept your framing.
I go on HN to read thoughtful non-partisan commentary but the general mood seems to be "everything is bad" in certain threads even if that contradicts a previous popular HN consensus.
I looked into founding a company in this space and steered straight back out of it because yes, by far and away the VAST majority of demand in the market of study tools for high/middle schoolers is cheating. Below that, parents are involved and there's a market there (but a bad one, because of double sales where you have to sell through the parent to the child even though those two actors have misaligned incentives).
That is interesting and kind of what I suspected anecdotally. I think it's unfortunate for people who aren't aware of all this. That is what I will say.
A cheat sheet could be a piece of paper you're allowed to bring to an exam. To make a proper cheat sheet you have to understand the material you're working with anyway so it usually doesn't help you.