> In December 2021, Austin’s median rent was $1,546, near its highest level ever and 15% higher than the U.S. median ($1,346). By January 2026, Austin’s median rent had fallen to $1,296, 4% lower than that of the U.S. overall ($1,353).
For comparison, San Francisco's median one-bedroom rent was $2,810 in December 2021. By March 2026, it was $3,597, an increase of 28%.
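A quick sketch checking the arithmetic behind the quoted percentages (rent figures taken from the comments above; the helper name is just for illustration):

```python
def pct_change(old, new):
    """Percent change from old to new, rounded to the nearest whole percent."""
    return round((new - old) / old * 100)

# Austin vs. U.S. median rent, December 2021: Austin ~15% above
print(pct_change(1346, 1546))

# Austin vs. U.S. median rent, January 2026: Austin ~4% below
print(pct_change(1353, 1296))

# San Francisco median one-bedroom, December 2021 -> March 2026: ~28% increase
print(pct_change(2810, 3597))
```

All three quoted deltas check out against the raw figures.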
It is well known that there was a brief moment in time when people were abandoning San Francisco and “moving to Texas” (mostly Austin), which coincides with when rents peaked in Austin. I wouldn't be surprised if that was also when San Francisco rents were down.
Now we're seeing a reversal of that trend: SF is hot again and Austin is not. So it's not exactly a straightforward comparison. It could explain the SF -> Austin and back migration.
So we've got point in time comparisons between Austin and itself; the change in delta between Austin and a particular city known for restricting housing; and the change in delta between Austin and national median rents. They all support the idea that increasing supply tends to decrease costs, which by a massive coincidence is what basic economic theory suggests.
Of course, people can come up with an ad hoc explanation for why Austin's prices happened to decrease against each of those data points. But is there a single principled way to present the data that suggests increasing supply in Austin did not decrease costs?
Inflation isn't inevitable, especially in the long term. But of course it depends on implementation.
The goal of a UBI is to make sure people get the essentials they need to live. Right now, people get those essentials one way or another (otherwise, they'd be dead; and to the extent people starve to death in the developed world, it's an issue of distribution, not production or money). This makes the UBI an accounting trick: there are no additional goods that need to be produced, and it just shifts costs from welfare, charity, family and friends, etc. to the UBI program. This is not inflationary, and it frees up human effort to focus on higher needs than scraping together a basket of things merely to live.
A lot of the time, though, people also want some non-essential but still pretty important things covered, which is a bit trickier. In this case, there is the potential for more money to be chasing a fixed supply of goods. This will drive inflation in the short term. However, in the longer term, capital will be redeployed to capture that increased demand (while being deployed away from the desires of those taxed to fund the UBI).
This all assumes that the UBI is revenue neutral; if not, yeah, we will get a lot of inflation.
I find it hard to believe shifting spending from welfare to cash won’t result in inflation. It’s about accessibility and how liquid the assistance is. It’s also about how evenly that’s distributed.
To the parent comment’s point, if UBI is evenly distributed across everyone (“Universal”) and exists as liquid spending power (“Income”), there’s no way that doesn’t result in a rate of inflation that perfectly counteracts the existence of UBI.
Prices are only low when the seller wants to scale and reach more buyers. If low- and no-income buyers disappear, why would prices stay low? If there were an infinite number of high-income buyers, cheap products wouldn't even exist in a freely capitalistic system. Instead we have a limited number of buyers across a wide range of income levels, which drives a wide range of prices and sellers competing at every price point.
It feels like the laws of physics: once you cut off one side of the scale, it flings in the other direction. I also hate everything I just said; I would love to exist in a world that wasn't subject to these forces. It just seems impossible in a freely capitalistic system.
The point is that, if limited to a level that just covers essential goods, it won't change their distribution, just their payer. If it did change the distribution of the good, then it wasn't essential (because it's the floor; without it, the consumer of the good would be dead; above it, and the vast majority of people immediately spend their income on luxury substitutes).
That is, to be clear, a much lower floor than what many people mean by "essential," which has undergone a kind of concept creep in modern discussion and might, depending on the person, range from a cell phone to an education at a private university to owning a condo in San Francisco. My "essential" here means enough caloric and nutrient intake to maintain a livable body mass; a couple sets of plain tee shirts and jeans; and a minimal shared living space in a low cost of living area. That's well below what the US considers the current poverty line, and a quite bleak existence (most people would wonder what's even the point of it).
Income beyond that would drive inflation, at least in the short term.
> Gamergate or GamerGate (GG) was a loosely organized misogynistic online harassment campaign motivated by a right-wing backlash against feminism, diversity, and progressivism in video game culture. It was conducted using the hashtag "#Gamergate" primarily in 2014 and 2015. Gamergate targeted women in the video game industry, most notably feminist media critic Anita Sarkeesian and video game developers Zoë Quinn and Brianna Wu.
Grokipedia's:
> Gamergate was a grassroots online movement that emerged in August 2014, primarily focused on exposing conflicts of interest and lack of transparency in video game journalism, initiated by a blog post detailing the romantic involvement of indie developer Zoë Quinn with journalists who covered her work without disclosure. The controversy began when Eron Gjoni, Quinn's ex-boyfriend, published "The Zoe Post," accusing her of infidelity with multiple individuals, including Kotaku journalist Nathan Grayson, whose article on Quinn's game Depression Quest omitted any mention of their prior personal contact. This revelation highlighted broader patterns of undisclosed relationships and coordinated industry practices, such as private mailing lists among journalists, fueling demands for ethical reforms like mandatory disclosure policies.
I don't care about "Gamergate" and never use Grokipedia, but Wiki definitely has a stronger slant: it's as if an article about Black Lives Matter started with a statement that it was a campaign meant to scam people to pay for mansions for leadership.
Wikipedia's assessment is more accurate. Wikipedia does go on in its second paragraph to explain the context of the start of the campaign, including "The Zoe Post" and the accusations of conflict of interest. But the broader impact of Gamergate was as a misogynistic online harassment campaign, and Wikipedia is correct to make that the central part of its summary. Just because Grokipedia is more reluctant to state a conclusion does not make it less biased.
As somebody who supported GG for the first month or so, Wikipedia has the better intro from where things stand in 2026. GG started by piggybacking on general distrust of gaming journalists, but was quickly consumed by misogyny.
An article doesn't avoid bias by avoiding unpleasant facts.
Well, I'm naively assuming Grokipedia is being sympathetic to the cause(?) of Gamergate, but if the best thing they could lead the article with was essentially "It all started when someone got mad at his ex-girlfriend and her many other boyfriends and wrote something that went viral" ...
... it does sound like an online harassment campaign.
It was. In hindsight it signaled the beginning of the mass weaponization of the internet via social media. It also was NOT grassroots lol. It was very specifically and intentionally inflamed and groomed and funded by people like Steve Bannon and his good buddy Jeffrey Epstein. It wouldn't have such a big Wikipedia article without them.
Plenty of scientific authorities believed in it through the 19th century, and they didn't blindly believe it: it had good arguments for it, and intelligent people weighed the pros and cons of it and often ended up on the side of miasma over contagionism. William Farr was no idiot, and he had sophisticated statistical arguments for it. And, as evidence that it was a scientific theory, it was abandoned by its proponents once contagionism had more evidence on its side.
It's only with hindsight that we think contagionism is obviously correct.
> It's only with hindsight that we think contagionism is obviously correct.
We, the mere median citizens on any specific topic outside our expertise, certainly do not. And this also has an impact, as social pressure, in terms of which theory is going to be given more credit.
That's not actually specific to science. Even theological arguments can be dumb as hell or super refined by the smartest people able to thrive in their society of the time.
Correctness of a theory, and how well it matches the collected data, is only part of what makes for mass adoption, and not necessarily the most heavily weighted part. It's interdependence with feedback loops everywhere, so even the data collected, the tools used to collect and analyze it, and the metatheoretical frameworks used to evaluate different models are nothing like absolute objective givens.
The miasma theory of disease, though wrong, made lots of predictions that proved useful and productive. Swamps smell bad, so drain them; malaria decreases. Excrement in the street smells bad, so build sewage systems; cholera decreases. Florence Nightingale implemented sanitary improvements in hospitals inspired by miasma theory that improved outcomes.
It was empirical and, though ultimately wrong, useful. Apply as you will to theories of learning.
I end up shrugging. For a Claude Code power user, today, a full day's use consumes less electricity than a morning commute in an electric car. To say nothing of the costs of keeping your workstation running, your building heated or cooled, etc. Not quite a rounding error, but a relatively minor component of overall usage.
At least for programming usage the power usage seems worth it. For starting up 1 million bots to argue with each other on facebook it's obviously a total waste.
At any rate, the power usage will become more apparent when these products stop being subsidised and it is charged directly to the end user.
> you'll never find altman saying anything like "our agreement specifically says chatgpt will never be used for fully autonomous weapons"
To be fair, Anthropic didn't say that either. Merely that autonomous weapons without a HITL aren't currently within Claude's capabilities; it isn't a moral stance so much as a pragmatic one. (The domestic surveillance point, on the other hand, is an ethical stance.)
They specifically said they never agreed to let the DoD use anthropic for fully autonomous weapons. They said "Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now: Mass domestic surveillance [...] Fully autonomous weapons"
Their rationale was pragmatic. But they specifically said that they didn't agree to let the DoD create fully autonomous weapons using their technology. I'll bet 10:1 you won't ever hear Sam Altman say that. He doesn't even imply it today.
Not sure how that's relevant. I never said Dario was taking an ethical stand. I said they did not agree for Claude to be used for fully autonomous weapons. Now, compare that to OpenAI, whose agreement does allow fully autonomous weapons.
Why do you suppose OpenAI's deal led to a contract, while Anthropic's deal (ostensibly containing identical terms) gets it not only booted but declared a supply chain risk?