That's not quite accurate. Every core has access to the entire L3, including the L3 on an entirely different socket. CPUs communicate through caches, so if a core just plain couldn't talk to another core's cache then cache coherency algorithms wouldn't work. Though a core can access the entire cache, the latency is higher when going off-die. It is really high when going to another socket.
The first generation of Epyc had a complicated hierarchy that made latency quite hard to predict, but the new architecture is simpler. A CPU can talk to a cache in the same package but on a different die with reasonably low latency.
In Zen1, the "remote L3" caches had longer read/write times than DDR4.
Think of the MESI messages that must happen before you can talk to a remote L3 cache:
1. Core#0 tries to talk to L3 cache associated with Core#17.
2. Core#17 has to evict data from L1 and L2, ensuring that its L3 cache is in fact up to date. During this time, Core#0 is stalled (or working on its hyperthread instead).
3. Once done, then Core#17's L3 cache can send the data to Core#0's L3 cache.
----------
In contrast, step#2 doesn't happen with raw DDR4 (no core owns the data).
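The sequence above can be sketched as a toy latency model. All the numbers here are invented for illustration (real Zen1 latencies varied with hop count); the point is only that a dirty remote line pays an extra writeback step that plain DRAM never does:

```python
# Toy model of the cross-CCX read sequence described above.
# All latencies are made-up illustrative numbers, not measurements.

WRITEBACK_NS = 25   # assumed: remote core flushes dirty L1/L2 line to its L3
TRANSFER_NS = 60    # assumed: cache-to-cache transfer across the fabric
DRAM_NS = 80        # assumed: a plain DDR4 access

def remote_l3_read(line_dirty_in_remote_core: bool) -> int:
    """Total latency for Core#0 reading a line homed in Core#17's L3."""
    latency = 0
    if line_dirty_in_remote_core:
        latency += WRITEBACK_NS   # step 2: evict the line from remote L1/L2
    latency += TRANSFER_NS        # step 3: ship the line across the fabric
    return latency

# DRAM skips step 2 entirely, because no core "owns" the data.
print(remote_l3_read(True), remote_l3_read(False), DRAM_NS)
```

With these (assumed) numbers, the dirty-line case comes out slower than DRAM, which matches the Zen1 observation above.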
This fact doesn't change with the new "star" architecture of Zen2 or Zen3. The I/O die just makes it a bit more efficient. I'd still expect remote L3 communications to be as slow, or slower, than DDR4.
First off, it's not a direct comparison. The Epyc has one L3 cache per chiplet. This means that latency is not uniform across the entire L3 cache. This was a serious concern on the first generation of Epyc, where accessing L3 could take anywhere from zero to three hops across an internal network. AMD has greatly improved the problem on the more recent generations by switching to a star topology with more predictable latency.
That said, there are two major reasons:
1. Epyc is on a chiplet architecture. Large chips are harder to make than small ones. Building two 200mm^2 chips is cheaper than building one 400mm^2 chip. Since Epyc has a chiplet architecture, this means they can put more silicon into a chip for the same price. This means that Epyc can be just plain bigger than the competition. This comes with some complexity and inefficiency but has, so far, paid off in spades for them.
2. Epyc is on a newer process. This means AMD can fit more transistors even with the same area. Intel has had serious problems with their newer processes, so this is not an advantage AMD expected to have when designing the part. The use of a cutting-edge process was, in part, enabled by the chiplet architecture. It is possible to fabricate several small chips on a 7nm process even though one large chip would be prohibitively expensive, and AMD has been able to use a 14nm process in parts of the CPU that wouldn't benefit from a 7nm process to cut costs.
The first point is serious cleverness on the part of AMD. The second point is mostly that Intel dropped the ball.
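The cost argument in point 1 can be made concrete with a back-of-the-envelope Poisson yield model (yield = exp(-D*A)). The defect density, wafer cost, and usable area below are illustrative guesses, not AMD's or TSMC's actual numbers:

```python
import math

# Poisson yield model: Y = exp(-D * A), where D is defect density
# (defects/cm^2) and A is die area (cm^2). All constants are assumptions.
D = 0.5              # defects per cm^2 (assumed)
WAFER_COST = 10_000  # dollars per wafer (assumed)
WAFER_AREA = 706     # roughly the usable cm^2 on a 300 mm wafer

def cost_per_good_die(die_area_cm2: float) -> float:
    dies_per_wafer = WAFER_AREA // die_area_cm2  # ignores edge losses
    yield_frac = math.exp(-D * die_area_cm2)     # fraction of defect-free dies
    return WAFER_COST / (dies_per_wafer * yield_frac)

small = cost_per_good_die(2.0)  # 200 mm^2 chiplet
big = cost_per_good_die(4.0)    # 400 mm^2 monolithic die
print(round(2 * small), round(big))
```

Because yield falls exponentially with area, two good 200mm^2 chiplets cost well under half as much as one good 400mm^2 die under this model, even before counting the binning flexibility chiplets allow.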
Aren't they already, for big (desktop/workstation/server) chips? I'd say Zen3 is the state of the art in that market, and it uses a mixed process: AMD's I/O dies are made on GlobalFoundries' 12nm node.
The mobile market cares more about efficiency than easily scaling up to much bigger chips, so the M1 and other ARM chips are probably going to ignore this without much consequence for smaller chips.
Intel still tops sales because of non-perf related reasons like refresh cycles, distrust of AMD from last time they fell apart in the server space, producing chips in sufficient quantities unlike the entire rest of the industry fighting over TSMC's capacity, etc.
You are. Pretty much every other manufacturer offers adaptive cruise control, lane keeping, and the like. They're just honest about their capabilities.
Comparing Tesla's "full self driving" features to adaptive cruise control or lanekeeping is a very dishonest representation of the capabilities.
edit: I'm pretty anti-Tesla (strongly dislike the focus on the central screen and other driver-hostile 'features', don't think "full self driving" should be advertised as such, etc). But I still think their capabilities are significantly above the other car manufacturers enough that they shouldn't be directly compared in that way. But I guess readers like the fact that calling them "adaptive cruise control" and "active lanekeeping" is more accurate and much less bullshitty than Tesla.
Tesla doesn’t have full self driving yet. For the core autopilot features, reviews I’ve seen of GM SuperCruise rate it ahead of Tesla for lane-keeping and adaptive cruise control. The other advanced features that Tesla offers - navigation on AP and summon - are not much more than parlor tricks. I say this as a Tesla driver with access to those features on my car ...
It's a shame that the necessary work of criticizing our media is so often undertaken dishonestly.
>When Donald Trump made the same calculation, saying he couldn’t cut ties
This is not what happened.
Donald Trump denied that the Saudi royalty was known to be behind the murder of Khashoggi despite the conclusions of U.S. intelligence. He was part of the coverup, obscuring the known facts to preserve his relationship with bin Salman. Joe Biden, on the other hand, said bin Salman was responsible for his murder but decided not to penalize him.
Both of those decisions are unacceptable to me. That does not make them the same.
Say what you will about the New York Times, but they rarely tell lies as brazenly as Mr. Taibbi.
I certainly question the orthodox view that software should be fault-tolerant and horizontally-scalable. If programmer time is more valuable than machine time, then why can't my team get a single server with a few TiB of RAM and a few hundred cores and forget all the gnarly problems of distributed computing? Why do we accept that our hardware is unreliable and then spend so much effort making our software tolerate failure?
I do not know whether or not mainframes are the right solution, though. I don't really know anything about them. I don't know whether or not the architecture is the correct approach.
>IBM has grown usage as measured by MIPS, a method of calculating the raw speed used, by 350% over the past ten years.
That's an awful metric. MIPS isn't even a good way of measuring aggregate performance and aggregate performance is a terrible way of measuring adoption.
> If programmer time is more valuable than machine time
This is probably not true in industries such as banking, manufacturing, agriculture, etc. I worked as a field engineer in a food processing plant once, during the peak season for the crop being processed. The plant manager told me that downtime could cost the plant up to a quarter million USD per day.
> Why do we accept that our hardware is unreliable and then spend so much effort making our software tolerate failure?
From what I understand, the whole point of mainframe design is to tackle the issue that hardware is unreliable. Mainframes feature many hardware redundancies, moreso than a commodity rack server.
> I do not know whether or not mainframes are the right solution, though. I don't really know anything about them. I don't know whether or not the architecture is the correct approach.
Mainframes are designed for throughput, not performance. They are good at things like transaction processing, not machine learning.
This does not, by the way, imply that the machines are working correctly and human drivers are the problem. These cars are rear-ended more often than human drivers, which suggests they stop more suddenly or at times humans don't expect.
Slamming on the brakes is a flaw. Even if every car on the road had the ability to respond instantly to the car ahead, being forced to brake hard still risks loss of control especially in bad conditions.
Also, having automatic emergency braking on all vehicles is at least decades out. It will never work on motorcycles or scooters or bicycles, period. It cannot be retrofitted onto existing cars. Requiring it for all new cars would require pretty hamfisted regulations. I expect human drivers to be widespread for at least the rest of my life.
> Even if every car on the road had the ability to respond instantly to the car ahead, being forced to brake hard still risks loss of control especially in bad conditions
Driving on the road is not some kind of high-speed car chase. If you leave enough space to the car in front for the current road conditions and speed, you won't have to brake hard. Anyone can adjust their speed to increase the distance to the car in front. Arguments like these often treat driving too close or too fast as inevitable facts of life.
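The "enough space" point is simple kinematics: stopping distance is reaction distance plus v^2/(2a). A rough sketch, with assumed reaction time and deceleration values (not traffic-engineering standards):

```python
# Rough stopping-distance estimate: reaction distance + kinematic braking
# distance v^2 / (2a). The defaults below are illustrative assumptions.

def stopping_distance_m(speed_mps: float, reaction_s: float = 1.5,
                        decel_mps2: float = 5.0) -> float:
    reaction_dist = speed_mps * reaction_s          # distance covered before braking
    braking_dist = speed_mps ** 2 / (2 * decel_mps2)
    return reaction_dist + braking_dist

# At highway speed (30 m/s, about 108 km/h):
d_dry = stopping_distance_m(30.0)                  # assumed dry-road deceleration
d_wet = stopping_distance_m(30.0, decel_mps2=2.5)  # assumed wet-road deceleration
print(round(d_dry), round(d_wet))
```

The wet-road figure comes out much longer, which is the whole argument: the safe gap depends on conditions, and it is always in the driver's power to open it up.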
> This does not, by the way, imply that the machines are working correctly and human drivers are the problem. These cars are rear-ended more often than human drivers, which suggests they stop more suddenly or at times humans don't expect.
I had a look and couldn't find an answer; does their increased rear-ending frequency come with a decrease in head-on collisions? Is Waymo getting rear-ended to avoid being t-boned (or a head-on collision with someone else)?
The problem with that is, in part, that they ignored the spirit of the donation, but also that the proper amount for a university to spend on a scoreboard is twelve dollars and thirty-seven cents.
OK, that's a bit unrealistic. I could build them one for a few hundred, though, and they could pay a student $50 to flip over the numbers during the game.
The proper amount to spend is zero. There’s no reason for universities to be running minor league football teams with unpaid labor. On the contrary that’s the exact opposite of “charitable”.
I mean, a few hundred for a scoreboard seems right. Even a few thousand for a piece of infrastructure that can be used 20+ years for several different sports is not so unreasonable. Even rural middle schools will sometimes spend that much on scoreboards.
In university I participated in a club sport "funded" exclusively through the student government -- really more of a tax rebate than anything else, since the climbing "team" got back only 60% of what we paid in student fees, to buy ropes, biners, rent a van for outings. That sort of stuff.
College sports do serve a real community-building purpose. Just... a few hundred or maybe low thousands total per sport instead of a few tens of millions for the main sport.
Definitely. One of my modest proposals that will never happen is that colleges go back to teaching, and football and basketball become more like baseball, where they have minor leagues as a way of getting players. The jumbling together of college and professional-in-all-but-name athletics is absurd, and egregious exploitation of "student" athletes to boot.
Fundamentally, you're correct. However, when working for a university, I was well aware that private sector donations (and maybe even state funds) went up or down depending on how well the football team did. A losing record for the football team ought not to result in lower private sector donations towards scientific research or scholarships at that university, but apparently at most universities it does. Universities that ignore this fact do so at their peril.
I don’t expect universities to spontaneously do the right thing. I’m hoping the courts force them to at least stop exploiting the pro athletes that make them so much money.
In other countries, things not tied directly to a university's educational mission are still available, but as local community organizations. There are drama clubs, sports clubs, but available to the entire community, not just college students.
There is no reason why the state should spend its educational subsidies on funding auxiliary recreational activities solely for the benefit of university students when a good GPA is not necessary to recreate, nor do university students have a greater need of recreation than anyone else.
I think most countries have some form of student sports as part of universities, just not the professionalized kind found in the U.S. Students study better if they have something else in their lives.
The problem is that it is really hard to know where to draw the line and the US as usual takes it fully to the extreme.
All that being said, I thought the big sports were a revenue driver for the schools that chose to participate in the bigger leagues?
Intramural, sure, you can organize some soccer games between adjacent dorms as part of the responsibilities of the dorm admin, but in terms of paid programs, no.
It has to keep working for years or decades, be maintainable, survive bad weather conditions, etc. Even if you made the scoreboard out of wood and painted it, it would have to be large enough for the whole stadium to see, the paint would fade, it would take weeks to make, etc.
Well, Gates is a bad guy. He was a robber baron who decided that stealing a whole lot of money makes him the world's leading expert on everything. He has no training in pedagogy, epidemiology, sanitation, or really anything else relevant to the missions of his organization, yet he seems to actually believe he's a Great Scientist Using Evidence-Based Approaches to Save the World. He lives in a $200 million house yet has funded a billion-dollar propaganda campaign to convince people he's generous.
I laugh at Elon Musk when he thinks getting fired from Paypal means he knows how to run a car company. I also laugh at Bill Gates when he thinks stealing the worst desktop operating system makes him a doctor.
We can't overthrow our masters. At least let us make fun of them.
> He has no training in pedagogy, epidemiology, sanitation, or really anything else relevant to the missions of his organization
I'm confused by this statement. It only counts as training if you did it around age 20? Or what makes you think that someone who is spending this much time, effort and money on a given topic would not arrange for appropriate (or rather, excellent) training on it?
Or is this something you conclude by starting from the assertion that Bill Gates must be a bad guy?
You can certainly make the argument that he was a bad guy, but I don't know if I can continue to draw that conclusion. He built that house over 30 years ago for $60M. While that is certainly excessive, it's not like he just bought a $200M house while trying to be a philanthropist. He has made at least $45B in charitable contributions, so the house is worth 0.4% of that. Is the house really relevant?
Isn't it possible he has changed since then? What you call a "billion-dollar propaganda campaign" I call billions of dollars worth of charitable contributions. He may not have formal training in any of those things, but he can pay people with the training to advise him and make the decisions. Do you think he is just making all of them in a vacuum? Money can buy you a lot of experts to consult.
What would you prefer him to do with the money, if not attempt to do some good with it?
>yet has funded a billion-dollar propaganda campaign to convince people he's generous
And let's not forget the scummy secondary benefit of Gates donating tons of money to education. It will undoubtedly have some influence on what computers schools use and teach, further entrenching Microsoft's place.
The misinformation about Prop. 47 is stunning to me. It did not change anything relating to violent crime, nor did it make petty theft legal. Petty theft is still punishable by jail time. Robbery is still a felony.
What it actually did is raise the line between misdemeanor and felony theft from $450 to $950.
And the market agrees with you! Spending $1.5 billion on Bitcoin during the middle of a massive speculative bubble is just the sort of responsible leadership investors want to see in these uncertain times. That's why Tesla's stock has been as rock-solid as Bitcoin since they announced their purchase.