
Apple recently introduced RDMA support in macOS. They are probably trying to push people buying the 512GB configuration toward buying more of the 256GB configuration and clustering them together.
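For a rough idea of what that clustering could look like: a minimal sketch in Python using MLX's distributed API (mlx.core.distributed), which today runs over MPI or a TCP ring backend; RDMA would presumably sit underneath a transport like this. The dimensions and sharding scheme are illustrative, not Apple's actual mechanism.

    # Hypothetical sketch: tensor-parallel matmul across several Macs using
    # MLX's distributed API. One process runs per machine; RDMA would
    # presumably accelerate the transport underneath.
    import mlx.core as mx

    group = mx.distributed.init()            # one process per Mac
    rank, size = group.rank(), group.size()

    # Row-parallel sharding: each node holds d_in/size rows of the weight
    # matrix and the matching slice of the activations (dims illustrative).
    d_in, d_out = 8192, 8192
    w_shard = mx.random.normal((d_in // size, d_out))

    def row_parallel_matmul(x_shard):
        # x_shard: (batch, d_in // size), local to this node
        partial = x_shard @ w_shard          # (batch, d_out) partial product
        return mx.distributed.all_sum(partial, group=group)

Each Mac would run one copy of the script, launched with something like mlx.launch --hosts mac1.local,mac2.local script.py, and the all_sum keeps the shards in sync.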



A consumer computer company is not going to push people towards building a miniature HPC cluster. Closest we'll ever get to that is multiple GPUs for video games.*

*Nvidia is no longer a primarily consumer company, so all the other GPU stuff is no counterpoint


One could argue that if you are buying 512GB RAM machines, you are not a typical consumer.

But you're also in the tiny minority of Apple customers, because most people who need 512GB of RAM are not looking at Apple products.

What else!?

> A consumer computer company

Apple isn't just a consumer computer company. Both iPhones and Macs have very large business markets. In fact, I'd argue that the primary reason Apple hasn't locked down macOS as much as iOS is that it'd absolutely kill the demand from software developers.


Apple isn’t really a consumer company. It does both consumer and enterprise stuff. Just look at all the fleet-management tooling it ships for iOS and macOS.

And besides that, high-end MacBook Pros and Mac Studios are workstation-class computers, not consumer-level computers.


It’s definitely a consumer company when you compare it to Microsoft.

The comparison is completely irrelevant.

The second I saw LLMs run on GPUs, I started trying to predict the last year that NVIDIA produces a consumer GPU product.

I am doing the reverse: trying to predict the last year that LLMs use NVIDIA GPUs. It's just an accident of history that video game cards are useful for LLMs, and there is absolutely nothing NVIDIA is doing from a design standpoint that the big hyperscalers can't do on their own, cutting NVIDIA out and doing a better job of it because they know their own unique needs. The only advantage NVIDIA has is supply chain relationships, and those take time to establish, but once that's done we'll see all the big companies rolling their own silicon and no longer relying on NVIDIA.

That does make sense, and I'm also certain it will happen. I'm just saying that at this point NVIDIA is all in on "AI", so it has no choice: it will abandon its original customer base and product.

I don't think there will ever be a hard announcement. Just one day people will start asking when the next GPU line is coming out, and it will never come. They won't even plan it; they simply won't have the skills to do GPU design anymore.


Weren't 512GB models selling like hot cakes to the complete surprise of Apple? Wait time was up to 3 months last time I checked. Glad I got mine last October.

Any custom configuration takes a while for them to prepare. I remember my M3 Max took 2 or 3 months to arrive.

Good thing is they only seem to charge when the device ships, so if an M5 comes along you should be able to cancel the M3 Ultra and get the M5.


“Like hot cakes” is relative.

> The 512GB Mac Studio was not a mass-market machine—adding that much RAM also required springing for the most expensive M3 Ultra model, which brought the system’s price to a whopping $9,499.

The number of people willing to spend $10,000 on a computer is pretty tiny. Maybe they are common enough in HN circles, but I doubt anyone at Apple is losing sleep over them.


Of course, a $10,000 workstation for a corporation working on AI products might just be a necessary tool.

Just a guess, but I think it’s entirely possible that Apple sold through the full production run they intended for this generation of the machine, and they don’t want to order a new batch before the next generation of processors comes out.

I have to think that Apple is close to replacing the M3 Ultra with an M5 Ultra or something of the sort.


A retailer told me they sold more 512GB RAM Mac Studios than any other configuration. N=1, I know, but still...

There is a $6,000 value-add service to configure your Mac Mini with AI and have it accessible over iMessage.

Curious, what do you use it for?

Huge local thinking LLMs to solve math and for general assistant-style tasks: models like Kimi-2.5-Q3, DeepSeek-XX-Q4/Q5, Qwen-3.5-Q8, MiniMax-m2.5-Q8, etc., which bring me into Claude4/GPT5 territory without any cloud. For coding I have another machine with 3x RTX Pro 6000 (mostly Qwen subvariants), and for image/video/audio generation I have 2x DGX Sparks from ASUS.
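For reference, a minimal sketch of what serving one of those big quantized models locally might look like, using llama-cpp-python's Metal backend; the GGUF filename and parameters here are illustrative placeholders, not the poster's actual setup.

    # Hypothetical sketch: running a large quantized GGUF model on a 512GB
    # Mac Studio via llama-cpp-python. Filename and settings are placeholders.
    from llama_cpp import Llama

    llm = Llama(
        model_path="deepseek-q4_k_m.gguf",   # placeholder for a huge model
        n_gpu_layers=-1,                     # offload all layers to the GPU
        n_ctx=16384,                         # context fits in unified memory
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Prove sqrt(2) is irrational."}],
        max_tokens=512,
    )
    print(out["choices"][0]["message"]["content"])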

We must be twins; I've got the same three working in a cluster.

I was really excited to see where the GB300 desktops would end up, with 768GB RAM, but now that data is leaking / popping up (the Dell appears to be only 496GB), we may be in the $60-100k range, and that's well out of my comfort zone.

If Apple came out with a 768GB Studio at $15k I'd bite in a heartbeat.

https://www.dell.com/en-us/lp/dell-pro-max-nvidia-ai-dev


Yeah, I didn't want to spend more than $50k on a local inference stack. I can amortize it in my taxes so it's not a big deal, but beyond that it would start eating into my other allocations. I might still get an M5 Ultra if it pops up and the benchmarks look good, possibly selling the M3 Ultra.

Probably an Electron app or two.


