Hacker News

I’ve been browsing with Private Relay since the day it became available. What’s this intolerable latency you’re talking about?


Browsing is not the same as using a personal assistant.

First, it takes far less compute to serve a web page than to run an LLM query. LLMs are slow even if you eliminate all network latency.

Second, your expectations when browsing are not the same as when using a personal assistant.

Right now, even when I simply ask Siri to set a timer, it takes more than a couple of seconds. Add an actual GPT to the mix and it's laughable.

In any case, even with a private relay, Apple's phrasing does not rule out sending device identifiers and letting ClosedAI/Microsoft build a shadow profile of you (without storing requests verbatim).


Nope, you’re moving the goalposts. You were talking about the latency of making a network call. I pointed out that Apple’s current proxying architecture has low latency for web browsing, with orders-of-magnitude-larger requests moving through it. We’re not going to bring GPT slowness into the mix, because that’s not what we were discussing.


No, I meant the cumulative latency that grows with every hop. You can’t fool physics: not proxying is simply faster, and when the upstream server is already slow, those extra seconds matter to any UX designer worth their salt.
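The cumulative-latency point is just additivity: each proxy hop contributes its own round-trip time on top of the server's processing time. A back-of-the-envelope sketch, with all numbers being illustrative assumptions rather than measurements of any real relay:

```python
def total_latency_ms(hop_rtts_ms, processing_ms):
    """End-to-end latency: the RTT of every hop in the path adds up,
    plus however long the server itself takes to respond."""
    return sum(hop_rtts_ms) + processing_ms

# Direct connection: one RTT to the model server, slow LLM inference.
direct = total_latency_ms([80], processing_ms=3000)    # 3080 ms

# Relayed: two hypothetical proxy hops inserted into the path.
relayed = total_latency_ms([80, 30, 30], processing_ms=3000)  # 3140 ms

print(f"direct: {direct} ms, relayed: {relayed} ms")
```

With a 3-second LLM response the extra hops are a small fraction of the total, which is the crux of the disagreement: the hops always add latency, but whether those milliseconds are noticeable depends on how slow the server already is.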



