Seems pointless. Go should focus on refactoring its core libraries, especially net and net/http, for performance, because nbio, gnet and others are kicking its ass. And that is sad, as third-party libraries should never perform better than the standard library.
Also, Swiss tables were a great addition to Go's native maps, but then again there are faster libraries that can give you 3x the performance (in the case of numeric keys).
For regular connection scenarios, nbio's performance is inferior to the standard library due to goroutine affinity, lower buffer reuse rate for individual connections, and variable escape issues.
From gnet's README:
> gnet and net don't share the same philosophy in network programming. Thus, building network applications with gnet can be significantly different from building them with net, and the philosophies can't be reconciled.
> [...]
> gnet is not designed to displace the Go net, but to create an alternative in the Go ecosystem for building performance-critical network services.
Frankly, I think it's unfair to argue that the net package isn't performant, especially given its goals and API surface.
However, the net/http package is a different story. It indeed isn't very performant, though one should be careful to understand that that assessment is in relative terms; net/http still runs circles around some other languages' standard approaches to HTTP servers.
A big part of why net/http is relatively slow is also down to its goals and API surface. It's designed to be easy to use, not especially fast. By comparison, there's fasthttp [1], which lives up to its name, but is much harder to work with properly. The goal of chasing performance at all costs also leads to questionable design decisions, like fiber [2], based on fasthttp, which achieves some of its performance by violating Go's runtime guarantee that strings are immutable. That is a wild choice that the standard library authors would/could never make.
The standard library is built on goroutine-per-connection, whereas these performance-oriented networking libraries are built on a main reactor loop. Hence the need for refactoring, not just tweaking.
Something like http/v2 and net/v2. I know gnet had (has?) issues with implementing TLS because of how the entire standard library is designed to work. At the time it was a great piece of software, but by now it is slow and outdated. A lot of progress has been made since then in networking, parsing, serialization, atomics, and so on.
> And that is sad, as third party libraries should never perform better than standard library.
It's quite literally the opposite. The purpose of a stdlib is standardization, stability, and broad usefulness, not extreme performance. In fact, that can't be its purpose — you can only get max performance if you tune for your exact use case — but how could a stdlib that's by definition generic ever be able to do that?
> but then again there are faster libraries that can give you 3x performance(in case of numeric keys)
Yeah, exactly. If you can constrain your problem domain ("numeric keys only"), you can always squeeze out more performance than a generic algorithm can give you. Completely irrelevant as far as stdlib goes though.