
I'd consider even 10% to be a significant performance hit. People scream bloody murder when CPU-level mitigations cause even 1-2% regressions. The marginal cost of mitigations when memory safe code can run without them is infinite.

But let's say, for the sake of argument, that I can tolerate programs that run twice as long in production. This doesn't improve much:

* I'm not going to be deploying SotA sanitizers (SanRazor is currently a research artifact; it's not available in mainline LLVM as far as I can tell).

* No sanitizer that I know of guarantees that execution corresponds to memory safety. ASan famously won't detect reads of uninitialized memory (MSan will, but you can't use both at the same time), and it similarly won't detect layout-adjacent overreads/writes.

That's a lot of words to say that I think sanitizers are great, but they're not a meaningful alternative to actual memory safety. Not when I can have my cake and eat it too.



I think we basically agree. Hypothetically ideal memory safety is strictly better, but sanitizers are better than nothing for code in fundamentally unsafe languages. My personal experience is that people are dissuaded from sanitizer usage more by hypothetical (and manageable) issues like overhead than by real implementation problems.


If you can afford a 10-50% across-the-board performance reduction, why would you not use a higher-level, actually safe language like Ruby or Python? Remember that the context of this article is Zig vs other languages, so the assumption is you’re writing new code.


I work in real time, often safety critical environments. High level interpreted languages aren't particularly useful there. The typical options are C/C++, hardware (e.g. FPGAs), or something more obscure like Ada/Spark.

But in general, sanitizers are also something you can apply to legacy code to bring it closer to safety, and you can turn them off in production if you absolutely, definitely need those last few percent (which few people do). It's hard to overstate how valuable all of that is. A big part of Zig's appeal is its interoperability with C and the ability to introduce it gradually. Compare that to the horrible contortions you have to go through with CFFI to call Python from C.


Python is usually a lot more than a 50% reduction in performance. Sometimes you need better performance but not the best performance.


For Ruby or Python, I think you'll be paying more than 90%.


> I'd consider even 10% to be a significant performance hit. People scream bloody murder when CPU-level mitigations cause even 1-2% regressions. The marginal cost of mitigations when memory safe code can run without them is infinite.

Which people? And in my experience, Rust has always cost much more than a 2% regression.


> People scream bloody murder when CPU-level mitigations cause even 1-2% regressions

For a particular simulation on a particular Cascade Lake chip, mitigations collectively cause it to run about 30% slower. So I won't scream about 1%, but that's a lot of 1%s.



