
Decades of demoscene productions beg to differ. That just means compilers are awful, as they usually are.[1] x86 has far more optimisation opportunities than any RISC.

[1] https://news.ycombinator.com/item?id=15720923




In the absence of better data, we have to compare compiler output.


If I recall my lectures correctly (they were 20-odd years ago now):

CISC ISAs were historically designed for humans writing assembly, so they have single instructions with complex behaviour and, consequently, very high instruction density.

RISC was designed to eliminate the complex decoding logic and replace it with compiler logic, using the higher throughput of the much-reduced decoder (or, in some designs, almost no decoding at all) to offset the increased number of instructions. The transistors freed from decoding could also be spent on additional ALUs to increase parallelism.

So RISC by its nature is more verbose.

Does the tradeoff still make sense? Depends who you ask.


From 2017, so it predates RISC-V's first ratified spec.

Currently, RISC-V holds the crown for code density on both 64-bit and 32-bit.

On 32-bit, Thumb-2 is a little behind. On 64-bit, x86-64 is not even close, and ARMv8/v9 are even worse.


You've shown absolutely zero evidence.

"Maybe if I keep repeating it, it'll be true."


I am sure you are capable of running a compiler and/or running `size` on Ubuntu binaries.
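For what it's worth, that experiment takes about a minute. A minimal sketch (the function, file names, and cross-toolchain names like `riscv64-linux-gnu-gcc` are illustrative; package names vary by distro):

```shell
# Compile the same C function and compare the "text" column from `size`,
# which reports code bytes per object file.
cat > sum.c <<'EOF'
int sum(const int *a, int n) {
    int s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}
EOF
gcc -O2 -c sum.c -o sum_native.o
size sum_native.o            # "text" column = code bytes on the host ISA
# With cross toolchains installed, repeat per target and compare, e.g.:
#   riscv64-linux-gnu-gcc -O2 -c sum.c -o sum_rv64.o && size sum_rv64.o
#   aarch64-linux-gnu-gcc  -O2 -c sum.c -o sum_a64.o && size sum_a64.o
```

On whole binaries (e.g. `/usr/bin` on Ubuntu images for different architectures), `size` on each package's files gives the same comparison at scale.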


