Hacker News

The info everyone is missing is the code density comparison with ARM. RISC-V is more efficient and has about 10% denser code, which translates to more instructions fitting in I-cache, less memory pressure, and ultimately better performance and battery life. On the long-term roadmap, that's a win for RISC-V.


> RISC-V is more efficient and has about 10% denser code, which translates to more instructions fitting in I-cache, less memory pressure, and ultimately better performance and battery life. On the long-term roadmap, that's a win for RISC-V.

Only in the most extreme cases.

1) Battery life isn't dominated by run current for the vast majority of embedded devices. Sleep current dominates (most cases) or peripheral current dominates (RF transmit/receive, for example). You try to dial down the number of times you wake up until the energy spent awake is below the energy you burn while asleep.
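To make that concrete, here's a back-of-the-envelope sketch of a duty-cycled device. All the numbers are made up but plausible for a small sensor node; the point is how little the run current matters once the duty cycle is low:

```python
# Average current of a duty-cycled embedded device.
# All numbers are illustrative, not from any specific part.
active_ma = 5.0   # run current while awake (mA)
sleep_ua = 2.0    # sleep current (uA)
wake_ms = 10      # awake for 10 ms...
period_s = 60     # ...once per minute

duty = (wake_ms / 1000) / period_s
avg_ua = (active_ma * 1000) * duty + sleep_ua * (1 - duty)
print(f"duty cycle: {duty:.4%}, average current: {avg_ua:.2f} uA")

# Cut run current by 10% (e.g. via denser code) and the average barely moves,
# because the sleep term dominates:
avg_ua_fast = (active_ma * 0.9 * 1000) * duty + sleep_ua * (1 - duty)
print(f"with 10% lower run current: {avg_ua_fast:.2f} uA")
```

With these numbers the average drops from about 2.83 uA to about 2.75 uA, i.e. a ~10% run-current win buys only a few percent of battery life.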

2) RAM is expensive; flash, not so much. Code space isn't the issue--and a 10% difference almost certainly isn't. Relatedly: this is why I expect you won't see 64 bits making a lot of inroads into embedded--doubling RAM consumption is expensive there.


I'm sorry, did you just describe the core advantage of a new RISC CPU against incumbents as smaller code size? Where am I, and what is happening?


RISC-V has a code compression extension, so it's not classic RISC, but it's still far simpler than CISC. https://riscv.org/wp-content/uploads/2015/05/riscv-compresse...


That's a pretty damn good argument. It's 10% ahead of the best ISAs, which took decades of development. Just think of how adoption would be affected if it were 30% worse.

In other words: not only is it better in terms of royalties and ecosystem, it's also better at everything else too. Isn't that terrific?


I read the GP as talking about how code size was always a weakness of RISC, and seemingly the largest one.

And here it is, compared against a classic CISC platform and a hybrid one highly optimized for code size, and winning. Which makes RISC-V even more awesome than just any non-optimized design beating the incumbents would be.


Is it a core advantage? Maybe. But smaller code size has beneficial effects on silicon cost. Choice 1: if you can benchmark the same on important workloads with a 10% smaller I-cache, make the die smaller. Manufacturing costs go down faster than the square of die area. Choice 2: use the die area freed up to put more functional units in the same area.
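A toy model shows why the cost effect is more than proportional to area: a smaller die means both more dies per wafer and better yield. The defect density and wafer size below are assumptions for illustration, using the standard Poisson yield approximation:

```python
import math

# Toy die-cost model: cost per good die ~ (wafer cost / dies per wafer) / yield.
# Poisson yield model: Y = exp(-D0 * A). All numbers are illustrative.
def relative_die_cost(area_mm2, defect_density_per_mm2=0.002):
    dies_per_wafer = 70000 / area_mm2  # crude: usable wafer area / die area
    yield_frac = math.exp(-defect_density_per_mm2 * area_mm2)
    return 1.0 / (dies_per_wafer * yield_frac)

base = relative_die_cost(100.0)
shrunk = relative_die_cost(95.0)  # 5% smaller die (e.g. smaller I-cache)
print(f"cost drop from a 5% area shrink: {(1 - shrunk / base):.1%}")
```

With these assumed numbers a 5% area shrink cuts per-die cost by roughly 6%: the yield term is what pushes the scaling past linear.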

Core advantage? I will let others debate that. Significant: surely.


x86-64 isn't very space efficient anymore, so it's not hard to beat. Even AArch64, with a fixed 32-bit instruction size, competes well with x86-64.

REX prefixes really killed the space efficiency of the x86 architecture.
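For illustration, here are the machine-code bytes for the same register-register add at different operand widths (hand-assembled from the standard x86 encodings). Every instruction that touches a 64-bit register or one of r8-r15 pays for a one-byte REX prefix:

```python
# add eax, ebx -- classic 32-bit encoding, no prefix needed
add_eax_ebx = bytes([0x01, 0xD8])        # 2 bytes
# add rax, rbx -- same operation on 64-bit registers needs REX.W
add_rax_rbx = bytes([0x48, 0x01, 0xD8])  # 3 bytes
# add r8, r9 -- the new registers also need REX bits to be addressable
add_r8_r9 = bytes([0x4D, 0x01, 0xC8])    # 3 bytes
```

So the most common 64-bit ALU operations are 50% bigger than their 32-bit ancestors before you even get to immediates or memory operands.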


Smaller code size makes your caches more effective. L1 instruction cache is size limited because it's on a critical timing path. Increasing its size limits your operating frequency.


Still matters in many embedded applications.


Code size for RV32IMAC is still pretty mediocre with the current GCC/RISC-V compiler, and the standard library it uses by default is pretty sub-optimal. I know they're working on it, and it's clear they're making quick progress, but it's not easy at the moment. On the last project I worked on, I had to abandon ABI conventions and hand-craft large chunks of code.


Is "10% denser" comparing RV32 or RV64 against A32, T32, or A64? And is that with or without the Compressed Instructions extension?


As of early 2016, with the GCC port at that time, RV32GC was as dense as Thumb, and RV64GC was denser than AArch64 and every other major 64-bit ISA, including AMD64. RV64G (without C) was in some extreme cases up to 50% larger than AArch64 (due to inlined memcpy and memset, which are a bit larger without compressed instructions), but usually around the same. The exception is MIPS64, which is way larger than the other 64-bit ISAs, probably because of exposed delay slots. [0]

There's some indication that density should have increased somewhat since then, but I haven't looked at it myself.

[0]: https://youtu.be/Ii_pEXKKYUg


That's why I have a lot more faith in RISC-V's ability to take on relatively high end embedded tasks than lower end ones. I'd expect compression to be too expensive, transistor wise, for many roles where you'd use an ARM Cortex M2 or such and program memory is at a premium in those places.


> I'd expect compression to be too expensive

It's not the kind of compression you might be thinking of. It's just 16-bit "shortcuts" for some of the common 32-bit instructions. The impact in gate count should be minimal. In a lot of these applications you'll have the code in on-chip non-volatile memory which means reducing code size may also reduce chip area.
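To make the "shortcuts" concrete, here's a minimal Python sketch of how one compressed instruction, C.ADDI, expands into its full 32-bit ADDI equivalent, following the CI format from the RVC spec. It handles just this one opcode; a real decoder is a table of a few dozen such rules:

```python
def expand_c_addi(insn16):
    """Expand a 16-bit C.ADDI into the equivalent 32-bit ADDI.

    C.ADDI (CI format, quadrant 01, funct3 000):
      [15:13]=000  [12]=imm[5]  [11:7]=rd/rs1  [6:2]=imm[4:0]  [1:0]=01
    """
    assert insn16 & 0x3 == 0x1 and (insn16 >> 13) == 0b000
    rd = (insn16 >> 7) & 0x1F
    imm = ((insn16 >> 7) & 0x20) | ((insn16 >> 2) & 0x1F)
    if imm & 0x20:        # sign-extend the 6-bit immediate to 12 bits
        imm |= 0xFC0
    # 32-bit ADDI: imm[11:0] | rs1 | funct3=000 | rd | opcode=0010011
    return (imm << 20) | (rd << 15) | (rd << 7) | 0b0010011

# c.addi a0, 4 (0x0511) expands to addi a0, a0, 4 (0x00450513)
print(hex(expand_c_addi(0x0511)))
```

Each expansion is pure bit shuffling with no state, which is why the extra decode logic costs so few gates.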

I think with relatively little increase in gate count you could also make some sequences of two 16-bit instructions execute simultaneously, which could yield nice performance improvements for micro-controller cores.

Also, you might be surprised at how "big" many micro-controllers are becoming these days.


> I'd expect compression to be too expensive, transistor wise, for many roles where you'd use an ARM Cortex M2 or such...

Decoding the "compressed" instructions is actually pretty straightforward; it doesn't add much complexity to a design. The ARM Cortex M0+/M3/M4 implement a similar (but more complex) "compressed" instruction set called Thumb, and comparable RISC-V cores available from SiFive are smaller, faster, and more efficient.

In a very small RISC-V core by the venerable Clifford Wolf called PicoRV32 [0], you can look at the complexity introduced by configuring it with the COMPRESSED_ISA option.

> ...and program memory is at a premium in those places.

Program memory is one thing, but on processors of all sizes, code size has a big impact on performance in common types of programs.

[0]: https://github.com/cliffordwolf/picorv32


The cost of compression is very small for low performance designs (single instruction in-order issue). It's very straightforward to implement.

It gets harder for more complex designs though.

But for cases where you want to replace a Cortex M2, the area increase will be trivial.


Cortex-M2? ¿Que? CM2 doesn't exist -- the naming scheme jumps straight from CM1 (which is FPGA-only) to CM3.


I think this is wrong. Certainly the GCC toolchain spits out some remarkably mediocre code. RISC-V compressed is generally on par with Thumb2, and where they differed, Thumb2 seemed to be a tiny bit denser.

If you compare GCC/ARM with GCC/RISCV the difference isn't too great, but even the IAR ARM compiler gives you noticeable improvements over GCC/RISCV. And ARM's compilers are actually quite good with respect to code size; MUCH better than GCC/RISCV (or even GCC/ARM).

That being said, were I to add some custom instructions, I would COMPLETELY prefer to do it with RISC-V than with ARM.

(Though the gcc/riscv toolchain is getting better pretty quickly.)


That is pretty cool; is there a ThumbV2 vs RISC-V paper somewhere for 32-bit RISC-V?


Andrew Waterman's PhD thesis "Design of the RISC-V Instruction Set Architecture" ("Why Develop a New Instruction Set?") has a nice comparison of ISA encoding and density of RISC-V, MIPS, SPARC, Alpha, ARMv7/8, Thumb, OpenRISC, and x86/x86-64.

https://people.eecs.berkeley.edu/~krste/papers/EECS-2016-1.p...


Page 4: https://riscv.org/2016/04/risc-v-offers-simple-modular-isa/

RV32C and ThumbV2 have equal code sizes.



