RISC and CISC Don’t Matter. SoC Has Already Won The Day

By Tsahi Levent-Levi

There’s an interesting debate I can still recall from a microprocessor architecture course I took at university years back. The professor stood there, explained what RISC is and what CISC is, and then went on to discuss the merits of each. I am sure the same professor, in the same lecture room at the Hebrew University of Jerusalem, is still doing that. But the thing is – I don’t think we really care any longer.

Reduced or Complex – whatever instruction set you are using, it is the concept of SoC (System on Chip) that has won the day.

We’re at a point in time when computing is becoming interesting again – and it’s not because we’re fighting over who has the more optimized instruction pipeline, the higher clock speed or even the smaller process node. It’s the additional hardware options that make all the difference.

There are things that you just cannot do with a CPU (or that are just painfully hard to do), so you add dedicated “processing units”: a GPU, hardware acceleration for MP3 playback to reduce CPU load and increase battery life, a shortened memory path for graphics processing to reduce memory copies, and so on.
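As a rough sketch of how this looks in practice on Android (the selection logic here is just illustrative, not taken from any specific product): an application that wants hardware-assisted decoding enumerates the codecs the chipset vendor registered and prefers them over the software fallback, using the MediaCodecList API.

import android.media.MediaCodecInfo;
import android.media.MediaCodecList;

public final class HardwareDecoderPicker {

    // Look for a decoder for the given MIME type (e.g. "video/avc" or "audio/mpeg"),
    // preferring whatever the SoC vendor registered over the software fallback.
    public static String pickDecoder(String mimeType) {
        String softwareFallback = null;
        for (int i = 0; i < MediaCodecList.getCodecCount(); i++) {
            MediaCodecInfo info = MediaCodecList.getCodecInfoAt(i);
            if (info.isEncoder()) {
                continue; // decoders only
            }
            for (String type : info.getSupportedTypes()) {
                if (!type.equalsIgnoreCase(mimeType)) {
                    continue;
                }
                // Software codecs are conventionally named "OMX.google.*";
                // anything else is typically backed by a dedicated block on the SoC.
                if (info.getName().startsWith("OMX.google.")) {
                    softwareFallback = info.getName();
                } else {
                    return info.getName();
                }
            }
        }
        return softwareFallback; // may be null if nothing supports the type
    }
}

The point is that the heavy lifting ends up on a dedicated block of the SoC, not on the CPU.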

If I had to draw an “interest graph” showing where the mindshare of innovation and optimization is headed when it comes to chipsets, it would probably look like this:

[Graph: the history of the importance of chip technologies]

We were in an arms race of clock speeds when I was a kid. For years, the main concept was squeezing more juice out of a processor by running more instructions per second, and to do that, you simply had to run at a faster clock.

Then, when the brick wall of clock speed was about to be reached, AMD and Intel shifted towards a new goal – multiple cores: having multiple processing units run in parallel so that you can do more work. We started with dual cores, are moving now to quad cores, and the future looks bright with many more cores to come.
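To see what that shift means for software, here is a minimal, illustrative Java sketch (not tied to any specific chip): the only way to benefit from the extra cores is to split the work across them explicitly, one worker per core.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MultiCoreSum {
    public static void main(String[] args) throws Exception {
        // One worker per core: the gain now comes from parallelism,
        // not from a faster clock.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        long n = 1_000_000_000L;
        long chunk = n / cores;
        List<Future<Long>> parts = new ArrayList<>();

        for (int i = 0; i < cores; i++) {
            final long from = i * chunk + 1;
            final long to = (i == cores - 1) ? n : (i + 1) * chunk;
            parts.add(pool.submit((Callable<Long>) () -> {
                long sum = 0;
                for (long v = from; v <= to; v++) {
                    sum += v;
                }
                return sum;
            }));
        }

        long total = 0;
        for (Future<Long> part : parts) {
            total += part.get();
        }
        pool.shutdown();

        System.out.println("cores=" + cores + " sum=" + total);
    }
}

Run it on a dual core and then on a quad core and the wall-clock time drops roughly with the core count, even though the clock speed barely changed.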

The smartphone market (and mobility as a whole) brought with it a difficult problem to solve: how do you do more with limited power and less space for your chip? The answer came in the form of specialized accelerators for specialized needs. This approach is used extensively in SoC architectures: you have a solid microprocessor as the host, with a bunch of other accelerators assisting it – graphics, sound, RF, networking – you name it.

This trend is now moving into every computational device: it started with placing the GPU and the CPU on the same die (AMD acquiring ATI, and Intel investing in its own GPUs) and then some – Intel now has accelerators on its latest microarchitectures dedicated to video coding and graphics.

Who cares if you are using a CISC Intel chip or a RISC ARM chip? They are both, in a way, SoC solutions, where the actual punch you get comes not from the CPU itself, but from the accelerators bundled with it. Don’t believe me? Just check how I approached the Android chipset video calling analysis back in February – all the difference was made by the accelerators.