|
|
It's a term (short for Reduced Instruction Set Computer) used for processors designed according to certain principles that were formulated around ten years ago. For a long time the best book about this was, and probably still is, 'Computer architecture, a quantitative approach' by Hennessy and Patterson. One of them even came up with the term RISC.
Basically the philosophy is that instructions are handled in a number of stages, for example: instruction fetch, decode, execute, memory access and register write-back.
Once you do this, you can handle several instructions at the same time, each in one of the above stages.
All stages in the pipeline should take equally long and be as short as possible, since the longest stage slows down the complete pipeline.
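To get a feeling for the gain, here is a small back-of-the-envelope sketch (the numbers are made up and not tied to any particular processor):

    /* Rough sketch: with S equally long stages, N instructions take about
       S + N - 1 clock cycles when pipelined, instead of S * N when not.
       The cycle itself can never be shorter than the slowest stage. */
    #include <stdio.h>

    int main(void)
    {
        int stages = 5;          /* fetch, decode, execute, memory, write-back */
        int instructions = 100;

        printf("unpipelined: %d cycles\n", stages * instructions);      /* 500 */
        printf("pipelined:   %d cycles\n", stages + instructions - 1);  /* 104 */
        return 0;
    }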
This means that instructions should only do simple things.
If you want to do something difficult, the compiler should generate several simple instructions instead.
Instruction counts of real programs executed on CISCs (complex instruction set computers) show that around 90% of the instructions that are executed are simple ones anyway.
(Compilers prefer them, because they are more basic.)
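As a sketch of what that decomposition looks like (written in C for readability; a real compiler of course works on machine instructions, and the names are made up):

    int mem[16];               /* stand-in for memory */

    void cisc_style(void)
    {
        mem[3] += mem[5];      /* one complex memory-to-memory add instruction */
    }

    void risc_style(void)
    {
        int r1 = mem[5];       /* load  */
        int r2 = mem[3];       /* load  */
        r2 = r2 + r1;          /* add: registers only */
        mem[3] = r2;           /* store */
    }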
To keep the instructions simple, no fancy addressing modes are allowed.
All instructions act on the internal register set, except for a few simple load and store instructions, whose memory address is first calculated in a register using the normal instructions.
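For example (a sketch; 'base' and 'i' are just made-up names):

    int load_element(int *base, int i)
    {
        int *addr = base + i;  /* address computed with ordinary add/shift instructions */
        return *addr;          /* then one simple load from the computed address */
    }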
Another viewpoint was that processors currently use a 32-bit data bus anyway, so why not make every instruction 32 bits wide?
Every instruction cycle fetches 32 bits anyway.
(Also note that the fastest RISC processors use separate code and data buses.)
Using these 32 bits efficiently makes it logical to use one byte to code the actual operation and the other three bytes to code the registers to act on. This makes triadic instructions a natural choice, for example 'add r0,r1,r2', which means: r0=r1+r2.
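A sketch of that naive layout (an assumed format, not any real instruction set):

    #include <stdint.h>

    /* one opcode byte plus three register-number bytes packed into a 32-bit word */
    static uint32_t encode(uint8_t op, uint8_t rd, uint8_t rs1, uint8_t rs2)
    {
        return ((uint32_t)op  << 24) |
               ((uint32_t)rd  << 16) |
               ((uint32_t)rs1 <<  8) |
                (uint32_t)rs2;
    }
    /* 'add r0,r1,r2' would then be encode(ADD_OPCODE, 0, 1, 2). */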
Another aspect is that difficult instructions like call, push and return disappear, since they involve auto-incrementing or auto-decrementing the stack pointer.
At a call, the old program counter is therefore put in another general register (often called the link register), so it can be saved on the stack with the normal process of saving the registers.
(For a function that doesn't call other functions this save
isn't even necessary.)
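At the C level this looks roughly as follows (the function names are made up for this sketch):

    static int add2(int a, int b)   /* leaf function: the return address can stay in
                                       the link register, no save to the stack needed */
    {
        return a + b;
    }

    int outer(int x)                /* non-leaf: the return address is saved into the
                                       stack frame along with the other registers */
    {
        return add2(x, 1) + add2(x, 2);
    }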
The registers aren't saved by pushing them one by one, but by decrementing the stack pointer once by the size of the whole frame and then storing the registers into it as a kind of local variables.
Because the program counter doesn't have to be pushed between the arguments and the local variables, it is also possible to keep a certain number of the function's arguments in registers.
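A rough sketch of such a frame (the sizes are arbitrary; a real compiler chooses them per function):

    /* Assumed frame layout, created by one subtract from the stack pointer: */
    struct frame {
        int saved_link;      /* return address, only needed in non-leaf functions */
        int saved_regs[4];   /* callee-saved registers, stored like ordinary locals */
        int locals[2];       /* the function's own local variables */
    };
    /* Prologue: sp -= sizeof(struct frame), then plain stores into the frame.
       Epilogue: plain loads, then one add to the stack pointer. */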
Because there is a whole byte to code a register, you could theoretically have 256 registers, of which AMD's 29000 series really has about 192, but generally RISC processors have 32 registers, using only 5 bits to code them. The remaining bits are used for other useful things, like coding small immediate numbers instead of a register.
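With 5-bit register fields the layout looks more like this (again an assumed, MIPS-like format, not any specific processor):

    #include <stdint.h>

    /* A 6-bit opcode plus three 5-bit register fields leaves 11 bits to spare... */
    static uint32_t encode_rrr(uint32_t op, uint32_t rd, uint32_t rs1, uint32_t rs2)
    {
        return (op << 26) | ((rd & 31) << 21) | ((rs1 & 31) << 16) | ((rs2 & 31) << 11);
    }

    /* ...or the third register field and the spare bits hold a 16-bit immediate. */
    static uint32_t encode_rri(uint32_t op, uint32_t rd, uint32_t rs1, int32_t imm)
    {
        return (op << 26) | ((rd & 31) << 21) | ((rs1 & 31) << 16) | ((uint32_t)imm & 0xFFFF);
    }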
A current criticism of RISC is that the program code is too sparse, which wastes disk and memory space and increases code loading time.
That isn't so terrible on normal computers, but when you're running from ROM in an embedded environment it can be, and it also makes caching much less efficient.
Hitachi has therefore made an embedded RISC processor (the SuperH series) that uses 16-bit (dyadic) instructions, and it claims that this is currently the best-selling RISC processor for embedded purposes.