% chapter included in forwardcom.tex
\documentclass[forwardcom.tex]{subfiles}
\begin{document}
\RaggedRight

\chapter{Basic architecture}

This chapter gives an overview of the most important features of the ForwardCom instruction set architecture. Details are given in the subsequent chapters.

\section{A fully orthogonal instruction set}
The ForwardCom instruction set is fully orthogonal in all respects.
Where other instruction sets have a large number of different instructions for different register types, operand types, operand sizes, addressing modes, etc., ForwardCom has fewer instructions, but many variants of each instruction. This modular design makes the hardware implementation much simpler.
The same instruction can use integer operands of all sizes and floating point operands of all precisions. It can use register, memory, or immediate operands. It can use many different addressing modes. An instruction can be coded in a short form with two operands, where the same register is used for destination and one source operand, or in a longer form with three operands. It can work with scalars or vectors of any size. It can have predication or masks for conditional execution, and, where appropriate, optional flag inputs that determine rounding mode, exception control, and other details. Data constants of all types can be included in the instructions and compressed in various ways to reduce the instruction size.

\subsubsection{Rationale}
The orthogonality is implemented by a standardized modular design that makes the hardware implementation simpler. It also makes compilation simpler and more flexible and makes it easier for the compiler to convert linear code to vector code.
\vv

The support for immediate constants of all types is an improvement over current systems. Most current systems store floating point constants in a data segment and access them through a 32-bit address in the instruction code. This is a waste of data cache space and causes many cache misses because the data are scattered around in different sections. Replacing a 32-bit address with a 32-bit immediate constant makes the code more efficient without increasing the code size. Extensions to allow 64-bit immediate constants are possible at the cost of having instructions with triple size.

\section{Instruction size}
The ForwardCom instruction set uses a 32-bit word size for code.
An instruction can consist of one, two, or three 32-bit words.
It is possible to add future extensions with instruction sizes of four or more words, but there is currently no need for this.

\subsubsection{Rationale}
A CISC architecture with many different instruction sizes is inefficient in superscalar processors where we want to execute several instructions per clock cycle. The decoding front end is often a bottleneck, especially in the x86 architecture. The decoder has to determine the length of the first instruction before it knows where the next instruction begins. The ``instruction length decoding'' is a fundamentally serial process which makes it difficult to decode multiple instructions per clock cycle. Modern x86 microprocessors have an extra ``micro-operations cache'' after the decoder in order to circumvent this bottleneck.
\vv

Here, it is desired to have as few different instruction sizes as possible and to make it easy to determine the length of each instruction. We want a small instruction size for the most common simple instructions, but we also need a larger instruction size in order to accommodate things like a larger register set, instructions with multiple operands, vector operations with advanced features, 32-bit address offsets, and large immediate constants. This proposal is a compromise between code compactness, easy decoding, and space for advanced features.
The instruction size is indicated by only two bits. A decoder can find the instruction boundaries in $n$ words by means of a simple Boolean function of $2n$ inputs.

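The sequential scan over the length bits can be illustrated with a small C sketch. This is not part of the specification; the mapping of the two length bits to one, two, or three words is assumed here only for illustration, and the actual encoding of the length bits is defined in the instruction format description elsewhere in this manual.
\vv

\begin{lstlisting}[frame=single]
// Illustrative sketch only, not part of the specification.
// Assumption: the two most significant bits of the first code word of an
// instruction give its length (0 and 1 = one word, 2 = two, 3 = three).
#include <stdint.h>
#include <stdio.h>

static int instruction_length(uint32_t word) {
    unsigned il = word >> 30;        // the two length bits
    return il < 2 ? 1 : (int) il;    // 0,1 -> 1 word; 2 -> 2; 3 -> 3
}

void list_boundaries(const uint32_t * code, int n_words) {
    int i = 0;
    while (i < n_words) {            // each length depends only on word i
        printf("instruction at word %d\n", i);
        i += instruction_length(code[i]);
    }
}
\end{lstlisting}
\vv
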
\section{Register set}
There are 32 general purpose registers (r0--r31) of 64 bits each, and 32 vector registers (v0--v31) of variable length. The maximum vector length is different for different hardware implementations. The general purpose registers can be used for integers of up to 64 bits as well as for pointers. The vector registers can be used for scalars and vectors of integers and floating point numbers.
\vv

The following special registers are defined and visible at the application program level. All have 64 bits:

\begin{itemize}
\item Instruction pointer (IP)
\item Data section pointer (DATAP)
\item Thread data pointer (THREADP)
\item Stack pointer (SP)
\item Numeric control register (NUMCONTR)
\end{itemize}

The stack pointer is identical to r31. The other special registers cannot be accessed as ordinary registers.
\vv

There is no dedicated flags register. Registers r0--r6 and v0--v6 can be used for masks, predicates and floating point option flags to control attributes such as rounding mode and exception control.
\vv

The unused part of a register is always set to zero. This means that integer operations with an operand size smaller than 64 bits and vector operations with a vector length smaller than the maximum will always set the unused bits of the destination register to zero.

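This write-back rule can be modelled with a small C sketch. The helper function below is hypothetical and serves only to make the semantics concrete.
\vv

\begin{lstlisting}[frame=single]
// Illustrative sketch only. Models the rule that an operation with an
// operand size of 'bytes' writes its result to the low 8*bytes bits of
// the 64-bit destination register and clears the remaining bits.
#include <stdint.h>

uint64_t write_back(uint64_t result, int bytes) {
    if (bytes >= 8) return result;                   // full 64-bit write
    uint64_t mask = ((uint64_t)1 << (8 * bytes)) - 1;
    return result & mask;                            // upper bits become zero
}

// Example: an 8-bit addition of 0xFF and 0x01 gives
// write_back(0x100, 1) == 0, i.e. the result wraps within 8 bits and
// bits 8-63 of the destination are zero.
\end{lstlisting}
\vv
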
\subsubsection{Rationale}
The number of registers is a compromise between code density and flexibility. The cost of spilling registers to memory is usually important only in the critical innermost loop, which is unlikely to need more than 32 registers.
\vv

We can avoid false dependencies on the previous value of a register by setting all unused register bits to zero rather than leaving them unchanged. The hardware can save power by disabling the unused parts of execution units and data buses.
\vv

A dedicated flags or status register is unfeasible for vector processing, parallel processing, out-of-order processing, and instruction scheduling.
\vv

The reason for handling floating point scalars in the vector registers rather than in separate registers is to make it easy for a compiler to convert scalar code including function calls to vector code. Floating point code often contains calls to mathematical library functions.
A library function with variable-length vectors as input and output can be used for both scalars and vectors, and the compiler can easily vectorize code that contains such library function calls.

\section{Vector support}
A vector register can contain signed or unsigned integers of 8, 16, 32, 64, and optionally 128 bits, or floating point numbers of single and double precision. There is limited support for floating point numbers in half precision and optional support for quadruple precision. All elements of a vector must have the same type. The elements of a vector are processed in parallel. For example, a vector addition will produce the sum of two vectors in a single operation.
\vv

The vector registers have variable length. Each vector register has extra bits for storing the length of the vector. The maximum vector length depends on the hardware.
For example, if the hardware supports a maximum vector length of 64 bytes and a particular application needs only 16 bytes, then the vector length is set to 16.
\vv

Some instructions need to specify the length of a vector explicitly, for example when reading a vector from memory. These instructions use a general purpose register for specifying the vector length. The length is usually indicated as the number of bytes, not the number of vector elements.
\vv

The maximum length supported by the processor must be a power of 2. The actual length specified does not need to be a power of 2. If the specified length is longer than the maximum length, then the maximum length is used.
\vv

The contents of a vector register can arbitrarily be interpreted as any of the types and element sizes supported. For example, the hardware does not prevent the application of integer instructions on a vector that contains floating point data. It is the responsibility of the programmer that the code makes sense.

\section{Vector loops} \label{vectorLoops}
A special addressing mode is provided to make vector loops more compact and efficient. It uses a pointer P to the end of an array and a negative index J, and calculates the address of a memory operand as P-J, where P and J are general purpose registers. This makes it possible to loop through an array as illustrated by the following pseudocode:
\vv

\begin{lstlisting}[frame=single]
P = address of array
J = size of array (in bytes)
L = maximum vector length (depends on processor)
X = a vector register
P += J; // point to end of array
while (J > 0) {
    X = whatever_operation(X, [P-J], vector_length = J)
    J -= L;
}
\end{lstlisting}
\vv

This loop works in the following way: P points to the end of the array. J is the remaining number of array bytes, counting down until the loop is finished. The loop reads one vector at a time from the array at the address (P-J). J is larger than the maximum vector length L in all but the last iteration of the loop. This makes the processor use the maximum vector length. If the array size is not divisible by the maximum vector length then the last iteration of the loop will use a smaller vector length that fits the remaining number of elements. Obviously, the loop can contain any number of vector read, vector write, and vector arithmetic instructions, using the same principle.
\vv

This loop will work on different processors with different maximum vector lengths \textit{without knowing the maximum vector length at compile time}. Thus, the same piece of software will work on different microprocessors with different vector lengths without the need to compile separately for each microprocessor.
\vv

A further advantage is that no extra code is needed after the loop to handle remaining elements in the case that the array size is not divisible by the vector length.
The loop overhead can be reduced to a single instruction (sub\_maxlen/jump\_pos) which subtracts L from J and jumps back if the result is positive.
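\vv

The same loop structure can be written out in plain C, with the clamping of the requested vector length to the maximum, which the hardware does implicitly, made explicit. The function below is purely illustrative; it adds 1 to every element of a float array and is not meant as generated code for any particular instruction set.
\vv

\begin{lstlisting}[frame=single]
// Illustrative C analogue of the vector loop above, not generated code.
// 'maxlen' plays the role of the processor's maximum vector length in
// bytes. In the last iteration, len is the remaining number of bytes,
// matching the smaller vector length used by the hardware.
#include <stddef.h>

void add_one(float * array, size_t size_bytes, size_t maxlen) {
    char * p = (char *) array + size_bytes;      // P = end of array
    ptrdiff_t j = (ptrdiff_t) size_bytes;        // J = remaining bytes
    while (j > 0) {
        size_t len = (size_t) j < maxlen ? (size_t) j : maxlen;
        float * v = (float *) (p - j);           // operand address = P - J
        for (size_t i = 0; i < len / sizeof(float); i++)
            v[i] += 1.0f;                        // one "vector" operation
        j -= (ptrdiff_t) maxlen;                 // J -= L
    }
}
\end{lstlisting}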
 
\subsubsection{Rationale}
Most current systems have fixed vector lengths. If different processors have different vector lengths then you have to compile the code separately for each vector length. Every time a new processor with longer vectors comes on the market, you have to compile a new version of the code for the new vector length, using newly defined extensions to the instruction set. It usually takes several years for the new software to be developed and to penetrate the mainstream market. It is so costly for software producers to develop, test, and maintain different versions of their code for each vector length that this is rarely done.
\vv

A further problem with current systems is that it is impossible to save a vector register in a way that is guaranteed to be compatible with future processors with longer vectors. This is no problem with the ForwardCom design because the vector length is stored in the vector register itself. Instructions are provided for saving and restoring vectors of variable length and for storing only the part of a vector register that is actually used.
\vv

The ForwardCom design makes it possible to take advantage of a new processor with longer vector registers immediately without recompiling the code. The loop method described above makes this easy and very efficient. You do not need different versions of the code for different processors.
\vv

It is possible to obtain the same effect without the special negative addressing mode by inverting the sign of J and allowing a negative value in the register that specifies the vector length while using the absolute value for the actual vector length. This solution is less elegant and more confusing, but it may possibly be included in other instruction sets by allowing negative values when specifying a vector length.
\vv

Loop unrolling is generally not necessary. The loop overhead is already reduced to a single instruction, and a superscalar processor will execute multiple iterations in parallel if dependency chains are not too long. Loop unrolling with multiple accumulators may be useful for hiding a loop-carried dependency. In this case, you will either insert a loop control instruction after each section in the unrolled code or calculate the loop iteration count before the loop.
\vv

The ForwardCom design has no practical limit to the vector length that a microprocessor can support. A large microprocessor with very long vectors can be useful for calculations with a high amount of data parallelism. Other solutions to obtain high performance on parallel data processing have been discussed, such as rolling register stacks and software pipelining, but it was concluded that long vectors are the method that can be implemented most efficiently in the microprocessor as well as in the compiler.

\section{Maximum vector length}
The maximum length of vector registers will be different for different processors. The maximum length must be a power of 2. It can be as large as desired and should be at least 16 bytes. Each instruction can use a smaller length, which does not need to be a power of 2.
\vv

The maximum length may be different for different element sizes. For example, the maximum length for 32-bit integers can be 32 bytes to contain eight integers, while the maximum length for 8-bit integers could be 16 bytes to contain 16 smaller numbers. However, the maximum length must be the same for different types with the same element size. For example, the maximum length for double precision floating point numbers must be the same as for 64-bit integers because loops are likely to contain both types when integer vectors are used as masks for floating point vectors. The maximum length for a 32-bit element size cannot be less than for any other element size or operand type. This rule guarantees that it is possible to save a complete vector using a 32-bit operand type.
\vv

The maximum vector length should generally be the same for all instructions for the same data type, but there may be exceptions for instructions that are particularly expensive to implement.
\vv

It is possible for an application program or the operating system to reduce the maximum vector length. This can be useful if a smaller vector length is more appropriate for a particular purpose.
\vv

It is also possible to increase the apparent maximum vector length for purposes of emulation. Virtual vector registers that are bigger than what the hardware supports may be emulated through traps (synchronous interrupts) in order to verify the functionality of a program on processors with a longer maximum vector length than is currently available.
\vv

When an instruction specifies a longer vector than the maximum, then the maximum length is used (unless the emulation of larger vectors is activated). This is necessary for the efficient implementation of vector loops as described above on page \pageref{vectorLoops}.
If the specified vector length is zero or negative then the result will be a vector of zero length.

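Ignoring the emulation case, this rule can be summarized by a small C sketch. The function below is illustrative only and is not part of the instruction set.
\vv

\begin{lstlisting}[frame=single]
// Illustrative sketch only: how a requested vector length (in bytes)
// maps to the length actually used, ignoring emulation of longer vectors.
#include <stdint.h>

int64_t effective_length(int64_t requested, int64_t maxlen) {
    if (requested <= 0) return 0;   // zero or negative gives an empty vector
    return requested < maxlen ? requested : maxlen;  // clamp to the maximum
}
\end{lstlisting}
\vv
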
\section{Instruction masks}
Most instructions can have a mask register which can be used for conditional execution and for specifying various options. Instructions with general purpose registers use one of the registers r0--r6 as a mask register or predicate. Bit 0 of the mask register indicates whether the operation is executed or not.
\vv

The instruction will produce the normal result when bit 0 of the mask is one, and a fallback value when this bit is zero. The fallback value can be the value of the first source operand, a separate register, or zero.
\vv

This mechanism can be vectorized. Instructions with vector registers use one of the vector registers v0--v6 as a mask register. The calculation of each vector element is conditional on the corresponding element in the mask register.
\vv

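The per-element behavior can be sketched in C as follows. The example uses a fallback taken from a separate register, one of the options mentioned above; the function and its arguments are illustrative only.
\vv

\begin{lstlisting}[frame=single]
// Illustrative sketch only: masked vector addition where bit 0 of each
// mask element selects between the normal result and a fallback value.
#include <stdint.h>
#include <stddef.h>

void masked_add(int32_t * dst, const int32_t * a, const int32_t * b,
                const int32_t * mask, const int32_t * fallback, size_t n) {
    for (size_t i = 0; i < n; i++) {
        dst[i] = (mask[i] & 1) ? a[i] + b[i]    // bit 0 set: normal result
                               : fallback[i];   // bit 0 clear: fallback value
    }
}
\end{lstlisting}
\vv
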
An arithmetic operation with a mask of zero can never generate an error condition.
A memory read or write with an illegal address and a mask of zero may or may not generate an error trap.
\vv

Additional bits in the mask register are used for various options, overriding the values in the numeric control register. See page \pageref{table:maskBits} for details.

\section{Addressing modes}
All memory addressing is relative to a base pointer. Memory operands are addressed in this general form:

\begin{lstlisting}
Address = Base pointer + Index * Scale + Offset
\end{lstlisting}

Here, Base pointer is a 64-bit base pointer, Index is a 64-bit index register, Scale is a scale factor, and Offset is a constant. A base pointer is always present; the other elements are optional.
\vv

The base pointer can be a general purpose register, or it can be the instruction pointer (IP), data section pointer (DATAP), thread data pointer (THREADP), or stack pointer (SP).
\vv

The index register can be one of the registers r0--r30. A value of 31 in the index register field means no index register.
\vv

A limit can be applied to the index register in the form of an integer constant. A trap is generated if the index register is bigger than the limit in an unsigned comparison.
\vv

The scale factor is equal to the operand size (in bytes) for scalar operands and broadcasts. The scale factor is 1 for vector operands. A special addressing mode with Scale = -1 is also available, as explained on page \pageref{vectorLoops}.
\vv

The offset is a sign-extended integer of 8, 16, or 32 bits. 8-bit offsets are multiplied by the operand size. Offsets of 16 and 32 bits have no multiplier.
\vv
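
The address calculation and the optional index limit check can be sketched in C. This is illustrative only; the trap is represented by an ordinary function call, and the sign extension and scaling of the offset are assumed to have been done already.
\vv

\begin{lstlisting}[frame=single]
// Illustrative sketch only: effective address calculation with an
// optional limit check on the index (compared as unsigned numbers).
#include <stdint.h>
#include <stdlib.h>

uint64_t effective_address(uint64_t base, uint64_t index, int64_t scale,
                           int64_t offset, uint64_t limit, int has_limit) {
    if (has_limit && index > limit)
        abort();                 // stands in for an array bounds trap
    // Scale may be the operand size, 1, or -1; unsigned wrap-around
    // makes base + index * (-1) equal to base - index.
    return base + index * (uint64_t) scale + (uint64_t) offset;
}
\end{lstlisting}
\vv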
 
Memory operands in vector instructions can load a vector of a specified length, a scalar, or a broadcast scalar. The length of the loaded or broadcast vector is specified by a general purpose register. The specified length is the number of bytes. The number of vector elements is the number of bytes divided by the operand size. Register r31, which is the stack pointer, cannot be used for specifying vector length. Instead, a value of 31 in the length register field will give a scalar.
\vv

Jumps and calls specify a target address relative to the instruction pointer. The relative address is specified with a signed offset of 8, 16, 24, or 32 bits, multiplied by the code word size, which is 4. The 32-bit offset thus covers an address range of $\pm 2^{31} \cdot 4$ bytes, i.e. $\pm$8 gigabytes.
\vv

\subsubsection{Rationale}
A 64-bit flat address space is used. Relative addressing is used in order to avoid 64-bit addresses in the instruction code. In the rare case that a 64-bit absolute address is needed, it must be loaded into a register which is then used as a pointer.
\vv

Addressing with an index scaled by the operand size is useful for arrays. A limit can be applied to the index so that array bounds can be checked without any extra instructions.
\vv

Addressing with a negative index is useful for the efficient implementation of vector loops described on page \pageref{vectorLoops}.
\vv

The addressing modes specified here will cover all common applications, including arrays, vectors, structures, classes, and stack frames.
\vv

Support for addressing modes that combine a base pointer, an index, and a direct offset is optional because this requires two adders in the address-calculation stage of the pipeline, which might limit the maximum clock frequency.
\end{document}
