OpenCores
URL https://opencores.org/ocsvn/mips_enhanced/mips_enhanced/trunk

grlib-gpl-1.0.19-b3188/lib/gaisler/leon3/leon3.in.help

Number of processors
CONFIG_PROC_NUM
  The number of processor cores. The LEON3MP design can accommodate
  up to 4 LEON3 processor cores. Use 1 unless you know what you are
  doing.

Number of SPARC register windows
CONFIG_IU_NWINDOWS
  The SPARC architecture (and LEON) allows 2 - 32 register windows.
  However, any number except 8 will require that you modify and
  recompile your run-time system or kernel. Unless you know what
  you are doing, use 8.

SPARC V8 multiply and divide instructions
CONFIG_IU_V8MULDIV
  If you say Y here, the SPARC V8 multiply and divide instructions
  will be implemented. The instructions are: UMUL, UMULCC, SMUL,
  SMULCC, UDIV, UDIVCC, SDIV, SDIVCC. In code containing frequent
  integer multiplications and divisions, a significant performance
  increase can be achieved. Emulated floating-point operations will
  also benefit from this option.

  By default, the gcc compiler does not emit multiply or divide
  instructions and your code must be compiled with -mv8 to see any
  performance increase. On the other hand, code compiled with -mv8
  will generate an illegal instruction trap when executed on processors
  with this option disabled.

  The divider consumes approximately 2 kgates, the multiplier 6 kgates.
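
  As a software-side illustration only (not part of GRLIB), the C
  fragment below shows the kind of integer-heavy code that benefits
  from this option; the function name and weights are invented for
  the example, and -mv8 is the flag mentioned above.

    /* Compile with -mv8 (e.g. sparc-elf-gcc -mv8) so gcc emits the V8
     * multiply/divide instructions; on a core built without
     * CONFIG_IU_V8MULDIV the same binary would take an illegal
     * instruction trap on the first multiply. */
    unsigned int weighted_sum(const unsigned int *buf, unsigned int n)
    {
        unsigned int acc = 0;
        unsigned int i;

        for (i = 0; i < n; i++)
            acc += buf[i] * (i + 1);    /* 32x32 integer multiply */

        return n ? acc / n : 0;         /* one integer divide */
    }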
 
Multiplier latency
CONFIG_IU_MUL_LATENCY_2
  Implementation options for the integer multiplier.

  Type        Implementation              issue-rate/latency
  2-clocks    32x32 pipelined multiplier     1/2
  4-clocks    16x16 standard multiplier      4/4
  5-clocks    16x16 pipelined multiplier     4/5

41
Multiplier latency
42
CONFIG_IU_MUL_MAC
43
  If you say Y here, the SPARC V8e UMAC/SMAC (multiply-accumulate)
44
  instructions will be enabled. The instructions implement a
45
  single-cycle 16x16->32 bits multiply with a 40-bits accumulator.
46
  The details of these instructions can be found in the LEON manual,
47
  This option is only available when 16x16 multiplier is used.
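
  As a rough behavioural sketch derived only from the description
  above (register-level details are deliberately not shown), one
  multiply-accumulate step can be modelled in C as:

    #include <stdint.h>

    /* 16x16 -> 32 bit unsigned multiply added into a 40-bit accumulator */
    static uint64_t mac_step(uint64_t acc40, uint32_t rs1, uint32_t rs2)
    {
        uint32_t prod = (rs1 & 0xffffu) * (rs2 & 0xffffu);
        return (acc40 + prod) & 0xffffffffffULL;   /* keep 40 bits */
    }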
 
Single vector trapping
CONFIG_IU_SVT
  Single-vector trapping is a SPARC V8e option to reduce code size
  in small applications. If enabled, the processor will jump to
  the address of trap 0 (tt = 0x00) for all traps. No trap table
  is then needed. The trap type is present in %tbr.tt and must
  be decoded by the O/S. Saves 4 Kbyte of code, but increases
  trap and interrupt overhead. Currently, the only O/S supporting
  this option is eCos. To enable SVT, the O/S must also set bit 13
  in %asr17.
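
  To illustrate what 'decoded by the O/S' means, here is a purely
  hypothetical C sketch of a shared trap-0 handler that dispatches on
  the trap type; read_tbr() and trap_table are invented names, and a
  real handler would be written in SPARC assembly with proper
  register-window handling.

    typedef void (*trap_handler_t)(void);

    extern unsigned int read_tbr(void);        /* assumed asm helper */
    extern trap_handler_t trap_table[256];     /* assumed O/S table  */

    void svt_trap0_dispatch(void)
    {
        unsigned int tt = (read_tbr() >> 4) & 0xff;  /* tbr.tt field */
        trap_table[tt]();
    }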
 
Load latency
CONFIG_IU_LDELAY
  Defines the pipeline load delay (= pipeline cycles before the data
  from a load instruction is available for the next instruction).
  One cycle gives the best performance, but might create a critical path
  on targets with slow (data) cache memories. A 2-cycle delay can
  improve timing but will reduce performance by about 5%.

Reset address
CONFIG_IU_RSTADDR
  By default, a SPARC processor starts execution at address 0.
  With this option, any 4-kbyte aligned reset start address can be
  chosen. Keep it at 0 unless you really know what you are doing.

Power-down
CONFIG_PWD
  Say Y here to enable the power-down feature of the processor.
  Might reduce the maximum frequency slightly on FPGA targets.
  For details on the power-down operation, see the LEON3 manual.

Hardware watchpoints
CONFIG_IU_WATCHPOINTS
  The processor can have up to 4 hardware watchpoints, which can be
  used to create both data and instruction breakpoints at any memory
  location, including PROM. Each watchpoint will use approximately
  500 gates. Use 0 to disable the watchpoint function.

Floating-point enable
CONFIG_FPU_ENABLE
  Say Y here to enable the floating-point interface for the MEIKO
  or GRFPU. Note that no FPUs are provided with the GPL version
  of GRLIB. Both the Gaisler GRFPU and the Meiko FPU are commercial
  cores and must be obtained separately.

FPU selection
CONFIG_FPU_GRFPU
  Select between Gaisler Research's GRFPU and GRFPU-lite FPUs or the Sun
  Meiko FPU core. All cores are fully IEEE-754 compatible and support
  all SPARC FPU instructions.

GRFPU Multiplier
CONFIG_FPU_GRFPU_INFMUL
  On FPGA targets, choose the inferred multiplier. For ASIC
  implementations, choose between the Synopsys DesignWare (DW)
  multiplier and the Module Generator (ModGen) multiplier. The DW
  multiplier gives better results (smaller area and better timing)
  but requires a DW license. The ModGen multiplier is part of GRLIB
  and does not require a license.

Shared GRFPU
CONFIG_FPU_GRFPU_SH
  If enabled, multiple CPU cores will share one GRFPU.

GRFPC Configuration
CONFIG_FPU_GRFPC0
  Configures the GRFPU-LITE controller.

  In the simple configuration, the controller executes FP instructions
  in parallel with integer instructions. FP operands are fetched
  in the register file stage and the result is written in the write
  stage. This option uses the least area resources.

  The data forwarding configuration gives ~10% higher FP performance
  than the simple configuration by adding data forwarding between the
  pipeline stages.

  The non-blocking controller allows FP load and store instructions to
  execute in parallel with FP instructions. The performance increase is
  ~20% for FP applications. This option uses the most logic resources
  and is suitable for ASIC implementations.

Floating-point netlist
CONFIG_FPU_NETLIST
  Say Y here to use a VHDL netlist of the GRFPU-Lite. This is
  only available in certain versions of grlib.

Enable Instruction cache
CONFIG_ICACHE_ENABLE
  The instruction cache should always be enabled to allow
  maximum performance. Some low-end systems might want to
  save area and disable the cache, but this will reduce
  performance by a factor of 2 - 3.

Enable Data cache
CONFIG_DCACHE_ENABLE
  The data cache should always be enabled to allow
  maximum performance. Some low-end systems might want to
  save area and disable the cache, but this will reduce
  performance by at least a factor of 2.

Instruction cache associativity
CONFIG_ICACHE_ASSO1
  The instruction cache can be implemented as a multi-set cache with
  1 - 4 sets. Higher associativity usually increases the cache hit
  rate and thereby the performance. The downside is higher power
  consumption and increased gate-count for tag comparators.

  Note that a 1-set cache is effectively a direct-mapped cache.

Instruction cache set size
CONFIG_ICACHE_SZ1
  The size of each set in the instruction cache (kbytes). Valid values
  are 1 - 64 in binary steps. Note that the full range is only supported
  by the generic and virtex2 targets. Most target packages are limited
  to 2 - 16 kbyte. A large set size gives higher performance but might
  affect the maximum frequency (on ASIC targets). The total instruction
  cache size is the number of sets multiplied by the set size.
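
  As a trivial worked example of the sizing rule just stated (the
  helper name is illustrative only, and the same rule applies to the
  data cache):

    /* total cache size = number of sets x per-set size,
     * e.g. 2 sets x 8 kbyte = 16 kbyte */
    static unsigned int total_cache_kbytes(unsigned int nsets,
                                           unsigned int set_kbytes)
    {
        return nsets * set_kbytes;
    }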
 
Instruction cache line size
CONFIG_ICACHE_LZ16
  The instruction cache line size. Can be set to either 16 or 32
  bytes per line. Instruction caches typically benefit from larger
  line sizes, but on small caches it might be better to use 16
  bytes/line to limit the eviction miss rate.

Instruction cache replacement algorithm
CONFIG_ICACHE_ALGORND
  Cache replacement algorithm for caches with 2 - 4 sets. The 'random'
  algorithm selects the set to evict randomly. The least-recently-
  replaced (LRR) algorithm evicts the set least recently replaced. The
  least-recently-used (LRU) algorithm evicts the set least recently
  accessed. The random algorithm uses a simple 1- or 2-bit counter to
  select the eviction set and has low area overhead. The LRR scheme
  uses one extra bit in the tag ram and therefore also has low area
  overhead. However, the LRR scheme can only be used with 2-set caches.
  The LRU scheme typically has the best performance but also the
  highest area overhead. A 2-set LRU uses 1 flip-flop per line, a
  3-set LRU uses 3 flip-flops per line, and a 4-set LRU uses 5
  flip-flops per line to store the access history.
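
  The C sketch below is a behavioural illustration of how the three
  policies pick a victim set (my own naming, not the actual RTL); the
  two arrays stand in for the per-line history bits described above.

    #include <stdlib.h>

    enum repl_policy { REPL_RANDOM, REPL_LRR, REPL_LRU };

    static int pick_victim(enum repl_policy p, int nsets,
                           const unsigned *last_replaced,
                           const unsigned *last_accessed)
    {
        int victim = 0;
        int s;

        switch (p) {
        case REPL_RANDOM:              /* 1- or 2-bit counter in hardware */
            victim = rand() % nsets;
            break;
        case REPL_LRR:                 /* only defined for 2-set caches   */
            victim = (last_replaced[0] <= last_replaced[1]) ? 0 : 1;
            break;
        case REPL_LRU:                 /* least recently accessed set     */
            for (s = 1; s < nsets; s++)
                if (last_accessed[s] < last_accessed[victim])
                    victim = s;
            break;
        }
        return victim;
    }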
 
Instruction cache locking
CONFIG_ICACHE_LOCK
  Say Y here to enable cache locking in the instruction cache.
  Locking can be done at cache-line level, but will increase the
  width of the tag ram by one bit. If you don't know what
  locking is good for, it is safe to say N.

Data cache associativity
CONFIG_DCACHE_ASSO1
  The data cache can be implemented as a multi-set cache with
  1 - 4 sets. Higher associativity usually increases the cache hit
  rate and thereby the performance. The downside is higher power
  consumption and increased gate-count for tag comparators.

  Note that a 1-set cache is effectively a direct-mapped cache.

Data cache set size
CONFIG_DCACHE_SZ1
  The size of each set in the data cache (kbytes). Valid values are
  1 - 64 in binary steps. Note that the full range is only supported
  by the generic and virtex2 targets. Most target packages are limited
  to 2 - 16 kbyte. A large cache gives higher performance, but the
  data cache is timing critical and too large a setting might affect
  the maximum frequency (on ASIC targets). The total data cache size
  is the number of sets multiplied by the set size.

Data cache line size
CONFIG_DCACHE_LZ16
  The data cache line size. Can be set to either 16 or 32 bytes per
  line. A smaller line size gives better associativity and higher
  cache hit rate, but requires a larger tag memory.

Data cache replacement algorithm
CONFIG_DCACHE_ALGORND
  See the explanation for the instruction cache replacement algorithm.

Data cache locking
CONFIG_DCACHE_LOCK
  Say Y here to enable cache locking in the data cache.
  Locking can be done at cache-line level, but will increase the
  width of the tag ram by one bit. If you don't know what
  locking is good for, it is safe to say N.

Data cache snooping
CONFIG_DCACHE_SNOOP
  Say Y here to enable data cache snooping on the AHB bus. This is only
  useful if you have additional AHB masters such as the DSU or a
  target PCI interface. Note that the target technology must support
  dual-port RAMs for this option to be enabled. Dual-port RAMs are
  currently supported on Virtex/2, Virage and Actel targets.

Data cache snooping implementation
CONFIG_DCACHE_SNOOP_FAST
  The default snooping implementation is 'slow', which works if you
  don't have AHB slaves in cacheable areas capable of zero-waitstate
  non-sequential write accesses. Otherwise use 'fast' and accept a
  few kgates of extra area. This option is currently only needed in
  multi-master systems with the SSRAM or DDR memory controllers.

Separate snoop tags
CONFIG_DCACHE_SNOOP_SEPTAG
  Enable a separate memory to store the data tags used for snooping.
  This is necessary when snooping support is wanted in systems
  with an MMU, typically SMP systems. In this case, the snoop
  tags will contain the physical tag address while the normal
  tags contain the virtual tag address. This option can also be used
  together with the 'fast snooping' option to enable snooping
  support on technologies without dual-port RAMs. In that case,
  the snoop tag RAM will be implemented using a two-port RAM.

Fixed cacheability map
CONFIG_CACHE_FIXED
  If this variable is 0, the cacheable memory regions are defined
  by the AHB plug&play information (default). To override the
  plug&play settings, this variable can be set to indicate which
  areas should be cached. The value is treated as a 16-bit hex value
  with each bit defining if a 256 Mbyte segment should be cached or not.
  The right-most (LSB) bit defines the cacheability of AHB address
  0 - 256 MByte, while the left-most (MSB) bit defines AHB address
  3840 - 4096 MByte. If the bit is set, the corresponding area is
  cacheable. A value of 00F3 defines address 0 - 0x20000000 and
  0x40000000 - 0x80000000 as cacheable.
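
  As a sanity check of the bit layout, the small C helper below
  (illustrative name, not part of GRLIB) lists the cacheable 256 Mbyte
  segments for a given map; called with 0x00F3 it prints the six
  segments that make up the two regions in the example above.

    #include <stdio.h>

    /* bit 0 (LSB) covers 0x00000000-0x0FFFFFFF,
     * bit 15 (MSB) covers 0xF0000000-0xFFFFFFFF */
    static void print_cacheable_segments(unsigned int map)
    {
        int seg;

        for (seg = 0; seg < 16; seg++)
            if (map & (1u << seg))
                printf("cacheable: 0x%08x - 0x%08x\n",
                       seg * 0x10000000u,
                       seg * 0x10000000u + 0x0FFFFFFFu);
    }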
 
Local data ram
CONFIG_DCACHE_LRAM
  Say Y here to add a local ram to the data cache controller.
  Accesses to the ram (load/store) will be performed at 0 waitstates
  and store data will never be written back to the AHB bus.

Size of local data ram
CONFIG_DCACHE_LRAM_SZ1
  Defines the size of the local data ram in Kbytes. Note that most
  technology libraries do not support rams larger than 16 Kbyte.

Start address of local data ram
CONFIG_DCACHE_LRSTART
  Defines the 8 MSB bits of the start address of the local data ram.
  By default set to 8f (start address = 0x8f000000), but any value
  (except 0) is possible. Note that the local data ram 'shadows'
  a 16 Mbyte block of the address space.
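
  For illustration (the helper name is invented), the relation between
  the configured 8-bit value and the ram's base address is simply:

    /* 0x8f -> 0x8f000000; the ram then shadows
     * 0x8f000000 - 0x8fffffff (16 Mbyte) */
    static unsigned int lram_base(unsigned char lrstart)
    {
        return (unsigned int)lrstart << 24;
    }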
 
MMU enable
CONFIG_MMU_ENABLE
  Say Y here to enable the Memory Management Unit.

MMU split icache/dcache translation lookaside buffer
CONFIG_MMU_COMBINED
  Select "combined" for a combined icache/dcache translation lookaside
  buffer (TLB), or "split" for separate icache and dcache TLBs.

MMU TLB replacement scheme
CONFIG_MMU_REPARRAY
  Select "LRU" to use the "least recently used" algorithm for TLB
  replacement, or "Increment" for a simple incremental replacement
  scheme.

Combined i/dcache TLB
CONFIG_MMU_I2
  Select the number of entries for the instruction TLB, or the
  combined icache/dcache TLB if such is used.

Split TLB, dcache
CONFIG_MMU_D2
  Select the number of entries for the dcache TLB.

Fast writebuffer
CONFIG_MMU_FASTWB
  Only selectable if the split TLB is enabled. If the fast writebuffer
  is enabled, the TLB hit will be made concurrent with the cache hit.
  This leads to higher store performance, but increased power and area.

DSU enable
CONFIG_DSU_ENABLE
  The debug support unit (DSU) allows non-intrusive debugging and tracing
  of both executed instructions and AHB transfers. If you want to enable
  the DSU, say Y here and select the configuration below.

Trace buffer enable
CONFIG_DSU_TRACEBUF
  Say Y to enable the trace buffer. The buffer is not necessary for
  debugging, only for tracing instructions and data transfers.

Enable instruction tracing
CONFIG_DSU_ITRACE
  If you say Y here, an instruction trace buffer will be implemented
  in each processor. The trace buffer will trace executed instructions
  and their results, and place them in a circular buffer. The buffer
  can be read out by any AHB master, and in particular by the debug
  communication link.

Size of trace buffer
CONFIG_DSU_ITRACESZ1
  Select the buffer size (in kbytes) for the instruction trace buffer.
  Each line in the buffer needs 16 bytes. A 128-entry buffer will thus
  need 2 kbyte.
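
  The sizing rule reduces to simple arithmetic; a small illustrative
  helper (the name is mine, and the same rule applies to the AHB trace
  buffer below):

    /* each trace line occupies 16 bytes,
     * so 2 kbyte -> 128 entries */
    static unsigned int trace_entries(unsigned int kbytes)
    {
        return (kbytes * 1024u) / 16u;
    }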
 
Enable AHB tracing
CONFIG_DSU_ATRACE
  If you say Y here, an AHB trace buffer will be implemented in the
  debug support unit. The AHB buffer will trace all transfers
  on the AHB bus and save them in a circular buffer. The trace buffer
  can be read out by any AHB master, and in particular by the debug
  communication link.

Size of trace buffer
CONFIG_DSU_ATRACESZ1
  Select the buffer size (in kbytes) for the AHB trace buffer.
  Each line in the buffer needs 16 bytes. A 128-entry buffer will thus
  need 2 kbyte.

LEON3FT enable
CONFIG_LEON3FT_EN
  Say Y here to use the fault-tolerant LEON3FT core instead of the
  standard non-FT LEON3.

IU Register file protection
CONFIG_IUFT_NONE
  Select the FT implementation in the LEON3FT integer unit
  register file. The options include parity, parity with
  sparing, 7-bit BCH and TMR.

FPU Register file protection
CONFIG_FPUFT_EN
  Say Y to enable SEU protection of the FPU register file.
  The GRFPU will be protected using 8-bit parity without restart, while
  the GRFPU-Lite will be protected with 4-bit parity with restart. If
  disabled, the FPU register file will be implemented using flip-flops.

Register file error injection
CONFIG_RF_ERRINJ
  Say Y here to enable error injection into the IU/FPU register files.
  This only affects simulation.

Cache memory protection
CONFIG_CACHE_FT_EN
  Enable SEU error-correction in the cache memories.

Cache memory error injection
CONFIG_CACHE_ERRINJ
  Say Y here to enable error injection into the cache memories.
  This only affects simulation.

LEON3FT netlist
CONFIG_LEON3_NETLIST
  Say Y here to use a VHDL netlist of the LEON3FT. This is
  only available in certain versions of grlib.

IU assembly printing
CONFIG_IU_DISAS
  Enable printing of executed instructions to the console.

IU assembly printing in netlist
CONFIG_IU_DISAS_NET
  Enable printing of executed instructions to the console also
  when simulating a netlist. NOTE: with this option enabled, it
  will not be possible to pass place&route.

32-bit program counters
CONFIG_DEBUG_PC32
  Since the two LSBs of the program counters are always zero, they are
  normally not implemented. If you say Y here, the program counters will
  be implemented with full 32 bits, making debugging of the VHDL model
  much easier. Turn off this option for synthesis or you will be wasting
  area.
