grlib-gpl-1.0.19-b3188/designs/leon3-wildcard-xcv300e/config.help

Number of processors
CONFIG_PROC_NUM
  The number of processor cores. The LEON3MP design can accommodate
  up to 4 LEON3 processor cores. Use 1 unless you know what you are
  doing.

Number of SPARC register windows
CONFIG_IU_NWINDOWS
  The SPARC architecture (and LEON) allows 2 - 32 register windows.
  However, any number except 8 will require that you modify and
  recompile your run-time system or kernel. Unless you know what
  you are doing, use 8.

SPARC V8 multiply and divide instructions
CONFIG_IU_V8MULDIV
  If you say Y here, the SPARC V8 multiply and divide instructions
  will be implemented. The instructions are: UMUL, UMULCC, SMUL,
  SMULCC, UDIV, UDIVCC, SDIV, SDIVCC. In code containing frequent
  integer multiplications and divisions, a significant performance
  increase can be achieved. Emulated floating-point operations will
  also benefit from this option.

  By default, the gcc compiler does not emit multiply or divide
  instructions, and your code must be compiled with -mv8 to see any
  performance increase. On the other hand, code compiled with -mv8
  will generate an illegal instruction trap when executed on processors
  with this option disabled.

  The divider consumes approximately 2 kgates, the multiplier 6 kgates.

Multiplier latency
CONFIG_IU_MUL_LATENCY_4
  The multiplier used for UMUL/SMUL instructions is implemented
  with a 16x16 multiplier which is iterated 4 times. This leads
  to a 4-cycle latency for multiply operations. To improve timing,
  a pipeline stage can be inserted into the 16x16 multiplier, which
  will lead to a 5-cycle latency for multiply operations.

Multiply-accumulate instructions
CONFIG_IU_MUL_MAC
  If you say Y here, the SPARC V8e UMAC/SMAC (multiply-accumulate)
  instructions will be enabled. The instructions implement a
  single-cycle 16x16->32 bit multiply with a 40-bit accumulator.
  The details of these instructions can be found in the LEON manual.

Single vector trapping
CONFIG_IU_SVT
  Single-vector trapping is a SPARC V8e option to reduce code size
  in small applications. If enabled, the processor will jump to
  the address of trap 0 (tt = 0x00) for all traps. No trap table
  is then needed. The trap type is present in %psr.tt and must
  be decoded by the O/S. It saves 4 Kbyte of code, but increases
  trap and interrupt overhead. Currently, the only O/S supporting
  this option is eCos. To enable SVT, the O/S must also set bit 13
  in %asr17.
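
The 4 Kbyte figure follows from the standard SPARC V8 trap table layout: 256 trap types, each with a 4-instruction (16-byte) entry, all of which SVT makes unnecessary. A quick sanity check:

```python
# SPARC V8 trap table: 256 trap types, each entry holds 4 instructions
# of 4 bytes each, i.e. 16 bytes per entry.
TRAP_TYPES = 256
ENTRY_BYTES = 4 * 4

table_bytes = TRAP_TYPES * ENTRY_BYTES
print(table_bytes)  # -> 4096, the 4 Kbyte saved when SVT removes the table
```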

Load latency
CONFIG_IU_LDELAY
  Defines the pipeline load delay (= pipeline cycles before the data
  from a load instruction is available for the next instruction).
  One cycle gives the best performance, but might create a critical path
  on targets with slow (data) cache memories. A 2-cycle delay can
  improve timing but will reduce performance by about 5%.

Reset address
CONFIG_IU_RSTADDR
  By default, a SPARC processor starts execution at address 0.
  With this option, any 4-kbyte-aligned reset start address can be
  chosen. Keep at 0 unless you really know what you are doing.

Power-down
CONFIG_PWD
  Say Y here to enable the power-down feature of the processor.
  Might reduce the maximum frequency slightly on FPGA targets.
  For details on the power-down operation, see the LEON3 manual.

Hardware watchpoints
CONFIG_IU_WATCHPOINTS
  The processor can have up to 4 hardware watchpoints, allowing both
  data and instruction breakpoints to be set at any memory location,
  including in PROM. Each watchpoint will use approximately 500 gates.
  Use 0 to disable the watchpoint function.

Floating-point enable
CONFIG_FPU_ENABLE
  Say Y here to enable the floating-point interface for the MEIKO
  or GRFPU. Note that no FPUs are provided with the GPL version
  of GRLIB. Both the Gaisler GRFPU and the Meiko FPU are commercial
  cores and must be obtained separately.

FPU selection
CONFIG_FPU_GRFPU
  Select between Gaisler Research's GRFPU and GRFPU-lite FPUs or the Sun
  Meiko FPU core. All cores are fully IEEE-754 compatible and support
  all SPARC FPU instructions.

GRFPU Multiplier
CONFIG_FPU_GRFPU_INFMUL
  On FPGA targets, choose the inferred multiplier. For ASIC
  implementations, choose between the Synopsys DesignWare (DW)
  multiplier and the Module Generator (ModGen) multiplier. The DW
  multiplier gives better results (smaller area and better timing)
  but requires a DW license. The ModGen multiplier is part of GRLIB
  and does not require a license.

Shared GRFPU
CONFIG_FPU_GRFPU_SH
  If enabled, multiple CPU cores will share one GRFPU.

GRFPC Configuration
CONFIG_FPU_GRFPC0
  Configures the GRFPU-LITE controller.

  In the simple configuration, the controller executes FP instructions
  in parallel with integer instructions. FP operands are fetched
  in the register file stage and the result is written in the write
  stage. This option uses the least area resources.

  The data forwarding configuration gives ~ 10 % higher FP performance
  than the simple configuration by adding data forwarding between the
  pipeline stages.

  The non-blocking controller allows FP load and store instructions to
  execute in parallel with FP instructions. The performance increase is
  ~ 20 % for FP applications. This option uses the most logic resources
  and is suitable for ASIC implementations.

Floating-point netlist
CONFIG_FPU_NETLIST
  Say Y here to use a VHDL netlist of the GRFPU-Lite. This is
  only available in certain versions of GRLIB.

Enable Instruction cache
CONFIG_ICACHE_ENABLE
  The instruction cache should always be enabled to allow
  maximum performance. Some low-end systems might want to
  save area and disable the cache, but this will reduce
  performance by a factor of 2 - 3.

Enable Data cache
CONFIG_DCACHE_ENABLE
  The data cache should always be enabled to allow
  maximum performance. Some low-end systems might want to
  save area and disable the cache, but this will reduce
  performance by a factor of at least 2.

Instruction cache associativity
CONFIG_ICACHE_ASSO1
  The instruction cache can be implemented as a multi-set cache with
  1 - 4 sets. Higher associativity usually increases the cache hit
  rate and thereby the performance. The downside is higher power
  consumption and an increased gate count for tag comparators.

  Note that a 1-set cache is effectively a direct-mapped cache.

Instruction cache set size
CONFIG_ICACHE_SZ1
  The size of each set in the instruction cache (kbytes). Valid values
  are 1 - 64 in binary steps. Note that the full range is only supported
  by the generic and virtex2 targets. Most target packages are limited
  to 2 - 16 kbyte. A large set size gives higher performance but might
  affect the maximum frequency (on ASIC targets). The total instruction
  cache size is the number of sets multiplied by the set size.
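
The sizing rule can be sketched as a quick check (a hypothetical helper, not part of GRLIB):

```python
def total_icache_kbytes(nsets: int, set_kbytes: int) -> int:
    """Total cache size = number of sets multiplied by the set size."""
    assert 1 <= nsets <= 4, "LEON3 caches have 1 - 4 sets"
    # set sizes go 1 - 64 kbyte in binary steps (powers of two)
    assert set_kbytes in (1, 2, 4, 8, 16, 32, 64)
    return nsets * set_kbytes

print(total_icache_kbytes(2, 8))  # -> 16 (a 2-set cache of 8 kbyte sets)
```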

Instruction cache line size
CONFIG_ICACHE_LZ16
  The instruction cache line size. Can be set to either 16 or 32
  bytes per line. Instruction caches typically benefit from larger
  line sizes, but on small caches 16 bytes/line might be better
  to limit the eviction miss rate.

Instruction cache replacement algorithm
CONFIG_ICACHE_ALGORND
  Cache replacement algorithm for caches with 2 - 4 sets. The 'random'
  algorithm selects the set to evict randomly. The least-recently-replaced
  (LRR) algorithm evicts the set least recently replaced. The
  least-recently-used (LRU) algorithm evicts the set least recently
  accessed. The random algorithm uses a simple 1- or 2-bit counter to
  select the eviction set and has a low area overhead. The LRR scheme
  uses one extra bit in the tag ram and therefore also has a low area
  overhead. However, the LRR scheme can only be used with 2-set caches.
  The LRU scheme typically has the best performance but also the highest
  area overhead. A 2-set LRU uses 1 flip-flop per line, a 3-set LRU uses
  3 flip-flops per line, and a 4-set LRU uses 5 flip-flops per line to
  store the access history.
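
The quoted flip-flop counts match the minimum number of bits needed to encode the full access order of n sets, ceil(log2(n!)) — a property of any exact LRU scheme, shown here as a sketch:

```python
from math import ceil, factorial, log2

def lru_history_bits(nsets: int) -> int:
    """Minimum bits per line to record the full access order of the sets."""
    return ceil(log2(factorial(nsets)))

for n in (2, 3, 4):
    print(n, lru_history_bits(n))
# -> 2 sets: 1 bit, 3 sets: 3 bits, 4 sets: 5 bits, matching the text
```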

Instruction cache locking
CONFIG_ICACHE_LOCK
  Say Y here to enable cache locking in the instruction cache.
  Locking can be done on cache-line level, but will increase the
  width of the tag ram by one bit. If you don't know what
  locking is good for, it is safe to say N.

Data cache associativity
CONFIG_DCACHE_ASSO1
  The data cache can be implemented as a multi-set cache with
  1 - 4 sets. Higher associativity usually increases the cache hit
  rate and thereby the performance. The downside is higher power
  consumption and an increased gate count for tag comparators.

  Note that a 1-set cache is effectively a direct-mapped cache.

Data cache set size
CONFIG_DCACHE_SZ1
  The size of each set in the data cache (kbytes). Valid values are
  1 - 64 in binary steps. Note that the full range is only supported
  by the generic and virtex2 targets. Most target packages are limited
  to 2 - 16 kbyte. A large cache gives higher performance, but the
  data cache is timing-critical and too large a setting might affect
  the maximum frequency (on ASIC targets). The total data cache size
  is the number of sets multiplied by the set size.

Data cache line size
CONFIG_DCACHE_LZ16
  The data cache line size. Can be set to either 16 or 32 bytes per
  line. A smaller line size gives better associativity and a higher
  cache hit rate, but requires a larger tag memory.

Data cache replacement algorithm
CONFIG_DCACHE_ALGORND
  See the explanation for the instruction cache replacement algorithm.

Data cache locking
CONFIG_DCACHE_LOCK
  Say Y here to enable cache locking in the data cache.
  Locking can be done on cache-line level, but will increase the
  width of the tag ram by one bit. If you don't know what
  locking is good for, it is safe to say N.

Data cache snooping
CONFIG_DCACHE_SNOOP
  Say Y here to enable data cache snooping on the AHB bus. It is only
  useful if you have additional AHB masters such as the DSU or a
  target PCI interface. Note that the target technology must support
  dual-port RAMs for this option to be enabled. Dual-port RAMs are
  currently supported on Virtex/2, Virage and Actel targets.

Data cache snooping implementation
CONFIG_DCACHE_SNOOP_FAST
  The default snooping implementation is 'slow', which works if you
  don't have AHB slaves in cacheable areas capable of zero-waitstate
  non-sequential write accesses. Otherwise use 'fast' and suffer a
  few kgates of extra area. This option is currently only needed in
  multi-master systems with the SSRAM or DDR memory controllers.

Separate snoop tags
CONFIG_DCACHE_SNOOP_SEPTAG
  Enable a separate memory to store the data tags used for snooping.
  This is necessary when snooping support is wanted in systems
  with an MMU, typically SMP systems. In this case, the snoop
  tags will contain the physical tag address while the normal
  tags contain the virtual tag address. This option can also be
  used together with the 'fast snooping' option to enable snooping
  support on technologies without dual-port RAMs. In that case,
  the snoop tag RAM will be implemented using a two-port RAM.

Fixed cacheability map
CONFIG_CACHE_FIXED
  If this variable is 0, the cacheable memory regions are defined
  by the AHB plug&play information (default). To override the
  plug&play settings, this variable can be set to indicate which
  areas should be cached. The value is treated as a 16-bit hex value
  with each bit defining if a 256 Mbyte segment should be cached or not.
  The right-most (LSB) bit defines the cacheability of AHB address
  0 - 256 MByte, while the left-most (MSB) bit defines AHB address
  3840 - 4096 MByte. If a bit is set, the corresponding area is
  cacheable. A value of 00F3 defines address 0 - 0x20000000 and
  0x40000000 - 0x80000000 as cacheable.
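
The bit-to-segment mapping can be sketched as a small decoder (hypothetical helper, not part of GRLIB), reproducing the 00F3 example above:

```python
SEG = 0x10000000  # each map bit covers one 256 Mbyte segment

def cacheable_regions(map16: int):
    """Decode the 16-bit cacheability map into merged address ranges."""
    regions = []
    for bit in range(16):  # bit 0 (LSB) covers address 0, bit 15 the top
        if (map16 >> bit) & 1:
            start, end = bit * SEG, (bit + 1) * SEG
            if regions and regions[-1][1] == start:
                regions[-1] = (regions[-1][0], end)  # merge adjacent segments
            else:
                regions.append((start, end))
    return regions

print([(hex(s), hex(e)) for s, e in cacheable_regions(0x00F3)])
# -> [('0x0', '0x20000000'), ('0x40000000', '0x80000000')]
```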

Local data ram
CONFIG_DCACHE_LRAM
  Say Y here to add a local ram to the data cache controller.
  Accesses to the ram (load/store) will be performed with 0 waitstates,
  and store data will never be written back to the AHB bus.

Size of local data ram
CONFIG_DCACHE_LRAM_SZ1
  Defines the size of the local data ram in Kbytes. Note that most
  technology libraries do not support rams larger than 16 Kbyte.

Start address of local data ram
CONFIG_DCACHE_LRSTART
  Defines the 8 MSB bits of the start address of the local data ram.
  By default set to 8f (start address = 0x8f000000), but any value
  (except 0) is possible. Note that the local data ram 'shadows'
  a 16 Mbyte block of the address space.
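
The address arithmetic: the 8 MSB bits select one of the 256 possible 16 Mbyte (2**24 byte) blocks. A sketch with a hypothetical helper:

```python
def lram_start(msb8: int) -> int:
    """Start address of the local data ram from its 8 MSB bits."""
    assert 1 <= msb8 <= 0xFF, "any non-zero 8-bit value is allowed"
    return msb8 << 24  # the ram shadows a 16 Mbyte (2**24 byte) block

print(hex(lram_start(0x8F)))  # -> 0x8f000000, the default
```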

MMU enable
CONFIG_MMU_ENABLE
  Say Y here to enable the Memory Management Unit.

MMU split icache/dcache translation lookaside buffer
CONFIG_MMU_COMBINED
  Select "combined" for a combined icache/dcache translation lookaside
  buffer, or "split" for separate icache and dcache translation
  lookaside buffers.

MMU tlb replacement scheme
CONFIG_MMU_REPARRAY
  Select "LRU" to use the "least recently used" algorithm for TLB
  replacement, or "Increment" for a simple incremental replacement
  scheme.

Combined i/dcache tlb
CONFIG_MMU_I2
  Select the number of entries for the instruction TLB, or for the
  combined icache/dcache TLB if one is used.

Split tlb, dcache
CONFIG_MMU_D2
  Select the number of entries for the dcache TLB.

Fast writebuffer
CONFIG_MMU_FASTWB
  Only selectable if the split TLB is enabled. If the fast writebuffer
  is enabled, the TLB lookup is made concurrent with the cache lookup.
  This leads to higher store performance, but increased power and area.

DSU enable
CONFIG_DSU_ENABLE
  The debug support unit (DSU) allows non-intrusive debugging and tracing
  of both executed instructions and AHB transfers. If you want to enable
  the DSU, say Y here and select the configuration below.

Trace buffer enable
CONFIG_DSU_TRACEBUF
  Say Y to enable the trace buffer. The buffer is not necessary for
  debugging, only for tracing instructions and data transfers.

Enable instruction tracing
CONFIG_DSU_ITRACE
  If you say Y here, an instruction trace buffer will be implemented
  in each processor. The trace buffer will trace executed instructions
  and their results, and place them in a circular buffer. The buffer
  can be read out by any AHB master, and in particular by the debug
  communication link.

Size of trace buffer
CONFIG_DSU_ITRACESZ1
  Select the buffer size (in kbytes) for the instruction trace buffer.
  Each line in the buffer needs 16 bytes. A 128-entry buffer will thus
  need 2 kbyte.
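
The sizing arithmetic, as a quick sketch:

```python
LINE_BYTES = 16  # each trace line occupies 16 bytes

def trace_entries(buffer_kbytes: int) -> int:
    """Number of trace lines that fit in a buffer of the given size."""
    return buffer_kbytes * 1024 // LINE_BYTES

print(trace_entries(2))  # -> 128 entries in a 2 kbyte buffer
```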

Enable AHB tracing
CONFIG_DSU_ATRACE
  If you say Y here, an AHB trace buffer will be implemented in the
  debug support unit. The AHB buffer will trace all transfers
  on the AHB bus and save them in a circular buffer. The trace buffer
  can be read out by any AHB master, and in particular by the debug
  communication link.

Size of trace buffer
CONFIG_DSU_ATRACESZ1
  Select the buffer size (in kbytes) for the AHB trace buffer.
  Each line in the buffer needs 16 bytes. A 128-entry buffer will thus
  need 2 kbyte.

LEON3FT enable
CONFIG_LEON3FT_EN
  Say Y here to use the fault-tolerant LEON3FT core instead of the
  standard non-FT LEON3.

IU Register file protection
CONFIG_IUFT_NONE
  Select the FT implementation in the LEON3FT integer unit
  register file. The options include parity, parity with
  sparing, 7-bit BCH and TMR.

FPU Register file protection
CONFIG_FPUFT_EN
  Say Y to enable SEU protection of the FPU register file.
  The GRFPU will be protected using 8-bit parity without restart, while
  the GRFPU-Lite will be protected with 4-bit parity with restart. If
  disabled, the FPU register file will be implemented using flip-flops.

Register file error injection
CONFIG_RF_ERRINJ
  Say Y here to enable error injection into the IU/FPU register files.
  Affects only simulation.

Cache memory protection
CONFIG_CACHE_FT_EN
  Enable SEU error-correction in the cache memories.

Cache memory error injection
CONFIG_CACHE_ERRINJ
  Say Y here to enable error injection into the cache memories.
  Affects only simulation.

LEON3FT netlist
CONFIG_LEON3_NETLIST
  Say Y here to use a VHDL netlist of the LEON3FT. This is
  only available in certain versions of GRLIB.

IU assembly printing
CONFIG_IU_DISAS
  Enable printing of executed instructions to the console.

IU assembly printing in netlist
CONFIG_IU_DISAS_NET
  Enable printing of executed instructions to the console also
  when simulating a netlist. NOTE: with this option enabled, it
  will not be possible to pass place&route.

32-bit program counters
CONFIG_DEBUG_PC32
  Since the 2 LSB bits of the program counters are always zero, they are
  normally not implemented. If you say Y here, the program counters will
  be implemented with the full 32 bits, making debugging of the VHDL model
  much easier. Turn off this option for synthesis or you will be wasting
  area.

On-chip ram
CONFIG_AHBRAM_ENABLE
  Say Y here to add a block of on-chip ram to the AHB bus. The ram
  provides 0-waitstate read access and 0/1-waitstate write access.
  All AHB burst types are supported, as well as 8-, 16- and 32-bit
  data sizes.

On-chip ram size
CONFIG_AHBRAM_SZ1
  Set the size of the on-chip AHB ram. The ram is inferred/instantiated
  as four byte-wide ram slices to allow byte and half-word write
  accesses. It is therefore essential that the target package can
  infer byte-wide rams. This is currently supported on the generic,
  virtex, virtex2, proasic and axcelerator targets.

On-chip ram address
CONFIG_AHBRAM_START
  Set the start address of the AHB RAM (HADDR[31:20]). The RAM will occupy
  a 1 Mbyte slot at the selected address. Default is A00, corresponding
  to AHB address 0xA0000000.
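
The HADDR[31:20] field arithmetic can be checked with a small sketch (hypothetical helper):

```python
def ahbram_start(haddr_field: int) -> int:
    """AHB start address from HADDR[31:20]; the ram occupies a 1 Mbyte slot."""
    assert 0 <= haddr_field <= 0xFFF  # 12-bit field
    return haddr_field << 20

print(hex(ahbram_start(0xA00)))  # -> 0xa0000000, the default
```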