# $Id: w11a_tb_guide.txt 810 2016-10-02 16:51:12Z mueller $

Note: - Ghdl is used for all behavioral simulations
      - Optionally Vivado xsim can be used
      - For post synthesis or post implementation functional simulations
        either Ghdl or Vivado xsim can be used.
      - For timing simulations only Vivado xsim can be used.
      - ISE isim is also available, but considered legacy support

Guide to running test benches

Table of contents:

  1. Test bench environment
  2. Unit test benches
  3. System test benches
  4. Test bench driver
  5. Execute all available tests
  6. Available unit test benches
  7. Available system test benches

1. Test bench environment --------------------------------------------------

All test benches have the same simple structure:

- the test benches are 'self-checking'. For unit tests a stimulus process
  reads test patterns as well as the expected responses from a stimulus file

- the responses are checked in very simple cases by the stimulus process,
  in general by a monitoring process

- the test bench produces a comprehensive log file. For each checked
  response the line contains the word "CHECK" and either an "OK" or a
  "FAIL", in the latter case in general with an indication of what is wrong.
  Other unexpected behaviour, like timeouts, will also result in a line
  containing the word "FAIL".

- at the end a line with the word "DONE" is printed.

- Most tests can be run as
  - bsim: the behavioral model
  - ssim: post-synthesis functional
  - osim: post-optimization functional
  - rsim: post-routing functional
  - esim: post-synthesis timing
  - psim: post-optimization timing
  - tsim: post-routing timing
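
The log file conventions above lend themselves to a mechanical scan. The
following is only an illustrative sketch, not the actual 'tbfilt' logic;
the log file name and its content are made up for the example:

```shell
# a tiny example log following the conventions described above
# (tb_example.log and its content are placeholders for illustration)
cat > tb_example.log <<'EOF'
CHECK data path  OK
CHECK interrupts OK
DONE
EOF

# any line containing FAIL means failure; a DONE line must be present,
# otherwise the test bench stopped early (e.g. on a timeout)
if grep -q FAIL tb_example.log; then
  echo "=> FAIL"
elif grep -q DONE tb_example.log; then
  echo "=> PASS"
else
  echo "=> FAIL (no DONE seen, test bench stopped early)"
fi
```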

Building the simulation models is handled by the build environment. See
README_buildsystem_Vivado.txt for details of the Vivado flow and
README_buildsystem_ISE.txt for the ISE flow.

2. Unit test benches -------------------------------------------------------

All unit tests are executed via the 'tbw' (test bench wrapper) script.

- the test bench is run like

    tbw [stimfile] | tbfilt --tee

  where
  - tbw sets up the environment of the test bench and starts it.
    It generates the required symbolic links, e.g. to the stimulus file,
    with the defaults extracted from the file tbw.dat; if an optional file
    name is given this one will be used instead.
  - tbfilt saves the full test bench output to a logfile and filters
    the output for PASS/FAIL criteria

- for convenience a wrapper script 'tbrun_tbw' is used to generate the
  tbw|tbfilt pipe. This script also checks with 'make' whether the
  test bench is up-to-date or must be (re)-compiled.
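
The tee-and-filter structure of the tbw|tbfilt pipe can be mimicked with
standard tools. This is a conceptual stand-in only, assuming nothing about
the real scripts beyond what is described above; fake_tb and tb_fake.log
are placeholders:

```shell
# stand-in for 'tbw [stimfile]': emits CHECK/DONE lines plus other output
fake_tb() {
  echo "CHECK register file  OK"
  echo "some chatty trace output"
  echo "DONE"
}

# keep the full output in a logfile, pass only the PASS/FAIL-relevant
# lines to the terminal (roughly what tbfilt --tee is described to do)
fake_tb | tee tb_fake.log | grep -E 'CHECK|FAIL|DONE'
```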

3. System test benches -----------------------------------------------------

The system tests allow one to verify a full system design.
In this case the VHDL test bench code contains
- (simple) models of the memories used on the FPGA boards
- drivers for the rlink connection (currently just serialport)
- code to interface the rlink data stream to a UNIX 'named pipe',
  implemented with a C routine which is called via VHPI from VHDL.
  This way the whole ghdl simulation can be controlled via a bi-directional
  byte stream.

The rlink backend process can connect either via a named pipe to a ghdl
simulation, or via a serial port to a FPGA board. This way the same tests
can be executed in simulation and on real hardware.

In general the script 'tbrun_tbwrri' is used to generate the quite lengthy
command to properly set up the tbw|tbfilt pipe. This script also checks
with 'make' whether the test bench is up-to-date or must be (re)-compiled.
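
The named-pipe transport can be tried in isolation. The sketch below shows
only a one-directional round trip through a FIFO special file; the pipe and
file names are placeholders, and the real (binary, bi-directional) rlink
stream is set up by the scripts, not by hand:

```shell
# minimal named-pipe round trip (transport illustration only)
fifo=/tmp/rlink_demo_fifo                 # placeholder name
rm -f "$fifo"
mkfifo "$fifo"
cat "$fifo" > /tmp/rlink_demo_out &       # reader: plays the simulation side
echo "hello simulation" > "$fifo"         # writer: plays the backend side
wait                                      # reader terminates on end-of-stream
cat /tmp/rlink_demo_out
rm -f "$fifo"
```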

4. Test bench driver -------------------------------------------------------

All available tests (unit and system test benches) are described in a
set of descriptor files, usually called 'tbrun.yml'. The top level file
in $RETROBASE includes other descriptor files located in the source
directories of the tests.

The script 'tbrun' reads these descriptor files, selects tests based
on --tag and --exclude options, and executes the tests with the
simulation engine and simulation type given by the --mode option.
For a full description see 'man tbrun'.

The low level drivers 'tbrun_tbw' and 'tbrun_tbwrri' will automatically
build the model if it is not available or outdated. This is very convenient
when working with a single test bench during development.

When executing a large number of them it is in general better to separate
the model building (make phase) and the model execution (run phase). Both
the low level drivers as well as 'tbrun' support this via the options
--nomake and --norun.

The individual test benches are most simply started via tbrun and a proper
selection via --tag. Very helpful is

  cd $RETROBASE
  tbrun --dry --tag=.*

which gives a listing of all available tests. The tag list as well as
the shell commands to execute the tests are shown.
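
The --tag=.* example suggests that tag selection is a regular-expression
match; under that assumption the selection step can be illustrated as
follows (the tag names are taken from sections 6 and 7, except
sys_tst_serloop2, which is a hypothetical variant; the matching logic is
an assumption, not tbrun's actual code):

```shell
# emulate selecting tests by tag pattern, as tbrun --tag=<pattern>
# is assumed to do (anchored regular-expression match)
tags="comlib serport rlink sys_tst_serloop2 sys_tst_rlink sys_w11a"
pattern='sys_tst_serloop.*'
for t in $tags; do
  if echo "$t" | grep -qE "^${pattern}$"; then
    echo "selected: $t"
  fi
done
```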

5. Execute all available tests ---------------------------------------------

As stated above it is in general better to separate the model building
(make phase) and the model execution (run phase). The currently recommended
way to execute all test benches is given below.
The run time is measured on a 3 GHz dual core system.

  cd $RETROBASE
  # build all behavioral models
  # first all with the ISE work flow
  time nice tbrun -j 2 -norun -tag=ise -tee=tbrun_make_ise_bsim.log
  # --> real 3m41.732s  user 6m3.381s  sys 0m24.224s

  # then all with the Vivado work flow
  time nice tbrun -j 2 -norun -tag=viv -tee=tbrun_make_viv_bsim.log
  # --> real 3m36.532s  user 5m58.319s  sys 0m25.235s

  # then execute all behavioral models
  time nice tbrun -j 2 -nomake -tag=ise -tee=tbrun_run_ise_bsim.log
  # --> real 3m19.799s  user 5m45.060s  sys 0m6.625s
  time nice tbrun -j 2 -nomake -tag=viv -tee=tbrun_run_viv_bsim.log
  # --> real 3m49.193s  user 5m44.063s  sys 0m5.332s

All tests create an individual logfile. 'tbfilt' can be used to scan
these logfiles and create a summary with

  tbfilt -all -sum -comp

It should look like

  76m 0m00.034s c 0.92u 0 PASS tb_is61lv25616al_bsim.log
  76m 0m00.153s c 4.00u 0 PASS tb_mt45w8mw16b_bsim.log
  76m 0m00.168s c 1146 0 PASS tb_nx_cram_memctl_as_bsim.log
  ...
  ...
  76m 0m03.729s c 61258 0 PASS tb_pdp11core_bsim_base.log
  76m 0m00.083s c 1121 0 PASS tb_pdp11core_bsim_ubmap.log
  76m 0m00.068s c 1031 0 PASS tb_rlink_tba_pdp11core_bsim_ibdr.log
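
A summary in this format is easy to post-process further, e.g. to count
passed tests. The sketch below copies a few sample lines from above into
a placeholder file; the column layout is taken from the example, not from
a specification:

```shell
# count PASS entries in a tbfilt-style summary
# (summary.txt is a placeholder, content copied from the example above)
cat > summary.txt <<'EOF'
76m 0m00.034s c 0.92u 0 PASS tb_is61lv25616al_bsim.log
76m 0m00.153s c 4.00u 0 PASS tb_mt45w8mw16b_bsim.log
76m 0m00.168s c 1146 0 PASS tb_nx_cram_memctl_as_bsim.log
EOF
pass=$(grep -c ' PASS ' summary.txt)
total=$(grep -c '' summary.txt)     # grep -c '' counts all lines
echo "passed $pass of $total"
```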

6. Available unit test benches ----------------------------------------------

  tbrun --tag=comlib          # comlib unit tests
  tbrun --tag=serport         # serport unit tests
  tbrun --tag=rlink           # rlink unit tests
  tbrun --tag=issi            # SRAM model unit tests
  tbrun --tag=micron          # CRAM model unit tests
  tbrun --tag=sram_memctl     # SRAM controller unit tests
  tbrun --tag=cram_memctl     # CRAM controller unit tests
  tbrun --tag=w11a            # w11a unit tests

7. Available system test benches --------------------------------------------

  tbrun --tag=sys_tst_serloop.*    # all sys_tst_serloop designs
  tbrun --tag=sys_tst_rlink        # all sys_tst_rlink designs
  tbrun --tag=sys_tst_rlink_cuff   # all sys_tst_rlink_cuff designs
  tbrun --tag=sys_tst_sram         # all sys_tst_sram designs
  tbrun --tag=sys_w11a             # all w11a designs