OpenCores
How to design a proper testbench for a project?
by GilianB on Sep 4, 2016
GilianB
Posts: 2
Joined: Sep 2, 2016
Last seen: Jun 14, 2023
I want to design a testbench for a project on this site.
I'm unsure how to design a good one. Until now I've always tested a couple of values and then assumed it would work for the rest as well. That obviously isn't the right way to do testing.
I think checking all possible values is impossible because there is a generic input. It could be set to 8 and I would have to check all possible values, or it could be set to 32 and I would have to check all possible values, etc...
RE: How to design a proper testbench for a project?
by dgisselq on Sep 5, 2016
dgisselq
Posts: 247
Joined: Feb 20, 2015
Last seen: Jul 15, 2022
Welcome to the fundamental problem of testing: There's never enough time, money, patience, etc. to test every single logical combination through a core. So ... how do you get relevant tests?

There are many approaches to this problem.

One method is known as white box testing. This is where you look inside your code at every path, and make sure at least one test case covers each path. While reasonable, this can also be expensive.

Another method is black box testing--where you treat the component like a black box, and just hit it with data to see what it does. As I write this description, though, this seems hardly focused and perhaps even a waste of time.

You can also do component level testing. I have met many individuals who strongly believe that every Verilog module should have a test bench so that it can be tested individually prior to full integration. With the double clocked FFT module, this was my approach: I tested each component individually before testing the entire FFT core. That's somewhat the concept of open cores in general: every "core" should be able to be individually tested. It still leaves you with the problem of integration testing, but that should be a lot easier when all the components already work.

There's actually a fourth type of testing that is quite common as well: ticking box testing. In this case, you give it to the customer and wait to hear what problems they have with it. ;)

I have not always been consistent with the testing I've done, but I am trying to do the following: In the main directory of every core, I want to create a "make test" script. That script should either end in "SUCCESS" or "FAIL" on the last line.
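The heart of such a script can be tiny. Here is a sketch in Python (rather than make/shell, purely for illustration); the log format and messages are made up:

```python
# Sketch of the check a "make test" driver could run over a simulation
# log: FAIL if any ERROR line appears, SUCCESS otherwise.

def verdict(log_lines):
    """Return 'SUCCESS' or 'FAIL' for a list of simulator log lines."""
    return "FAIL" if any("ERROR" in line for line in log_lines) else "SUCCESS"

clean = ["@100 INFO tb: reset released", "@900 INFO tb: all vectors passed"]
broken = clean + ["@500 ERROR tb.uut: checksum mismatch"]
print(verdict(clean))   # SUCCESS
print(verdict(broken))  # FAIL
```

The one rule worth keeping from this sketch is that the verdict is the last line printed, so automation can grab it with `tail -1`.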

While I think this is good practice, it doesn't necessarily answer your question of --- what gets tested and what doesn't? Perhaps that's just a judgment call?

Dan

RE: How to design a proper testbench for a project?
by jdoin on Sep 5, 2016
jdoin
Posts: 51
Joined: Sep 1, 2009
Last seen: May 2, 2024
As Dan put it, good testbenches are hard to write.

Testbenching is a fundamental part of hardware design, though. As the designer, you should have a mental model of how the circuit must operate, how the states should change in state machines, and how each state should behave with its combinational logic, for example.

The testbench must verify the logic for the designed functionality, and should test the "corner cases". For example, in math operations, you should test for sign inversion, intermediate very large/small values, and domain error situations.
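Those math corner cases can be written out concretely. A small sketch in Python; the 8-bit width and the two's-complement wrap model are just for illustration:

```python
def wrap(x, bits):
    """Two's-complement wraparound: reduce x to a signed value of width bits."""
    mask = (1 << bits) - 1
    x &= mask
    return x - (1 << bits) if x >= (1 << (bits - 1)) else x

BITS = 8
lo, hi = -(1 << (BITS - 1)), (1 << (BITS - 1)) - 1   # -128 and 127

# Corner vectors: the extremes, the extremes +/- 1, zero, and the sign
# boundary; a testbench would drive every pair of these into the adder.
corners = [lo, lo + 1, -1, 0, 1, hi - 1, hi]
pairs = [(a, b) for a in corners for b in corners]
assert len(pairs) == 49

# The overflow and sign-inversion cases mentioned above:
assert wrap(hi + 1, BITS) == lo    # 127 + 1 wraps to -128
assert wrap(lo - 1, BITS) == hi    # -128 - 1 wraps to 127
assert wrap(-lo, BITS) == lo       # negating -128 overflows back to -128
```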

For communications protocols and serial processing, you should test for every fail scenario, and check for clock synchronization, etc.

Magic values are also important, like checkerboard patterns, null values, initialization constants, default fallback values and states.

A core must absolutely have a testbench to verify that the implementation is correct, so test vectors are important. For crypto functions, for example, there are published test vectors that exercise all corner cases of the algorithm. You should implement all of them in the testbench, unless they are truly infeasible (like testing gigabyte data files in the simulator).
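For a hash core, for instance, the published vectors can go straight into the testbench. The same idea in Python, using hashlib as the reference model; the vectors are the well-known FIPS 180 SHA-256 examples:

```python
import hashlib

# Known-answer test vectors for SHA-256 (the FIPS 180 "abc" and
# empty-message examples). A hardware testbench would feed the same
# messages into the core and compare digests.
vectors = [
    (b"abc",
     "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"),
    (b"",
     "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"),
]

for message, expected in vectors:
    digest = hashlib.sha256(message).hexdigest()
    assert digest == expected, f"SHA-256 mismatch for {message!r}"
```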

- Jonny
RE: How to design a proper testbench for a project?
by dgisselq on Sep 6, 2016
dgisselq
Posts: 247
Joined: Feb 20, 2015
Last seen: Jul 15, 2022
Jonny,

Thanks for joining the discussion! All good points.

Please allow me to build on your comments:

After I release a core, if I find later that the core can't handle ... something it was supposed to handle, finding and fixing the bug can be a challenge. It often involves building a test case that specifically targets the bug. In that case, I like to make that test case part of the test suite for the program, so that any future tests are guaranteed to test that particular corner case.

This has been exceptionally true with the ZipCPU. When I first built it, I had no idea what the corner cases were or might be. Then, as I've worked with it, I've found bugs and problems--often in places where I never expected I would find them. As I find these bugs, my test suite grows and grows, and helps me do regression testing now.

Dan

RE: How to design a proper testbench for a project?
by jt_eaton on Sep 6, 2016
jt_eaton
Posts: 142
Joined: Aug 18, 2008
Last seen: Sep 29, 2018
Welcome to the fundamental problem of testing: There's never enough time, money, patience, etc. to test every single logical combination through a core. So ... how do you get relevant tests?

There are many approaches to this problem.

One method is known as white box testing. This is where you look inside your code at every path, and make sure at least one test case covers each path. While reasonable, this can also be expensive.
====================================
Also known as targeted testing. Got an 8 bit full adder? Start with 00 and FF: 00+00, 00+FF, FF+00, FF+FF. Do that with no carry in, then repeat with carry in set.

Once you have checked your limits, add in your limits plus or minus one: bring in 01 and FE and combine them with 00 and FF in all possible combinations. Step next to 02 and FD, and keep going till you run out of CPU cycles.

Run code coverage and create vectors to target any holes.

===================
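That boundary walk can be sketched in Python, with a bit-level ripple-carry adder standing in for the UUT (both models here are illustrative, not anyone's actual core):

```python
def full_adder(a, b, cin):
    """One-bit full adder: (sum, carry_out)."""
    return a ^ b ^ cin, (a & b) | (cin & (a ^ b))

def ripple_adder8(a, b, cin):
    """Bit-level 8-bit ripple-carry adder, standing in for the UUT."""
    total = 0
    for i in range(8):
        bit, cin = full_adder((a >> i) & 1, (b >> i) & 1, cin)
        total |= bit << i
    return total, cin           # (8-bit sum, carry out)

def boundary_values(step):
    """00..step and FF-step..FF: the limits, widened one step at a time."""
    return list(range(0, step + 1)) + list(range(0xFF - step, 0x100))

for step in range(3):           # 00/FF first, then +/- 1, then +/- 2
    for a in boundary_values(step):
        for b in boundary_values(step):
            for cin in (0, 1):
                expected = ((a + b + cin) & 0xFF, (a + b + cin) >> 8)
                assert ripple_adder8(a, b, cin) == expected
```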


Another method is black box testing--where you treat the component like a black box, and just hit it with data to see what it does. As I write this description, though, this seems hardly focused and perhaps even a waste of time.
====================
Also known as random testing. Some folks swear by it. Create a stream of random data and feed it to both your UUT and a software model. Compare the outputs. Works best when you are sitting on top of a huge compute farm.
==========================
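In miniature, that compare-against-a-model loop looks like this; a Python sketch with a shift-and-add multiplier standing in for the UUT (the UUT and the vector count are illustrative):

```python
import random

def shift_add_mul8(a, b):
    """Shift-and-add 8x8 multiplier, standing in for the UUT."""
    acc = 0
    for i in range(8):
        if (b >> i) & 1:
            acc += a << i
    return acc & 0xFFFF

random.seed(1)                       # seed it: reproducible failures matter
for _ in range(10_000):
    a, b = random.randrange(256), random.randrange(256)
    model = (a * b) & 0xFFFF         # the trusted software model
    assert shift_add_mul8(a, b) == model, f"mismatch at a={a}, b={b}"
```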



You can also do component level testing. I have met many individuals who strongly believe that every Verilog module should have a test bench so that it can be tested individually prior to full integration. With the double clocked FFT module, this was my approach: I tested each component individually before testing the entire FFT core. That's somewhat the concept of open cores in general: every "core" should be able to be individually tested. It still leaves you with the problem of integration testing, but that should be a lot easier when all the components already work.

=========================
I swear by this. You get full access to each component's inputs and outputs, so you don't have to muck around trying to figure out how to stimulate and observe nodes buried deep inside the hierarchy. You can component test a full adder at one operation per clock cycle; you could never come close to that running code on the CPU.
==============================
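At component level, exhaustive testing often really is on the table: a one-bit full adder has only eight input combinations. A Python sketch, using the textbook gate equations:

```python
from itertools import product

def full_adder(a, b, cin):
    """Gate-level one-bit full adder: (sum, carry_out)."""
    return a ^ b ^ cin, (a & b) | (cin & (a ^ b))

# All eight input combinations: exhaustive at component level is cheap.
for a, b, cin in product((0, 1), repeat=3):
    total = a + b + cin
    assert full_adder(a, b, cin) == (total & 1, total >> 1)
```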

There's actually a fourth type of testing that is quite common as well: ticking box testing. In this case, you give it to the customer and wait to hear what problems they have with it. ;)
=================
And the customer will test your design for FREE.
=====================

I have not always been consistent with the testing I've done, but I am trying to do the following: In the main directory of every core, I want to create a "make test" script. That script should either end in "SUCCESS" or "FAIL" on the last line.
================================
Every line printed in a log file should have:

1) Time stamp, in a consistent format.

2) Severity: ERROR, WARNING, or INFO.

3) Source: instance name of the generating module or task.

4) Payload: the actual message.

============================
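A formatter following those four fields might look like this (Python sketch; the exact column layout is a matter of taste):

```python
SEVERITIES = ("ERROR", "WARNING", "INFO")

def log_line(timestamp, severity, source, payload):
    """One log line: timestamp, severity, source instance, message."""
    assert severity in SEVERITIES, f"unknown severity {severity!r}"
    return f"@{timestamp:>10} {severity:<7} {source}: {payload}"

print(log_line(1250, "ERROR", "tb.uut.fifo0", "write while full"))
```

Keeping the fields in fixed columns pays off later: `grep ERROR` and a sort on the timestamp are all the post-processing a "make test" script needs.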


While I think this is good practice, it doesn't necessarily answer your question of --- what gets tested and what doesn't? Perhaps that's just a judgment call?

Dan




Plus:

Put protocol checkers on every bus and design them so you could include them in your hardware if needed.
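A checker in that spirit, sketched in Python for a generic valid/ready handshake (the rule checked is a common one: once valid is asserted, it must stay high and the data must hold until ready accepts it; the signal names are generic, not from any particular bus):

```python
class HandshakeChecker:
    """Per-clock valid/ready rule: once valid is asserted, valid must
    stay high and data must hold steady until ready accepts it."""

    def __init__(self):
        self.pending = None        # data waiting for ready, or None
        self.errors = []

    def clock(self, cycle, valid, ready, data):
        if self.pending is not None:
            if not valid:
                self.errors.append(f"@{cycle}: valid dropped before ready")
            elif data != self.pending:
                self.errors.append(f"@{cycle}: data changed before ready")
        self.pending = data if (valid and not ready) else None

chk = HandshakeChecker()
chk.clock(0, valid=1, ready=0, data=0xAA)   # stalled
chk.clock(1, valid=1, ready=1, data=0xAA)   # accepted: legal
chk.clock(2, valid=1, ready=0, data=0x55)   # stalled again
chk.clock(3, valid=1, ready=0, data=0x77)   # illegal: data changed
assert len(chk.errors) == 1
```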

Don't do a test bench, do a test fixture. A test bench is a top level module with no ports at all. A test fixture has inputs for clock and reset and outputs for error and finished.

A test fixture plugs into a test bench. The test bench supplies clock and reset, and counts errors until it sees the finish signal, then reports SUCCESS or FAIL. If #TIMEOUT clocks pass with no finish, it reports FAIL with a timeout error.

Advantage?

A test bench for Icarus Verilog is written in Verilog. A test bench for Verilator is written in C++.

They can both use the same test fixture. Do that and you pick up two tools at once.
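The bench side of that split reduces to a small loop. A Python sketch of the behavior described above; the timeout value and the toy fixture are illustrative:

```python
TIMEOUT = 1000

def run_bench(fixture):
    """Drive fixture(cycle) -> (error, finished); return the verdict."""
    errors = 0
    for cycle in range(TIMEOUT):
        error, finished = fixture(cycle)
        errors += error
        if finished:
            return "SUCCESS" if errors == 0 else "FAIL"
    return "FAIL (timeout)"

# A toy fixture that raises one error and finishes at cycle 20.
def toy_fixture(cycle):
    return (1 if cycle == 5 else 0, cycle == 20)

print(run_bench(toy_fixture))             # FAIL
print(run_bench(lambda c: (0, c == 20)))  # SUCCESS
```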



John Eaton



RE: How to design a proper testbench for a project?
by GilianB on Sep 15, 2016
GilianB
Posts: 2
Joined: Sep 2, 2016
Last seen: Jun 14, 2023
Testing all values looks like the best idea to me. The problem is that I have a generic value, and I don't know how to deal with that in my testbench. Also, after looking at how other people's testbenches work, I noticed they often use scripts. I have no idea why I should make scripts. Can't I just use a VHDL loop to test all values?
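A loop is fine for small widths; the trouble is how fast the count blows up with the generic. A quick tally, sketched in Python for a hypothetical two-input core of width WIDTH:

```python
# Exhaustive-test cost for a two-input core of generic WIDTH:
# 2**WIDTH values per input, so 4**WIDTH input pairs in total.
for width in (4, 8, 16, 32):
    pairs = 4 ** width
    print(f"WIDTH={width:2}: {pairs:.3e} input pairs")

# WIDTH=8 (65,536 pairs) is trivial to exhaust; WIDTH=32 (~1.8e19) is
# not, so the same loop cannot be the whole strategy for every generic.
assert 4 ** 8 == 65536
```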
RE: How to design a proper testbench for a project?
by HanySalah on Sep 16, 2016
HanySalah
Posts: 2
Joined: Nov 21, 2013
Last seen: Oct 10, 2019
I have read the comments above and won't repeat them.

First, decide how complex your project is. Highly complex designs such as processors, or deeply specialized ones such as the latest communication-protocol standards, need sophisticated testbenches like UVM or VMM ones, which rely on constrained-random testing. The design specifications for such projects run to dozens of pages of requirements, and you will have to put serious effort into designing, implementing, and debugging such a testbench. It is really a simulation environment, since the goal is to place the design inside something that acts like the real environment it will eventually be buried in. Verification concepts take time to learn in depth; ask me on this thread and I will forward some good resources.

Simpler designs can be covered entirely by what is called an ad hoc testbench, or direct testing, which is the opposite of the constrained-random approach above: you write out simple inputs with their expected outputs, apply the inputs, and check the outputs.
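Direct testing in its simplest form, sketched in Python; the parity function is just a stand-in for whatever the design computes:

```python
# Direct (ad hoc) testing: an explicit table of inputs with expected
# outputs, applied in order and checked one by one.

def parity(x):
    """Parity bit of an 8-bit value: 1 if the number of set bits is odd."""
    return bin(x & 0xFF).count("1") & 1

test_table = [
    (0x00, 0),   # no ones
    (0x01, 1),
    (0xFF, 0),   # eight ones
    (0xAA, 0),   # checkerboard, four ones
    (0x80, 1),
]

for stimulus, expected in test_table:
    got = parity(stimulus)
    assert got == expected, f"parity({stimulus:#04x}) = {got}, want {expected}"
```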
© copyright 1999-2024 OpenCores.org, equivalent to Oliscience, all rights reserved. OpenCores®, registered trademark.