TESTING HOWTO
By Bahadir Balban
HOW TO ADD TESTS
Tests in this project are meant to exercise the basic functionality of the
microkernel while covering test cases comprehensively. The distinguishing
trait of these tests is that each one tests only a single aspect of the
microkernel, rather than trying to test multiple aspects at once.
A benefit of this approach is that when a test fails, the problem is easier
to spot than it would be for a failure caused by a complex interaction of
multiple aspects of the microkernel.
The software must also be tested for more complex interactions, but these
tests are not the place for them. When a new test is added, it must cover a
valid test case, but the test itself must be as shallow as possible.
EXAMPLE
A thread creation test, while it may use other aspects of the microkernel
such as the l4_exchange_registers system call, should test only the thread
creation aspect of the microkernel. A separate l4_exchange_registers test
would then be created which in turn uses the thread creation aspect of the
microkernel, but tests only the parameters to the l4_exchange_registers
system call, covering them exhaustively. In essence, each test must be
shallow in itself and have a single focus.
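As a concrete sketch of this split (every name below, including the
structure layout and the l4_exchange_registers argument list, is
hypothetical and stands in for whatever the suite and the kernel API
actually provide):

    /* Hypothetical declarations standing in for the real test suite. */
    struct task_ids { int tid; int spid; };
    int create_test_thread(struct task_ids *ids);
    int l4_exchange_registers(struct task_ids *ids, unsigned long pc,
                              unsigned long sp);    /* args assumed */

    /* Shallow thread creation test: anything else used here is
     * plumbing, not the thing under test. */
    int test_thread_create(void)
    {
        struct task_ids ids;
        int err;

        if ((err = create_test_thread(&ids)) < 0)
            return err;             /* escalate, do not print */
        return 0;
    }

    /* Shallow exchange_registers test: thread creation is assumed
     * good (it has its own test) and only provides a target thread. */
    int test_exchange_registers(void)
    {
        struct task_ids ids;
        int err;

        if ((err = create_test_thread(&ids)) < 0)
            return err;
        return l4_exchange_registers(&ids, 0x1000, 0x2000);
    }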
Here's a step-by-step approach to testing:
1.) Create one test that tests one thing well.
This restates the paragraphs above. Test one aspect, so we know that that
aspect is functional. When there are lots of changes to the code and many
variables, this makes sure that the tests are one solid invariant in the
equation. They should be as deterministic and simple as possible, so that
the other variables can be debugged using the tests as anchor points.
Each test must be genuinely simple. E.g. if I have 2 test cases that
require slightly differing input to a test function, I would not use a
loop of i = 0..2 where the parameters change when i == 0 and i == 1. That
would be complicated and difficult to understand. Instead, those inputs
should be 2 separate test cases, as in the sketch below. This way I focus
100% on the code being debugged, and don't spend even a tiny fraction of
my focus trying to understand the test.
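For instance (do_map_test() and the two addresses are made up for
illustration), instead of one looped test whose input switches on i:

    /* Hypothetical helper exercising one mapping for a given address. */
    int do_map_test(unsigned long vaddr);

    /* Two separate, dead-simple cases rather than one loop whose
     * parameters change when i == 0 and i == 1. */
    int test_map_aligned(void)
    {
        return do_map_test(0x100000);   /* page-aligned input */
    }

    int test_map_unaligned(void)
    {
        return do_map_test(0x100123);   /* unaligned input */
    }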
2.) Number (1) is unexpectedly hard to achieve. In particular, a person who
is accustomed to designing complex systems that work will have a hard time
applying an approach that is the complete opposite. Make sure to stop
thinking smart, and get used to thinking not moderately dumb but very dumb.
3.) Make sure to escalate errors properly.
When porting to a new architecture, or wherever there are lots of variables,
your tests will fail, and finding out why is an extra burden. Error
escalation provides a quick and easy means of surfacing error details, as
in the sketch below.
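A minimal sketch of the idea, with invented call names: each layer returns
failures upward untouched, so the test runner can report exactly which
step broke and with what error code.

    int map_one_page(void);             /* hypothetical test step */

    /* Intermediate layer: escalate errors, never swallow them. */
    static int setup_mappings(void)
    {
        int err;

        if ((err = map_one_page()) < 0)
            return err;                 /* pass the code upward as-is */
        return 0;
    }

    int test_mappings(void)
    {
        int err;

        if ((err = setup_mappings()) < 0)
            return err;                 /* runner sees the original code */
        return 0;
    }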
4.) Make sure to print only when necessary.
Tests should print values only when a tested aspect goes wrong. A failure
printout (see the sketch below):
a) should describe what the test was trying to do, i.e. what was expected
to happen. This makes the expected result quicker to see.
b) should describe what happened, contrary to expectations. This should
tell what went wrong.
c) should say no more. Printout clutter is a huge negative factor in
making tests effective.
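A sketch of such a printout, assuming a printf-like routine is available
to the tests (the test name and values are invented):

    #include <stdio.h>

    int check_ipc_reply(unsigned int got, unsigned int expected)
    {
        if (got != expected) {
            /* (a) what the test was trying to do */
            printf("ipc reply test: expected tag 0x%x\n", expected);
            /* (b) what actually happened */
            printf("ipc reply test: got 0x%x instead\n", got);
            /* (c) nothing more: no clutter */
            return -1;
        }
        return 0;       /* success prints nothing at all */
    }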
5.) Complicating the tests:
It is often necessary for simple tests to become the building blocks of
more complicated tests, so that complicated interactions in the system
can be tested.
Complicating the test cases should be done with a systematic approach:
a) Tests are like building blocks. Each test that tests one aspect of the
software should be designed as a rock-solid way of testing that one
aspect. See (1) above for the first step: design extremely simple tests.
b) A design should be carefully made such that multiple aspects as
described in (a) provide a highly deterministic and reproducible
sequence of events when combined together. While this significantly
narrows down the tested domain of functionality, it equally ensures
that the test case stays under full control, so that should anything
go wrong and the test fail, it is much easier to find out what went
wrong. E.g. a test that randomly creates threads would be almost
useless, since any failure of the test would be irreproducible and very
hard to use to track down any bug.
c) Test cases should slowly and surely grow into a larger domain of
functionality by rigorously and systematically following steps (a) and
(b). If these steps are not followed rigorously, the tests get diluted
in quality, and the testing structure quickly becomes a mess.
Some examples:
~~~~~~~~~~~~~~
1) Test 1: Extended ipc on already mapped pages.
2) Test 2: Extended ipc on page-faulting pages.
3) Test 3: Extended ipc on page-faulting pages, but non-contiguous
physical pages.
4) Test 4: Extended ipc on page-faulting pages, but non-contiguous
physical pages, and between threads in different address spaces.
5) Test 5: Extended ipc on page-faulting pages, but non-contiguous
physical pages, between threads in different address spaces, and
using the same virtual addresses in both threads.
You see, a single ipc test systematically grows into a more complex
test, but at each step only one potentially variable aspect is added and
tested. The other aspects have already been validated by earlier,
simpler combinations of test cases. A sketch of how such a progression
might be laid out follows.
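One hypothetical way to lay that progression out in code, where do_xipc()
is an imagined helper that performs a single extended ipc transfer with
the given traits; each test flips exactly one trait relative to the
previous one:

    enum paging { MAPPED, FAULTING };
    enum phys   { CONTIG, NONCONTIG };
    enum space  { SAME_SPACE, CROSS_SPACE };
    enum vaddr  { DIFF_VADDR, SAME_VADDR };

    /* Hypothetical helper running one extended ipc transfer. */
    int do_xipc(enum paging p, enum phys ph, enum space s, enum vaddr v);

    int test1(void) { return do_xipc(MAPPED,   CONTIG,    SAME_SPACE,  DIFF_VADDR); }
    int test2(void) { return do_xipc(FAULTING, CONTIG,    SAME_SPACE,  DIFF_VADDR); }
    int test3(void) { return do_xipc(FAULTING, NONCONTIG, SAME_SPACE,  DIFF_VADDR); }
    int test4(void) { return do_xipc(FAULTING, NONCONTIG, CROSS_SPACE, DIFF_VADDR); }
    int test5(void) { return do_xipc(FAULTING, NONCONTIG, CROSS_SPACE, SAME_VADDR); }

A failure in test5 then points directly at the same-virtual-address case,
since every other trait was already validated by tests 1 through 4.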