
Some Thoughts on Automated Network Testing for eCos
***************************************************

Hugo Tyson, Red Hat, Cambridge UK, 2000-07-28


Requirements
============

This thinking is dominated by the need for automated continuous testing
of the StrongARM EBSA-285 boards, which have two ethernet interfaces.
We also have some needs for ongoing eCos network testing.

 o TCP testing: move a large amount of data, checking its correctness
   (with several streams running in parallel at once).

 o UDP testing: similar, but using UDP.

 o TFTP testing: an external server, chosen from LINUX, NT, SunOS, and
   another EBSA board, gets from the target files of sizes 0, 1, 512,
   513, and 1048576 bytes (with several streams running in parallel at
   once).

 o TFTP testing: put to the target some files, ....

 o TFTP testing: the target tftp client code does the same, getting and
   putting, to an external server.

   [ All that TFTP testing makes explicit testing of UDP unnecessary; UDP
   testing would need sequence numbers and so on, so we may as well use
   TFTP as the implementation of that. ]

 o FTP test: we have a trivial "connect" test; continue to use it.

 o Performance testing: TCP_ECHO, TCP_SOURCE, and TCP_SINK programs work
   in concert to measure throughput of a partially loaded target board.
   The Source and Sink apps run on an external host.

 o Flood pings: the target floods the hosts on its two external interfaces
   whilst they flood it.  This is left running for a long time, and store
   leaks or crashes are checked for.
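
One reason for the particular file sizes in the TFTP requirement above:
classic TFTP moves data in 512-byte blocks and ends a transfer with a
short (possibly zero-length) block, so 0, 1, 512, 513 and 1048576 bytes
sit exactly on and around that boundary.  A quick sketch of the
arithmetic, assuming plain RFC 1350 TFTP with no blocksize option:

```python
def tftp_data_blocks(size):
    """Number of DATA packets a classic (RFC 1350) TFTP transfer uses.

    Every transfer ends with a block shorter than 512 bytes, so a file
    that is an exact multiple of 512 needs one extra, zero-length DATA
    packet to signal the end of the transfer.
    """
    return size // 512 + 1

# The sizes from the requirement, on and around the block boundary:
for size in (0, 1, 512, 513, 1048576):
    print(size, "->", tftp_data_blocks(size), "DATA packets")
```

So the 512- and 513-byte files take the same number of packets but
exercise different final-block lengths (0 and 1 bytes respectively),
which is exactly the corner a buggy server or client gets wrong.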

Orthogonal to these "feature tests" are requirements to run them with
and without the following features, in combination:

 o The "realtime test harness" operating - it checks interrupt latencies
   and so on.  This is written and works.

 o Booting statically, via BOOTP, or via DHCP (static or leased) on the
   two interfaces, in combination.

 o Simulated failure of the network, of the kinds "drop 1 in N packets",
   "drop all for 0 < random() < 30 seconds" and the like.  Corrupted
   packets being sent out by the target, also!
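
The failure modes above can be driven from a small per-packet fault
model on the test host.  A sketch only, with hypothetical hook points -
a real version would sit in the host's forwarding path between its test
interfaces:

```python
import random

class FaultInjector:
    """Per-packet fault model for 'drop 1 in N' and random outages."""

    def __init__(self, drop_one_in=10, max_outage=30.0, seed=42):
        self.rng = random.Random(seed)   # seeded, so a failure replays
        self.drop_one_in = drop_one_in
        self.max_outage = max_outage
        self.outage_until = 0.0          # simulated clock, in seconds

    def start_outage(self, now):
        # "drop all for 0 < random() < 30 seconds"
        self.outage_until = now + self.rng.uniform(0.0, self.max_outage)

    def should_drop(self, now):
        if now < self.outage_until:
            return True                  # inside a simulated outage
        # "drop 1 in N packets"
        return self.rng.randrange(self.drop_one_in) == 0
```

Seeding the generator is the important design point: a test run that
provokes a crash can then be replayed with the identical drop pattern.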

Needs
-----

We have some other requirements:

 o Support testing of other net-enabled targets!

 o Run tests at a reasonable rate, so do NOT require a reboot of, say, a
   LINUX host every test run to reconfigure the network environment.

 o Feasibility: do NOT require anything too complex in terms of
   controlling the network environment.

 o Do not use too many machines.  The farm is full already.


Other Goals
-----------

These are some ideas that are useful but not strictly necessary:

 o Re-use or work with the existing test infrastructure.

 o Provide the sort of results information that the existing test
   infrastructure does.

 o Work with standard testing *host* computers of various kinds.

 o Support conveniently debugging these test examples at developers'
   desks - not just in the farm.


Details
=======

Because of the flood pinging and malformed packet requirements, the target
boards need to be on an isolated network.

The target board's two interfaces need to be on distinct networks for the
stack to behave properly.


Strategy
========

I believe we can implement everything we need the host computers to do
using a daemon or server (cf. the serial test filter) which sits on the
host computer waiting to be told what test we are about to run, and takes
appropriate action.

Note that this works even in situations where the target is passive,
e.g. being a TFTP server.  The target simply "does" TFTP serving for a
set period of time - or perhaps until a cookie file exists in its test
file system - and then performs a set of consistency checks (including on
the state of the test FS), thus creating a PASS/FAIL test result.  It can
also periodically run those checks anyway, and choose FAIL at any time.
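
The passive-target scheme might look like this; `fs`, `serve_step` and
`check_fs` are hypothetical hooks standing in for the target's test
filesystem, its TFTP server loop, and its consistency checks:

```python
import time

def run_passive_test(fs, serve_step, check_fs,
                     period=60.0, cookie="/test/done"):
    """Serve passively until a cookie file appears in the test
    filesystem or a fixed period elapses, running the consistency
    checks periodically so the test can choose FAIL at any time.
    Returns a PASS/FAIL verdict for the test farm."""
    deadline = time.monotonic() + period
    while time.monotonic() < deadline and not fs.exists(cookie):
        serve_step()                 # handle one round of TFTP requests
        if not check_fs():           # periodic consistency check...
            return "FAIL"            # ...FAIL early if it trips
    return "PASS" if check_fs() else "FAIL"
```

The final check runs once more after the serving phase, so a transfer
that corrupted the test FS on its last block is still caught.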

But who tells the host daemon what to do?  The target does, of course.
That way the host is stateless: it simply runs the daemon, doing what it
is bid, and does NOT ever have to report test results.  This has enormous
advantages, because it means we gather test results from what the target
said, and no other source, thus minimizing changes to the farm software.
It also means that to add a new test, we can asynchronously add the
feature to the test daemons if that is required, then add a new testcase
in the usual manner, with all the usual (compile-time) testing of its
applicability.
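
As a sketch of how target-driven control might look, here is a minimal
one-line request/reply exchange.  The `FLOODPING` action name and the
wire format are purely illustrative assumptions, not an existing
protocol:

```python
def make_request(action, args):
    """Target side: encode a request, e.g. flood-ping me, as one line."""
    return " ".join([action] + [str(a) for a in args]) + "\n"

def handle_request(line, actions):
    """Daemon (testd) side: parse, dispatch, reply OK or ERR.  The
    daemon keeps no state between requests, so a host never needs to
    be reset or rebooted between test runs."""
    fields = line.split()
    if not fields or fields[0] not in actions:
        return "ERR unknown-action\n"
    actions[fields[0]](*fields[1:])
    return "OK\n"

# Example: the target asks the host to flood-ping 10.0.0.2 for 60s.
started = []
actions = {"FLOODPING": lambda ip, secs: started.append((ip, secs))}
reply = handle_request(make_request("FLOODPING", ["10.0.0.2", 60]),
                       actions)
```

Because all verdicts come from the target, adding a new test is just a
new action entry in the daemon's dispatch table plus a new testcase.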

Network Topology
----------------

The idea is that we can initially have a setup like this:

    house network <---> [net-testfarm-machine] serial -------+
                                                             |
                                                           serial
    house network <----> eth0 [LINUX BOX] eth1 <---> eth0 [ EBSA ]
                              [ dhcpd   ] eth2 <---> eth1 [      ]
                              [ tftpd   ]
                              [ ftpd    ]
                              [ testd   ]

for developing the system.  Testd is our new daemon that runs tcp_sink, or
tcp_source, or a floodping, or does tftp to the target (rather than vice
versa) as and when the target instructs it.  The target can report test
results to the net-testfarm-machine as usual, but with bigger timeouts &c
configured in the test farm.

This system can then be generalized to

        test-server1            test-server2            test-serverN
           eth0                   eth0                     eth0
            |                      |                        |
            |                      |                        |
           eth0                    |                        |
        target1                 target4                 targetM
           target2                target5                 targetM+1
              target3               target6                 targetM+2
           eth1
            |
            |
        test-server11

where target1,2,3 have 2 ethernet interfaces and the others have only one.

And further, provided the testd protocol supports targets choosing one
server from many which offer service (which would be a good thing):

        test-server1            test-server2            test-serverN
           eth0                   eth0                     eth0
         [LINUX]                 [Solaris]                [NT4.0]
            |                      |                        |
            +----------------------+------------------------+
            |                      |                        |
           eth0                    |                        |
        target1                 target4                 targetM
           target2                target5                 targetM+1
              target3               target6                 targetM+2
           eth1
            |
            +-----------------------+
            |                       |
         [LINUX]                 [NT4.0]
        test-server11          test-server12

This arrangement would IMHO be a good thing IN ADDITION to a completely
partitioned set of test networks as above.  The partitioned set of test
networks is also required because we need to test all of:

 Target asks for BOOTP vs. DHCP  -X-  Server does only BOOTP vs. DHCP

in combinations on the different interfaces.  Simply setting up servers
that way statically is best, rather than trying to script controls for
servers that offer BOOTP one minute and DHCP the next.


Test Farm
---------

Orthogonal to the network topology, the network test farm is connected to
all these targets in the usual manner by serial lines.  That way the
tests can run on, and busy or cripple, the local network without
affecting the house network *and* without affecting the debug connection,
and with the advantage that the tests' net traffic can interfere with
each other, providing a diverse network environment for testing, rather
than a quiet net.

For testing with a GDB connection over the network, which is desirable, I
suggest either keeping those machines separate, or having the farm's
connection into the test-net be via second interfaces fitted to the
server machines.

Otherwise, it's a standard test farm, which knows to choose only from a
special set of perms, and which has waaay longer timeouts.


Test Cases
----------

In this way, tests of the form "make an external machine do tftp to the
target board" are implemented by means of a standard eCos test case,
which requests that action from a server, then waits to be told AOK or
just for a certain time, and reports as such to the test farm as usual.

These special tests are only compiled in certain perms.

Those same special perms also select between the various initialization
options required: DHCP or BOOTP or static initialization, and so on, in
the usual manner.


Implementation
--------------

Just a quick note on this: DHCP has a lot of the properties we want for
the test protocol.  We could take a copy of it and use different port
numbers, re-using a lot of the code, since server code is also available.

Or something simpler; none of this seems especially challenging.
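
The DHCP properties that matter here are the stateless UDP exchange,
transaction ids matching replies to requests, and retransmission with
backoff.  A client-side sketch of that shape; `send`, `recv` and `sleep`
are hypothetical transport hooks standing in for a UDP socket on our own
port numbers:

```python
def request_with_backoff(send, recv, xid,
                         retries=4, base=2.0, sleep=lambda s: None):
    """DHCP-style client exchange: tag the request with a transaction
    id, retransmit with roughly doubling timeouts, and accept only a
    reply carrying the same xid.  Returns True on a matched reply,
    False if all retries are exhausted."""
    timeout = base
    for _ in range(retries):
        send(xid)                    # (re)transmit the tagged request
        reply = recv(timeout)        # None on timeout
        if reply is not None and reply == xid:
            return True              # matching reply: done
        sleep(timeout)
        timeout *= 2                 # DHCP-style exponential backoff
    return False
```

The xid check is what lets one stateless daemon serve several targets on
the same wire without replies being misattributed.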


That's all for now.

.ends
